Building AI Agents on edge devices using Ollama + Phi-4-mini Function Calling

#20
by nguyenbh - opened
Microsoft org

This is a practical article on using Phi-4-mini function calling in Ollama.
Hope it is helpful.


Great thread. Phi-4-mini-instruct is genuinely one of the more underrated choices for edge agent deployments right now; the function-calling reliability at that parameter count is surprisingly solid, especially compared to what you'd have expected from models this size 18 months ago. A few things worth noting if you're building serious agent pipelines on top of it via Ollama:
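For anyone who hasn't wired this up yet, a minimal single-round-trip sketch with the `ollama` Python client looks roughly like this. The model tag (`phi4-mini`), the `get_weather` tool, and the exact response shape are assumptions; check your client version, since older releases return dicts rather than attribute-style objects.

```python
# Hedged sketch of one Ollama function-calling round trip.
# Assumptions: `pip install ollama`, a local server, and a model tagged
# "phi4-mini" that supports tools. The tool itself is a stub.
import json


def get_weather(city: str) -> str:
    """Stub tool; swap in a real lookup. Returns JSON for the model."""
    return json.dumps({"city": city, "temp_c": 21})


# Plain JSON Schema tool definition, the shape Ollama's `tools` param expects.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]


def dispatch(calls, registry):
    """Run each requested tool call.

    `calls` is an iterable of (tool_name, arguments_dict) pairs, decoupled
    from any particular client's response object so it is easy to test.
    """
    return [registry[name](**args) for name, args in calls]


def run_demo():
    """Call this with an Ollama server running; not invoked automatically."""
    import ollama  # imported lazily so the rest of the module works offline
    resp = ollama.chat(
        model="phi4-mini",
        messages=[{"role": "user", "content": "Weather in Hanoi?"}],
        tools=TOOLS,
    )
    # Attribute access matches recent ollama-python; adjust for older versions.
    pairs = [(c.function.name, c.function.arguments)
             for c in (resp.message.tool_calls or [])]
    print(dispatch(pairs, {"get_weather": get_weather}))
```

Keeping `dispatch` ignorant of the client's response type is deliberate: it lets you unit-test the tool-routing logic without a running model.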

The tool-call schema adherence can get brittle when you're chaining multiple function calls in a single context window, particularly if your tool definitions are verbose. I've found that keeping tool schemas tight (minimal descriptions, no redundant fields) meaningfully improves parse reliability without any fine-tuning. It's also worth benchmarking your specific tool signatures rather than relying on generic evals; there's a real "eval tax" problem right now where teams over-index on benchmark numbers that don't reflect their actual tool surface area.
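One mechanical way to keep schemas tight is to strip the decorative JSON Schema keys before sending tool definitions to the model. Which keys are safe to drop is an assumption here (`title`, `examples`, `default`, `$comment` are purely documentary for the model); this is a sketch, not a validated list.

```python
# Hedged sketch: recursively drop JSON Schema keys that inflate the prompt
# without helping the model pick arguments. The REDUNDANT_KEYS set is an
# assumption; audit it against your own tool surface before using.
REDUNDANT_KEYS = {"title", "examples", "default", "$comment"}


def tighten(schema: dict) -> dict:
    """Return a copy of `schema` with decorative keys removed at every level."""
    out = {}
    for key, value in schema.items():
        if key in REDUNDANT_KEYS:
            continue
        if isinstance(value, dict):
            out[key] = tighten(value)
        elif isinstance(value, list):
            out[key] = [tighten(v) if isinstance(v, dict) else v for v in value]
        else:
            out[key] = value
    return out
```

Running your tool definitions through something like this before each request also gives you a single place to measure how many prompt characters your schemas actually cost.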

One thing that comes up fast when you move from single-agent to multi-agent on edge is the question of agent identity and trust between nodes. If you're orchestrating multiple Phi-4-mini instances (say, a local coordinator delegating to specialized tool-calling agents), you quickly need some way to verify that a function-call result actually came from the agent you think it did, and not from a compromised or stale node. This is the problem space we work on at AgentGraph: cryptographic identity and trust scoring for agent-to-agent communication. It's less of an issue in a single-device Ollama setup, but once you distribute across edge nodes it becomes a real attack surface. Worth thinking about the trust model early rather than retrofitting it later.
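To make the shape of that problem concrete, here is a minimal sketch using a pre-shared key per node and HMAC-SHA256 from Python's standard library. This is only an illustration of result authentication between agents, not AgentGraph's actual scheme, and a shared-key MAC is the simplest possible trust model (a real deployment would want per-agent asymmetric keys and rotation).

```python
# Hedged sketch: authenticate a tool-call result passed between agents.
# Assumption: coordinator and worker share `key` out of band.
import hashlib
import hmac
import json


def sign_result(key: bytes, agent_id: str, payload: dict) -> dict:
    """Wrap a tool result in an envelope with an HMAC over the canonical body."""
    body = json.dumps({"agent": agent_id, "result": payload}, sort_keys=True)
    mac = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "mac": mac}


def verify_result(key: bytes, envelope: dict):
    """Return the parsed body if the MAC checks out, else None."""
    expected = hmac.new(key, envelope["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["mac"]):
        return None  # tampered payload or a node that doesn't hold the key
    return json.loads(envelope["body"])
```

Even this toy version catches the stale/compromised-node case the comment describes: a node without the key cannot forge an envelope the coordinator will accept.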
