How To Design AI Agents That Are Not Decision Trees In Disguise

Many agentic AI products on the market are just an LLM, a router, and a pile of IF/ELSE branches, with poor generalization. How can you tell? Ask questions drawn from the long tail of real-world scenarios and watch the product struggle to produce a human-like response. As soon as inputs deviate from the agent's narrow design space, performance plummets.
What does all this mean? If your vendor's AI feature or product only works well when every scenario has been pre-imagined, they did not build an agent; they built a brittle decision tree with slightly better decision-making skills.
Decision-tree agents are the result of the wrong problem definition. AI agents are not meant to capture every permutation of a messy problem. Rather than surface patterns, real AI agents should capture only the invariants: intent, constraints, capabilities, context, and what good looks like. The latest foundation models are intelligent enough that every superfluous instruction degrades performance: would you micromanage Albert Einstein?
"Great agents aren't built by adding more. They emerge by removing noise."
In designing actually smart agents, I’ve found the following really helpful:
- Don’t optimize something that shouldn’t exist. If you are not adding things back in, you have not deleted enough. Thank you, Elon, for this design philosophy.
- Encode intention, not procedure. Identify the invariants in your messy situation. What doesn’t change across all the situations you encounter? What should the agent achieve when the situation is ambiguous?
- Give the model what it needs to succeed. Train the model the way you would train a new hire: know which instructions to give, and also know which ones to withhold.
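To make the contrast concrete, here is a minimal, hypothetical sketch. The domain (a support agent for an imaginary "Acme Co."), the tool names, and the policy line are all invented for illustration; no real LLM is called. The point is the shape of the two designs: the first pre-imagines scenarios with branches, the second encodes only invariants (intent, constraints, capabilities, context, and what good looks like) and leaves the reasoning to the model.

```python
def decision_tree_agent(message: str) -> str:
    """Brittle design: every scenario must be pre-imagined as a branch."""
    text = message.lower()
    if "refund" in text:
        return "Routing to refunds flow."
    elif "shipping" in text:
        return "Routing to shipping flow."
    else:
        # The entire long tail of real-world inputs lands here.
        return "Sorry, I didn't understand that."


def build_intent_prompt(message: str) -> str:
    """Invariant-based design: encode intent, constraints, capabilities,
    context, and the definition of 'good' -- not step-by-step procedure."""
    return (
        "You are a support agent for Acme Co.\n"                 # context
        "Goal: resolve the customer's underlying problem.\n"     # intent
        "Constraints: never promise refunds over $500; "         # constraints
        "cite the relevant policy when you decline.\n"
        "Capabilities: lookup_order(order_id), "                 # capabilities
        "issue_refund(order_id, amount).\n"
        "A good answer is accurate, brief, and actionable.\n\n"  # what good looks like
        f"Customer message: {message}"
    )
```

Notice that the long-tail input "my package arrived crushed" falls straight through the decision tree to the fallback branch, while the intent prompt needs no new code at all: the same handful of invariant lines covers it and every other unforeseen scenario.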
This is how to turn brute-force prompt engineering into elegant AI system design.
“Quiet, genius at work!” is the most comical and pertinent phrase for this situation. But seriously, maybe every AI builder should keep this in mind before word-vomiting at their LLMs.






