How To Design AI Agents That Are Not Decision Trees In Disguise


Xinjing Xu, FDE
Apr 3, 2026

The goal of agent design is not to pre-specify every move. It is to preserve good judgment across messy variations of the same problem.

Many agentic AI products on the market are just an LLM, a router, and a pile of IF/ELSE branches, with poor generalization. How can you tell? Ask questions drawn from the long tail of real-world scenarios and watch the product struggle to give a human-like response. As soon as inputs deviate from the agent's narrow design space, performance plummets.

What does this mean? If your vendor's AI feature or product only works well when every scenario has been pre-imagined, they did not build an agent; they built a brittle decision tree with slightly better decision-making skills.

Decision-tree agents are the result of a wrong problem definition. AI agents are not meant to capture every permutation of a messy problem. Rather than surface patterns, real AI agents should capture only the invariants: intent, constraints, capabilities, context, and what good looks like. The latest foundation models are intelligent enough that excess instructions degrade performance at the margin: would you micromanage Albert Einstein?
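The "decision tree in disguise" pattern is easy to sketch. A minimal, hypothetical example (the function and flow names are illustrative, not from any real product): a keyword router hardcodes surface patterns as branches, so any phrasing outside the pre-imagined set falls to a dead end.

```python
# A brittle "agent": surface patterns hardcoded as IF/ELSE branches.
# Only the exact phrasings the builder imagined get routed correctly.
def route_request(text: str) -> str:
    t = text.lower()
    if "refund" in t:
        return "refund_flow"
    elif "cancel" in t:
        return "cancellation_flow"
    elif "password" in t:
        return "password_reset_flow"
    else:
        # Everything from the long tail lands here.
        return "sorry_i_didnt_understand"

# An on-script input works:
print(route_request("I want a refund"))          # routed to refund_flow
# A long-tail phrasing of the SAME intent falls through:
print(route_request("I was charged twice, can you fix it?"))
```

A human would route the second request to refunds without hesitation; the branch-based router cannot, because it matched tokens rather than intent. That gap is the tell the paragraph above describes.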




"Great agents aren't built by adding more. They emerge by removing noise."

- Xinjing Xu, Forward Deployed Engineer

In designing actually smart agents, I’ve found the following really helpful:

  1. Don’t optimize something that shouldn’t exist. If you are not adding things back in, you have not deleted enough. Thank you, Elon, for this design philosophy.
  2. Encode intention, not procedure. Identify the invariants in your messy situation: what stays the same across every case you encounter? What should the agent achieve when the situation is ambiguous?
  3. Give the model what it needs to succeed. Treat the model the way you would train a new hire: know which instructions to give, but also which instructions not to give.
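Principle 2 can be made concrete with a small sketch. This is an assumption-laden illustration (the `AgentSpec` class and its fields are hypothetical, not an existing API): instead of enumerating procedures, the spec captures only the invariants named earlier — intent, constraints, capabilities, and what good looks like — and renders them into a system prompt.

```python
# Hypothetical sketch: a spec that encodes invariants, not procedures.
# The model fills in the "how" for each messy, unanticipated situation.
from dataclasses import dataclass, field


@dataclass
class AgentSpec:
    intent: str                                    # what the agent is for
    constraints: list = field(default_factory=list)    # hard limits
    capabilities: list = field(default_factory=list)   # tools it may use
    good_looks_like: str = ""                      # definition of success

    def system_prompt(self) -> str:
        lines = [f"Intent: {self.intent}"]
        if self.constraints:
            lines.append("Constraints: " + "; ".join(self.constraints))
        if self.capabilities:
            lines.append("You may: " + "; ".join(self.capabilities))
        if self.good_looks_like:
            lines.append(f"A good outcome: {self.good_looks_like}")
        return "\n".join(lines)


spec = AgentSpec(
    intent="Resolve billing issues to the customer's satisfaction",
    constraints=["never promise refunds above $500 without approval"],
    capabilities=["look up invoices", "issue refunds", "escalate to a human"],
    good_looks_like="the customer leaves knowing exactly what happens next",
)
print(spec.system_prompt())
```

Note what is absent: no branch for "charged twice," no branch for "subscription renewed by mistake." Those permutations are the model's job; the spec only fixes what must never change.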

This is how to turn brute-force prompt engineering into elegant AI system design.

“Quiet, genius at work!” is the most comical and pertinent phrase for this situation. But seriously, maybe every AI builder should keep this in mind before word-vomiting at their LLMs.
