What are you feeding your AI?

April 13, 2026

We’ve all been there: trying to get an AI to understand a specific task without having to explain the “why, why, why” every single time. It’s the reason we started using context files. We’re tasked with giving our AI a consistent “source of truth” so it can just get to work (if you missed my previous post on why consistency and context files are important, check it out here).

But as we move from simple chat prompts to autonomous agents, the kind that navigate entire codebases and solve complex bugs, we’re hitting a wall. For technical leaders and established AI users, the "more context is better" strategy isn't just failing; it’s actually becoming a liability.

The Evolving Relationship: Human vs. Machine Documentation

We are witnessing a shift in the long-term trajectory of AI at scale: a move from human-centric documentation (written to be read) to machine-centric documentation (written to be executed).

A recent 2026 study titled Evaluating AGENTS.md highlights that our current approach of dumping massive overviews into context files isn’t the answer for tomorrow’s scaled AI.

The research confirms that context files are highly effective when they provide specific, actionable technical details, like telling an agent to use a specific package manager like uv. Agents are incredibly obedient; they follow these instructions to the letter.
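As a sketch of what "specific and actionable" looks like in practice (the exact wording and commands here are hypothetical, not quoted from the study), a context file entry might read:

```markdown
## Tooling
- Use `uv` for all Python dependency management: `uv sync`, `uv add <pkg>`
- Never call `pip install` directly
- Run the test suite with `uv run pytest`
```

Note what this entry does not do: it doesn't summarize the architecture or explain the project's history. It simply removes a decision the agent would otherwise have to guess at.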

However, when we use agents to generate broad “overviews,” the results are often redundant. These files fail to help agents find relevant files any faster, essentially turning “helpful context” into expensive noise that provides no meaningful speed-up.

5 Hard Truths of AI at Scale

To understand why your agents might be struggling, we have to look at the data. Here is what the research really tells us about the performance of agents when they are forced to “wade through” context:

1. AI is a "Literalist," not a "Generalist"

Agents tend to respect the instructions they are given in context files. If those instructions are broad or unnecessary, the agent will take them literally and waste time on "thorough testing" and "broader exploration" that isn't required for the task at hand.

2. More Context ≠ More Intelligence

In fact, the opposite is often true. The study found that LLM-generated context files actually **reduced** task success rates by an average of 3% on popular repositories. Adding an automated overview doesn't make the model smarter; it often just makes the task harder.

3. Human Documentation Still Rules

While we want to automate everything, human-written context files still outperform LLM-generated ones. However, there's a catch: they only work when they describe minimal requirements, like specific tooling, rather than trying to explain the whole world to the agent.

4. The "Thinking Tax" is Real

This is the part that hits your bottom line. Providing context files increases inference costs by over 20%. Why? Because models use significantly more "reasoning tokens" (up to 22% more for models like GPT-5.2) to process the extra instructions, meaning you pay more for a lower success rate.

5. Documentation is for Humans OR Machines, Rarely Both

Context files are most "helpful" when a repository has no other documentation. In well-documented, large-scale projects, these files act as redundant noise. The agent gets confused between the existing docs and the new context file, leading to unnecessary steps and higher failure rates.

The Takeaway

If you are scaling AI agents today, the "manual" isn't a 20-page PDF; it's a one-page "Cheat Sheet" of specific tools and commands.
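To make that concrete, here is a sketch of what a one-page cheat sheet might look like (the repository, commands, and paths below are illustrative assumptions, not prescriptions):

```markdown
# AGENTS.md — one page, commands only

## Commands
- Install dependencies: `uv sync`
- Run tests: `uv run pytest -x`
- Lint before committing: `uv run ruff check .`

## Constraints
- Do not modify anything under `generated/`
- Target Python 3.12; no new runtime dependencies without approval
```

Everything here is a constraint or a command the agent can execute verbatim; there is nothing for it to "interpret," which is exactly where the research says the waste creeps in.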

To win at AI in 2026, you need to stop giving your agents context and start giving them constraints.


Original Research Source: https://arxiv.org/pdf/2602.11988