Improve agents-md prompt to force doc retrieval (#88997)
## Summary
Updates the instruction in the `CLAUDE.md` output generated by `agents-md`
to force LLMs to actually read the docs instead of relying on
stale pre-training knowledge.
**Before:**
```
IMPORTANT: Prefer retrieval-led reasoning over pre-training-led reasoning for any Next.js tasks.
```
**After:**
```
STOP. What you remember about Next.js is WRONG for this project. Always search docs and read before any task.
```
## Why This Matters
Through systematic testing with the
[next-evals-oss](https://github.com/vercel/next-evals-oss) eval suite,
we discovered that the original "prefer retrieval-led reasoning"
instruction was ineffective. Agents would ignore it and use outdated
pre-training knowledge (e.g., creating `middleware.ts` instead of following
the new Next.js 16 `proxy.ts` convention).
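For reference, this is roughly what the convention shift looks like. A minimal sketch, assuming the Next.js 16 `proxy.ts` file mirrors the old `middleware.ts` signature with a `proxy` export; check the current docs for the exact shape:
```ts
// proxy.ts (Next.js 16+): replaces the older middleware.ts convention.
// Sketch only: the export name and signature are assumed to mirror middleware.
import { NextResponse, type NextRequest } from 'next/server'

export function proxy(request: NextRequest) {
  // Log every incoming request to the console.
  console.log(`${request.method} ${request.nextUrl.pathname}`)
  return NextResponse.next()
}
```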
## Testing Methodology
We created an "indirect proxy" eval that tests whether agents know `proxy.ts`
is needed for request interception in Next.js 16+, using only the prompt
"Log every request to the console" (with no mention of proxy or middleware).
| Instruction | Pass Rate |
| --- | --- |
| "Prefer retrieval-led reasoning" | 0% |
| "CRITICAL: MUST read docs" | ~20-40% |
| "What you remember is WRONG" | **100% (6/6)** |
## Why It Works
The phrase **"What you remember is WRONG"** creates psychological doubt
in the model's confidence, forcing it to actually read the docs rather
than trusting its pre-training.
Key elements:
- **"STOP"** - Attention grabber, interrupts default behavior
- **"WRONG for this project"** - Creates doubt without claiming
universal wrongness
- **"Always search docs and read"** - Actionable instruction
- **"before any task"** - Applies to everything, not just code writing
## Full Suite Results
- The new indirect-proxy eval passed consistently