Blog posts on how others code with AI tools

AI Coding Issues

Edward Yang maintains a compendium of issues, “AI Blindspots”, that he has found while coding with AI, particularly with Claude models in Cursor [1]:

| Issue | Description | Solution |
| --- | --- | --- |
| Stop Digging | A model will keep trying to solve a problem even when it is heading down the wrong path, wasting effort and time [2]. | Be more specific about tasks. Watch the agent in action. Potentially supervise with another LLM. |
| Culture Eats Strategy | The code the model was fine-tuned on and the recent code in its context shape the style of the code it produces. To change the model's behavior (its "culture"), change the prompt or the codebase; the codebase dominates because of its size [3]. | Load relevant samples into the context or refactor the existing code. |
| Preparatory Refactoring | The AI will refactor as it tries to solve a problem, which complicates review [4]. | Separate refactoring from behavioral changes into distinct steps. |
| Black Box Testing | The AI loads the implementation of the code under test into its context, which couples the tests to the implementation (i.e. they are no longer black-box tests) [5]. | Keep the implementation out of the context when writing tests. |
| Respect the Spec | LLMs do not respect the things that must not change while solving a problem (e.g. an external API), and may modify the spec rather than fix the implementation [6]. | Careful review of LLM changes. |
| Memento | LLM context windows and memory are small and short relative to large codebases [7]. | Helper docs that provide persistent context/memory. |
| Scientific Debugging | LLMs tend to just try changes rather than first understanding why the system fails [8]. | Develop a theory about the fix and ask the LLM to make that specific change. |
| The Tail Wagging the Dog | Small, irrelevant pieces of context can throw the LLM off track [9]. | Context hygiene. |
| Rule of Three | LLMs favor reproducing code and will not do a "rule of three" refactor (refactor on the third duplication) unless specifically prompted [10]. | Prompt for the refactor explicitly. |
| Know Your Limits | LLMs do not know when they can't, or won't be able to, do something, which leads to functional hallucinations [11]. | Only ask agents to do things they can do. Create bash scripts to give the LLM tools where the functionality doesn't exist. |
| Read the Docs | For more obscure frameworks and libraries the model lacks context and often hallucinates functions [12]. | Feed the model the relevant documentation pages before proceeding. |
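The "Know your limits" entry suggests wrapping missing functionality in small scripts the agent can invoke instead of hallucinating a result. A minimal sketch of such a tool (the name `count_loc` and the task are hypothetical, not from the post):

```shell
#!/usr/bin/env sh
set -eu

# count_loc EXT — hypothetical helper tool for an agent.
# Counting lines across a repo is something an LLM would have to guess
# at; a small shell function makes the answer exact. The agent is
# instructed to run "count_loc py" instead of estimating the number.
count_loc() {
  ext="$1"
  # Sum line counts over every file with the extension, skipping .git.
  find . -type f -name "*.${ext}" -not -path '*/.git/*' -print0 \
    | xargs -0 cat 2>/dev/null \
    | wc -l
}
```

The same pattern extends to anything the model cannot do reliably on its own: hashing files, fetching timestamps, or querying a local database.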

Tools

  • aider is a terminal-based chat coding assistant
  • CodeRabbit is an AI-assisted code-review tool used in GitHub or GitLab
  • VS Code based
    • Cursor is a commercial AI IDE built on top of VS Code
    • Cline is a VS Code extension that uses foundation-model APIs
    • Continue is a coding-assistant extension for the VS Code IDE

Tactics

Adding the repo to the context window
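One simple way to do this is to concatenate the repository's source files into a single file, prefixing each with its path so the model can attribute code to files. A sketch (the function name, header format, and file selection are assumptions, not any specific tool's behavior):

```shell
#!/usr/bin/env sh
set -eu

# repo_to_context OUT FILE... — dump the given files into one context
# file, each preceded by a "===== path =====" header line, so the repo
# (or a slice of it) can be pasted into a model's context window.
repo_to_context() {
  out="$1"
  shift
  : > "$out"                        # truncate/create the output file
  for f in "$@"; do
    printf '===== %s =====\n' "$f" >> "$out"
    cat "$f" >> "$out"
    printf '\n' >> "$out"           # blank line between files
  done
}

# Example: repo_to_context context.txt $(find src -name '*.py')
```

Mind the context-window limit ("Memento" above): for large repos, select only the files relevant to the task rather than the whole tree.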

Learning

  • HackerNews thread on when to use AI as a junior engineer
  • HackerNews thread on getting started with LLM-assisted programming

1. Yang, E. Z. AI Blindspots. https://ezyang.github.io/ai-blindspots/ (2025).

2. Yang, E. Z. Stop Digging. https://ezyang.github.io/ai-blindspots/stop-digging/ (2025).

3. Yang, E. Z. Culture Eats Strategy. https://ezyang.github.io/ai-blindspots/culture-eats-strategy/ (2025).

4. Yang, E. Z. Preparatory Refactoring. https://ezyang.github.io/ai-blindspots/preparatory-refactoring/ (2025).

5. Yang, E. Z. Black Box Testing. https://ezyang.github.io/ai-blindspots/black-box-testing/ (2025).

6. Yang, E. Z. Respect the Spec. https://ezyang.github.io/ai-blindspots/respect-the-spec/ (2025).

7. Yang, E. Z. Memento. https://ezyang.github.io/ai-blindspots/memento/ (2025).

8. Yang, E. Z. Scientific Debugging. https://ezyang.github.io/ai-blindspots/scientific-debugging/ (2025).

9. Yang, E. Z. The Tail Wagging the Dog. https://ezyang.github.io/ai-blindspots/the-tail-wagging-the-dog/ (2025).

10. Yang, E. Z. Rule of Three. https://ezyang.github.io/ai-blindspots/rule-of-three/ (2025).

11. Yang, E. Z. Know Your Limits. https://ezyang.github.io/ai-blindspots/know-your-limits/ (2025).

12. Yang, E. Z. Read the Docs. https://ezyang.github.io/ai-blindspots/read-the-docs/ (2025).