Blog posts on how others code with AI tools
- Harper Reed’s “My LLM codegen workflow atm”
- Simon Willison’s “Here’s how I use LLMs to help me write code”
AI Coding Issues
Edward Yang maintains “AI Blindspots”, a compendium of issues he has found while coding with AI, particularly while using Claude models in Cursor [1]:
| Issue | Description | Solution |
|---|---|---|
| Stop digging | A model will keep trying to solve a problem even when it is heading down the wrong path, wasting effort and time [2]. | Be more specific about tasks. Watch the agent in action. Potentially supervise with another LLM. |
| Culture Eats Strategy | The code the model was fine-tuned on and the recent code in the context shape the style of the code it produces. To change the model’s behavior (its “culture”), change the prompt or the codebase; the codebase dominates because of its size [3]. | Load relevant samples into the context or refactor the existing code. |
| Preparatory Refactoring | The AI refactors as it tries to solve a problem, which complicates review; the two activities are better kept separate [4]. | Ask for the refactor as a separate step (or commit) from the behavioral change. |
| Black box testing | When writing tests, the AI loads the implementation into its context, which couples the tests to the implementation (i.e., they are not black-box tests) [5]. | Keep the implementation out of the context when generating tests. |
| Respect the spec | LLMs do not respect the parts of a system that should not change while solving a problem, e.g., breaking an external API, or modifying the spec instead of fixing the implementation [6]. | Careful review of LLM changes. |
| Memento | LLM context windows and memory are small and short-lived relative to large codebases [7]. | Maintain helper docs that act as persistent context memory. |
| Scientific Debugging | LLMs tend to make speculative changes rather than first understanding why the system fails [8]. | Develop a theory about the failure and ask the LLM to make a specific change. |
| The tail wagging the dog | Small, irrelevant pieces of context can throw the LLM off track [9]. | Practice context hygiene: keep stray material out of the prompt. |
| Rule of three | LLMs favor duplicating code and will not do a “rule of three” refactor (deduplicate on the third occurrence) unless specifically prompted [10]. | Prompt explicitly for the deduplication (see the refactoring sketch after this table). |
| Know your limits | LLMs do not know when they can’t or won’t be able to do something, which leads to hallucinated functionality [11]. | Only ask agents to do things they can do. Create bash scripts to give the LLM tools where the functionality doesn’t exist (see the tool-script sketch after this table). |
| Read the docs | For more obscure frameworks and libraries the model lacks context and will often hallucinate functions [12]. | Feed the model the relevant documentation pages before proceeding. |
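To make the “Rule of three” row concrete, here is a small, hypothetical before/after in Python: three pasted copies of the same guard, and the shared helper an LLM will typically only extract if you ask for the refactor by name. The function names are invented for illustration.

```python
# Before: the same guard pasted into three functions. LLMs tend to add
# a fourth copy rather than refactor, unless explicitly prompted.
def create_user(name: str) -> str:
    if not name or not name.strip():
        raise ValueError("name must be non-empty")
    return name.strip()

def create_team(title: str) -> str:
    if not title or not title.strip():
        raise ValueError("title must be non-empty")
    return title.strip()

def create_project(label: str) -> str:
    if not label or not label.strip():
        raise ValueError("label must be non-empty")
    return label.strip()

# After: the "rule of three" extraction you have to ask for by name.
def require_nonempty(value: str, field: str) -> str:
    """Shared validator extracted on the third duplication."""
    if not value or not value.strip():
        raise ValueError(f"{field} must be non-empty")
    return value.strip()
```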
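For “Know your limits”, Yang’s suggestion is to hand the model scripts for things it cannot do itself. The sketch below shows the idea in Python rather than bash: a hypothetical `find_call_sites.py` whose output the agent can trust instead of guessing at call sites. The script name, CLI, and behavior are my illustration, not something from the original post.

```python
#!/usr/bin/env python3
"""Hypothetical helper an agent can shell out to instead of guessing."""
import pathlib
import re
import sys

def find_call_sites(symbol: str, root: str = ".") -> list[str]:
    """Return 'path:lineno: line' entries for each call of `symbol`."""
    pattern = re.compile(rf"\b{re.escape(symbol)}\s*\(")
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            lines = path.read_text(encoding="utf-8").splitlines()
        except (UnicodeDecodeError, OSError):
            continue  # skip unreadable files rather than crash mid-scan
        for lineno, line in enumerate(lines, start=1):
            if pattern.search(line):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: find_call_sites.py SYMBOL [ROOT]")
    for hit in find_call_sites(sys.argv[1], *(sys.argv[2:3] or ["."])):
        print(hit)
```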
Tools
- aider is a terminal-based chat coding assistant
- CodeRabbit is an AI-assisted code review tool used with GitHub or GitLab
- VS Code-based assistants (e.g., Cursor)
Tactics
Adding the repo to the context window so the model can see the whole codebase (see the sketch below)
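A minimal sketch of this tactic, assuming a plain-text dump is acceptable: walk the repository, concatenate its text files under a rough size budget, and paste the result into the prompt. The file filters and the `max_chars` cap are illustrative assumptions; dedicated tools (e.g., aider) manage repo context far more carefully.

```python
"""Sketch: dump a repo's text files into one string for a model prompt."""
import pathlib

def repo_as_context(root: str, suffixes=(".py", ".md"), max_chars=200_000) -> str:
    parts = []
    total = 0
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.suffix not in suffixes or not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        chunk = f"\n--- {path} ---\n{text}"
        if total + len(chunk) > max_chars:
            break  # stay under a rough context budget
        parts.append(chunk)
        total += len(chunk)
    return "".join(parts)

if __name__ == "__main__":
    print(repo_as_context("."))
```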
Learning
- HackerNews thread on when to use AI as a junior engineer
- HackerNews thread on getting started with LLM-assisted programming
1. Yang, E. Z. AI Blindspots. https://ezyang.github.io/ai-blindspots/ (2025).
2. Yang, E. Z. Stop Digging. https://ezyang.github.io/ai-blindspots/stop-digging/ (2025).
3. Yang, E. Z. Culture Eats Strategy. https://ezyang.github.io/ai-blindspots/culture-eats-strategy/ (2025).
4. Yang, E. Z. Preparatory Refactoring. https://ezyang.github.io/ai-blindspots/preparatory-refactoring/ (2025).
5. Yang, E. Z. Black Box Testing. https://ezyang.github.io/ai-blindspots/black-box-testing/ (2025).
6. Yang, E. Z. Respect the Spec. https://ezyang.github.io/ai-blindspots/respect-the-spec/ (2025).
7. Yang, E. Z. Memento. https://ezyang.github.io/ai-blindspots/memento/ (2025).
8. Yang, E. Z. Scientific Debugging. https://ezyang.github.io/ai-blindspots/scientific-debugging/ (2025).
9. Yang, E. Z. The tail wagging the dog. https://ezyang.github.io/ai-blindspots/the-tail-wagging-the-dog/ (2025).
10. Yang, E. Z. Rule of Three. https://ezyang.github.io/ai-blindspots/rule-of-three/ (2025).
11. Yang, E. Z. Know Your Limits. https://ezyang.github.io/ai-blindspots/know-your-limits/ (2025).
12. Yang, E. Z. Read the Docs. https://ezyang.github.io/ai-blindspots/read-the-docs/ (2025).