Assuming AI erodes accountability, a logical consequence of applying AI to code is that AI will generate systems that no one understands and no one is accountable for.

Chad Fowler explores this issue in depth in Relocating Rigor1. He argues that if you are “mistaking velocity for progress”, you are inducing exactly this hazard. The solution is to relocate where control takes place: rather than exercising control at the point where code is written, we move it to where the intent of the code is specified. This could mean:

  • tests written by a human, implementation written by AI
  • explicit interface contracts with probabilistic internal implementations

Generation can be flexible, but evaluation must be explicit.
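The first pattern above can be sketched in a few lines. This is a minimal, hypothetical example: the function name `slugify` and its behavior are illustrative, not from the source. The human-written tests are the fixed, explicit evaluation of intent; the implementation stands in for code an AI might generate and can be freely regenerated so long as the tests keep passing.

```python
def slugify(title: str) -> str:
    # Implementation detail: could be written by a human or an AI.
    # Correctness is judged solely against the tests below, not by
    # inspecting this body.
    return "-".join(title.lower().split())


# Human-authored specification of intent: the explicit contract
# any candidate implementation must meet.
def test_slugify():
    assert slugify("Relocating Rigor") == "relocating-rigor"
    assert slugify("  extra   spaces  ") == "extra-spaces"
    assert slugify("") == ""


test_slugify()
print("all intent checks passed")
```

The point is not the tests themselves but where the rigor lives: the specification is explicit and human-owned, while the generation step is allowed to be probabilistic.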

Code is not correct because it runs; it is correct because we have judged it to meet the intent of the problem it is trying to solve.

1. Fowler, C. Relocating Rigor. https://aicoding.leaflet.pub/3mbrvhyye4k2e