AI erodes accountability: If AI creates content, who is accountable for that content? With current LLM usage, most generated content is just a human slapping their name on it, so it remains attributable to that person. It's clear, though, that AI companies are pushing agents and systems that generate content attributable to no one. And when a system makes a decision, who is responsible for that decision? If a well-qualified candidate is screened out of a hiring process by an AI system because their name doesn't fit well with the training data, who is responsible for that failure? Dan Davies argues that we already lack accountability within large companies and bureaucracies, and AI muddies the waters further¹. We must continue to hold the people in charge accountable (and maintain a mechanism to do so), even when we cannot establish a clear chain of attribution.

1. Davies, D. The Unaccountability Machine: Why Big Systems Make Terrible Decisions – and How the World Lost Its Mind. (Profile Books Ltd, 2024).