Not By AI – Add the “Not By AI” Badge to Your Creative Work
Trust issues with believing AI companies are not training their models on your data: People are increasingly suspicious that any integration with AI companies and tools will result in their data being used to train those companies' models, and therefore re-emerging in the next version[1].
ChatGPT concerns: Employees are using large language models (LLMs) like ChatGPT to input sensitive business data and private information, raising fears that AI services could incorporate such data into their models and later surface it if proper data security is not in place. A report by Cyberhaven found that it had detected and blocked attempts to input data into ChatGPT from 4.2% of workers at its client companies, because of the risk of leaking confidential data, client data, regulated information, or source code to the LLM[2].
AI erodes accountability: If AI creates content, who is accountable for that content? With current LLM usage, most generated content is still a human putting their name on it, and so it remains attributable to that person. It is clear, though, that AI companies are pushing agents and systems that generate content attributable to no one. And when a system makes a decision, who is responsible for that decision? If a well-qualified candidate is screened out of a hiring process by an AI system because their name does not fit well with the training data, who is responsible for that failure? Dan Davies argues that we already have a lack of accountability within large companies and bureaucracies, and AI muddies the waters further[3]. We must continue to hold the people in charge accountable (and have a mechanism to do so), even when we cannot establish a clear chain of attribution.
1. Willison, S. The AI trust crisis. https://simonwillison.net/2023/Dec/14/ai-trust-crisis/ (2023).
2. Lemos, R. Employees Are Feeding Sensitive Business Data to ChatGPT. Dark Reading. https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears (2023).
3. Davies, D. The Unaccountability Machine: Why Big Systems Make Terrible Decisions – and How the World Lost Its Mind. (Profile Books Ltd, 2024).