Not By AI – Add the "Not By AI" Badge to Your Creative Work
Trust issue with believing AI companies are not training their models on your data
People increasingly suspect that any integration with AI companies and their tools will result in their data being used to train those companies' models, and that the data may therefore re-emerge in a future version [1].
ChatGPT Concerns
Employees are entering sensitive business data and private information into large language models (LLMs) such as ChatGPT, raising fears that AI services could incorporate such data into their models and later surface it if proper data security is not in place. Cyberhaven reports detecting and blocking attempts to paste data into ChatGPT from 4.2% of workers at its client companies, owing to the risk of leaking confidential, client, or regulated data, or source code, to the LLM [2].
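The kind of guardrail Cyberhaven describes can be pictured as a policy check sitting between the employee and the LLM endpoint. The sketch below is a minimal illustration of that idea, assuming a simple regex-based policy; the pattern names and the `guarded_submit` helper are hypothetical, and real DLP products use far more sophisticated classifiers than this.

```python
import re

# Hypothetical, simplified patterns; real DLP tools use much richer
# detection (source-code fingerprinting, client identifiers, regulated data).
SENSITIVE_PATTERNS = {
    "api_key":  re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def guarded_submit(prompt: str, send):
    """Block the request if the prompt matches a sensitive pattern;
    otherwise forward it to the LLM via the caller-supplied `send`."""
    hits = check_prompt(prompt)
    if hits:
        raise PermissionError(f"Prompt blocked: matched {', '.join(hits)}")
    return send(prompt)

if __name__ == "__main__":
    try:
        guarded_submit("Summarise this: CONFIDENTIAL Q3 revenue forecast ...",
                       send=lambda p: f"(would call the LLM with {len(p)} chars)")
    except PermissionError as exc:
        print(exc)  # Prompt blocked: matched internal
```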
1. Willison, S. The AI trust crisis. https://simonwillison.net/2023/Dec/14/ai-trust-crisis/ (2023).
2. Lemos, R. Employees Are Feeding Sensitive Business Data to ChatGPT. Dark Reading. https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears (2023).