Samsung has made itself an example of how not to use AI-powered chatbots such as OpenAI's ChatGPT, Microsoft's Bing Chat, or Google's Bard.

A new report from The Economist claims that at least three Samsung employees have accidentally leaked sensitive information to OpenAI's ChatGPT. In one instance, source code from a confidential database was entered into the chatbot to check for errors; in another, an employee shared internal code and asked the chatbot to optimize it; in the last, an employee asked the chatbot to convert a recording of an internal Samsung meeting into minutes.
Problems like the one Samsung is facing now are exactly what some digital privacy experts have been sounding alarms about since the emergence of AI chatbots. Just yesterday, a law professor from George Washington University revealed that OpenAI's ChatGPT had wrongfully accused him of sexual harassment, an accusation that amounts to defamation and disinformation. The professor asked who is culpable when AI chatbots spew misinformation about individuals, misinformation that can have very real consequences for reputations and, by extension, careers and lives.
Samsung's issue has seemingly been contained, with reports indicating the company immediately rolled out a response that prevents employees from uploading more than 1,024 bytes to ChatGPT. Additionally, Samsung has launched an internal investigation into the people involved in the leak and plans to build its own internal chatbot to prevent any further embarrassment. However, this new limitation is more of a band-aid than a fix for the larger issue: individuals will continue to accidentally share sensitive information with AI chatbots as the tools become more accessible and integrated into different facets of society and business.
Privacy experts have already voiced concerns about hypothetical scenarios in which someone shares confidential legal documents, medical records, contact information, or other sensitive data with an AI chatbot. The concern is grounded in the developers' own warnings to refrain from sharing sensitive information, since specific prompts cannot be removed from a user's history. OpenAI warns that it is "not able to delete specific prompts from your history" and that the only way to remove information from ChatGPT is to delete the account that entered it, a process that can take up to four weeks to complete.
Notably, data entered into ChatGPT is fed back into the model to improve its efficiency, response variety, accuracy, creativity, and more. So if you enter sensitive information into ChatGPT and don't delete your account, the AI will consume that information and fold it into its training data. That is, unless you've opted out of having your chat history used to train ChatGPT.