A new report claims Google is warning its employees about the dangers of AI-powered chatbots and how they should go about using them.

Reuters reports, citing four people familiar with the matter, that Google is warning its employees about the dangers of chatbots, including the company's own AI-powered chatbot, Bard. The warnings mirror guidance other companies have given their staff: refrain from entering any confidential information, whether personal data or company IP, into any chatbot.
Why is entering confidential information into an AI chatbot a bad idea? Researchers have found that confidential or personal information entered into AI chatbots can be, and is, read by human reviewers, and that the chatbot itself is capable of surfacing this information when prompted appropriately. That makes any AI chatbot a potential information-leak risk.
Additionally, Alphabet, Google's parent company, has warned employees not to use code generated by AI chatbots, even from its own chatbot Bard, which the company says can make "undesired code suggestions".
Similar warnings have already reached Samsung employees after a Samsung staffer leaked internal source code to OpenAI's ChatGPT, prompting the company to ban the use of AI chatbots on any company-provided device. Samsung also announced that it is working on its own AI-powered tool for internal use.
Matthew Prince, CEO of Cloudflare, said that typing confidential matters into chatbots was like "turning a bunch of PhD students loose in all of your private records."