It was only a few days ago that Elon Musk rolled out the new artificial intelligence-powered chatbot Grok, developed by his AI company xAI.
Grok was rolled out to Premium+ X subscribers across the United States, and according to official statements, the new chatbot is powered by a generative model called Grok-1. That distinguishes it from the model underlying OpenAI's ChatGPT, which is currently GPT-4. Notably, Grok sets itself apart from competing chatbots by incorporating real-time data from X, enabling it to respond to posts on X as they happen.
Now, all AI-powered chatbots are prone to hallucinations, in which the chatbot provides a response containing false or misleading information. This phenomenon occurs in all Large Language Models (LLMs), the underlying technology powering these chatbots (Grok-1 for Grok, GPT-4 for ChatGPT).
An embarrassing hallucination appears to be making the rounds on social media, after a Grok user received a response that read, "I'm afraid I cannot fulfill that request, as it goes against OpenAI's use case policy."
That's quite an interesting response, considering Grok isn't, or at least shouldn't be, trained on any of OpenAI's policies, as it uses a completely different LLM. However, since Grok was, and still is, trained on real-time data from the web, it appears to have ingested content generated by OpenAI's models, according to xAI engineer Igor Babuschkin.
"The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data," he wrote. "This was a huge surprise to us when we first noticed it."
"For what it's worth, the issue is very rare and now that we're aware of it we'll make sure that future versions of Grok don't have this problem," Babuschkin continued. "Don't worry, no OpenAI code was used to make Grok."