The developers behind the immensely popular ChatGPT have announced that they are pulling their AI classifier tool, citing concerning inaccuracies.

OpenAI, the developer behind ChatGPT, has announced via its website that it is shutting down the online tool known as the AI classifier, which was used to determine whether a given piece of text had been generated by an artificial intelligence text generator. The tool was completely free to use, and many people who wanted to check whether an AI-powered program had produced the material they were reading would visit the website and run any text of concern through it.
Notably, individuals would typically check whether text-based content such as emails, blog posts, and essays had been written by a human or an AI. OpenAI admitted to the lack of accuracy behind its AI classifier, saying that it would "sometimes be extremely confident in a wrong prediction," referencing instances where it flagged content as AI-generated when it had in fact been created by a human.
"Our intended use for the AI Text Classifier is to foster conversation about the distinction between human-written and AI-generated content. The results may help, but should not be the sole piece of evidence, when deciding whether a document was generated with AI," added OpenAI. "The model is trained on human-written text from a variety of sources, which may not be representative of all kinds of human-written text."
"We caution that the model has not been carefully evaluated on many of the expected principle targets - including student essays, automated disinformation campaigns, or chat transcripts. Indeed, classifiers based on neural networks are known to be poorly calibrated outside of their training data. For inputs that are very different from text in our training set, the classifier is sometimes extremely confident in a wrong prediction," warned OpenAI.