Since the explosion of AI-powered models, academics, teachers, and professionals in many other fields have been rightfully concerned about tools such as ChatGPT being used to assist in the writing of essays and research papers.

The easy accessibility of powerful AI models such as ChatGPT has driven a substantial increase in the amount of AI-generated content across the internet. Unfortunately, some of this content appears to have already made its way into scientific journals via submitted papers, and into classrooms, where students use these free tools to write assignments and papers.
To prevent abuse of this technology in academic settings, companies rolled out AI detection tools, but those tools proved unreliable. However, according to a new report by The Washington Post, OpenAI, the creator of ChatGPT, has developed a method that can detect ChatGPT-generated content with 99.9% accuracy. The system embeds a watermark in ChatGPT-generated text that is invisible to human readers but detectable by a companion tool.
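The report does not describe how OpenAI's watermark works. One commonly discussed approach in the research literature is to subtly bias the model toward a reproducible "green" subset of words during generation, so a detector that knows the scheme can check whether a suspicious text lands in that subset far more often than chance. The sketch below is purely illustrative of that general idea; the `green_list` scheme and all names are assumptions, not OpenAI's actual method.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Hypothetical scheme: derive a reproducible seed from the previous token,
    # so anyone who knows the scheme can regenerate the same "green" subset.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Detection: count how often each token falls in the green list derived
    # from its predecessor. Watermarked text scores near 1.0; ordinary text
    # scores near the baseline fraction (0.5 here).
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab))
    return hits / (len(tokens) - 1)
```

A reader never sees the watermark because the text still consists of normal words; only the statistical skew toward the green subset reveals it. This also hints at why paraphrasing or round-trip translation can defeat such schemes: rewriting the text replaces the carefully skewed word choices.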
The report states the technology has been ready to roll out for nearly a year, but OpenAI is hesitant because internal reactions have been mixed. One fear is that watermarking would alienate a portion of the user base that is larger than initially anticipated, triggering a mass exodus from the platform. A further concern is how easily the watermark can be removed: running the AI-generated text through Google Translate into another language and then back to English strips it out.
OpenAI staff pushing for the technology's release appear to want to uphold the company's founding values as an open AI safety company.