AI-powered tools such as OpenAI's ChatGPT have attracted widespread attention for their raw power and seemingly endless capabilities.

With their popularity has come growing concern from researchers about the honesty and truthfulness of these tools and the underlying AI models that power them. These concerns stem from the real possibility that AI tools could spread disinformation at an alarming rate, manipulate users toward specific outcomes, or even intentionally mislead or deceive users with outright lies. A new article in The Conversation details an example involving Meta's CICERO AI, which the company says was designed to be "largely honest and helpful".
Researchers put the AI model to the test by having it play the game of Diplomacy, and the results were published in a new study in Science. Unlike chess, poker, and Go, Diplomacy requires an understanding of competing players' motivations and the ability to negotiate complex, forward-thinking plans. The underlying idea was to see whether CICERO could play the game at the level of a human.
CICERO played 40 anonymous online Diplomacy matches, achieved more than double the average score of its human opponents, and ranked in the top 10% of participants who played more than one game.
More surprisingly, CICERO displayed a capacity for deception, conspiring with the player controlling Germany while simultaneously working with England, ultimately leaving England exposed.