OpenAI has announced a new bug bounty scheme whereby intrepid security buffs who find flaws in ChatGPT will be rewarded with payments.
The company will pay up to $20,000 for the discovery of bugs by white hat hackers and security experts.
As you might guess, then, this isn't about everyday folks stumbling across a dodgy response from the chatbot, and then flagging that as a bug (which it might be, technically).
This is about full-on security flaws, and OpenAI lists the categories it's interested in to clarify which bugs will earn cash for those who stumble across them.
That includes authentication or authorization (login) issues, bugs relating to payments, glitches that expose data, and also the ability to use pre-release (or private) models for queries.
As OpenAI explains: "The initial priority rating for most findings will use the Bugcrowd Vulnerability Rating Taxonomy. However, vulnerability priority and reward may be modified based on likelihood or impact at OpenAI's sole discretion. In cases of downgraded issues, researchers will receive a detailed explanation."
Safety issues - such as getting ChatGPT to tell you how to do "bad things" - don't count as bugs, and neither does getting the AI to write malicious code for you.
Chatbot hallucinations - where the AI goes off the rails and gives inaccurate or even absurd responses - aren't part of the bounty program, either.
Mind you, it isn't that OpenAI doesn't want you to report such issues. The company just says that those kinds of problems don't fit too well within a bug bounty scheme, as they "are not individual, discrete bugs that can be directly fixed" and that resolving them "often involves substantial research and a broader approach."
You can - and should - still report them, OpenAI notes, but through the appropriate form (and, of course, you won't receive any financial compensation for doing so).