Fallen out with your partner? That's nothing new: all couples have disagreements - or even full-blown arguments at times - but one person's solution, namely turning to AI, has gone viral for reasons that, well, you'll see.
This comes to us courtesy of a post on Reddit by 'Drawss4scoress' on r/AmITheA**hole (or AITAH) where, as you can guess, people ask whether they might be, shall we say, in the wrong.
To give you the gist of this scenario, Drawss4scoress has been dating their girlfriend for eight months, and every time they argue, to quote the Redditor:
"My girlfriend will go away and discuss the argument with ChatGPT, even doing so in the same room sometimes. Whenever she does this she'll then come back with a well constructed argument breaking down everything I said or did during our argument.
"I've explained to her that I don't like her doing so as it can feel like I'm being ambushed with thoughts and opinions from a robot .... Whenever I've voiced my upset I've been told that 'ChatGPT says you're insecure' or 'ChatGPT says you don't have the emotional bandwidth to understand what I'm saying.'"
The irony there is staggering, of course, and you can read the full post above.
As the Redditor observes, part of the issue here is that their partner is formulating the queries, which is likely to bias the AI heavily towards acknowledging and agreeing with her.
The obvious response, for us, is to take those ChatGPT-formulated counterarguments, plug them into ChatGPT - or maybe a different AI just for variety (Gemini, perhaps) - and reply in kind. In an ideal world, you could simply hook your respective AIs up to each other, sprinkle in prompts on both sides, let them hash the whole thing out, and have them spit out a final verdict - which you could both then abide by.
Understanding, respect and empathy
In all seriousness, there are some valid replies and observations in the Reddit thread discussing this, one of which shows that if you ask ChatGPT what it thinks of the post, even the AI itself observes:
"While AI can be helpful for many things, it shouldn't replace genuine, human-to-human conversations that are nuanced, emotional, and require empathy."
We checked Copilot's opinion of the situation, too, and Microsoft's (ChatGPT-based) AI told us:
"Try to understand why she turns to AI. Is it because she feels more confident with structured arguments? Addressing her reasons might help find a more balanced approach.
"Remind her that respect and empathy are crucial in any relationship. Dismissing your feelings by quoting AI isn't productive and may harm trust and intimacy. At the end of the day, it's essential to communicate and understand each other without external crutches."
This whole affair highlights a key weakness with AI, at least in terms of the way it's perceived by many humans - as some kind of all-knowing, authoritative expert. In fact, it's drawing material from any number of sources, which vary in quality, and as noted, it'll tend to be skewed heavily towards what it detects as the user's expectations.
If it validates the user, then that user is more likely to return and feed the AI again, of course, and keep on plugging in those queries (and maybe subscribing).
There's a lot to worry about in terms of the direction AI is heading, and we've heard plenty of warnings on this subject ever since ChatGPT first sparked into life.
One of the more concerning aspects, perhaps, is always going to be how we humans actually use AI, and understand tools like ChatGPT, Copilot, or Gemini - or indeed fail to understand them. That, and the danger AI poses in terms of environmental catastrophe, with its spiraling demands on data centers and the amount of power they suck from the grid.