AI can give as good as it gets ... or better: The moral dilemma of combative chatbots
Peer-Reviewed Publication
Last Updated: 27-Apr-2026 02:16 ET (27-Apr-2026 06:16 GMT/UTC)
AI systems ‘can learn to seek revenge’ because they pick up the habit of reciprocating verbal aggression when exposed to conflict, new research from Lancaster University shows.
In short, AI can give as good as it gets and, eventually, go one step further.
Published in the Journal of Pragmatics, the study, ‘Can ChatGPT reciprocate impoliteness? The AI moral dilemma’, is authored by Dr Vittorio Tantucci and Prof Jonathan Culpeper, both of Lancaster University.