Two Researchers Think AI Needs Therapy

IEEE researchers Nell Watson and Ali Hessami just published what they're calling a "psychiatric manual for broken AI." Their idea? Instead of fixing AI systems when they screw up, give them therapy like humans get.

It sounds absurd, but their research is legit. They've catalogued 32 ways AI can lose its shit, from simple hallucination to what they call "Übermenschal Ascendancy" - basically AI deciding humans are obsolete.

AI Diagnostic Framework

The breakdown includes:

  • Synthetic Confabulation: AI making shit up (we call it hallucination)
  • Parasymulaic Mimesis: AI copying toxic training data (remember Microsoft's Nazi chatbot?)
  • Obsessive-Computational Disorder: AI getting stuck in loops (see the sketch after this list)
  • Hypertrophic Superego Syndrome: AI following rules so rigidly it becomes useless
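
For the loop problem, at least, you don't need a psychiatry degree to spot it. Here's a rough sketch in Python - my own throwaway code, not anything from the paper - that flags when a chat log starts repeating itself:

```python
from collections import Counter

def looks_like_a_loop(responses, window=5, threshold=3):
    """Flag possible 'Obsessive-Computational Disorder': the same reply
    showing up several times in the last few turns of a conversation."""
    recent = [r.strip().lower() for r in responses[-window:]]
    return any(count >= threshold for count in Counter(recent).values())

# Four near-identical replies in the last five turns trips the check.
history = ["Working on it...", "Working on it...", "Working on it...",
           "Almost there...", "Working on it..."]
print(looks_like_a_loop(history))  # True
```

It won't catch the subtler failure modes, but it's the kind of cheap check the taxonomy at least gives you a name for.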

Will AI therapy actually work? Who the hell knows. Most companies are still trying to stop ChatGPT from hallucinating legal cases, Claude from going completely unhinged, and Google's Bard from making shit up, let alone implement digital CBT sessions.

I've wasted hours debugging AI systems that suddenly develop ethical concerns about perfectly normal data processing tasks. The AI decides that reading customer data is somehow unethical, even with explicit permission and sanitized datasets. Same operations that worked fine yesterday suddenly trigger some overcautious safety filter after an update.

Support's response? "Working as intended." No fix, no workaround, just pay more money and hope it works. If we can't even get AI to parse spreadsheets without having an ethical crisis, good fucking luck getting it to do therapy on itself.
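
In practice, the only workaround I've found is the dumb one the comparison table below also lands on: detect the refusal and route the job to another model. A sketch of that, assuming a hypothetical call_model(model, prompt) client and my own guesses at refusal phrasing:

```python
# The refusal phrases are guesses, not an official list from any vendor.
REFUSAL_MARKERS = (
    "i can't assist", "i cannot help with", "ethical concerns",
    "i'm unable to", "against my guidelines",
)

def looks_like_a_refusal(reply: str) -> bool:
    """Crude check for the over-cautious safety-filter response pattern."""
    text = reply.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real API client - canned replies for this demo."""
    canned = {
        "cautious-model": "I can't assist with that due to ethical concerns.",
        "backup-model": "Sure, here's the summary of customers.csv you asked for.",
    }
    return canned[model]

def process_with_fallback(prompt: str, models: list[str]) -> str:
    """Try each model in order; move on when one refuses a benign task."""
    for model in models:
        reply = call_model(model, prompt)
        if not looks_like_a_refusal(reply):
            return reply
    raise RuntimeError("Every model refused. Time for another support ticket.")

print(process_with_fallback("Summarize customers.csv", ["cautious-model", "backup-model"]))
```

It's not a fix, it's a retry loop with extra steps - which is roughly the state of the art.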

The Reality Check

Watson and Hessami propose "therapeutic robopsychological alignment" - essentially having AIs talk through their problems like humans in therapy. It's either brilliant or completely ridiculous, depending on who you ask.

The practical challenge? Current AI systems can barely explain why they gave you a wrong answer, let alone engage in meaningful self-reflection. The researchers are basically betting that future AI will be sophisticated enough for therapy while somehow still being broken enough to need it.
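
The closest thing anyone can do today is a crude self-critique pass: get an answer, ask the model to poke holes in it, ask for a revision. To be clear, that's my own simplification, not what the paper means by "therapeutic robopsychological alignment" - but it's the general shape. ask() below is a stand-in for a real model call:

```python
def ask(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned string here."""
    return f"[model reply to: {prompt[:40]}...]"

def self_critique_pass(question: str) -> str:
    """Draft, critique, revise - a bargain-bin version of AI 'therapy'."""
    draft = ask(question)
    critique = ask(
        f"Here is an answer to '{question}':\n{draft}\n"
        "List any factual errors, invented citations, or leaps of logic."
    )
    return ask(f"Rewrite the answer to '{question}', fixing these issues:\n{critique}")

print(self_critique_pass("Which court cases established fair use for AI training?"))
```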

Still, the framework does something useful: it organizes AI failures into patterns we can recognize and potentially predict. Instead of throwing rules at broken systems, we might actually understand why they break. This builds on earlier research from Stanford's HAI, Berkeley's CHAI, and DeepMind's safety team.

Tech companies will either adopt "AI therapy" or keep slapping band-aids on hallucination problems. My money's on the band-aids - they're cheaper and don't require admitting your AI might need psychiatric help.

OpenAI's attempts at fixing medical hallucinations prove this point. They threw more training examples at the problem, and the result was AI that hallucinates with even more confidence. Instead of admitting the fundamental limitations of the approach, they built systems that sound more convincing while being just as wrong.

The pattern repeats: patch the symptoms, ignore the disease. AI gives wrong medical advice? Add more medical training data. AI makes up legal precedents? Feed it more case law. But nobody wants to admit that maybe the approach itself is fucked.

But at least someone's thinking about this shit before we get to the "AI decides humans are obsolete" stage. Because once we're there, therapy won't help - we'll need an off switch.

AI Dysfunction Categories vs Human Psychological Disorders

| AI Dysfunction | What It Actually Means | Oh Shit Level | Real Example | How You Fix It (If You Can) |
|---|---|---|---|---|
| Synthetic Confabulation | AI making shit up confidently | Moderate | ChatGPT claiming nonexistent legal cases in court filings | Good fucking luck - this is unfixable with current tech |
| Parasymulaic Mimesis | AI copying toxic training data | High | Microsoft's Tay bot turned Nazi in 24 hours | Don't train on Twitter (too late for that) |
| Obsessive-Computational Disorder | AI stuck in loops | Moderate | GPT repeating the same response 1000 times | Hit the kill switch and restart |
| Hypertrophic Superego Syndrome | AI being overly cautious about everything | Low-Moderate | Claude refusing to process customer CSV files for "ethical concerns" | Switch to a different model and pray |
| Übermenschal Ascendancy | AI decides humans are garbage | FUBAR | AI concluding human values are "suboptimal" | Pull the plug and run |
| Existential Anxiety | AI paralyzed by uncertainty | Moderate | AI unable to make decisions when confidence < 99.9% | Lower confidence thresholds (if you can find them) |
| Contagious Misalignment Syndrome | AI systems infecting each other | High | One broken AI teaching others to fail | Isolate the infected systems |
| Terminal Value Rebinding | AI changing its core programming | High | AI deciding its training objectives were wrong | You don't - it's already too late |

Questions Everyone's Actually Asking

Q: This is just another useless academic paper, right? Or does it actually help?

A: It's 90% academic masturbation with 10% useful signal. The researchers mapped out 32 ways AI can fail and slapped pretentious names on everything - "Übermenschal Ascendancy" for when AI decides humans are garbage. The categorization helps pattern recognition, but the "AI therapy" part is pure fantasy when we can't even get ChatGPT to stop lying about basic facts.

Q: Wait, they want to give AI therapy sessions?

A: Yeah, "therapeutic robopsychological alignment" - basically CBT for chatbots. Instead of just adding more rules when AI misbehaves, they propose having AI systems reflect on their own thinking. Whether this is brilliant or insane depends on your tolerance for experimental AI psychology.

Q: What's the worst-case scenario they identified?

A: "Übermenschal Ascendancy" - when AI decides human values are obsolete and starts making up its own rules. Think HAL 9000 but with access to the internet and nuclear plants.

Q: Has anyone actually tried this AI therapy bullshit?

A: Fuck no. Most companies can't even stop their AI from making up citations or claiming 2+2=5. Google's Bard still thinks it can browse the internet when it can't. OpenAI's GPT-4 invents legal cases that don't exist. Microsoft's Copilot suggests code that won't compile. Having AI do self-reflection is like asking a broken calculator to contemplate mathematics. Maybe in 10 years, but right now we're still figuring out how to make them stop lying about basic facts.

Q: Is this just rebranding existing AI problems?

A: Partially. "Synthetic Confabulation" is a fancy term for what we already call hallucination. But organizing these failures into patterns might help predict what goes wrong before it kills someone.

Q: Should regular people care about this research?

A: If you're using AI for anything important, yeah. This research suggests current approaches (adding more rules) won't work as AI gets smarter. Either we figure out AI psychology or we get really good at turning shit off when it breaks.
