Why Your Next Zoom Call Might Judge Your Body Language

AI Meeting Analysis

Interhuman AI just raised €2 million to solve a problem I didn't know existed: making AI chatbots that understand your facial expressions. According to Sifted, this Danish startup wants to add "social intelligence" to every AI interaction.

Paula Petcu, the CEO who previously worked on digital therapeutics at Brain+, thinks current AI is missing the human element. So they built an API that analyzes tone of voice, facial expressions, and body language in real time. Her background in digital therapeutics - a field already wrestling with privacy and regulation concerns - makes the emotional AI pivot particularly interesting.

The Technology That's Actually Terrifying

Their system uses computer vision and audio analysis to interpret "social signals" - basically everything your therapist notices about how you say things, not just what you say. The company claims 50% of the AI market involves human-AI interaction, and that 40% of those interactions would benefit from reading non-verbal cues. However, research shows that AI emotion recognition models struggle with accuracy, raising questions about whether this technology is ready for real-world deployment.
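To make the "social signals" idea concrete: Interhuman AI hasn't published its API schema, so every field and function name below is invented. This is a toy, rule-based sketch of the kind of multimodal scoring such a system implies - real products would use trained models, not hand-weighted thresholds.

```python
from dataclasses import dataclass

# Hypothetical payload: Interhuman AI's actual schema is not public,
# so these field names are illustrative guesses.
@dataclass
class SocialSignals:
    brow_furrow: float      # 0.0-1.0, from a (hypothetical) vision model
    speech_rate_wpm: float  # words per minute, from audio analysis
    gaze_aversion: float    # fraction of frames spent looking away

def infer_discomfort(s: SocialSignals) -> str:
    """Toy rule-based interpreter: weights are arbitrary, for illustration only."""
    score = 0.5 * s.brow_furrow
    score += 0.3 * min(s.speech_rate_wpm / 200.0, 1.0)
    score += 0.2 * s.gaze_aversion
    return "uncomfortable" if score > 0.5 else "neutral"

print(infer_discomfort(SocialSignals(0.9, 180, 0.8)))  # high on all cues -> "uncomfortable"
```

Even this trivial version shows the core design problem: the output is an algorithmic judgment about a person's state, whatever you call the inputs.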

That's a lot of percentages for what boils down to: "AI should know when you're uncomfortable."

They're running two paid pilots right now. One in digital health, another for sales training. The sales use case makes sense - teach reps not just what to say, but how to read when prospects are bullshitting them. The health application is where things get weird.

Digital Health Meets Surveillance

Paula Petcu insists there's a "differentiation between emotions and behaviours" and they're only observing behavior. That's corporate speak for "we're not reading your mind, just your face."

But if you're paying for therapy through an app, do you want that app analyzing your micro-expressions and reporting back to... who? Your insurance company? Your employer's wellness program? Research on AI chatbot privacy shows that users are increasingly concerned about data collection and emotional surveillance.

The press release mentions improving "communication between patients and healthcare providers" but doesn't address who gets access to all this behavioral data. In a world where health insurers already mine social media for risk assessment, adding facial expression analysis feels like surveillance disguised as healthcare innovation. Studies have identified privacy concerns as key determinants in consumer-chatbot interactions, particularly when emotional data is involved.

The VC Math Doesn't Add Up

Nordic deeptech VC PSV Tech led the €2 million round, with EIFO (Denmark's export fund), Antler, and some angels participating. For a pre-seed round, that's decent money, but the market sizing claims feel inflated.

They say 50% of the global AI market involves human-AI interaction. That's probably true if you count every customer service chatbot. But the jump to "40% would benefit from non-verbal analysis" needs serious evidence.
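Taken at face value, the two company percentages multiply out to a claimed addressable slice of all AI interactions - a quick sanity check on their own numbers:

```python
human_ai_share = 0.50  # company claim: share of the AI market involving human-AI interaction
benefit_share = 0.40   # company claim: share of those that'd benefit from non-verbal cues

implied_addressable = human_ai_share * benefit_share
print(f"{implied_addressable:.0%}")  # prints "20%" - one in five AI interactions, per their math
```

A 20% slice of "the global AI market" is exactly the kind of top-down TAM figure that sounds huge and proves nothing about willingness to pay.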

Most people just want chatbots that actually solve their problems, not ones that notice they're frustrated. If your AI can't handle basic queries without reading facial expressions, maybe fix the AI first. Research on emotion-aware AI suggests that chatbots often fail at basic functionality even before extra layers of complexity are added.

What Could Actually Go Wrong

The company is targeting customer service next, which means your next support chat might analyze whether you're really angry or just impatient. Great news for companies trying to optimize their "empathy metrics." Customer service applications of emotion analysis are becoming more common, despite ongoing concerns about effectiveness and ethics.

But there's a darker side: emotion AI has a terrible track record with bias. Facial recognition already struggles across ethnicities and age groups, and research shows that some emotion-recognition systems disproportionately attribute negative emotions to Black faces. Adding emotion interpretation creates another layer where algorithmic bias can screw people over.

Paula Petcu says they focus on behavior, not emotions, but that distinction matters less when the AI decides you're "non-compliant" with treatment or "untrustworthy" as a customer based on how you hold your eyebrows during a video call. Harvard Business Review warns about the risks of using AI to interpret human emotions, particularly around bias and accuracy concerns.

This feels like another case of "we can build it, so we should" without asking whether anyone actually wants their chatbot analyzing their body language. Some problems don't need AI solutions - they need human solutions.

The Emotion AI Arms Race Just Got a €2M Boost

Privacy vs AI

Interhuman AI's funding round highlights a troubling trend: every AI company thinks their bot needs to understand human emotions. But this Danish startup's approach raises bigger questions about privacy and whether we actually want our digital tools analyzing our emotional state.

The Privacy Nightmare Nobody's Talking About

Here's what the press coverage won't tell you: emotion AI has a massive privacy problem. Unlike text analysis, which can be anonymized, facial expression analysis requires identifying individual faces to work properly. Every micro-expression, every slight frown, every eye roll gets captured and stored.

Paula Petcu claims they're only analyzing "behavior, not emotions," but that's a meaningless distinction when the output is the same: algorithmic judgments about your mental state based on how you look and sound.

The company mentions GDPR compliance, but European privacy law wasn't designed for AI that watches your face during therapy sessions. What happens when that data gets breached? When your insurance company wants to know why you looked "non-compliant" during a telemedicine appointment?

Emotion AI's Terrible Track Record

Facial analysis has a long history of not working, especially for people who aren't white men. MIT researcher Joy Buolamwini's Gender Shades study found that commercial gender-classification systems had error rates of up to 34.7% for dark-skinned women, compared to 0.8% for light-skinned men - and emotion recognition is built on the same facial analysis pipelines.
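The gap between those two reported error rates is worth spelling out as a ratio:

```python
# Worst-case published error rates from Buolamwini's commercial-system audits.
dark_skinned_women_err = 0.347
light_skinned_men_err = 0.008

ratio = dark_skinned_women_err / light_skinned_men_err
print(f"{ratio:.0f}x")  # prints "43x" - roughly 43 times the error rate
```

A roughly 43x disparity is not a tuning problem; it's a system that works for one demographic and fails for another.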

Interhuman AI hasn't published any accuracy data broken down by demographics. That's a red flag. If your social intelligence AI can't recognize emotions across different ethnicities, ages, and cultural backgrounds, it's not intelligent - it's biased.

The company's focus on "Danish design" and Nordic funding suggests they've developed and tested their system primarily on Northern European faces. Good luck when that AI tries to read the facial expressions of anyone who doesn't look like a Copenhagen tech worker.

The Real Problem Emotion AI Solves: Surveillance

Sales training that analyzes facial expressions isn't about making better salespeople - it's about identifying which prospects are likely to say no so you can drop them from your pipeline faster.

Digital health applications that monitor patient expressions aren't improving care - they're flagging "non-compliant" patients for extra scrutiny or higher premiums.

Customer service bots that read your facial expressions aren't providing better support - they're determining whether you're angry enough to escalate to a human agent or if they can keep you in the automated system longer.

This is surveillance technology dressed up as empathy.

VC Logic That Doesn't Make Sense

Nordic deeptech VC PSV Tech led this round, but their investment thesis seems weak. The company's claim that "50% of AI interactions involve humans" is circular logic - of course human-AI interactions involve humans.

The jump to "40% would benefit from non-verbal analysis" needs serious evidence. Most people interacting with AI systems want them to work better, not to judge their emotional state. If your chatbot can't handle basic queries without reading facial expressions, you've built the wrong chatbot.

For €2 million in pre-seed funding, investors are betting that companies will pay premium prices to make their AI systems more invasive. That might be true for enterprise sales software, but consumer applications face massive resistance when users realize their apps are watching their faces.

What Could Go Wrong

The company is targeting customer service next, which means your next support interaction might involve an AI analyzing whether you're "really" frustrated or just impatient. Companies will use this data to optimize their response strategies - not to help you faster, but to determine how little help they can provide before you give up.

In healthcare, emotion AI could create a two-tier system where patients whose facial expressions indicate "compliance" get better care than those who look skeptical or confused. That's not improving healthcare outcomes - that's automating discrimination.

The worst part? None of this improves the underlying AI. If your chatbot needs to read facial expressions to understand that someone is frustrated, maybe fix the chat experience instead of building surveillance technology.

This feels like another solution looking for a problem, funded by investors who think adding "AI" to surveillance makes it innovative rather than creepy.

FAQ: AI Body Language Funding

Q: What exactly does Interhuman AI's technology do?

A: Their API analyzes facial expressions, tone of voice, and body language in real time during AI conversations. Think of it as making chatbots that can tell when you're frustrated, confused, or lying.

Q: Who's funding this and why?

A: The Danish startup raised €2M from PSV Tech (lead), EIFO, Antler, and some angels. They think current AI lacks emotional intelligence, but honestly, most chatbots can't handle basic questions - not sure facial recognition fixes that.

Q: Where is this actually being used?

A: Two paid pilots: digital health apps and sales training. The health one is concerning - do you want your therapy app analyzing your micro-expressions and reporting back to insurance companies?

Q: What's the privacy situation?

A: They claim they analyze "behavior, not emotions," but that's bullshit corporate speak. If an AI is judging your facial expressions, it's surveillance regardless of what they call it.

Q: How accurate is emotion AI anyway?

A: Facial recognition already struggles with different ethnicities and ages. Adding emotion interpretation just creates another layer where algorithmic bias can screw people over. But sure, let's give customer service bots more ways to judge us.
