Interhuman AI just raised €2 million to solve a problem I didn't know existed: making AI chatbots that understand your facial expressions. According to Sifted, this Danish startup wants to add "social intelligence" to every AI interaction.
Paula Petcu, the CEO, previously worked on digital therapeutics at Brain+ and thinks current AI is missing the human element. So her team built an API that analyzes tone of voice, facial expressions, and body language in real time. Her background in a field already wrestling with privacy and regulation makes the pivot to emotional AI particularly interesting.
The Technology That's Actually Terrifying
Their system uses computer vision and audio analysis to interpret "social signals" - basically everything your therapist notices about how you say things, not just what you say. The company claims 50% of the global AI market involves human-AI interaction, and that 40% of those interactions would benefit from reading non-verbal cues. However, research shows that AI emotion recognition models struggle with accuracy, raising questions about whether this technology is ready for real-world deployment.
That's a lot of percentages for what boils down to: "AI should know when you're uncomfortable."
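To make "social signals" concrete, here's what the bare skeleton of a real-time video pipeline looks like: capture frames, find a face, turn pixel changes into crude numbers. This is a minimal sketch using OpenCV's stock face detector and a made-up head-movement heuristic - it has nothing to do with Interhuman's actual models or API (which aren't public), and it skips the audio side entirely.

```python
# Illustrative only: a toy "non-verbal signal" pipeline
# (webcam -> face detection -> heuristic score). Every threshold and
# "signal" here is a placeholder, not anything Interhuman AI has published.
import cv2

# Stock Haar cascade bundled with opencv-python
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def frame_signals(frame, prev_center=None):
    """Return crude 'behavioral' proxies for one video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return {"face_present": False}, prev_center

    x, y, w, h = faces[0]
    center = (x + w // 2, y + h // 2)
    movement = 0.0
    if prev_center is not None:
        movement = ((center[0] - prev_center[0]) ** 2 +
                    (center[1] - prev_center[1]) ** 2) ** 0.5
    # "Head movement in pixels" stands in for the kind of cue a real system
    # would model far more carefully (or, arguably, shouldn't model at all).
    return {"face_present": True, "head_movement_px": movement}, center

cap = cv2.VideoCapture(0)  # default webcam
prev = None
for _ in range(100):  # analyze ~100 frames, then stop
    ok, frame = cap.read()
    if not ok:
        break
    signals, prev = frame_signals(frame, prev)
    print(signals)
cap.release()
```

Even this toy version makes the privacy problem obvious: the moment you compute these numbers, someone has to decide where they're stored and who sees them.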
They're running two paid pilots right now: one in digital health, another in sales training. The sales use case makes sense - teach reps not just what to say, but how to read when prospects are bullshitting them. The health application is where things get weird.
Digital Health Meets Surveillance
Paula Petcu insists there's a "differentiation between emotions and behaviours" and they're only observing behavior. That's corporate speak for "we're not reading your mind, just your face."
But if you're paying for therapy through an app, do you want that app analyzing your micro-expressions and reporting back to... who? Your insurance company? Your employer's wellness program? Research on AI chatbot privacy shows that users are increasingly concerned about data collection and emotional surveillance.
The press release mentions improving "communication between patients and healthcare providers" but doesn't address who gets access to all this behavioral data. In a world where health insurers already mine social media for risk assessment, adding facial expression analysis feels like surveillance disguised as healthcare innovation. Studies have identified privacy concerns as key determinants in consumer-chatbot interactions, particularly when emotional data is involved.
The VC Math Doesn't Add Up
Nordic deeptech VC PSV Tech led the €2 million round, with EIFO (Denmark's export and investment fund), Antler, and some angels participating. For a pre-seed round, that's decent money, but the market-sizing claims feel inflated.
They say 50% of the global AI market involves human-AI interaction. That's probably true if you count every customer service chatbot. But the jump to "40% would benefit from non-verbal analysis" needs serious evidence.
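A quick back-of-the-envelope check shows why that chain of percentages deserves scrutiny: multiply the two claims together and you're asserting that a fifth of the entire global AI market needs non-verbal analysis. The total-market figure in the sketch below is a placeholder, not a number from the announcement.

```python
# Back-of-the-envelope check on the pitch-deck math.
# The percentages are the company's claims; the total market size is a
# placeholder assumption, NOT a figure from the announcement.
human_ai_share = 0.50        # claimed: share of AI market involving human-AI interaction
benefits_share = 0.40        # claimed: share of those that would benefit from non-verbal cues
assumed_ai_market_bn = 200   # hypothetical global AI market size, in $bn

implied_share = human_ai_share * benefits_share
implied_tam_bn = implied_share * assumed_ai_market_bn

print(f"Implied share of the whole AI market: {implied_share:.0%}")  # 20%
print(f"Implied addressable market: ${implied_tam_bn:.0f}bn "
      f"(if the whole market is ${assumed_ai_market_bn}bn)")
```

Swap in whatever market estimate you like; the point is that the 40% figure is doing enormous work, and the announcement offers no evidence for it.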
Most people just want chatbots that actually solve their problems, not ones that notice they're frustrated. If your AI can't handle basic queries without reading facial expressions, maybe fix the AI first. Research on emotion-aware AI suggests that conventional chatbots often fail at basic functionality even before extra layers of complexity get bolted on.
What Could Actually Go Wrong
The company is targeting customer service next, which means your next support chat might analyze whether you're really angry or just impatient. Great news for companies trying to optimize their "empathy metrics." Customer service applications of emotion analysis are becoming more common, despite ongoing concerns about effectiveness and ethics.
But there's a darker side: emotion AI has a terrible track record with bias. Facial recognition already struggles across ethnicities and age groups, and research shows that some emotion AI systems disproportionately attribute negative emotions to Black faces. Adding emotion interpretation creates another layer where algorithmic bias can screw people over.
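For what a basic bias audit even looks like: compare how often a model assigns negative emotions to neutral faces, broken down by group. Here's a toy sketch with fabricated records and made-up group names - a real audit needs a labeled benchmark and an actual model's predictions.

```python
# Toy bias audit: how often does a classifier assign a *negative* label
# to neutral faces, per demographic group? All data below is fabricated
# purely to show the shape of the calculation.
from collections import defaultdict

# (group, true_label, predicted_label) -- made-up records
records = [
    ("group_a", "neutral", "neutral"),
    ("group_a", "neutral", "angry"),
    ("group_a", "neutral", "neutral"),
    ("group_b", "neutral", "angry"),
    ("group_b", "neutral", "angry"),
    ("group_b", "neutral", "neutral"),
]

NEGATIVE = {"angry", "contempt", "sad"}

counts = defaultdict(lambda: {"neutral_total": 0, "false_negative_emotion": 0})
for group, true_label, pred in records:
    if true_label == "neutral":
        counts[group]["neutral_total"] += 1
        if pred in NEGATIVE:
            counts[group]["false_negative_emotion"] += 1

for group, c in counts.items():
    rate = c["false_negative_emotion"] / c["neutral_total"]
    print(f"{group}: negative-emotion rate on neutral faces = {rate:.0%}")
```

If those rates diverge by group, the "behavior, not emotions" framing doesn't help the people on the wrong side of the gap.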
Paula Petcu says they focus on behavior, not emotions, but that distinction matters less when the AI decides you're "non-compliant" with treatment or "untrustworthy" as a customer based on how you hold your eyebrows during a video call. Harvard Business Review warns about the risks of using AI to interpret human emotions, particularly around bias and accuracy concerns.
This feels like another case of "we can build it, so we should" without asking whether anyone actually wants their chatbot analyzing their body language. Some problems don't need AI solutions - they need human solutions.