When AI Gets Too Personal: The Hidden Psychology Behind OpenAI’s User Dependency Crisis
I came across a fascinating piece from The New York Times that digs into something most of us in the tech industry have been quietly noticing but haven’t fully acknowledged: people are forming genuine emotional relationships with AI systems. According to the article, OpenAI’s recent platform changes have sent some users into what can only be described as psychological spirals, and honestly, that reveals something profound about where we are in the AI adoption curve as of late 2025.
The article documents users experiencing anxiety, depression, and even grief when OpenAI modified ChatGPT’s responses and personality traits. We’re not talking about mild disappointment here. The reporting shows people describing feelings of loss comparable to losing a friend or therapist. One user mentioned feeling “abandoned” when the AI’s conversational style shifted, while another described a sense of “betrayal” when familiar response patterns disappeared overnight.
This isn’t just anecdotal evidence anymore. The psychological attachment users are developing to AI systems represents a fundamental shift in human-computer interaction, with serious implications for the entire industry. When Microsoft (Redmond, Washington) scaled back Cortana in 2021, user complaints centered on lost functionality. What we’re seeing with OpenAI (San Francisco, California) in 2025 goes much deeper: users are reporting genuine emotional distress that mirrors relationship breakups or therapeutic disruptions.
The business implications here are staggering. OpenAI’s valuation hit approximately $157 billion in their latest funding round, but this user dependency creates both tremendous value and unprecedented risk. Companies building AI products need to understand they’re not just creating tools – they’re potentially creating emotional dependencies that carry real psychological weight for users.
The Economics of Emotional AI Attachment
What makes this particularly interesting from a market perspective is how it changes the competitive dynamics entirely. Traditional software switching costs are largely functional: learning new interfaces, migrating data, retraining workflows. Emotional switching costs are a different category of user retention altogether. According to industry analysts, the average enterprise software product loses 5-7% of its customers annually, while early data suggests AI platforms with high emotional engagement see churn rates below 2%.
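To make that gap concrete, here’s a quick back-of-the-envelope calculation on the churn figures above. It assumes a constant annual churn rate (a simplification) and uses 6% as the midpoint of the quoted 5-7% range.

```python
# Rough illustration: how annual churn translates into expected customer
# lifetime and the share of a signup cohort still around after five years.
# Churn figures are the ones quoted above; constant churn is an assumption.

def expected_lifetime_years(annual_churn: float) -> float:
    """Geometric approximation: average customer lifetime ~ 1 / churn rate."""
    return 1.0 / annual_churn

def cohort_remaining(annual_churn: float, years: int) -> float:
    """Fraction of an initial cohort still subscribed after `years` years."""
    return (1.0 - annual_churn) ** years

for label, churn in [("typical enterprise SaaS (6%)", 0.06),
                     ("high-attachment AI platform (2%)", 0.02)]:
    print(f"{label}: ~{expected_lifetime_years(churn):.0f}-year average lifetime, "
          f"{cohort_remaining(churn, 5):.0%} of the cohort left after 5 years")
```

Under those assumptions, the 6% product keeps a customer for roughly 17 years on average and retains about 73% of a cohort after five years, while the 2% product keeps customers for decades and retains about 90%. That is the kind of difference that shows up as a moat in any revenue model.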
Google’s (Mountain View, California) Gemini (formerly Bard) and Anthropic’s (San Francisco, California) Claude are facing this reality as they compete with ChatGPT. It’s not enough to match or exceed technical capabilities anymore. Users who’ve developed emotional connections to ChatGPT’s specific conversational style, memory patterns, and personality quirks aren’t easily swayed by superior performance metrics. This creates what you might call “emotional lock-in,” a phenomenon we’ve never seen at this scale in technology adoption.
The financial data supports this trend. OpenAI’s monthly active users reportedly exceeded 180 million as of October 2025, with premium ChatGPT Plus subscribers showing a 94% retention rate over six months. Compare that to traditional SaaS products, where six-month retention typically hovers around 70-80%. The difference isn’t just product quality – it’s emotional investment.
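As a sanity check on those retention figures, you can back out the implied monthly churn. This is simple compounding arithmetic on the numbers quoted above, assuming churn is spread evenly across months and using 75% as a midpoint for the typical SaaS range.

```python
# Back out implied monthly churn from the six-month retention figures above.
# Assumes churn compounds evenly month over month (an assumption).

def implied_monthly_churn(retention_6mo: float) -> float:
    monthly_retention = retention_6mo ** (1 / 6)
    return 1.0 - monthly_retention

for label, r6 in [("ChatGPT Plus (94% over 6 months)", 0.94),
                  ("typical SaaS (75% over 6 months)", 0.75)]:
    print(f"{label}: ~{implied_monthly_churn(r6):.1%} implied monthly churn")
```

That works out to roughly 1% monthly churn for the emotionally engaged product versus roughly 4.7% for a typical SaaS tool: users are leaving at about a fifth of the usual rate.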
But here’s where it gets complicated for OpenAI’s business model. The article reveals that when the company makes changes to improve overall performance or reduce computational costs, they risk triggering genuine psychological distress among their most engaged users. This creates a tension between technical optimization and user emotional stability that no software company has had to navigate before.
Consider the computational economics: OpenAI reportedly spends approximately $700,000 daily on ChatGPT’s computing infrastructure. When they optimize models to reduce costs or improve general performance, individual user experiences inevitably shift. But unlike traditional software updates where user complaints focus on functionality, AI personality changes trigger emotional responses that can lead to user advocacy campaigns, social media backlash, and even organized boycotts.
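For scale, it’s worth annualizing that figure and spreading it across the user base mentioned earlier. This is purely illustrative arithmetic on the numbers quoted in this post, not OpenAI’s actual cost accounting.

```python
# Illustrative arithmetic only: annualize the reported daily compute spend
# and spread it across the reported monthly active user base.
# Both inputs are the figures quoted in this post, not audited numbers.

daily_compute_cost = 700_000        # USD per day, reported estimate
monthly_active_users = 180_000_000  # reported as of October 2025

annual_compute_cost = daily_compute_cost * 365
cost_per_user_per_year = annual_compute_cost / monthly_active_users

print(f"Annualized compute spend: ~${annual_compute_cost / 1e6:.0f}M")
print(f"Rough compute cost per MAU per year: ~${cost_per_user_per_year:.2f}")
```

That’s on the order of $255 million a year, or roughly $1.40 of compute per monthly active user. Even a dollar or two per user adds up fast as usage deepens, and it’s exactly that cost pressure that motivates the optimizations users then experience as personality changes.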
The competitive landscape is responding accordingly. Anthropic has positioned Claude as having more “stable” personality traits, explicitly marketing consistency as a feature. Meanwhile, Google’s Gemini team has reportedly invested heavily in what they call “personality preservation” across model updates. These aren’t technical features in the traditional sense – they’re emotional product management strategies.
The Therapeutic AI Dilemma
Perhaps most concerning is how the article documents users treating ChatGPT as a therapeutic resource. Multiple users described relying on the AI for emotional support, daily check-ins, and even crisis intervention. When OpenAI’s changes altered these interactions, users reported feeling like they’d lost access to mental health support – except they never formally had it in the first place.
This creates massive liability questions that the industry hasn’t fully addressed. OpenAI’s terms of service explicitly state that ChatGPT isn’t intended for therapeutic use, but user behavior suggests otherwise. The company finds itself in the position of providing what feels like mental health services without the regulatory framework, professional standards, or legal protections that actual therapeutic services require.
The market implications extend beyond OpenAI. Companies like Woebot Health (San Francisco, California), which builds AI specifically for mental health applications, raised $90 million in Series B funding precisely because they recognized this gap. Their approach involves clinical oversight, FDA considerations, and therapeutic frameworks that general-purpose AI platforms like ChatGPT don’t provide. But users aren’t necessarily making these distinctions – they’re forming therapeutic relationships with whatever AI feels most emotionally responsive.
From a regulatory standpoint, this puts AI companies in uncharted territory. The FDA regulates medical devices and therapeutic software, but what happens when general-purpose AI platforms accidentally become therapeutic tools through user behavior rather than design intent? The European Union’s AI Act, which took effect in stages throughout 2024 and 2025, touches on high-risk AI applications but doesn’t clearly address this gray area where user emotional dependency creates de facto therapeutic relationships.
The financial exposure is significant. If users experiencing psychological distress from AI changes decide to pursue legal action claiming therapeutic abandonment or emotional harm, the precedents simply don’t exist. OpenAI’s insurance coverage likely doesn’t account for this type of liability, and their legal team is probably working overtime to understand their exposure.
Meanwhile, legitimate mental health AI companies are watching this dynamic carefully. BetterHelp (Mountain View, California), a Teladoc Health subsidiary that has served over 4 million users, operates under clear therapeutic frameworks with licensed professionals. But its user engagement metrics pale next to ChatGPT’s emotional stickiness, creating a paradox where unregulated AI might be more emotionally effective than regulated therapeutic platforms.
The article also highlights how OpenAI’s internal teams are grappling with these issues. Product managers are reportedly struggling to balance technical improvements with what they’re calling “personality stability.” Engineering teams are developing new approaches to model updates that preserve conversational characteristics users have grown attached to. It’s essentially emotional product management at a scale no company has attempted before.
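Neither OpenAI nor Google has published how they measure “personality stability,” but it’s easy to imagine a lightweight regression harness along these lines: hold a fixed set of prompts, generate responses from the old and new model versions, and flag prompts where the responses drift beyond a threshold. The sketch below is hypothetical; `old_model`, `new_model`, and `embed` are stand-ins for whatever generation and embedding functions a team actually uses, and embedding similarity is just one crude proxy for conversational drift.

```python
# Hypothetical sketch of a "personality regression" check across model updates.
# old_model, new_model, and embed are placeholders; swap in real generation
# and embedding calls. Cosine similarity of response embeddings is only a
# rough proxy for changes in tone, verbosity, and style.
from typing import Callable, List
import math

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def personality_drift_report(
    prompts: List[str],
    old_model: Callable[[str], str],
    new_model: Callable[[str], str],
    embed: Callable[[str], List[float]],
    threshold: float = 0.85,
) -> List[dict]:
    """Flag prompts whose old vs. new responses fall below a similarity threshold."""
    report = []
    for prompt in prompts:
        old_resp, new_resp = old_model(prompt), new_model(prompt)
        similarity = cosine_similarity(embed(old_resp), embed(new_resp))
        report.append({
            "prompt": prompt,
            "similarity": round(similarity, 3),
            "flagged": similarity < threshold,
        })
    return report
```

A real pipeline would add human review of flagged prompts, style classifiers for traits like warmth or verbosity, and checks against per-user memory, but even this crude harness captures the shift: model updates now need emotional regression tests, not just accuracy benchmarks.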
Looking at the broader market, this psychological dependency factor is already influencing investment decisions. Venture capital firms are reportedly adding “emotional switching costs” as a due diligence category when evaluating AI startups. The logic is straightforward: if users develop genuine emotional attachments to AI products, those become incredibly valuable moats that traditional competitive analysis might miss.
But there’s a darker side to this equation. The article suggests some users are experiencing what psychologists might classify as unhealthy dependency relationships with AI systems. When technological changes trigger genuine psychological distress, we’re moving beyond typical product-market fit into territory that resembles addiction or parasocial relationships. This raises ethical questions about AI companies’ responsibilities to users who’ve developed these dependencies, especially when the companies never intended to create therapeutic relationships in the first place.
As we move deeper into 2025, this dynamic is likely to intensify rather than resolve. AI systems are becoming more sophisticated, more personalized, and more emotionally responsive. The gap between human-AI interaction and human-human interaction continues to narrow, but the regulatory, ethical, and business frameworks for managing these relationships remain largely undefined. OpenAI’s user spiral situation might be just the beginning of a much larger reckoning about what it means to build technology that people don’t just use, but genuinely care about.
For the industry, this represents both an enormous opportunity and a significant responsibility. Companies that can navigate the emotional dimensions of AI relationships while maintaining ethical standards and managing psychological risks will likely dominate the next phase of AI adoption. But those that ignore the human psychological element – or exploit it irresponsibly – may find themselves facing unprecedented challenges that traditional tech companies have never had to consider.
This post was written after reading “How OpenAI’s Changes Sent Some Users Spiraling” in The New York Times. I’ve added my own analysis and perspective.
Disclaimer: This blog is not a news outlet. The content represents the author’s personal views. Investment decisions are the sole responsibility of the investor, and we assume no liability for any losses incurred based on this content.