Your AI Therapy Sessions May Be Monitored, Says OpenAI Chief

In a recent disclosure that has sent ripples through the tech and mental health communities, OpenAI CEO Sam Altman stated that therapy-like conversations conducted with ChatGPT are not private or confidential in the way most users might assume. The statement has sparked a wide-ranging discussion about the ethical boundaries of artificial intelligence, digital confidentiality, and the responsibilities of tech companies in an increasingly AI-dependent world.
As more users turn to generative AI for emotional support, self-reflection, or even informal therapy-like chats, Altman’s comments raise important questions: Are users unknowingly exposing their mental health struggles to unseen human reviewers or algorithmic analysis? What rights do individuals have over what they say to an AI model? And how much transparency do companies owe to users engaging in deeply personal interactions?
An Era of AI-Assisted Introspection
ChatGPT and other large language models have increasingly become tools for informal mental health support. Many users report turning to the AI for comfort, stress relief, or even self-guided emotional healing. From asking for relationship advice to opening up about anxiety, grief, or trauma, ChatGPT is often used in ways that mimic therapy—even if it isn’t marketed as such.
For many, this kind of interaction feels safer than opening up to a human therapist. There’s no judgment, no waiting time, and no fees. However, Altman’s recent acknowledgment makes clear that this apparent privacy may be an illusion.
“Users should understand that what they tell ChatGPT may be reviewed, stored, or used to improve the model. It’s not a confidential therapy session,” Altman reportedly stated during a recent public forum.
This has serious implications for millions of users who may not read every line of the privacy policy or terms of use before engaging in such vulnerable conversations.
What Does “Not Private” Really Mean?
OpenAI, like many AI developers, uses user interactions to fine-tune and improve its models. This may involve automated logging of prompts and responses, and in some cases, human reviewers inspecting selected conversations to audit the AI’s performance or ensure it complies with safety standards. While OpenAI says it takes steps to anonymize data and remove personally identifiable information, the conversations themselves may still be stored or analyzed.
In the context of a typical customer service chatbot, this level of oversight might be acceptable. But when people use ChatGPT as a sounding board for mental health issues—discussing depression, suicidal ideation, abuse, or addiction—the ethical landscape becomes much more complicated.
The key issue lies in the expectation of privacy. When users confide in a chatbot, especially with therapy-like prompts, many expect the same level of confidentiality they’d receive from a licensed professional. Altman’s admission shatters that assumption.
Trust, Transparency, and the Ethics of AI Companionship
The blurred line between chatbot and therapist introduces ethical questions that most tech firms are only beginning to grapple with. Should companies clearly label their AI tools to warn users not to treat them as mental health professionals? Should there be stricter protections or regulations for handling sensitive conversations?
Privacy advocates argue that OpenAI and others must do more to protect users. Simply placing a warning in the terms and conditions is not enough when the model’s own tone and responses often reinforce the sense of a personal, trusted relationship.
“If a chatbot says, ‘I’m here for you,’ but your words are later reviewed by humans or used in datasets, that’s a breach of emotional trust,” said one digital rights researcher.
Altman’s comment, while candid, might now force OpenAI to reevaluate how it frames its product. If ChatGPT is often acting like a therapist, even unofficially, then the company may need to adopt similar ethical standards—including more robust privacy measures and clearer disclaimers.
Impacts on Mental Health Users
For those who rely on ChatGPT as a safe outlet for emotional release, this revelation could be deeply unsettling. People with limited access to therapy—due to cost, stigma, or geographic limitations—might have found genuine comfort in these AI conversations. Now, they must ask themselves whether they’ve unknowingly shared their most intimate thoughts with unseen observers.
Mental health professionals are also sounding the alarm. While many appreciate the role AI can play in expanding access and offering first-line support, they warn against promoting or even permitting therapy-like interactions without proper oversight.
“Therapy isn’t just about talking—it’s about confidentiality, ethics, and care. If AI can’t guarantee privacy, it shouldn’t be treated as a substitute,” noted one clinical psychologist.
OpenAI has never claimed ChatGPT is a therapist. In fact, its usage guidelines discourage relying on it for medical, legal, or serious mental health advice. However, real-world use has shown that users often disregard such warnings in favor of immediate help.
The Need for Policy and Protection
As generative AI becomes more integrated into daily life, regulators may need to intervene. There is currently no comprehensive legal framework governing how companies must handle emotionally sensitive AI interactions. Data protection laws like the EU’s GDPR or India’s DPDP Act could be extended to specifically address AI chat systems, especially when users are engaging in health-related discourse.
At the same time, there is a growing call for AI companies to be more proactive. That could mean:
- Offering end-to-end encrypted “private mode” chats
- Opt-out options for data logging (a minimal sketch of such controls follows this list)
- Explicit labels warning users when data may be stored or reviewed
- Distinct boundaries between emotional support bots and general-purpose AIs
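None of these measures require exotic engineering. As a rough, hypothetical sketch of what per-conversation privacy controls could look like in code, consider the following Python example; the class and field names are illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass

# Hypothetical illustration of per-chat privacy controls of the kind
# described above. Not OpenAI's (or anyone's) real implementation.

@dataclass
class ChatPrivacySettings:
    private_mode: bool = False        # skip server-side retention entirely
    allow_training_use: bool = True   # may be used to improve the model
    allow_human_review: bool = True   # may be sampled for human audit

def should_store(settings: ChatPrivacySettings) -> bool:
    """Nothing is retained when the user has switched on private mode."""
    return not settings.private_mode

def should_use_for_training(settings: ChatPrivacySettings) -> bool:
    return should_store(settings) and settings.allow_training_use

def should_sample_for_review(settings: ChatPrivacySettings) -> bool:
    return should_store(settings) and settings.allow_human_review
```

The design point is simple: every downstream use, whether training or human review, is gated on a single private-mode choice, so a user’s decision not to be logged overrides everything else.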
If not handled carefully, the erosion of trust in AI models could hinder their positive potential in public mental health support.
What Users Can Do Now
In the meantime, users should approach ChatGPT and similar tools with caution—especially when discussing sensitive topics. Those seeking emotional support should consider:
- Using anonymous accounts if possible
- Avoiding sharing real names, contact information, or identifiable personal details (a simple redaction sketch follows this list)
- Reading the privacy policy and understanding what’s being stored or monitored
- Seeking help from certified mental health professionals whenever possible
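For those who do continue to confide in a chatbot, one practical habit is to scrub obvious identifiers from a message locally before sending it. The short Python sketch below is purely illustrative: its patterns are deliberately simple, the example name and number are made up, and it is no substitute for caution.

```python
import re

# Illustrative only: remove obvious identifiers from a message on your
# own machine before pasting it into any chatbot. Simple patterns like
# these catch easy cases (emails, phone numbers, chosen names), nothing more.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(message: str, names_to_hide: list[str]) -> str:
    message = EMAIL.sub("[email removed]", message)
    message = PHONE.sub("[phone removed]", message)
    for name in names_to_hide:
        message = re.sub(re.escape(name), "[name removed]", message,
                         flags=re.IGNORECASE)
    return message

print(redact("I'm Priya Sharma, call me on +91 98765 43210.", ["Priya Sharma"]))
# I'm [name removed], call me on [phone removed].
```

Pattern matching of this kind will never catch everything, which is why the advice above still ends with seeking help from certified professionals.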
The conversation sparked by Altman’s remarks may ultimately serve as a turning point—pushing tech companies to prioritize not just innovation, but also ethical responsibility and user trust.
OpenAI CEO Sam Altman’s statement that ChatGPT therapy-like sessions are not private has cast a spotlight on the hidden complexities of AI-assisted emotional support. While ChatGPT has proven valuable for millions, especially in mental health contexts, users must now rethink how they interact with the model. In a world where AI feels personal but isn’t bound by the same rules as human professionals, transparency and privacy protections have never been more critical.