HIPAA, Attorney-Client Privilege and the LLM Intimacy Illusion

Legal Limits of Digital Intimacy: a critique of how generative AI interfaces mimic the professions that advise the vulnerable, professions ordinarily bound by strict privacy protocols, while offering none of those protections to the inquirer.

We have all opened ChatGPT to ask it a question or to request some advice. It begins innocently enough: how many ants are there in the world, or where is chewing gum illegal? Tapping on our phone screens or typing in our laptop browsers, we are used to a facade of privacy, so we may feel more comfortable asking embarrassing or dubious questions of an AI agent than of a clinician or a friend. But how private are these chats? From medical queries about potential diagnoses to legal questions about grey areas of the law, the sense of privacy that comes from not sharing directly with another person is misleading. You may be alone and looking for guidance or insight, but the LLM responding is under no obligation to keep your queries private should law enforcement come knocking.

Your sense of intimacy is entirely illusory. Doctors must adhere to HIPAA to keep your health questions and information private, and attorney-client privilege protects you in a legal setting, but chatbots like ChatGPT are bound by no similar legal obligations. In fact, OpenAI CEO Sam Altman recently made it crystal clear that there is zero legal confidentiality when users rely on ChatGPT for therapeutic, medical, or legal advice. Conversations conducted with generative AI can be stored, reviewed, subpoenaed, and even made entirely public.

Despite this revelation, the use of large language models for deeply sensitive queries and intimate conversations continues to grow. People are turning to AI for comfort, for symptom checks, for legal reassurance, and for emotional processing. Tools like ChatGPT, Claude, and Gemini are filling gaps in access, affordability, and stigma. Yet while these models increasingly simulate the roles of diagnosticians, healthcare workers, and legal advisors, they do so entirely outside the professional structures we have come to expect to protect our privacy. Most worrying of all, users appear largely unaware of this distinction.

This false assumption of confidentiality is exacerbated by the design of these systems. The conversational format of tools like ChatGPT mimics texting on mobile or journaling on desktop: intimate, familiar forms of communication that we expect to stay out of public view. When a chatbot responds with empathetic language and tone, the suggestion of human-machine intimacy becomes nearly indistinguishable from actual privacy and protection. The result is a dangerous mismatch between perceived privacy and actual data handling and storage practices.

Recent revelations have made this risk more practical than theoretical. In a July 2025 article, TechCrunch reported that user conversations shared via OpenAI's "public link" feature were being indexed and surfaced in search results by Google and other engines. Chats that felt private as they were happening, and that often contain emotionally raw, medically urgent, or legally sensitive information, have become discoverable online, often without the user's awareness or intent. Some indexed conversations include admissions of criminal activity, confessions of infidelity, suicidal ideation, and health-related fears. These are not anonymized data sets. They are direct records of human distress, made public not through malicious hacking but through UX design choices and platform defaults.

OpenAI is not alone in exposing the contours of our collective digital vulnerability. Meta's new AI assistant, integrated into ubiquitous apps such as Instagram, Facebook, and WhatsApp, has begun surfacing what the company terms "interesting" AI chats in a publicly viewable "Discover" feed. Although Meta insists that these conversations are only visible if the user explicitly agrees to share them, the consent interface presents approval prompts after the conversation has already ended, when users may be mentally checked out or tapping buttons just to exit the screen. Many have reported being unaware that they had consented to publication at all. The surfaced chats have included deeply personal questions about mental health, gender identity, and family dynamics. Whether or not the users intended to share them, the emotional damage of exposure is real.

If these examples feel like recent developments, the precedent they rest on is not. In 2018, an Amazon Alexa device recorded a private conversation between a couple and, after an unfortunate sequence of misinterpreted voice commands, sent the audio file to a contact in their address book. Amazon confirmed the incident, calling it an unlikely sequence of events, but did not dispute the essential fact: always-on microphones and cloud-based processing mean that household conversations can be captured, stored, and shared, whether accidentally or deliberately. Furthermore, Amazon and other smart device providers have routinely complied with law enforcement requests for voice recordings. This establishes a clear legal and technological trajectory: when data exists, it can be subpoenaed. No matter how intimate the setting may seem, and no matter what privacy standards we subconsciously apply to our conversations, this data can wind up out of your hands.

Generative AI extends this reality into the realm of thought. Unlike audio recordings, which require activation and spoken language, text-based large language models encourage users to voluntarily submit text that is often more revealing, descriptive, specific, and emotional than a spoken fragment, precisely because the user wants a useful response from the AI. These queries are routed to remote servers, stored, and in some cases reviewed by human moderators for safety and quality assurance or used to train more advanced models. Unless the setting is explicitly disabled, conversations conducted with free versions of ChatGPT and similar tools are typically retained and used to improve future models. That means the words themselves, anonymized at that point but not deleted, can be transmuted into training data. The story of your medical history or emotional trauma may eventually become part of someone else's autocomplete or generative AI output.
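To make that data flow concrete, here is a minimal sketch, in Python, of what any chat application does when it talks to a hosted model: the full text of the query is sent as an HTTPS request to a remote server. It assumes the official openai Python SDK; the model name and prompt are illustrative, and nothing in this code controls what happens to the text once it arrives.

    # A minimal sketch: every "chat" is an HTTPS request that carries the
    # user's full text to a remote server. Model name and prompt are
    # illustrative; any retention or review happens server-side, out of view.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            # This entire message body is transmitted off-device.
            {"role": "user",
             "content": "I've had chest pain for two days. Should I be worried?"},
        ],
    )

    print(response.choices[0].message.content)
    # Whether this text is stored, reviewed, or used for training is governed
    # entirely by the provider's policies, not by anything in this code.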

Some platforms, including Claude and certain OpenAI models and subscription tiers, now also include persistent memory features that carry prior interactions across sessions. For the user, this adds continuity and convenience, but it also increases the risk of long-term data retention and misuse. Users are rarely able to evaluate the future implications of such features, especially when assurances about anonymization of data and conversations obscure the structural reality of centralized processing.

Practically speaking, for the data privacy enthusiasts, alternatives do exist. Open-weight models like LLaMA, Mistral, and Phi-3 can be run locally, offering genuine data control for your most sensitive topics and intimate queries. These systems process text entirely on a user's device, without uploading queries to remote servers or storing them beyond the end of the session. However, running local models requires a degree of technical skill, sufficient computing power, and, first and foremost, a genuine awareness of and concern about the risks posed by cloud-based systems. Most users of ChatGPT will never explore these options, in part because many do not know they exist. Instead, they continue to interact with mainstream, closed-source tools under a false sense of security and the incorrect assumption that what they say will not be seen, saved, or shared.
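As one concrete illustration, and not an endorsement of any particular toolchain, here is a minimal sketch of fully local inference using the llama-cpp-python bindings and a model file already downloaded to disk. The file path, model choice, and generation parameters are placeholders; the point is that the prompt is processed entirely on the user's machine, and nothing persists after the program exits unless the user chooses to save it.

    # A minimal sketch of local inference: the prompt is processed on-device
    # and never transmitted to a remote server. Assumes the llama-cpp-python
    # package and a locally downloaded GGUF model file (path is a placeholder).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
        n_ctx=4096,      # context window size
        verbose=False,
    )

    output = llm(
        "I think I may have been recorded without consent. What are my options?",
        max_tokens=256,
    )

    print(output["choices"][0]["text"])
    # There is no server-side log of this exchange; nothing is retained after
    # the process exits unless the user explicitly writes it to disk.

The trade-off is the one described above: the user supplies the hardware, downloads the weights, and accepts weaker model quality in exchange for the guarantee that the conversation never leaves the room.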

None of this is an argument against the use of generative AI. These tools are powerful, helpful, and increasingly indispensable. But they are not without the potential for misuse or abuse. As we invite them into the most intimate discussions of our lives, and as the companies behind them develop new features and interact more with law enforcement, we must demand transparency. Information should be readily available, in plain language, about how these systems work, where our data goes, and who can access it. If users' expectations are not in line with the technological realities, we risk turning vulnerable admissions into evidence and intimate moments of support-seeking into unintended public records, indexed on the front pages of the internet. Given the illusion of the privacy of a journal and the borrowed authority of strictly privacy-protected professions, we cannot leave the burden of discretion solely to the hopeful imagination of the person typing a query or plea in a moment of vulnerability. The lion's share of responsibility for transparency and ethical design must fall to the providers, in the form of clear disclosure of data use and a range of interaction options for each interface.

Works Consulted

Altman, Sam. OpenAI CEO Comments on Law Enforcement Access to Chat Logs. OpenAI, July 2025. Public statement during press briefing. Also referenced in: OpenAI. "Response to the New York Times Data Demands." OpenAI, 2 Aug. 2025. https://openai.com/index/response-to-nyt-data-demands/.

Feiner, Lauren. “ChatGPT Conversations Are Now Publicly Searchable—Even When Users Didn't Mean to Share Them.” TechCrunch, 17 July 2025. https://techcrunch.com/2025/07/17/openai-public-link-chats-indexed-google/.

Frenkel, Sheera. “Denmark Passes Deepfake Law Letting People Copyright Their Face and Voice.” The New York Times, 10 July 2025. https://www.nytimes.com/2025/07/10/world/europe/denmark-deepfake-copyright-ai-law.html.

Heater, Brian. “Meta’s AI Discover Feed Raises Concerns over Privacy and Consent.” TechCrunch, 23 June 2025. https://techcrunch.com/2025/06/23/meta-ai-discover-feed-privacy-consent/.

Koebler, Jason. “Amazon Echo Recorded a Conversation, Then Sent It to a Random Contact.” Motherboard, 24 May 2018. https://www.vice.com/en/article/a3y8g4/amazon-alexa-recorded-conversation-sent-to-random-contact.

Krolik, Aaron, and Kashmir Hill. “Your Smart Speaker Could Be Listening to More Than You Think.” The New York Times, 21 July 2020. https://www.nytimes.com/2020/07/21/technology/alexa-google-home-privacy.html.

OpenAI. “How ChatGPT Uses Data.” OpenAI Help Center, 2024. https://help.openai.com/en/articles/7039943-how-chatgpt-uses-data.

OpenAI. “ChatGPT: How We Handle Your Data.” OpenAI, 2023. https://openai.com/policies/privacy-policy.

Roose, Kevin. “AI Is Taking on Therapist, Doctor, and Lawyer Roles. But Is It Confidential?” The New York Times, 25 June 2024. https://www.nytimes.com/2024/06/25/technology/ai-therapy-medical-legal-privacy.html.

Vincent, James. “Amazon Admits Alexa Can Send Private Conversations.” The Verge, 24 May 2018. https://www.theverge.com/2018/5/24/17388702/amazon-alexa-private-conversation-recording.

Whittaker, Meredith. “The Illusion of AI Privacy.” AI Now Institute Report, 2023. https://ainowinstitute.org/reports.html.
