
Is ChatGPT Private? What the 2026 Court Rulings Mean for Your Business

What business owners need to know about ChatGPT privacy after the 2026 court rulings — what's discoverable, what to never paste, and the one-page policy fix.

By Dominic Frei · 9 min read

ChatGPT is not legally private. On Free, Plus, Pro, and Business plans, prompts can be subpoenaed, produced in court, or read by OpenAI staff. Only Enterprise or API ZDR are confidential.

Is ChatGPT Private When I Use It for Work?

No. ChatGPT is not private in the legal sense most owners assume. Your prompts and outputs are stored on OpenAI's servers, can be reviewed by OpenAI staff or contractors, and — on consumer plans — used to train future models unless you opt out.

Three definitions of "private" get conflated in this conversation, and the confusion is what gets businesses sued. Encryption means data is scrambled in transit and at rest — ChatGPT scores fine here. Confidentiality means no one outside the conversation can read it — ChatGPT fails on consumer plans, where staff and contractors review chats for safety and model improvement. Privilege means a court can't force the contents to be produced — ChatGPT fails on every plan.

For a business owner, only the third definition really matters. Privilege is what stops a subpoena. Sam Altman has admitted this directly: there is no legal confidentiality on ChatGPT, and OpenAI could be required to produce conversations under court order. That is a CEO admission, not a marketing claim.

Can My ChatGPT Conversations Be Subpoenaed or Used in Court?

Yes. As of 2026, U.S. federal courts treat ChatGPT prompts and outputs as electronically stored information (ESI) — the same legal category as email. They are discoverable by opposing counsel, subpoena-able by prosecutors, and not protected by attorney-client, doctor-patient, or therapist privilege.

Two rulings in the past four months settled this. On January 5, 2026, U.S. District Judge Sidney H. Stein affirmed an order forcing OpenAI to produce 20 million ChatGPT conversation logs to plaintiffs in the consolidated New York Times copyright case. The users in that sample were not notified and cannot object.

Key Stat
20,000,000 ChatGPT conversation logs were ordered produced to plaintiffs in the New York Times v. OpenAI case, per the National Law Review (January 2026). The affected users were not notified.

On February 10, 2026, U.S. District Judge Jed Rakoff ruled in United States v. Heppner that AI chatbot conversations carry no privilege of any kind. Defense counsel had argued the defendant's Claude conversations were quasi-therapeutic. The court rejected that. The conversations were admitted as evidence.

For a small business, the practical implications are not criminal — they are civil. Divorce, wrongful termination, trade-secret disputes, regulator inquiries. In every one of those, opposing counsel can now ask: "Did you or any employee use ChatGPT on this matter? Produce the logs."

Does Deleting a ChatGPT Chat Actually Delete It?

Usually, yes — within 30 days. But a court order can override that. Between May and September 2025, a federal preservation order forced OpenAI to retain every consumer chat indefinitely, including chats users had pressed "delete" on. Standard 30-day deletion resumed September 26, 2025, but logs from that period are still being produced in 2026.

OpenAI's published policy for consumer accounts is 30-day deletion. The ceiling is whatever a court orders. Business, Enterprise, Edu, and API tiers were explicitly excluded from the 2025 preservation order — that is a recorded legal advantage, not marketing.

Memory is a separate retention bucket most owners don't think about. ChatGPT's Memory feature is persistent. Deleting a chat does not delete what Memory has stored. Clear it separately in Settings → Personalization → Memory.

Pro Tip
Open ChatGPT → Settings → Data Controls → switch off "Improve the model for everyone." That single toggle stops new conversations from being used to train future models. It does not retroactively remove anything you have already typed, and it does not change what a court can compel. It is the floor, not the ceiling.

What Should a Business Owner Never Put Into ChatGPT?

Anything you would not want printed in a competitor's discovery letter, an insurance claim file, or a regulator's inbox. The plain-English rule: if losing it would hurt the business, don't paste it.

The hard "never" list:

  • Client personally identifiable information (PII), payment data, passwords, API keys
  • Source code that touches authentication, payments, or proprietary algorithms
  • Contracts, NDAs, M&A documents, term sheets
  • Personnel files, performance reviews, salary data, HR investigations
  • Patient or medical information of any kind
  • Active legal strategy or attorney-client communications

The soft "redact first" list — paste only after replacing identifying details with generic placeholders: financial figures, supplier and partner names, pricing strategies, internal disputes, anything regulated under HIPAA, GDPR, or Switzerland's nFADP.

Pseudonymization takes 60 seconds. Before you paste, swap real names for "Client A," real companies for "Vendor B," real amounts for round figures. The canonical cautionary tale is Samsung in 2023 — three engineers pasted proprietary code into ChatGPT in 20 days. Samsung banned generative AI tools company-wide.
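The swap can even be scripted so nobody has to remember it under deadline pressure. A minimal Python sketch — the placeholder map and the secret patterns here are illustrative examples, not a vetted redaction tool, and real names like "Acme Dental AG" are made up:

```python
import re

# Illustrative mapping of real identifiers to generic placeholders.
# In practice, build this from your own client and vendor lists.
REPLACEMENTS = {
    "Acme Dental AG": "Client A",
    "Muster Treuhand GmbH": "Vendor B",
}

# Rough patterns for data from the "never paste" list.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                   # API-key-like strings
    re.compile(r"\b\d{4}[- ]\d{4}[- ]\d{4}[- ]\d{4}\b"),  # card-number-like digits
]

def pseudonymize(text: str) -> str:
    """Replace known names with placeholders; refuse text containing secrets."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(text):
            raise ValueError("Looks like a secret — do not paste this anywhere.")
    for real, placeholder in REPLACEMENTS.items():
        text = text.replace(real, placeholder)
    return text

safe = pseudonymize("Acme Dental AG owes Muster Treuhand GmbH CHF 40k.")
print(safe)  # "Client A owes Vendor B CHF 40k."
```

A script like this only catches what you tell it to catch — it is a seatbelt, not a substitute for the never-paste list above.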

What's the Privacy Difference Between ChatGPT Free, Plus, Business, Enterprise, and API?

Five tiers, three privacy postures. Free / Plus / Pro train on your data by default. Business and Enterprise do not train on your data by default and add admin controls. The API with a signed Zero Data Retention agreement is the only tier where OpenAI never stores your prompt at all.

| Plan | Trains on your data? | Retention | NYT preservation order excluded? | Defensible for client data? |
|---|---|---|---|---|
| Free / Plus / Pro | Yes (toggleable off) | 30 days after delete | No | No |
| ChatGPT Business | No | 30 days, admin-set | Partial | Marginal — minimum baseline |
| Enterprise / Edu | No | Admin-controlled | Yes — explicitly excluded | Yes, with DPA/BAA |
| API with ZDR | No | None — not stored | Yes | Yes — gold standard |
| Self-hosted open-weight | N/A | You control it | N/A | Yes |

The line that matters for an SMB is between Plus and Business. Plus is a consumer product. Business is a workspace product. The price difference is about $5 per user per month. The legal difference is a Data Processing Addendum, no model training by default, admin controls, and partial exclusion from preservation orders. That is the cheapest legal upgrade most businesses will ever make.

For HIPAA, attorney-client matters, or trade secrets, only Enterprise or the API with ZDR buys you contractual confidentiality.

Are There Special Rules in Switzerland or the EU?

Yes — and they are stricter than U.S. rules. Switzerland's revised Federal Act on Data Protection (nFADP, in force September 1, 2023) treats AI processing the same as any other data processing, and the Swiss FDPIC enforces it. The EU AI Act's high-risk obligations land August 2, 2026.

Under nFADP, a Swiss SMB processing personal data through ChatGPT must meet the same baseline as any other processor: transparency, lawful purpose, proportionality, and a Data Protection Impact Assessment for high-risk processing. The FDPIC's 2024 guidance made clear that AI tools are not exempt.

For Swiss SMBs serving EU customers, GDPR applies in parallel. Where it diverges from nFADP, follow whichever is stricter. The EU AI Act's high-risk system obligations take effect August 2, 2026 — three months from now. If your business uses AI in hiring, credit scoring, or other categories defined as high-risk in the Act, you have a deadline.

What Does an AI Conversation Evidence Trail Look Like in a Lawsuit?

Discovery requests today specifically ask whether you used ChatGPT, Claude, Gemini, or Grok on the matter. The party using AI must produce relevant prompts and outputs. Opposing counsel can use them to impeach testimony, prove state of mind, or contradict written positions.

Federal Rule of Civil Procedure 26 governs civil discovery. Parties must produce all non-privileged information relevant to a claim or defense — and since 2026, that explicitly includes AI prompts and outputs.

A litigation hold on AI tools, for a 25-person firm, looks like this: when litigation is reasonably anticipated, the firm must preserve AI chat history alongside email and Slack. Failure to do this is potential spoliation. Three things to write into your engagement letters and employee handbook now: (1) employees use only the company-administered ChatGPT Business or Enterprise account for company work; (2) certain categories of data are never pasted into any AI tool; (3) chat history may be subject to litigation hold and must not be deleted unilaterally.

Vague gets people hurt. "I think our AI is private" is not a policy — it's a lawsuit waiting to be filed.

What's the 60-Minute Fix for a Small Business Owner?

Move every business user off personal ChatGPT, switch off model training where it isn't already, write a one-page AI Use Policy, and decide one tier up from where you are: Free → Business, Business → Enterprise, regulated → API ZDR or self-host.

1. Inventory (10 min) — One row per employee using AI on company business. Columns: name, tool, account type, purpose, sensitive data exposure.
2. Toggle training off (10 min) — On every consumer account still in use.
3. Pick a tier (15 min) — Sensitive client data: Business at minimum. Regulated industry: Enterprise with DPA. Trade secrets or HIPAA: API ZDR or self-hosted.
4. Write the one-page policy (15 min) — Approved tools, the never-paste list, redact-first list, accident protocol, litigation-hold trigger.
5. Brief the team (10 min) — One all-hands or one Loom. One written acknowledgment per employee.
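The inventory in step 1 can live in a plain CSV from day one. A minimal sketch — the column names follow the inventory described above, and the rows are made-up examples, not real people:

```python
import csv

# Columns from the inventory step: one row per employee using AI at work.
FIELDS = ["name", "tool", "account_type", "purpose", "sensitive_data_exposure"]

# Made-up example rows — replace with your own team.
rows = [
    {"name": "A. Keller", "tool": "ChatGPT", "account_type": "personal Plus",
     "purpose": "marketing copy", "sensitive_data_exposure": "none"},
    {"name": "B. Roth", "tool": "ChatGPT", "account_type": "Business workspace",
     "purpose": "contract drafts", "sensitive_data_exposure": "redacted only"},
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

A file like this doubles as the artifact you hand a lawyer or auditor when someone asks what your AI exposure actually is.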

Quick Win
The 5-minute audit: Open the ChatGPT account every employee uses for work. Check three things — (1) Is it personal or a Business workspace? (2) Is "Improve the model for everyone" off? (3) Is anyone using Memory? Write the answers in a single-tab spreadsheet. That spreadsheet is the start of every defensible AI policy.
Related Tool
Run the AI Readiness Quiz — a 12-question diagnostic that scores where your business stands on AI privacy, governance, and operational readiness, in plain English, in under three minutes. Take the quiz →

When Is ChatGPT Actually Safe Enough to Use?

On a Business or Enterprise tier with training disabled, with a written use policy and a "never paste" list, ChatGPT is safe enough for most SMB workflows. Risk is not avoided — it is managed.

  • Green-light (any tier with training off): marketing copy from public information, summarizing public documents, generic emails, learning, brainstorming.
  • Yellow-light (Business or Enterprise, with redaction): financial modeling on anonymized figures, HR drafts without names, draft contracts with placeholders.
  • Red-light (Enterprise with DPA, API ZDR, or self-hosted only): anything regulated, anything privileged, anything you would hide from a court.

A risk analyst's three questions before each prompt: (1) If this prompt were read aloud in a deposition, what would it cost? (2) If the contents leaked, what would I tell my biggest customer? (3) Would I send the same content over unencrypted email? If you wouldn't email it, don't paste it.

The GEO Readiness Guide covers AI risk alongside AI visibility. Law firms have the tightest privilege exposure — start at the law firm hub. Dental practices have HIPAA-equivalent obligations on every patient note — the dentist hub is here. Or take the AI Visibility Readiness Quiz to see how AI engines see your business while you get your privacy posture in order. Every Tuesday, The Risk Memo covers one risk question, one move worth making this week, and one from-the-field example, in five minutes of plain English.