Talking to AI could be risky now, warns Sam Altman in global privacy wake-up call
ChatGPT is used by millions of people around the world every day, but OpenAI CEO Sam Altman has now raised serious concerns about its privacy protections. He revealed that there is currently no legal protection for AI-based conversations, something users need to be aware of and act on.
- “We should have the same concept of privacy for your conversations with AI that we do with a therapist”
- “No one had to think about that even a year ago” – The AI boom is creating new legal challenges
- Host Theo Von shared his concern: “I feel unsure about who might see my personal information”
- Is ChatGPT safe for personal use? Users are now more confused and cautious
- What does OpenAI’s privacy policy really offer?
- Users must stay aware and cautious until laws are in place
“We should have the same concept of privacy for your conversations with AI that we do with a therapist”
In a recent podcast interview, Altman acknowledged that people today use ChatGPT as a therapist, advisor, or personal coach, often sharing their deepest thoughts. But the legal system doesn’t recognize those interactions as confidential.
He explained, “Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality.”
But with ChatGPT, no such legal guarantee exists. If a court demands user data during a legal case, the company may be required to share even the most personal chat content.
“No one had to think about that even a year ago” – The AI boom is creating new legal challenges
Altman said that just a year ago, nobody had considered the legal status of conversations with AI. But now, as AI becomes deeply embedded in our daily lives, it’s emerging as a serious global legal concern.
He added,
“We should have the same concept of privacy for your conversations with AI that we do with a therapist.”
This means lawmakers around the world will now need to create new frameworks that define and protect user rights when dealing with AI.
Host Theo Von shared his concern: “I feel unsure about who might see my personal information”
Podcast host Theo Von also shared his hesitation about using ChatGPT. He said he avoids it because he’s not sure who else might have access to what he types into the chatbot.
His words:
“I don’t talk to ChatGPT much myself because there’s no legal clarity about privacy.”
Altman agreed and responded,
“I think it makes sense.”
He added that although lawmakers are beginning to understand the issue, there are still no enforceable AI privacy laws in place across most countries.
Is ChatGPT safe for personal use? Users are now more confused and cautious
Around the world, people use ChatGPT for mental health support, relationship advice, and major life decisions. But when the CEO of OpenAI himself admits that users have no legal protection, it naturally shakes public trust.
This raises a vital question for everyone: Should we continue treating ChatGPT like a trusted personal advisor when AI conversation confidentiality is still legally undefined?
What does OpenAI’s privacy policy really offer?
Altman’s statements have triggered a global debate. People are now questioning how well OpenAI’s privacy policy really protects users and what happens to their sensitive information.
If AI platforms like ChatGPT aren’t legally required to keep user chats private, is it safe to talk to them? Could this information end up being accessed by someone else? And most importantly, can AI platforms ever be considered truly confidential?
Users must stay aware and cautious until laws are in place
Until strong international laws are enacted, sharing personal information with AI tools remains a calculated risk for users worldwide. Altman’s candid remarks show that privacy and ethics need to come before AI growth.
For real trust in AI, the world must prioritize discussions about AI and personal data safety. Otherwise, every conversation we have with a chatbot could become an open file that someone else might access.