- Sam Altman discussed the possibility of society granting some confidentiality around what people tell AI.
- As AI systems grow more popular, safeguarding the sensitive information that consumers share with them is becoming increasingly important.
- Altman brought up the idea of "AI privilege" while talking about his new AI health venture with Arianna Huffington.
Should the sensitive information we share with AI be protected by some form of confidentiality, similar to attorney-client privilege?
Sam Altman mulled the idea in a recent interview with The Atlantic, saying that society may decide "there's some version of AI privilege."
"When you talk to a doctor or a lawyer, there's medical privileges, legal privileges," Altman said in the interview. "There's no current concept of that when you talk to an AI, but maybe there should be."
The topic came up during a conversation between the OpenAI CEO and media mogul Arianna Huffington about their new AI health venture, Thrive AI Health. The company promises an AI health coach that tracks users' health data and provides personalized recommendations on things like sleep, movement, and nutrition.
As more companies implement AI systems and products, regulating how user data is stored and shared has become a hot topic.
Laws like HIPAA make it illegal for doctors to disclose sensitive patient health data without the patient's permission. That protection is important because it allows patients to feel comfortable being honest with their doctors, which can lead to better and more accurate care.
But some patients still have trouble opening up to doctors or seeking medical attention, and that's part of what motivated Altman to become involved with Thrive AI, he told The Atlantic. Other factors include the cost and accessibility of healthcare, according to an op-ed that Altman and Huffington wrote about the new venture in Time.
Altman said that he's been surprised by how many people are willing to share information with large language models, the AI systems that power chatbots like ChatGPT and Google's Gemini. He told The Atlantic that he's read Reddit threads about people who found success telling LLMs things they weren't comfortable sharing with others.
While Thrive AI is still figuring out what its product will look like, Huffington said in the interview she envisioned it being "available through every possible mode," including workplace platforms.
That, of course, raises concerns about data storage and regulation. Big tech companies have already faced lawsuits over claims that they trained their AI models on content they didn't have a license to use. Health information is some of the most valuable and private data that individuals have, and it could also be used by companies to train LLMs.
Altman told The Atlantic it would be "super important to make it clear to people how data privacy works."
"But in our experience, people understand this pretty well," Altman added.
OpenAI's Startup Fund and Thrive Global announced the launch of Thrive AI Health last week. The company said it seeks to use AI "to democratize access to expert-level health coaching" and tackle "growing health inequities."