- Sam Altman discussed the possibility of society allowing some secrecy regarding what people tell AI.
- As AI systems become more prevalent, it will be important to protect the sensitive information consumers share.
- Altman brought up the idea of “AI privilege” while speaking with Arianna Huffington about her new AI health venture.
Should confidential information shared with AI be regulated under some kind of confidentiality agreement similar to attorney-client privilege?
Sam Altman explored this idea in a recent interview with The Atlantic, saying that society may decide that there's “some kind of AI privilege.”
“When you talk to a doctor or a lawyer, you have medical privilege, you have legal privilege,” Altman said. “When you talk to an AI, that concept doesn't exist right now, but maybe it should.”
The topic came up during a conversation between the OpenAI CEO and media mogul Arianna Huffington about her company's new AI health venture, Thrive AI Health, which promises an AI health coach that tracks users' health data and offers personalized recommendations on sleep, exercise, nutrition, and more.
As more companies deploy AI systems and products, regulating how that data is stored and shared has become an important topic.
Laws like HIPAA make it illegal for doctors to disclose sensitive health data about patients without the patient's permission. This protection matters because it allows patients to be candid with their doctors, leading to more accurate diagnoses and better care.
But some patients still struggle to open up to doctors or to seek medical care at all, which is part of Altman's motivation for getting involved with Thrive AI Health, he told The Atlantic. Other factors include the cost and accessibility of health care, according to a Time op-ed by Altman and Huffington about the new venture.
Altman said he was surprised by how many people were willing to share information with large language models and the AI systems that power chatbots like ChatGPT and Google's Gemini. He told The Atlantic that he had read a Reddit thread about a person who had been able to tell an AI chatbot something they weren't comfortable telling anyone else.
Thrive AI Health is still figuring out what its product will look like, but Huffington said in the interview that she envisions it being “available across every possible mode,” including workplace platforms.
Of course, this raises concerns about data storage and regulation. Large tech companies are already facing lawsuits over allegations that they trained AI models on content they didn't license. Health information is some of the most valuable and private data individuals possess, and it could potentially be used by companies to train LLMs.
“It's really important to make it clear to people how data privacy works,” Altman told The Atlantic.
“But in our experience, people understand this pretty well,” Altman added.
The OpenAI Startup Fund and Thrive Global announced the launch of Thrive AI Health last week, saying the company aims to use AI to “democratize access to expert-level health guidance” and address “growing health disparities.”