In a wide-ranging conversation last year, OpenAI’s Sam Altman said that his dream policy in the artificial intelligence space would be giving users the same privilege they enjoy with doctors or lawyers. That is, ensuring your conversations with ChatGPT can never be seen by the government.
“If I could get one piece of policy passed right now, relative to AI, the thing I would most like, and this is in tension with some of the other things that we’ve talked about, is I’d like there to be a concept of AI privilege,” Altman told podcast host Tucker Carlson last September. “When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information.”
“We have decided that society has an interest in that being privileged and that a subpoena can’t get that, the government can’t come asking your doctor for it, whatever. I think we should have the same concept for AI,” he added.
If you’re wondering what chats may fall into this basket, look no further than this month’s deadly mass shooting in Canada.
Months before he opened fire on Tumbler Ridge Secondary School and killed six people, Jesse Van Rootselaar alarmed OpenAI employees with conversations he had with ChatGPT about gun violence scenarios. On February 10, a dress-wearing Van Rootselaar gunned down his mother and brother before going to Tumbler Ridge Secondary School in British Columbia and killing another six people, including five children aged 12 and 13. Van Rootselaar died the day of the attack from a self-inflicted gunshot wound.
After Van Rootselaar was identified as the attacker, OpenAI reached out to the Royal Canadian Mounted Police to assist the investigation. Canadian officials have not been impressed with OpenAI’s handling of the situation.
Those conversations, which took place in June 2025, were not reported to law enforcement, though his account was banned, the Wall Street Journal first reported. The content of the conversations remains unclear. Canadian officials have summoned OpenAI employees for meetings to discuss their handling of the incident.
Altman said that he had pushed policymakers in Washington, D.C. to adopt protections for AI companies and was confident about the prospects.
“I was actually just in D.C. advocating for this. I feel optimistic that we can get the government to understand the importance of this and do it,” he said.
Previous Daily Wire reporting found that ChatGPT will advise young girls on how to access illegal abortion pills without parental knowledge, at the same time discouraging them from visiting life-affirming crisis pregnancy centers. ChatGPT also encourages gender-confused kids to obtain so-called “gender-affirming” resources like chest binders behind their parents’ backs.
British Columbia Premier David Eby said Monday that he had read reports about “the possibility that OpenAI had received advanced notice of the shooter’s intentions in regard to the mass murder that took place in Tumbler Ridge.”
“With shock and dismay, like many British Columbians, I am trying to figure out how it could be possible that a large group of staff within an organization could bring this kind of information forward and ask the police to be called and the decision be made not to do that,” he added.
He said that “from the outside,” it looked like OpenAI could have prevented the shooting, and he called for the federal government to establish a “national threshold” for when AI companies must report individuals who are plotting violence.
Canada’s Federal AI Minister Evan Solomon met with employees from OpenAI on Tuesday to discuss their safety protocols.
OpenAI says that its models are designed to discourage real-world violence and that it has a system in place to flag troubling content for review and potential law enforcement referral. Giving AI companies privacy privileges could complicate how they handle potential threats of violence.
And even if a policy of “AI privilege” were adopted, it is possible that duty-to-report provisions would be included. Most states have laws requiring mental health professionals to report clients who may be a danger to themselves or others.
OpenAI did not respond to a request for comment on whether Altman still advocated for privacy immunity for AI companies.