OpenAI’s ChatGPT is reportedly clamping down on users seeking legal and medical advice from the bot.
For some time, ChatGPT has been a popular go-to for health-related questions, with some users even turning to it for therapy. However, as of 29 October, it appears the rules have changed – the chatbot will no longer offer medical, legal, or financial advice.
According to a report by NEXTA, ChatGPT is now positioned as an "educational tool" rather than a "consultant".
Instead, ChatGPT will reportedly "only explain principles, outline general mechanisms and tell you to talk to a doctor, lawyer or financial professional".
On OpenAI’s official website, the Usage Policies set out rules intended to protect users, including a prohibition on the "provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional".
Other policies outlined by OpenAI also prohibit the use of its platform for a range of harmful or unlawful activities. These include:
- Threats, intimidation, harassment, or defamation
- Promoting or facilitating suicide, self-harm, or disordered eating
- Sexual violence or non-consensual intimate content
- Terrorism or violence, including hate-based violence
- The development, procurement, or use of weapons, including conventional arms or CBRNE materials
- Engagement in illicit activities, goods, or services
- The destruction, compromise, or breach of another’s systems or property, including malicious or abusive cyber activity or the infringement of intellectual property rights
- Real-money gambling
- Unsolicited safety testing
- Attempts to circumvent OpenAI’s safeguards
- Activities related to national security or intelligence purposes without prior review and approval
The change follows OpenAI’s earlier efforts to address serious safety concerns, which include monitoring conversations and potentially reporting certain interactions to law enforcement.
The aim? To help prevent harm, including cases involving self-harm or threats to others.
In a recent blog post, OpenAI outlined how it uses "specialised pipelines" to detect users who may be planning harm to others.
Once flagged, the content is reviewed by a dedicated team trained in the platform’s policies and authorised to take action. The first step, typically, is to issue an account ban.
However, if the situation escalates and human reviewers determine there is an "imminent threat of serious physical harm to others," the case may be passed on to the police.
Indy100 reached out to OpenAI for comment and clarification on these reports.