The artificial intelligence chatbot ChatGPT has long been treated as a tool for getting information on virtually any subject. Users would describe a symptom on their body, ask for help drafting a divorce agreement, or look for investment ideas. But this has changed with a new decision by OpenAI: ChatGPT will now offer only general information rather than personal advice.
The new rules take effect as of October 29, 2025. From now on, ChatGPT will no longer name medications for health-related symptoms or suggest possible treatments for a disease. It will no longer draft content such as legal documents, contract templates, or lawsuit petitions. Likewise, the model will not recommend investments or offer opinions on buying or selling shares. Users will only receive an explanation of the basic principles of the topic and then be advised to consult a specialist.
ChatGPT will become a system that explains only general information
With this decision, OpenAI has stopped ChatGPT from acting as a personal advisor. Beyond that, there is a more concrete reason behind the restrictions. According to NEXTA, major technology companies have begun consciously blocking such content in order to avoid legal liability: actions taken on the basis of false information can have serious consequences for users, which in turn exposes the companies to the risk of lawsuits.
Although ChatGPT is good at simplifying and explaining complex concepts, that competence falls short in certain areas. In fields that require expertise, such as medicine, law, and finance, incorrect or incomplete information can cause serious harm. When someone who feels a lump in their chest describes it to ChatGPT, the model may suggest a serious illness, even though the symptom could be nothing more than a harmless fatty lump (a lipoma). The actual diagnosis should therefore be left to doctors, and artificial intelligence should not be relied on in this regard.
Another issue that draws attention here is the tendency to use ChatGPT for psychological support. Some users turn to the model when they are in emotional distress and confide their troubles to it. But it is highly risky for an artificial intelligence that cannot empathize to take on such a role. Moreover, while a therapist is bound by legal responsibilities and professional ethics, ChatGPT operates under no such framework. This can lead to harmful consequences, especially for vulnerable users.
Similar dangers apply to financial matters. Artificial intelligence can explain what a financial term means, but offering advice without taking into account personal details such as an individual's income, debt load, and investment goals is quite risky. In addition, some users may try to get a quick answer by entering their bank account details or tax numbers into ChatGPT. Writing this data into the system, however, can create irreversible data-security risks.
The situation is not much different for legal documents. The drafts ChatGPT produces are generally generic texts, yet given that every country, and even every state, has different legal rules, the validity of such documents is highly questionable. A document may be entirely invalid because it lacks notarization or uses incorrect wording. The inadequacy of artificial intelligence in these areas stands out as one of the main reasons for the restrictions.