Technology
Danish Kapoor

OpenAI will require authentication for advanced models

OpenAI is preparing to launch a stricter verification process to limit access to its next-generation artificial intelligence models. According to the company's support page, the process will be presented to developers under the name "Verified Organization." Under this system, organizations that want access to advanced models will be asked to provide official identification. To be valid, the ID must be a government-issued document from one of the supported countries.

Each identity document can be used only once every 90 days and can verify only a single organization. In other words, it will not be possible to submit multiple applications or open several accounts in a short period. OpenAI frames this step not merely as a technical measure but as an ethical necessity: the company says it wants to prevent abuse of the artificial intelligence it aims to make accessible to everyone.

OpenAI will implement authentication against the risk of abuse

The company also emphasizes that the vast majority of the developer community acts in accordance with the rules. A small minority, however, deliberately engages in practices that violate its API policies. OpenAI intends to strengthen its security measures to prevent such behavior, and the verification process stands out as one of those measures.

OpenAI's decision is not only an ethical choice; it also rests on serious security concerns. In particular, data-extraction attempts originating from certain countries laid the groundwork for it. At the end of 2024, for example, it was claimed that a China-based group had carried out large-scale data extraction through the API. OpenAI subsequently blocked access from China entirely.

Beyond all this, the verification move is also aimed at protecting intellectual property. High-volume data extraction through the API carries the risk that the output will be used to train other models illicitly. According to Bloomberg, a group linked to DeepSeek may have collected data in exactly this way. OpenAI wants to prevent a repeat of such violations.

Although the system may draw some criticism from the developer community, the company is trying to strike a balance. Rather than closing off access to advanced models entirely, it prefers to maintain access in a controlled, secure way. The verification system thus addresses both openness and security concerns at once. OpenAI also states that it will remain committed to the principle of fair access throughout the process.

For now, the verification process will apply only to certain organizations, so not everyone will be included in the system. Only organizations that meet specific criteria will be able to take advantage of it. What those criteria are is not yet clear.

The system is expected to add some overhead for developers in the short term, but it is thought to pay off in security and oversight over the long run. Authentication is one of the most effective ways to prevent abuse through an API. OpenAI presents this step not only as a technical measure but also as a matter of social responsibility.
