Danish Kapoor

Claude Gov makes it possible to analyze classified documents

Anthropic has announced Claude Gov, a new artificial intelligence model that only US federal institutions can access. The model was developed especially for the defense and intelligence fields, to facilitate document analysis and content processing. According to the information the company shared with the public, Claude Gov is already in use at senior government agencies, although no details have been given about how long it has been in use.

Anthropic states that the restrictions found in its previously released, publicly available models have been relaxed to a certain extent in Claude Gov. This makes it possible to perform analysis on the contents of classified documents. Compared with the public Claude versions, the model is said to be able to evaluate a broader document context. It also reportedly has a more advanced understanding of terminology relevant to US national security, as well as regional dialects.

In creating Claude Gov, the same security tests applied to the company's other models were used. However, the model has been adjusted to align with national security priorities. In this context, it is optimized especially for tasks such as threat detection, analysis of intelligence reports, and work with classified content.

By loosening the restrictions placed on its publicly available models, Anthropic defines a distinct usage area for Claude Gov

The fact that the model is available only to authorized federal institutions shows that it is completely separated from commercial and individual users. Anthropic clearly states that the product can only be accessed by authorized users working with classified documents. However, such a limitation raises questions about the extent to which the model's content safeguards can be adjusted. In particular, the boundary drawn between security and freedom of expression remains controversial.

Thanks to contractual use exceptions it introduced about eleven months ago, Anthropic says it can adapt certain aspects of the model for some state institutions under special permissions. Nevertheless, use in areas such as producing disinformation, building censorship mechanisms, designing cyberattacks, or developing weapons systems is not allowed. In other words, although the usage area has been expanded, certain ethical and legal limits around the model are preserved.

The launch of Claude Gov is seen as a response to ChatGPT Gov, the product OpenAI announced in January. OpenAI has said that ChatGPT Gov is used by more than 90,000 government employees in the US. Anthropic refrained from giving figures on user numbers or application examples, but it was confirmed that the company participates in Palantir's FedStart program.

These developments point to a period in which major artificial intelligence companies are working more closely with state institutions. Artificial intelligence applications for public services have accelerated not only in the US but also in other countries. For example, Scale AI signed a five-year agreement with the Qatari government in recent months covering health, transportation, and public administration. The same company is also known to have developed an artificial intelligence tool for military planning with the US Department of Defense.
