Meta is now preparing to entrust product updates and the security review of new features to artificial intelligence. According to reporting based on the company's internal correspondence, machines will replace human decisions in product risk assessments. Meta's goal is to have artificial systems carry out at least 90 percent of these evaluations, a striking ratio for a process previously handled by human experts.
Under the new system, product development teams must first fill out a questionnaire describing the nature of the feature being developed, its potential effects, and its impact on user behavior. Artificial intelligence processes this data, returns a near-instant decision, and automatically flags risky areas. Based on this feedback, product teams can ship the update after making the required changes.
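The reported flow (questionnaire in, instant decision and flagged risk areas out) could be sketched roughly as below. All field names, risk categories, and the decision rule are illustrative assumptions for the sake of the sketch, not Meta's actual system.

```python
# Hypothetical sketch of the questionnaire-based review flow described
# in the article; field names and risk categories are assumptions.

RISKY_AREAS = {"youth_safety", "misinformation", "violent_content"}

def automated_review(questionnaire: dict) -> dict:
    """Return an instant decision plus any flagged risk areas."""
    flagged = sorted(RISKY_AREAS & set(questionnaire.get("affected_areas", [])))
    # A feature touching no sensitive area is approved immediately;
    # otherwise the team must address the flagged areas before shipping.
    decision = "approved" if not flagged else "changes_required"
    return {"decision": decision, "flagged_areas": flagged}

survey = {
    "feature": "new_recommendation_tweak",
    "affected_areas": ["engagement", "misinformation"],
}
print(automated_review(survey))
# → {'decision': 'changes_required', 'flagged_areas': ['misinformation']}
```

The key property the article attributes to the system is captured here: the decision is immediate and rule-driven, with no human in the loop for the default path.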
Meta is relying on artificial intelligence even for the safety of young users.
The scope of Meta’s new evaluation system is not limited to technical risks. Socially sensitive areas such as youth safety, violent content, and the spread of misinformation are also being entrusted to artificial intelligence. Algorithms are planned to make the decisions in these critical fields, which the company groups under the label of “integrity”. But this choice means removing content safety from human judgment.
Some experts inside and outside the company stress that this approach can carry serious drawbacks. A former Meta executive argues that reducing human oversight will create more negative externalities. Harms that humans could previously catch in advance may be overlooked under this system, a risk that grows especially for updates with high social impact.
Given the social impact of every change on the platform, examples of inadequate algorithms have appeared in the past. Automated decision systems have mistakenly deleted some content while letting other content slip through. There is extensive feedback that artificial intelligence remains limited in moderating violent or harassing content. On top of all this, artificial intelligence cannot match human flexibility in ethical judgments.
Meta’s recently published quarterly integrity report contains notable data. Following the new content policies, the number of pieces of content removed from the platform has fallen significantly. At the same time, there has been an increase, albeit small, in bullying, threats, and graphic content. This picture may indicate that the AI-based system has limits when it comes to safety.
For its part, Meta states that this transition does not mean full automation. According to company spokespeople, human experts will still step in for novel and complex cases, while low-risk decisions will be left to algorithms. However, it is not clear how the line between the two will be drawn.
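The routing Meta describes, low-risk updates to the automated reviewer and novel or complex cases to human experts, could look roughly like the sketch below. The tiering criteria (a novelty flag and a complexity score with a cutoff of 7) are invented for illustration; Meta has not disclosed how it draws this line.

```python
# Illustrative sketch of the risk-tier routing described in the article.
# The "novel" flag, "complexity" score, and threshold are assumptions.

def route_review(update: dict) -> str:
    """Decide whether an update goes to humans or to the algorithm."""
    if update.get("novel") or update.get("complexity", 0) > 7:
        return "human_expert"   # new or complex cases keep a human in the loop
    return "automated"          # low-risk decisions are left to algorithms

print(route_review({"novel": True, "complexity": 2}))   # → human_expert
print(route_review({"novel": False, "complexity": 3}))  # → automated
```

The ambiguity the article points out lives precisely in this function: whoever defines "novel" and sets the threshold effectively decides how much of the process stays human.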
This change reflects the rising trend toward automation across the technology sector. Companies are digitizing decision processes in the name of both speed and scalability. But such decisions are not easy to balance when they are made in areas affecting user safety. Reactions from users, and the problems that emerge over time, will shape how well this system functions.