Technology
Danish Kapoor

Meta makes billions with fake ads on Facebook and Instagram

The scale of revenue Meta earns from fake advertisements on its digital platforms has sparked a new debate in the technology world. On Facebook, Instagram and WhatsApp, which reach millions of users, content ranging from investment scams to promotions for illegal products has grown to a remarkable volume. A new report by Reuters suggests the company generates roughly $16 billion a year from such ads, corresponding to about 10 percent of Meta's total announced advertising revenue.

The report highlights not only the revenue involved but also how weak the system is at fighting fake ads. The company's content-moderation mechanisms reportedly fail to act quickly or decisively against offending advertisers, and this slowness lets malicious actors recirculate similar content again and again. Meta's greater tolerance for big-budget advertisers has also drawn criticism for uneven enforcement: while small-scale fraudsters are removed from the system after eight violations, some large advertisers have remained on the platform with more than 500 violations, a striking double standard.

Meta's internal policies put revenue ahead of the fight against fake ads

Internal correspondence also reveals that company managers weighed economic concerns when deciding whether to block fraudulent advertisements. Some executives were reportedly warned that enforcement actions should not cost the company more than 0.15 percent of its revenue, a sign of how far user safety trails behind in company policy. The fact that just four fake advertising campaigns earned Meta $67 million helps explain why the system is kept so lenient; that leniency, in turn, calls the company's social responsibility into question.

In a statement on the matter, Meta spokesman Andy Stone disputed the figure, describing the cited 10 percent estimate as crude and overly broad, though he offered no clear alternative data in response. He said 134 million fake ads had been removed from the platform so far in 2025, and claimed that user reports of such content had fallen by 58 percent over the past 18 months. These figures, however, do not mean the problem is solved.

Some observers consider these statements insufficient. That advertisers can rack up hundreds of violations without being removed from the system shows the problem is systemic. These ads cause financial losses not only to individual users but to the wider public, and the fraud conducted through them erodes trust in the online ecosystem as a whole. It is therefore clear that removing individual ads is not enough; a permanent control system needs to be established.

Although Meta's advertising infrastructure is technically advanced, it does not appear to work as effectively for content filtering and abuse prevention. The company's algorithms are good at gauging an ad's revenue potential but poor at prioritizing the risk that ad poses to user safety, so fixing the system will require changes to policy and decision-making, not just technical improvements. That content review is left largely to algorithms also suggests manual oversight has been neglected, which paves the way for fraudulent content to spread rapidly.

According to some experts, the focus of platforms like Meta on revenue performance alone deepens the gaps in digital security. Although the company says it has invested in AI-supported content-scanning systems, the current picture shows these systems are not yet fully effective. Users continue to encounter fake e-commerce ads, illegal product promotions and investment scams, and because such ads are often professionally produced, they are harder to detect. Moderation mechanisms will need to respond far faster to fraud of this sophistication.
