Meta, the world's largest and most far-reaching social media conglomerate, reportedly derives billions of dollars in revenue each year from advertisements that promote fraudulent schemes, and recent investigative reporting shows the company knows it. According to internal corporate documents obtained by Reuters, users across Meta's core platforms (Facebook, Instagram, and WhatsApp) are collectively shown an estimated fifteen billion scam advertisements every single day. These range from deceptive promises, such as counterfeit stimulus checks purportedly endorsed by Donald Trump, to highly realistic deepfake videos in which figures like Elon Musk appear to promote dubious or outright fake cryptocurrency ventures. Meta's own safety and integrity teams understand the scale of this crisis: Reuters cited internal estimates from the company's trust and safety staff suggesting that roughly one-third of all successful scams in the United States involve Meta platforms. The key question, then, is why a company with such immense resources and technological capacity has not acted more decisively to curb this pervasive menace. A likely answer lies in the staggering profitability of these advertisements, which reportedly bring Meta roughly seven billion dollars in annualized revenue, an incentive structure that discourages meaningful reform.
The magnitude of the scam epidemic is anything but trivial. In the United States alone, citizens reported losing approximately sixteen billion dollars to online fraud in 2024, according to data collected by the Federal Bureau of Investigation. Alarming as that figure is, it almost certainly represents a severe undercount, since many victims never come forward out of humiliation or self-blame at having been deceived. On a global scale, the phenomenon is even more catastrophic: the Global Anti-Scam Alliance estimates that worldwide losses from scams surpassed one trillion dollars in 2024. The economic and psychological toll is immense, particularly because those targeted often belong to the most economically and socially precarious groups: elderly people living on fixed incomes, young people desperately seeking work, immigrants navigating unfamiliar systems, and individuals already enduring financial uncertainty. For these populations, even a modest-sounding offer (a few hundred extra dollars, a promised government benefit, an enticing job) can spark hope. When that hope is shattered by deception, the loss is not merely financial but deeply personal and destabilizing.
For Meta, however, these losses fall on others while its own balance sheet flourishes. Internal corporate records described by Reuters indicate that Meta earns approximately sixteen billion dollars each year from advertisements promoting scams and prohibited products, an astonishing ten percent of the company's total annual revenue. Around seven billion dollars of this comes specifically from ads bearing the unmistakable hallmarks of fraud: content that misuses the identities of public figures, impersonates trusted brands, or deploys technologically sophisticated means to dupe users. Even substantial government fines would be a negligible cost against revenue of this size, making financial penalties on their own an insufficient deterrent.
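To make that last claim concrete, here is a back-of-the-envelope check in Python. The sixteen-billion and seven-billion figures come from the reporting above; the one-billion-dollar fine is a hypothetical value chosen purely for scale, not a figure from any actual enforcement action.

```python
# Rough arithmetic on the reported figures; the fine amount is hypothetical.
scam_and_banned_revenue = 16e9   # reported annual revenue from scam/prohibited ads
share_of_total = 0.10            # reported as roughly 10% of total revenue

implied_total_revenue = scam_and_banned_revenue / share_of_total
print(f"implied total annual revenue: ${implied_total_revenue / 1e9:.0f}B")  # ~$160B

clear_fraud_revenue = 7e9        # reported revenue from ads with clear fraud hallmarks
hypothetical_fine = 1e9          # assumed one-time fine, for comparison only
share = hypothetical_fine / clear_fraud_revenue
print(f"hypothetical $1B fine vs. one year of clear-fraud revenue: {share:.0%}")  # ~14%
```

By this arithmetic, even a billion-dollar fine amounts to roughly seven weeks of revenue from clearly fraudulent ads alone.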
For researchers who study artificial intelligence and digital deception, as we do, the most common question is what concrete actions could mitigate this problem. It will not be solved through superficial fixes such as urging consumers to improve their financial literacy or simply to be more skeptical online. Such measures, however well-intentioned, shift the burden of protection onto victims and often deepen their shame. Instead, we must hold Meta directly accountable for its role in perpetuating this cycle of harm. Responsibility for reform lies not with individual users but with the corporation whose platforms enable, distribute, and profit from these fraudulent advertisements.
Although Meta has the technological capability to flag and remove scam content, Reuters' investigation revealed that the company's internal mechanisms demand 95 percent certainty before an advertisement is labeled fraudulent. Such an excessive standard of proof essentially guarantees that vast numbers of scam ads remain active. Even when an advertisement is conclusively identified as a scam, The Wall Street Journal reported that Meta allows the offending account between eight and thirty-two separate "strikes" before issuing a permanent ban. This leniency lets fraudsters keep posting new or modified versions of deceptive ads for months, siphoning money from thousands of unsuspecting users. The peer-to-peer payments network Zelle has stated that approximately half of all scam cases reported by its users were connected in some way to Meta platforms, further evidence of the company's central role in the fraud ecosystem.
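To illustrate how these two policies interact, here is a minimal Python sketch. It is not Meta's actual enforcement code: the classifier scores and advertiser name are invented, and the strike limit uses the low end of the eight-to-thirty-two range reported by the Journal.

```python
# Hypothetical sketch of a 95%-certainty removal threshold combined with a
# multi-strike ban policy; scores and advertiser names are invented.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.95   # ads scoring below this stay live
STRIKE_LIMIT = 8              # low end of the reported 8-32 strike range

@dataclass
class Advertiser:
    name: str
    strikes: int = 0
    banned: bool = False

def review_ad(advertiser: Advertiser, scam_score: float) -> str:
    """Decide an ad's fate from a hypothetical scam-classifier score in [0, 1]."""
    if advertiser.banned:
        return "rejected: advertiser already banned"
    if scam_score < CONFIDENCE_THRESHOLD:
        # A 0.94 score (almost certainly a scam) still runs.
        return "approved: below the certainty threshold"
    advertiser.strikes += 1
    if advertiser.strikes >= STRIKE_LIMIT:
        advertiser.banned = True
        return "removed: advertiser permanently banned"
    return f"removed: strike {advertiser.strikes} of {STRIKE_LIMIT}"

crook = Advertiser("shell-company-123")
for score in (0.90, 0.94, 0.97, 0.92, 0.96):
    print(review_ad(crook, score))
# Three of the five likely-fraudulent ads run unimpeded, and the advertiser
# is still six strikes away from losing the account.
```

Under rules like these, a single advertiser can keep most of its ads live indefinitely, because each removal counts as only one strike no matter how many near-identical ads it represents.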
The broader architecture of online advertising compounds this issue. Once an individual clicks on a scam ad, complex recommendation algorithms automatically identify them as an “engaged user” and respond by displaying similar deceptive ads in the future. This feedback loop ensures that those most vulnerable to scams — people who exhibit interest or susceptibility — are systematically targeted with even more fraudulent content. Thus, the structural design of the digital advertising ecosystem, built for engagement and profit maximization, paradoxically amplifies exposure to scams rather than preventing it.
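The dynamic can be shown with a toy simulation. The sketch below is not Meta's ranking system; it is a generic engagement-weighted ad selector with made-up categories and boost factor, included only to show how click feedback compounds.

```python
# Toy model of engagement-driven ad selection: every click on a scam ad
# boosts the weight of the "scam" category for this user. The categories,
# starting weights, and boost factor are all invented for illustration.
import random

random.seed(0)

def pick_ad(weights: dict) -> str:
    """Sample an ad category in proportion to the user's engagement weights."""
    categories = list(weights)
    return random.choices(categories, weights=[weights[c] for c in categories])[0]

weights = {"retail": 1.0, "news": 1.0, "scam": 1.0}   # neutral starting point
scam_impressions = 0

for _ in range(1000):
    if pick_ad(weights) == "scam":
        scam_impressions += 1
        # A susceptible user clicks; the selector reads the click as interest
        # and boosts the category, with no notion of whether the ad is a fraud.
        weights["scam"] *= 1.01

print(f"scam ads shown: {scam_impressions} of 1000")
# Starting from an even one-in-three split, the scam share of this user's
# feed climbs steadily: susceptibility itself becomes the targeting signal.
```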
In response to Reuters' findings, Meta spokesperson Andy Stone publicly disputed the characterization of the company's conduct. In communications shared with The Verge, Stone argued that the leaked internal documents provided only a narrow and potentially misleading snapshot of Meta's approach to combating fraud, emphasizing that the figures were preliminary, overly inclusive, and not definitive. He noted that subsequent internal reviews found a portion of the identified ads were not actual violations. Stone also pointed to the escalating sophistication of scammers' techniques, while asserting that user-submitted reports of scam ads had fallen by more than fifty percent over the previous fifteen months. These explanations nonetheless ring hollow to critics who see a multibillion-dollar corporation prioritizing profit over user protection.
The global scale of the scamming industry underscores the complexity of the problem. In regions such as Southeast Asia, transnational criminal organizations operate compounds dedicated to online fraud, often interconnected with illicit gambling enterprises. Many of the workers inside these operations are themselves victims of human trafficking, lured by false promises of legitimate employment. Once trapped, they are coerced under threat of violence into running romance and investment scams for long hours in exploitative, near-slavery conditions. Increasingly, these criminal networks leverage automation and artificial intelligence to magnify the reach and sophistication of their operations. The AI-enhanced scam ads appearing on Meta's platforms frequently incorporate deepfake videos: synthetic media depicting well-known entrepreneurs such as Elon Musk endorsing fake investment schemes, or fabricated footage of American politicians announcing nonexistent government aid. As synthetic content becomes ever more convincing and accessible, the capacity of such scams to deceive and profit will almost certainly grow.
The comparison between Meta and small watchdog organizations reveals a troubling disparity. The Tech Transparency Project, a modest nonprofit with limited resources, has demonstrated that it can identify fraudulent advertisements with relative ease by applying simple criteria: advertisements impersonating government programs, those exhibiting patterns already flagged by the Federal Trade Commission, or ads associated with previously banned accounts. If a small civil-society organization can so efficiently detect fraudulent patterns, yet Meta, one of the wealthiest and most technologically advanced corporations in existence, cannot — or will not — do the same, the failure is clearly systemic and deliberate. Meta must substantially lower the threshold for removing suspect advertisements. If one ad from a given advertiser is discovered to be fraudulent, every ad from that entity should immediately be suspended or reviewed. In addition, the company should implement verified advertiser programs, requiring ad purchasers to use authentic, trackable identities. Such a measure would constrain the use of deepfake materials and establish an accountability record that regulators and law enforcement could readily examine.
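The watchdog's criteria are simple enough to sketch in a few lines of Python. The rule lists and ad fields below are hypothetical placeholders, not the Tech Transparency Project's actual data; the point is only that pattern matching of this kind, combined with the one-strike suspension proposed above, requires no exotic technology.

```python
# Rule-based screening in the spirit of the watchdog criteria above, plus a
# one-strike advertiser suspension. All terms, patterns, and account IDs are
# hypothetical placeholders.

GOVERNMENT_TERMS = {"stimulus check", "government grant", "irs refund"}
FTC_FLAGGED_PATTERNS = {"guaranteed returns", "act now, limited spots"}
BANNED_ACCOUNT_IDS = {"acct-4471", "acct-9015"}

def is_suspect(ad: dict) -> bool:
    """Flag an ad that matches any of the three watchdog-style criteria."""
    text = ad["text"].lower()
    return (
        any(term in text for term in GOVERNMENT_TERMS)       # fake gov programs
        or any(pat in text for pat in FTC_FLAGGED_PATTERNS)  # known FTC patterns
        or ad["account_id"] in BANNED_ACCOUNT_IDS            # previously banned
    )

def screen(ads: list) -> set:
    """Return every advertiser with at least one suspect ad. Under a
    one-strike rule, all of that advertiser's ads are pulled for review."""
    return {ad["account_id"] for ad in ads if is_suspect(ad)}

ads = [
    {"account_id": "acct-0001", "text": "Claim your $1,400 stimulus check now"},
    {"account_id": "acct-0001", "text": "Cheap running shoes, free shipping"},
    {"account_id": "acct-4471", "text": "Crypto fund with guaranteed returns"},
]
print(screen(ads))  # {'acct-0001', 'acct-4471'}: suspend every ad they run
```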
At the policy level, the solution extends beyond corporate responsibility to encompass governmental regulation. Major technology platforms that knowingly generate enormous revenues from fraudulent advertising must be treated as complicit in these activities. Governments should elevate digital scam prevention to a national-level priority and position corporations like Meta at the center of that initiative. The U.S. Federal Trade Commission already possesses authority to regulate unfair or deceptive practices, including false advertising. Under that authority, regulators could mandate rigorous identity verification for advertisers, require pre-approval systems for ads before publication, and authorize independent audits of ad recommendation engines. Fines imposed for carrying scam content should be recalibrated to levels that inflict meaningful financial consequences, thus deterring future negligence. These penalties could, in part, finance a compensation fund for scam victims.
State governments have parallel tools at their disposal. Legislatures can enact their own advertiser-verification requirements, and attorneys general can pursue action under existing consumer protection statutes. Fraud and deception transcend partisan boundaries, harming citizens across ideological, economic, and generational divides. This is, unequivocally, a universal consumer-rights issue.
Finally, Meta’s historical record of ethical lapses further underscores the need for external intervention. In 2018, the company — then operating solely under the Facebook name — publicly admitted that it had failed to prevent the use of its platform to incite and coordinate acts of genocide in Myanmar. Ironically, that same nation has since become a hub for some of the illicit scam compounds whose activities now feed Meta’s own advertising revenues. The lesson is undeniable: when left unregulated, Meta’s platforms can become tools of exploitation at a global scale. If the incalculable human suffering associated with those past tragedies did not inspire meaningful reform, perhaps the billions of dollars siphoned annually from vulnerable users will finally compel lawmakers and regulators to act decisively against corporate negligence of this magnitude.
Source: https://www.theverge.com/tech/820906/meta-scam-ads-failure-remove-consequences