This initiative raises concerns about a possible resurgence of false information on Meta's platforms.
A strategy focused on engagement at the expense of veracity
The abandonment of fact-checking is part of Meta's broader strategy to stimulate engagement from content creators. By rewarding creators whose posts generate the most interactions, the company seeks to boost activity on its platforms. However, this approach could encourage some users to favor sensationalism and misleading content to maximize their earnings.
Facebook's new content monetization program is currently invitation-only, but Meta plans to make it widely available this year.
Without external oversight, viral content funded by Meta could include hoaxes and misinformation. The deactivation of fact-checkers thus risks complicating the distinction between reliable content and disinformation, fueling an already worrying dynamic.
A U-turn in the fight against disinformation
Meta had made the fight against fake news a stated priority, notably targeting for-profit hoaxes. This strategic shift marks a step backward, leaving users to judge the reliability of content for themselves.
While tools like Community Notes and community standards still exist, the question remains: will they be enough to stem the proliferation of false information, especially in an environment where the reward model favors audience size over accuracy? Critics point out that even with these safeguards, the lack of external validation leaves dangerous room for misinformation.
The disappearance of fact-checkers on Facebook shifts the responsibility for sorting information to users and automatic detection algorithms. This raises a key question: how can we maintain a reliable information space while serving the economic interests of social platforms?