Internal Facebook bug surfaces questionable platform content

The flaw exposes how fragile the social platform's 'downranking' strategy for handling misinformation can be.
4 April 2022

In May 2021, Facebook reversed its policy banning posts suggesting Covid-19 was man-made, one of many misinformation gray areas in its content moderation policies. (Photo by ANDREW CABALLERO-REYNOLDS / AFP)

Social media content that had been flagged as misleading or problematic was mistakenly prioritized in users’ Facebook feeds recently, thanks to a software bug that took six months to fix, according to tech site The Verge.

Facebook disputed the report, which The Verge published on 31 March, saying that it “vastly overstated what this bug was because ultimately it had no meaningful, long-term impact on problematic content,” according to Joe Osborne, a spokesman for parent company Meta.

But the bug was serious enough for a group of Facebook employees to draft an internal report referring to a “massive ranking failure” of content, The Verge reported. Beginning in October 2021, the employees noticed that some content that had been marked as questionable by external media organizations (members of Facebook’s third-party fact-checking program) was nevertheless being favored by the algorithm and widely distributed in users’ News Feeds.

“Unable to find the root cause, the engineers watched the surge subside a few weeks later and then flare up repeatedly until the ranking issue was fixed on March 11,” The Verge reported. But according to Osborne, the bug affected “only a very small number of views” of content.

That’s because “the overwhelming majority of posts in Feed are not eligible to be down-ranked in the first place,” Osborne explained, adding that other mechanisms designed to limit views of “harmful” content remained in place, “including other demotions, fact-checking labels and violating content removals.”

For years, Facebook has been accused of being too lax in moderating problematic content, including false rumors and conspiracy theories, while quickly stamping out content frowned upon by advertisers, such as pornography. The social giant long insisted it would not position itself as an arbiter of truth, before gradually changing its tune and, in the face of growing outcry from watchdogs and elected officials, succumbing to pressure to monitor its own platform responsibly.

AFP currently works with Facebook’s fact-checking program in more than 80 countries and 24 languages. Under the program, which started in December 2016, Facebook pays to use fact-checks from around 80 organizations, including media outlets and specialized fact-checkers, on its platform, on WhatsApp and on Instagram.

Content rated “false” is downgraded in news feeds so fewer people will see it. If someone tries to share that post, they are presented with an article explaining why it is misleading. Those who still choose to share the post will receive a notification with a link to the article. No posts are taken down. Fact-checkers are free to choose how and what they wish to investigate.
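As a rough illustration of that flow, here is a hypothetical sketch, not Facebook's actual code; all of the names, fields and the demotion factor below are invented for the example.

```python
# Hypothetical sketch of the fact-check flow described above; none of these
# names or values correspond to Facebook's real systems.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Post:
    post_id: str
    rating: Optional[str] = None          # e.g. "false", assigned by a fact-checker
    fact_check_url: Optional[str] = None  # article explaining the rating


def feed_weight(post: Post, base_weight: float) -> float:
    """Rated-false posts are downranked so fewer people see them; nothing is removed."""
    if post.rating == "false":
        return base_weight * 0.2  # illustrative demotion factor, not a real value
    return base_weight


def share(post: Post, user: str) -> None:
    """Sharing a rated post shows a warning first, then a follow-up notification."""
    if post.rating == "false":
        print(f"[{user}] shown interstitial explaining the rating: {post.fact_check_url}")
    print(f"[{user}] shared post {post.post_id}")  # the share is not blocked
    if post.rating == "false":
        print(f"[{user}] notified with a link to the fact-check: {post.fact_check_url}")


if __name__ == "__main__":
    post = Post("p1", rating="false", fact_check_url="https://example.org/fact-check")
    print(feed_weight(post, base_weight=1.0))  # 0.2: demoted, but still eligible to appear
    share(post, user="alice")
```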

Downranking is used to suppress not only what Facebook considers “borderline” content, but also videos and other posts that its AI systems have flagged as likely violations pending further review by a human moderator. Since last year, Meta has been touting both its plan to downrank all political content (part of CEO Mark Zuckerberg’s intent to return Facebook to more lighthearted fare) and the capabilities of its AI systems for content moderation at scale, which the company says have dramatically improved at identifying troubling content such as hate speech.
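A minimal sketch of how such score-based demotion could work is below; the labels and multipliers are assumptions made up for illustration, not Meta's actual categories or values. In a pipeline like this, a bug that skipped the multiplication step would let flagged posts compete at full strength, which is one way such content could end up being distributed more widely than intended.

```python
# Hypothetical score-based downranking; the labels and multipliers are
# illustrative assumptions, not Meta's actual system.
DEMOTIONS = {
    "borderline": 0.5,             # close to, but not over, a policy line
    "awaiting_human_review": 0.3,  # AI-flagged as a likely violation, pending a moderator
    "rated_false": 0.2,            # rated by a third-party fact-checker
}


def ranked_feed(candidates: list) -> list:
    """Order candidate posts by base score times any applicable demotions."""
    def score(post: dict) -> float:
        s = post["base_score"]
        for label in post.get("labels", []):
            s *= DEMOTIONS.get(label, 1.0)  # most posts carry no demotion labels
        return s

    return sorted(candidates, key=score, reverse=True)


if __name__ == "__main__":
    feed = ranked_feed([
        {"id": "a", "base_score": 1.0},
        {"id": "b", "base_score": 1.2, "labels": ["rated_false"]},
        {"id": "c", "base_score": 0.8, "labels": ["borderline"]},
    ])
    print([p["id"] for p in feed])  # ['a', 'c', 'b']: demoted posts sink, but remain in the feed
```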

Internally, although claims about its seriousness were dismissed by Facebook’s Osborne, the bug was treated as serious enough to warrant a level-one SEV, or site event, a label reserved for high-priority technical crises, such as Russia’s ongoing block of Facebook and Instagram.