Facebook Is Shrinking Fake News Stories Because Nothing Else Has Worked

For what feels like the umpteenth time, Facebook is introducing another new plan to fight the scourge of fake news that populates the platform, this time by shrinking the size of links to bogus claims and hoaxes. It probably won’t work because people are just the worst.

Facebook unveiled its latest plan Friday at its Fighting Abuse @Scale event in San Francisco. According to TechCrunch, the company will attempt to draw less attention to fake news stories by giving them a smaller billing in the newsfeed. It will also populate a list of fact-checking articles that debunk the phony reports.

The system will supposedly work like this: When a link is shared on Facebook, machine learning algorithms will scan the article for any signs of false information. If it senses that a story may be fake, the system sends it off to third-party fact checkers to look into the validity of the article.

If fact checkers determine a story is fake, they flag it for Facebook. The social network then shrinks the link preview in the newsfeed so it won’t draw the eye the way a standard news story is supposed to. A story from a trusted publisher will appear 10 times larger, and its headline will get its own space, according to TechCrunch.
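
Based on TechCrunch’s description, the flow boils down to a classify, escalate, demote pipeline. Here’s a minimal Python sketch of that logic; every name, threshold, and stub below is hypothetical, since Facebook hasn’t published its internals:

from dataclasses import dataclass, field
from typing import List

FAKE_SCORE_THRESHOLD = 0.8  # assumed cutoff for escalating to fact checkers

@dataclass
class Preview:
    url: str
    headline: str
    scale: float = 1.0                    # 1.0 = standard newsfeed size
    fact_checks: List[str] = field(default_factory=list)

def classify(url: str) -> float:
    """Stub for the ML model that scans a shared article for signs
    of false information and returns a fake-news probability."""
    return 0.9 if "hoax" in url else 0.1  # placeholder heuristic

def fact_check(url: str) -> List[str]:
    """Stub for the third-party review step: returns links to
    debunking articles if the story is judged fake, else an empty list."""
    return ["https://example.com/debunk"] if "hoax" in url else []

def build_preview(url: str, headline: str) -> Preview:
    preview = Preview(url, headline)
    if classify(url) >= FAKE_SCORE_THRESHOLD:
        debunks = fact_check(url)
        if debunks:
            # Flagged stories get a preview a fraction of the size a
            # trusted publisher's story gets, with debunks attached.
            preview.scale = 0.1
            preview.fact_checks = debunks
    return preview

print(build_preview("https://example.com/hoax-story", "Shocking claim!"))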

The new measures at least begin to break the visual uniformity Facebook imposes on everything shared on its platform. That uniformity creates a consistent look, but it also lends legitimacy to fake news stories by making them nearly indistinguishable from real ones in the newsfeed.

Matt Klinman, creator of the app Pitch, offered one of the more cogent criticisms of Facebook, calling the platform “the great de-contextualizer” in an interview with Splitsider earlier this year. While his comments focused on how Facebook has damaged the business of comedy in particular, the critique applies to news as well: stripped of its source’s context, every post in the feed looks the same.

The plan is also basically the exact opposite of Facebook’s starting point for fighting off fake news stories. The company initially tried slapping a big, red warning label on debunked articles in an attempt to warn people not to read them. That made people who believed the stories indignant and resulted in the articles being shared even more in defiance of Facebook’s system.

While this latest effort seems to be a significantly better approach than anything else Facebook has tried—mostly tweaks to its newsfeed algorithm that no one fully understands—it still probably won’t accomplish much.

Firstly, it does next to nothing to address memes and videos shared on the platform. Articles can be fact-checked, and Facebook can build an automated system to surface factual information to counter a fake news story. Thus far, it has not been able to do the same for the endless number of memes that anyone with a conservative family has undoubtedly seen sprinkled about their newsfeed.

Secondly, people just seem to like interacting with fake news because it gets a rise out of them. A study published in Science earlier this year found that falsehoods spread online with an ease that real news stories don’t. Researchers reported that hoaxes and rumors reach more people and spread much faster than stories from reliable sources, primarily because human nature makes us susceptible to fear, disgust, and surprise—all emotions that fake news stories are often crafted to evoke.

As the study notes, things like reactions and comments—both measures Facebook uses to determine if people are having “meaningful interactions” with content—only incentivize people to participate and share stories, so perhaps Facebook’s next step is to hide the number of reactions or comments a fake story has received. But even that won’t solve the real problem: people.
