San Francisco: Facebook has said it has put in place a three-pronged strategy to stop false news from spreading on its platform: removing accounts and content that violate its policies, reducing the distribution of inauthentic content, and informing people by giving them more context on the posts they see.
Another part of its strategy in some countries is partnering with third-party fact-checkers to review and rate the accuracy of articles and posts on Facebook, Tessa Lyons, a Facebook product manager on News Feed focused on false news, said in a statement on Thursday.
The social media giant is facing criticism for its role in enabling political manipulation in several countries around the world. It has also come under the scanner for allegedly fuelling ethnic conflict owing to its failure to stop the deluge of hate-filled posts against the disenfranchised Rohingya Muslim minority in Myanmar.
“False news is bad for people and bad for Facebook. We’re making significant investments to stop it from spreading and to promote high-quality journalism and news literacy,” Lyons said.
Facebook CEO Mark Zuckerberg on Tuesday told European Parliament leaders that the social networking giant is trying to plug loopholes across its services, including curbing fake news and political interference on its platform, ahead of upcoming elections globally, including in India.
Lyons said Facebook’s three-pronged strategy roots out the bad actors that frequently spread fake stories.
“It dramatically decreases the reach of those stories. And it helps people stay informed without stifling public discourse,” Lyons added.
Although false news itself does not violate Facebook’s Community Standards, it often violates the social network’s policies in other categories, such as spam, hate speech or fake accounts, and such content is removed.
“For example, if we find a Facebook Page pretending to be run by Americans that’s actually operating out of Macedonia, that violates our requirement that people use their real identities and not impersonate others. So we’ll take down that whole Page, immediately eliminating any posts they made that might have been false,” Lyons explained.
Apart from this, Facebook is also using machine learning to help its teams detect fraud and enforce its policies against spam.
“We now block millions of fake accounts every day when they try to register,” Lyons added.
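Facebook has not disclosed how these systems work internally. As a purely illustrative sketch of the general approach it describes — scoring registration attempts and blocking the riskiest ones — the snippet below trains a toy classifier on hypothetical signals; the feature names, data and threshold are assumptions for illustration, not Facebook’s actual signals or models.

```python
# Illustrative sketch only: a toy registration-time risk classifier.
# Features, data and threshold are hypothetical, not Facebook's systems.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-signup features:
# [signups_from_same_ip_last_hour, email_domain_is_disposable, profile_fields_filled_fraction]
X_real = np.column_stack([
    rng.poisson(2, 1000),              # legitimate users: few signups per IP
    rng.integers(0, 2, 1000) * 0.1,    # rarely use disposable email domains
    rng.uniform(0.5, 1.0, 1000),       # mostly complete profiles
])
X_fake = np.column_stack([
    rng.poisson(40, 200),              # fake accounts: bulk signups per IP
    rng.integers(0, 2, 200),           # often use disposable email domains
    rng.uniform(0.0, 0.3, 200),        # mostly empty profiles
])
X = np.vstack([X_real, X_fake])
y = np.concatenate([np.zeros(1000), np.ones(200)])  # 1 = fake

model = LogisticRegression().fit(X, y)

def block_at_registration(features, threshold=0.9):
    """Return True if the signup attempt looks risky enough to block."""
    risk = model.predict_proba([features])[0, 1]
    return risk >= threshold

# Example: 50 recent signups from the same IP, disposable email, empty profile.
print(block_at_registration([50, 1, 0.05]))  # -> True in this toy model
```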
A lot of the misinformation that spreads on Facebook is financially motivated, much like email spam in the 90s, the social network said.
If spammers can get enough people to click on fake stories and visit their sites, they will make money off the ads they show.
“We’re figuring out spammers’ common tactics and reducing the distribution of those kinds of stories in News Feed. We’ve started penalizing clickbait, links shared more frequently by spammers, and links to low-quality web pages, also known as ‘ad farms’,” Lyons said.
“We also take action against entire Pages and websites that repeatedly share false news, reducing their overall News Feed distribution,” Lyons said.
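Facebook does not publish the formula behind these penalties. The sketch below only illustrates the general idea of down-ranking, where a story’s base ranking score is scaled down as spam-like signals accumulate; the signal names and penalty weights are assumptions made for this example.

```python
# Hypothetical sketch of News Feed-style down-ranking; signal names and
# penalty weights are illustrative assumptions, not Facebook's formula.
from dataclasses import dataclass

@dataclass
class StorySignals:
    base_score: float            # relevance score from the main ranking model
    clickbait_prob: float        # 0..1, output of a clickbait classifier
    rated_false: bool            # flagged false by third-party fact-checkers
    low_quality_landing: bool    # link points to an "ad farm" style page

def adjusted_score(s: StorySignals) -> float:
    """Scale the base ranking score down as spam-like signals accumulate."""
    penalty = 1.0
    if s.clickbait_prob > 0.8:
        penalty *= 0.5           # demote likely clickbait
    if s.rated_false:
        penalty *= 0.2           # sharply reduce reach of rated-false stories
    if s.low_quality_landing:
        penalty *= 0.5           # demote links to low-quality "ad farm" pages
    return s.base_score * penalty

# Example: a clickbait story rated false keeps only a fraction of its score.
print(adjusted_score(StorySignals(10.0, 0.95, True, False)))  # -> 1.0
```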
Facebook said it does not want to make money off of misinformation or help those who create it profit, and so such publishers are not allowed to run ads or use its monetisation features like Instant Articles.
—IANS