
Facebook announces 7-point plan to battle “misinformation”

November 22, 2016: Facebook and Google have both been under pressure in the past several weeks to curb the flow of fake news across their networks. Late last week, Facebook CEO Mark Zuckerberg announced a 7-point plan to battle what he called “misinformation” on Facebook.

While Facebook doesn’t formally classify “misinformation” as a post type specifically prohibited by its Community Standards, such posts do appear to run afoul of its Terms of Service, which forbid users from “using Facebook to do anything unlawful, misleading, malicious, or discriminatory” (emphasis on “misleading” added).

Mr. Zuckerberg’s 7-point plan includes:

1. Stronger detection of misinformation prior to user exposure

“The most important thing we can do is improve our ability to classify misinformation,” wrote Mr. Zuckerberg. “This means better technical systems to detect what people will flag as false before they do it themselves.”

It’s not clear what technical means Facebook will employ to better classify misinformation. But it’s possible that Facebook will take an approach similar to Google’s: algorithmically evaluating and scoring the trustworthiness of web pages by extracting their claims and checking them against a database of 2.8 billion known “facts.” According to a Google technical paper published in 2015, this approach successfully separated fact from fiction in a large number of instances. Currently, Bing, Yandex, Yahoo, and Baidu maintain similar databases, and so does LinkedIn, whose “Knowledge Graph” consists of:

a large knowledge base built upon “entities” on LinkedIn, such as members, jobs, titles, skills, companies, geographical locations, schools, etc. These entities and the relationships among them form the ontology of the professional world and are used by LinkedIn to enhance its recommender systems, search, monetization and consumer products, and business and consumer analytics.

Facebook could build a similar database of its own, drawing freely from its rich behavioral and identity data stores, from the web at large, and from data provided by its third-party data partners. Obviously, it remains to be seen whether a purely algorithmic approach will be effective enough to functionally substitute for human fact-checkers.
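
As a rough illustration of what such an approach might look like, the sketch below scores a post’s extracted claims against a small knowledge base of (subject, predicate, object) triples, in the spirit of the knowledge-based trust idea described above. The triple store, claim format, and scoring rule are assumptions made for the example; they are not Facebook’s or Google’s actual systems.

```python
# Illustrative only: score claims against a toy knowledge base of
# (subject, predicate, object) triples. A real system would extract these
# triples from post text with entity linking and relation extraction.

KNOWN_FACTS = {
    ("paris", "capital_of", "france"),
    ("earth", "shape", "oblate spheroid"),
}

def trust_score(claims, known_facts=KNOWN_FACTS):
    """Return the fraction of checkable claims that agree with the knowledge base.

    Claims whose (subject, predicate) pair has no reference fact are skipped
    rather than counted against the post; returns None if nothing is checkable.
    """
    reference = {(s, p): o for s, p, o in known_facts}
    checked = agreed = 0
    for subject, predicate, obj in claims:
        expected = reference.get((subject, predicate))
        if expected is None:
            continue  # no reference fact, so this claim can't be adjudicated
        checked += 1
        agreed += (obj == expected)
    return agreed / checked if checked else None

# A post asserting one true and one false claim scores 0.5:
print(trust_score([("paris", "capital_of", "france"),
                   ("earth", "shape", "flat")]))  # 0.5
```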

2. Easier reporting and flagging

“Making it much easier for people to report stories as fake will help us catch more misinformation faster,” wrote Mr. Zuckerberg. Others, notably journalist Jeff Jarvis, have noted that this feature already exists on Facebook but is “buried so deep in a menu maze that it’s impossible to find.” By making the flagging feature more visually prominent (perhaps even through a “flag this post as fake” icon), Facebook could better leverage the fact-hunting, myth-dispelling potential of its vast user base.
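
To show how user flags might feed a detection pipeline, here is a minimal sketch of flag aggregation that counts distinct reporters per story and escalates once a threshold is crossed. The threshold and data model are invented for the example; Facebook has not published how it weighs user reports.

```python
from collections import defaultdict

FLAG_THRESHOLD = 50        # assumed number of distinct reporters before escalation
_flags = defaultdict(set)  # story_id -> set of user_ids that flagged it

def flag_story(story_id, user_id):
    """Record a 'report as fake' flag; return True when the story should be escalated."""
    _flags[story_id].add(user_id)  # a set ignores repeat flags from the same user
    return len(_flags[story_id]) >= FLAG_THRESHOLD
```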

3. Expanded third party verification

“There are many respected fact checking organizations,” wrote Mr. Zuckerberg, “and while we have reached out to some, we plan to learn from many more.”

Currently, some 75 independent fact-checking services exist across the globe, according to Duke University. Many of them operate online, and it would be relatively easy for Facebook to establish real-time interactive links with them. In an open letter to Mr. Zuckerberg published last week, the International Fact-Checking Network noted that “many of our organizations already provide training in fact-checking to media organizations, universities and the general public. We would be glad to engage with you about how your editors could spot and debunk fake claims.”
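
Such a “real-time interactive link” could be as simple as querying a fact-checker’s service for verdicts on a claim before a story is widely distributed. The endpoint, query parameter, and response shape below are entirely hypothetical, used only to show the general pattern.

```python
import json
import urllib.parse
import urllib.request

def lookup_claim(claim_text, endpoint="https://factchecker.example/api/claims"):
    """Query a (hypothetical) fact-checking service and return its published ratings."""
    url = endpoint + "?" + urllib.parse.urlencode({"q": claim_text})
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    # Assumed response shape: {"claims": [{"text": ..., "rating": "false"}, ...]}
    return [claim.get("rating") for claim in payload.get("claims", [])]
```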

4. Warning labels

Mr. Zuckerberg noted that “we are exploring labeling stories that have been flagged as false by third parties or our community, and showing warnings when people read or share them.”

Facebook has experimented with labeling obviously “non-factual” stories before, most notably back in 2014, when it ran a test that flagged all posts from the satirical site TheOnion.com by prepending “[SATIRE]” to each post’s headline.
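
A minimal sketch of that labeling idea, assuming a per-story “disputed” flag set by third-party checkers or the community, might look like this (the field name and label text are assumptions):

```python
def label_headline(story):
    """Prefix the headline with a warning label when the story has been flagged."""
    if story.get("disputed"):
        return "[DISPUTED] " + story["headline"]
    return story["headline"]

print(label_headline({"headline": "Pope endorses candidate", "disputed": True}))
# [DISPUTED] Pope endorses candidate
```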

5. Improve quality of “Related Articles”

“We are raising the bar for stories that appear in related articles under links in News Feed,” wrote Mr. Zuckerberg.

Facebook introduced the “Related Articles” feature in 2013 as a way to highlight high-quality stories from established publications. Because “Related Articles” are generated by algorithms that “filter and rank based on the percent of related words, number of likes, click through rate, and domain related signals,” Facebook could presumably “raise the bar” by tuning its algorithms to pay more attention to domain signals establishing the general trustworthiness of sources.
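
One way to read “raising the bar” is as re-weighting the signals the feature already uses. The sketch below combines the four quoted signals into a single score and gives domain trustworthiness the largest weight; the weights and the 0-to-1 signal values are invented for illustration.

```python
def related_article_score(related_word_pct, likes, ctr, domain_trust,
                          weights=(0.2, 0.1, 0.2, 0.5)):
    """All signals normalized to 0-1; a heavy weight on domain_trust demotes
    articles from low-trust domains even when engagement is high."""
    w_words, w_likes, w_ctr, w_domain = weights
    return (w_words * related_word_pct + w_likes * likes +
            w_ctr * ctr + w_domain * domain_trust)

# A heavily engaged story from a low-trust domain now scores below a
# moderately engaging story from a trusted one:
print(related_article_score(0.9, 1.0, 0.8, 0.1))  # ~0.49
print(related_article_score(0.6, 0.3, 0.4, 0.9))  # ~0.68
```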

6. Disrupting fake news economics

“A lot of misinformation is driven by financially motivated spam,” wrote Mr. Zuckerberg. “We’re looking into disrupting the economics with ads policies like the one we announced earlier this week, and better ad farm detection.”

While the size and rate of growth of the “financially motivated spam” industry are unknown, last week Facebook revised its Audience Network Policy to prohibit “illegal, misleading, or deceptive” ads across its advertiser network.

But tightening up its ad policies will of course do nothing to control fake information spread organically through fake accounts, which are extensively used by the ad farms mentioned by Mr. Zuckerberg. And Facebook has acknowledged in its public SEC filings that fake accounts remain a nettlesome problem.

“…there may be individuals who maintain one or more Facebook accounts in violation of our terms of service. We estimate, for example, that “duplicate” accounts (an account that a user maintains in addition to his or her principal account) may have represented less than 5% of our worldwide MAUs (Monthly Active Users) in 2015. We also seek to identify “false” accounts, which we divide into two categories: (1) user-misclassified accounts, where users have created personal profiles for a business, organization, or non-human entity such as a pet… and (2) undesirable accounts, which represent user profiles that we determine are intended to be used for purposes that violate our terms of service, such as spamming. In 2015, for example, we estimate user-misclassified and undesirable accounts may have represented less than 2% of our worldwide MAUs.”

Even if “undesirable accounts” constitute only 1 percent of Facebook’s MAUs, this would represent about 17.9 million bad user accounts, each of which is capable of transmitting a lot of misinformation through Facebook’s ecosystem.
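
For reference, the arithmetic behind those figures, using the roughly 1.79 billion monthly active users Facebook reported for the third quarter of 2016 (the base implied by the 17.9 million estimate):

```python
# Back-of-the-envelope arithmetic from the SEC-filing percentages quoted above.
MAU = 1.79e9  # Facebook monthly active users, Q3 2016

duplicate_upper = 0.05 * MAU  # "less than 5%" duplicate accounts
false_upper     = 0.02 * MAU  # "less than 2%" misclassified + undesirable accounts
undesirable_est = 0.01 * MAU  # the 1% scenario used in the text

print(f"Duplicate accounts (upper bound):   {duplicate_upper / 1e6:.1f} million")
print(f"False accounts (upper bound):       {false_upper / 1e6:.1f} million")
print(f"Undesirable accounts at 1% of MAUs: {undesirable_est / 1e6:.1f} million")  # 17.9 million
```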

7. Listening

“We will continue to work with journalists and others in the news industry to get their input, in particular, to better understand their fact checking systems and learn from them,” wrote Mr. Zuckerberg.

Will it work?

In a media environment in which 62 percent of the American public receive their news from social media networks, Facebook is being pressured to revise its self-assessment as a “neutral” platform with minimal responsibility for the content that is posted and consumed there. At the same time, however, Facebook finds itself between a rock and a hard place, because if it begins to behave too much like a traditional “publisher” (in the sense of exerting active oversight over and otherwise controlling the content its pages contain), it may lose the benefit of the broad protections provided by the Communications Decency Act of 1996.

Section 230 of the CDA provides broad immunity for sites such as Facebook, stating that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Without it, many of the sites we take for granted on the Web, including Facebook, LinkedIn, Yelp.com, and hundreds of others, would become legally liable for the conduct of all the users accessing these systems. As the Electronic Frontier Foundation notes,

This legal and policy framework has allowed for YouTube and Vimeo users to upload their own videos, Amazon and Yelp to offer countless user reviews, craigslist to host classified ads, and Facebook and Twitter to offer social networking to hundreds of millions of Internet users. Given the sheer size of user-generated websites (for example, Facebook alone has more than 1 billion users, and YouTube users upload 100 hours of video every minute), it would be infeasible for online intermediaries to prevent objectionable content from cropping up on their site. Rather than face potential liability for their users’ actions, most would likely not host any user content at all or would need to protect themselves by being actively engaged in censoring what we say, what we see, and what we do online. 

This means that Facebook, even as it finds itself driven to impose further restrictions on the content shared on its network, must walk a very fine line, and will, whenever possible, rely on purely technical means rather than human editors to preserve its protected legal status as an “interactive computer service,” not a publisher.
