Three Things Facebook Could Do to Suck Less

Dave Troy
Oct 26, 2019 · 7 min read


Are you listening, Zuck?

It’s difficult to go through a day without hearing news about Facebook, Instagram, or Mark Zuckerberg. Our tech platforms have become integral to how people communicate, get news, and experience life. So everyone has an opinion about how Facebook might change its design or adopt more ethical values.

I’ve spent many years thinking about this set of problems. I’ve been researching social media networks, data, and design effects since 2007, and I was the first developer on the Twitter API. I’ve studied data from Facebook and LinkedIn as well. I’ve spent the last few years researching disinformation campaigns, and I’ve served as a moderator of a civics-focused Facebook group in Baltimore. I’ve seen some things.

Here are three changes that I think Facebook could make to immediately improve its platform and minimize its negative effects on society.

1. Require Mastheads and a Trust Model

Newspapers adopted the concept of the “masthead” (or “imprint” in British English) to give readers a simple, organized way to assess a paper’s provenance. Historically, newspapers were not known for objectivity or for espousing the “view from nowhere” that we expect today; they were typically very biased, and a masthead helped readers understand who they were dealing with.

Old Tech: Seattle Times masthead from 1981.

There is no reason why every Facebook Page, Group, Event, and other published resource should not have an analogous construct. For purposes of discussion, I propose a simple trust model that publishers could use to establish their identity (a rough code sketch follows the tier descriptions):

☆☆☆☆ Zero Stars: Not Trusted or Verified

Zero Stars means that there is no basic masthead information available. This should be discouraged by the platform. One way to do that is to algorithmically decline to amplify these items. Note that this does not silence anyone; it just means the platform can decline to aid in amplification.

★☆☆☆ One Star: Basic Masthead is Complete but Unverified

One star indicates that basic information about the publisher of a piece of content has been supplied: the name of the person or company publishing it, street address, phone number, country of origin, and any other information helpful in identifying the source of the content. This content could be minimally amplified by algorithms.

★★☆☆ Two Stars: Masthead is Complete and Verified by Facebook

The publisher has attached the content to an identity that has been verified by Facebook. For reasons of privacy, the publisher may opt to limit information like address or phone number, even though Facebook has verified it. However, any corresponding domain names should be listed and also verified by Facebook. Verification methods might include a call or text (for phone numbers), a postcard (for mailing addresses), and email, DNS, or HTTP verification (for domain names). Two stars should be the minimum threshold for “normal” levels of algorithmic distribution.

★★★☆ Three Stars: Masthead Verified by Facebook + All Info is Public

This would be the standard expected of businesses publishing on the platform. This rating requires basic company contact information: mailing address, phone number, and the person(s) involved in the company. This information is verified by Facebook and made fully public, as there would not be any privacy concerns.

★★★★ Four Stars: Maximum Transparency

We could debate what could go into this tier, but this rating would be reserved for publishers that have taken every available step to provide all relevant contact and provenance information and have it verified by Facebook on an ongoing basis. This should be treated like a Michelin star, and outlets that fail to maintain stringent standards of transparency and continued verification could lose stars. Four Star ratings might also be determined in part by measurements of information quality. Four Star publishers could achieve the highest algorithmic preference.
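To make the model concrete, here is a rough sketch of how the tiers could be encoded and tied to distribution. The field names, tier logic, and amplification multipliers are my own illustrative assumptions, not anything Facebook actually implements:

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional, Tuple


class TrustTier(IntEnum):
    """Masthead trust tiers from the model above."""
    ZERO_STARS = 0   # no masthead information
    ONE_STAR = 1     # masthead complete but unverified
    TWO_STARS = 2    # masthead complete and verified by the platform
    THREE_STARS = 3  # verified and fully public
    FOUR_STARS = 4   # maximum transparency, continuously re-verified


# Hypothetical amplification multipliers: how much the ranking algorithm
# may boost content from each tier. The exact numbers are invented here;
# the point is only that distribution scales with demonstrated trust.
AMPLIFICATION = {
    TrustTier.ZERO_STARS: 0.0,   # never algorithmically amplified
    TrustTier.ONE_STAR: 0.25,    # minimal amplification
    TrustTier.TWO_STARS: 1.0,    # "normal" baseline distribution
    TrustTier.THREE_STARS: 1.5,
    TrustTier.FOUR_STARS: 2.0,   # highest algorithmic preference
}


@dataclass
class Masthead:
    """The provenance fields a publisher would supply."""
    publisher_name: Optional[str] = None
    street_address: Optional[str] = None
    phone_number: Optional[str] = None
    country: Optional[str] = None
    domains: Tuple[str, ...] = ()
    verified_by_platform: bool = False
    fully_public: bool = False
    continuously_reverified: bool = False

    def tier(self) -> TrustTier:
        has_basics = all([self.publisher_name, self.street_address,
                          self.phone_number, self.country])
        if not has_basics:
            return TrustTier.ZERO_STARS
        if not self.verified_by_platform:
            return TrustTier.ONE_STAR
        if not self.fully_public:
            return TrustTier.TWO_STARS
        if not self.continuously_reverified:
            return TrustTier.THREE_STARS
        return TrustTier.FOUR_STARS
```

Under a scheme like this, a page whose only point of contact is a Gmail address would sit at zero or one star and receive little or no algorithmic boost, no matter how large its audience.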

This matters because Facebook has been colonized by entities looking to build targeted audiences. Audience building is the first phase of disinformation operations; if you can reach a group of people and know some things about their psychology, you can provoke responses from them with certain stimuli.

Here are two innocuous-looking pages that are building large audiences but whose publishers are unidentifiable. DIYCraftsTV (established December 2017) has an audience of 17 million. TasteLifeOfficial (established March 2017) has an audience of 20 million. Neither page is attached to any specific website, and the only point of contact listed for either is a Gmail account. I don’t know whether these pages are dark information operations, but there is no way for a consumer of their content to know the origin or intent behind these outlets.

Yes, ostensibly they are publishing inconsequential “craft” and “food” videos, but each of those audiences represents a fairly targeted segment of the population that can then be repurposed for other kinds of advertising from other publishers. Once you have nearly 40 million people aggregated, there are a variety of options for manipulating and transforming those audiences for many purposes: political, cultural, or otherwise.

Facebook has generated controversy because it opted to include Breitbart News in its new “High Quality News” tab. I think that’s a bad call, but more importantly I think this is a wrong-headed strategy. Facebook needs to develop ways to assess trust and inform algorithms across the platform, and it’s likely that these manifold, massive, unattributed audience-gathering operations are at least as big a threat to the information landscape.

Gene Wilder died in 2016, but Facebook makes it look fresh in this new post!

2. Mark (and Deprioritize) Old News

This is less complicated. How sick are you of seeing celebrity death reports that are five years out of date? Thanks, Cousin Jim, but Gene Wilder died in 2016. Roger Moore died in 2017. Lena Horne died in 2010. Yet stories about these and others repeatedly make the rounds as “news” on Facebook because the platform does NOTHING to alert anyone that they are not current.

Simple fix. First, alert the poster that the story is old, and confirm that they want to post it. Second, add a tag (like the “BREAKING” tag FB loves so much) to say “FROM 2010.”

News stories are typically published with metadata that includes the publication date, so it would be easy to start requiring it, and AI could be used to extract date information in many other cases.
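As a sketch of how little machinery this requires, the snippet below reads the publication date from the Open Graph metadata that most news CMSes already emit and produces the kind of “FROM 2016” label described above. The 30-day window matches the threshold suggested later in this section; the library choice and function names are my own assumptions:

```python
from datetime import datetime, timezone
from typing import Optional

from bs4 import BeautifulSoup  # assumes the BeautifulSoup HTML parser is available

STALE_AFTER_DAYS = 30  # staleness window suggested in the text


def published_date(html: str) -> Optional[datetime]:
    """Pull the publication date from standard article metadata, if present.

    Most news CMSes emit an 'article:published_time' Open Graph tag;
    ML-based date extraction for everything else is left out of this sketch.
    """
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("meta", property="article:published_time")
    if tag and tag.get("content"):
        try:
            return datetime.fromisoformat(tag["content"])
        except ValueError:
            return None
    return None


def staleness_label(published: datetime,
                    now: Optional[datetime] = None) -> Optional[str]:
    """Return a 'FROM YYYY' style label when a story is older than the window."""
    now = now or datetime.now(timezone.utc)
    if published.tzinfo is None:
        published = published.replace(tzinfo=timezone.utc)
    if (now - published).days > STALE_AFTER_DAYS:
        return f"FROM {published.year}"
    return None
```

The same signal could feed the dampening described below: the older the story, the less algorithmic distribution it gets.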

This also matters for posts about the current political situation. In the ongoing Trump saga, it is common to see posts about some news story from 2017 that the poster believes is current. It’s the same cast of characters coming up again and again, so the confusion is understandable. But anything older than 30 days should be clearly labeled with its original publication date, and it should be easy to inspect any content for more detailed date information.

Old news stories should probably be algorithmically dampened and distributed less than others. Continuing to allow old news to flood the platform distracts from better information, erodes trust in the platform, and can distort people’s perceptions of important current news stories.

3. Allow Reporting of Disinformation Campaigns

There is actually no way to report a disinformation campaign to Facebook. Yeah, you read that right. You can report something as “false news,” but that’s a very narrow part of what might constitute a disinformation campaign, and is actually at odds with Facebook’s own internal policies.

Facebook has come to define its disinformation-countering efforts in terms of “coordinated inauthentic behavior.” But there is no way to report that. For example, if a group of bots, trolls, and real-life political actors are working together to advance a particular operation, they may well promote content that is “true” but is still part of “coordinated inauthentic behavior.”

Recently in Baltimore there were some events that could only be described as coordinated inauthentic behavior. The organizers of those events used Facebook to help publicize them. Their posts were not “false news” — the events were in fact happening and they were organized by the people who claimed to be organizing them.

If you try to report such a thing to Facebook, however, none of the top-level report options apply. If you select “Something Else,” you can choose from:

Fraud or Scam, Mocking Victims, Bullying, Child Abuse, Animal Abuse, Sexual Activity, Suicide or Self-Injury, Hate Speech, Promoting Drug Use, Non-Consensual Intimate Images, Sexual Exploitation, Harassment, Unauthorized Sales, Violence, Sharing Private Images.

And indeed, none of those apply. Facebook simply provides no way for users to report the coordinated inauthentic behavior that it says is the basis for its content moderation policies. Dealing with this behavior remains a complicated problem that requires a lot of information and hard judgment calls, and at least the company is doing something.
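To illustrate how small the fix could be on the reporting side, here is a hypothetical sketch: the existing report reasons plus a coordinated-inauthentic-behavior option that routes to a dedicated investigations queue rather than the per-item moderation queue. The category and queue names are invented for illustration and do not reflect Facebook’s actual internals:

```python
from dataclasses import dataclass
from enum import Enum, auto


class ReportReason(Enum):
    """Roughly the 'Something Else' options listed above, plus the missing one."""
    FRAUD_OR_SCAM = auto()
    HATE_SPEECH = auto()
    HARASSMENT = auto()
    FALSE_NEWS = auto()
    # ...the rest of the existing categories...
    COORDINATED_INAUTHENTIC_BEHAVIOR = auto()  # the proposed addition


@dataclass
class UserReport:
    content_id: str
    reason: ReportReason
    note: str = ""  # free-text context, e.g. links to related pages or events


def route_report(report: UserReport) -> str:
    """Hypothetical routing: CIB reports go to human investigators, not the
    standard per-item moderation queue, because the signal is coordination
    across many items, not any single post being false."""
    if report.reason is ReportReason.COORDINATED_INAUTHENTIC_BEHAVIOR:
        return "integrity-investigations-queue"
    return "standard-moderation-queue"
```

The design point is that a report of coordinated inauthentic behavior is evidence about a network of accounts and events, so it needs reviewers with a network-level view rather than someone judging one post in isolation.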

However, I’m not convinced they are approaching this set of problems in good faith, and it is easy to see how it would be tempting to mete out a quarterly drip of “takedown news” as a sign that the company was doing something (instead of nothing) about its disinformation colonization problem.

If information about “coordinated inauthentic behavior” can’t flow upward from end users to staff, what are the odds that it is flowing upward from staff to management? Slim to none, I’d guess. Based on what we’ve seen, Facebook’s executive leadership seems to strive not to be told bad news and, when it does learn of it, to deny and deflect. I’ve made several of these ideas available to appropriate Facebook staffers, and so far they haven’t gone anywhere.

There is a lot the platform could do to be a force for good in the world. Why the company can’t pursue even the simplest changes is hard to fathom, and suggests a set of priorities that are misaligned with those of society as a whole. I hope someone at Facebook reads this and takes it seriously, because our democracies depend on being able to have a sane information landscape. And these basic ideas are just the table stakes.

David Troy is an entrepreneur and disinformation researcher based in Baltimore, Maryland.
