Facebook Whistleblower Frances Haugen Speaks Out On Reforming Social Media

Frances Haugen, the former Meta Platforms (then Facebook) product manager whose leaks about how the company’s platforms amplify hate and spread disinformation shook the tech giant, spoke about social media reform at the South by Southwest (SXSW) conference.

Haugen joined Facebook in 2019 after stints at Google and Pinterest. A decade ago, she was diagnosed with celiac disease, a long-term autoimmune disorder, and in 2014 a blood clot in her thigh forced her into a critical care unit. She hired a family friend to help her with day-to-day tasks. Their friendship deteriorated after that friend fell prey to conspiracy theories on online forums claiming that dark forces were manipulating politics, and was drawn into the world of the occult and white nationalism. Although the friend has since abandoned these beliefs, the episode changed Haugen’s career path for good: she realized that tech platforms had a dark side and that conspiracy theories could ensnare otherwise ordinary people.

In 2018, when a Meta recruiter approached her, she asked for a job in the unit responsible for combating misinformation. By 2019, she was a product manager on the civic integrity team. According to a Los Angeles whistleblower attorney, Haugen’s revelations have since inspired a new generation of whistleblowers to speak out about corporate malfeasance.


Frances Haugen At SXSW

At SXSW, she criticized Meta’s over-reliance on artificial intelligence (AI) to fact-check and moderate content. In April 2018, the company’s chief executive officer (CEO), Mark Zuckerberg, said he believed AI could be the solution for fighting misbehavior such as fake news, hate speech and propaganda. Haugen argues that the company leans far too heavily on these tools.

Haugen says that Meta’s own research shows that AI reduces hate speech by just 3% to 5%, violence-inciting content by 0.08% and graphic violent content by 8%. The company disputes these figures, saying that hate speech was down 50% in the first nine months of 2021. For Haugen, the key to content moderation remains human beings: people are needed to judge the context of what is being said. Without them, a system risks censoring content that is not actually “misbehavior” while offering no viable way to adjudicate appeals.
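
To make the point concrete, here is a minimal sketch of how an AI-first moderation pipeline typically routes content, with a human-review band in the middle. It is purely illustrative: Meta has not published its pipeline, and the classifier, thresholds and function names below are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    REMOVE = "remove"          # high-confidence violation, taken down automatically
    HUMAN_REVIEW = "review"    # ambiguous score, routed to a human moderator
    KEEP = "keep"              # low score, left up

@dataclass
class Post:
    post_id: int
    text: str

# Stand-in for a trained classifier. A real model returns a probability
# that a post violates policy; this stub just counts flagged words so
# the example is self-contained and runnable.
FLAGGED_TERMS = {"slur", "attack"}

def violation_score(post: Post) -> float:
    words = post.text.lower().split()
    hits = sum(1 for word in words if word in FLAGGED_TERMS)
    return min(1.0, hits / 3)

# Illustrative thresholds. The band between them is where Haugen's
# argument lives: those posts need a human who can weigh context
# (sarcasm, quotation, counter-speech) that a raw score cannot capture.
AUTO_REMOVE_AT = 0.9
HUMAN_REVIEW_AT = 0.4

def moderate(post: Post) -> Decision:
    score = violation_score(post)
    if score >= AUTO_REMOVE_AT:
        return Decision.REMOVE
    if score >= HUMAN_REVIEW_AT:
        return Decision.HUMAN_REVIEW
    return Decision.KEEP

if __name__ == "__main__":
    for post in [Post(1, "lovely photo"), Post(2, "attack them with a slur")]:
        print(post.post_id, moderate(post).value)
```

The low-scoring figures Haugen cites suggest that, at Meta's scale, very little content ever clears the automatic thresholds; the open question she raises is how much of the ambiguous middle band ever reaches a human at all.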


Haugen cites Twitter’s success with a feature that prompts users to open an article before they retweet it. According to Haugen, this lowers the spread of misinformation by 10% to 15%. In this way, Twitter ensures that users have at least seen an article before sharing it, without any kind of censorship coming into play.
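
As a rough illustration of that friction pattern (not Twitter’s actual implementation, which has not been published), the logic can be sketched in a few lines; the function names and prompt behavior here are hypothetical.

```python
from typing import Callable

# Links the user has actually opened in this session.
opened_links: set[str] = set()

def on_link_opened(url: str) -> None:
    opened_links.add(url)

def on_share_clicked(url: str, confirm_unread: Callable[[str], bool]) -> bool:
    """Return True if the share should proceed.

    Nothing is ever blocked outright: an unread link just triggers one
    extra confirmation prompt, which is the only cost of sharing blind.
    """
    if url in opened_links:
        return True                  # user has seen the article: share immediately
    return confirm_unread(url)       # otherwise ask first, and respect the answer

if __name__ == "__main__":
    tap_share_anyway = lambda url: True   # stands in for the user dismissing the prompt
    url = "https://example.com/story"
    print(on_share_clicked(url, tap_share_anyway))  # True, but only after the prompt
    on_link_opened(url)
    print(on_share_clicked(url, tap_share_anyway))  # True, no prompt this time
```

The design choice worth noting is that the prompt adds friction without removing agency: the user can always share anyway, which is why Haugen presents it as reform that does not amount to censorship.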

Frances Haugen believes that Meta could do a better job of moderating content by adding features like this to its platforms, but that fears of reduced profitability have led the company to drag its feet. For her, adding these features would not entail censoring anyone or choosing whose ideas win out. She feels, however, that Zuckerberg is more concerned with the company’s profitability than with its ability to stop the spread of misinformation.

Haugen says that the number one priority for tech reform has to be greater transparency. In her view, AI too often serves as a veil that allows tech firms such as Meta to claim they are fighting misinformation without doing any meaningful work to stop it.