
Columbia Journalism Review: The social-media platforms, the “Big Lie,” and the coming elections

October 2, 2022 Mathew Ingram

EDITORS’ NOTE: Thanks to the dedication of Columbia Journalism Review, and their insights into all levels of media, we can share with our readers some caveats about social media and upcoming elections.

Writer Mathew Ingram gives us all a thorough and thoughtful breakdown of political impacts unknown, even unimagined, prior to social media. With thanks to CJR we pass along these insights to our readers and urge them to support CJR.ORG.

In August, Twitter, Google, TikTok, and Meta, the parent company of Facebook, released statements about how they intended to handle election-related misinformation on their platforms in advance of the midterms. For the most part, it seemed they weren’t planning to change much. Now, with the November 8 midterms drawing closer, Change the Terms, a coalition of about 60 civil rights organizations, says the social platforms have not done nearly enough to stop continued misinformation about “the Big Lie”—that is, the unfounded claim that the 2020 election was somehow fraudulent. “There’s a question of: Are we going to have a democracy?” Jessica González, a Free Press executive involved with the coalition, recently told the Washington Post. “And yet, I don’t think they are taking that question seriously. We can’t keep playing the same games over and over again, because the stakes are really high.”

González and other members of Change the Terms say they have spent months trying to convince the major platforms to do something to combat election-related disinformation, but their lobbying campaigns have had little or no impact. Naomi Nix reported for the Post last week that coalition members have raised their concerns with platform executives in letters and meetings, but have seen little action as a result. In April, Change the Terms called on the platforms to “Fix the Feed” before the elections, requesting that the same companies change their algorithms in order to “stop promoting the most incendiary, hateful content”; “protect people equally,” regardless of what language they speak; and share details of their business models and approaches to moderation.

“The ‘big lie’ has become embedded in our political discourse, and it’s become a talking point for election-deniers to preemptively declare that the midterm elections are going to be stolen or filled with voter fraud,” Yosef Getachew, a media and democracy program director at Common Cause, a government watchdog, told the Post in August. “What we’ve seen is that Facebook and Twitter aren’t really doing the best job, or any job, in terms of removing and combating disinformation that’s around the ‘big lie.’” According to an Associated Press report in August, Facebook “quietly curtailed” some of the internal safeguards designed to smother voting misinformation. “They’re not talking about it,” Katie Harbath, a former Facebook policy director who is now CEO of Anchor Change, a technology policy advisory firm, told the AP. “Best case scenario: They’re still doing a lot behind the scenes. Worst case scenario: They pull back, and we don’t know how that’s going to manifest itself for the midterms on the platforms.”

Change the Terms first called on the platforms to reduce online hate-speech and disinformation following the deadly 2017 neo-Nazi march in Charlottesville, Virginia; since then, the coalition notes, “some technology companies and social-media platforms remain hotbeds” of such activity, offering the January 6 Capitol insurrection as a prime example. The coalition tried to keep up the pressure on the platforms throughout the past six months to “avoid what is the pitfall that inevitably has happened every election cycle, of their stringing together their efforts late in the game and without the awareness that both hate and disinformation are constants on their platforms,” Nora Benavidez, director of digital justice at Free Press, told the Post.

As Nix notes, the coalition’s pressure on the social-media platforms was fueled in part by revelations from Frances Haugen, the former member of Facebook’s integrity team who leaked thousands of internal documents last year. Haugen testified before Congress that, shortly after the 2020 election, the company had rolled back many of the election-integrity measures that were designed to stamp out misinformation. An investigation by the Post and ProPublica last year showed that a number of Facebook groups became hotbeds of misinformation about the allegedly fraudulent election in the days and weeks leading up to the attack on the Capitol. Efforts to police such content, the investigation found, “were ineffective and started too late to quell the surge of angry, hateful misinformation coursing through Facebook groups—some of it explicitly calling for violent confrontation with government officials.” (A spokesman for Meta said in a statement to the Post and ProPublica that “the notion that the January 6 insurrection would not have happened but for Facebook is absurd.”)

A recent report showed that misinformation about the election helped create an entire ecosystem of disinformation-peddling social accounts whose growth the platforms appear to have done little to stop. In May, the Post wrote about how Joe Kent, a Republican congressional candidate, had claimed “rampant voter fraud” in the 2020 election in a Facebook ad. The ad was reportedly just one of several similar ads that went undetected by the platform’s internal systems.

YouTube told the Post recently that the company “continuously” enforces its policies, and had removed “a number of videos related to the midterms.” TikTok said it supports the Change the Terms coalition because “we share goals of protecting election integrity and combating misinformation.” Facebook declined to comment, and referred to an August news release listing the ways the company said it planned to promote accurate information about the midterms. Twitter said it would be “vigilantly enforcing” its content policies. Earlier this year, however, Twitter said it had stopped taking steps to limit misinformation about the 2020 election. Elizabeth Busby, a spokesperson, told CNN at the time that the company hadn’t been enforcing its integrity policy related to the election since March 2021. Busby said the policy was designed to be used “during the duration” of an election, and since the 2020 election was over, it was no longer necessary.
