
‘We’re trying to stay one step ahead’: Facebook, Google and Twitter struggling to handle 2020 election

‘We won’t catch everyone immediately, but we can make it harder to try to interfere,’ says Facebook CEO Mark Zuckerberg 

Kevin Roose, Sheera Frenkel, Nicole Perlroth
Monday 30 March 2020 12:58 BST

The day after the New Hampshire primary last month, Facebook’s security team removed a network of fake accounts that originated in Iran, which had posted divisive partisan messages about the US election inside private Facebook groups.

Hours later, the social network learned the campaign of Michael Bloomberg, the billionaire former New York mayor, had sidestepped its political ad process by directly paying Instagram meme accounts to post in support of his presidential bid.

That same day, a pro-Trump group called the Committee to Defend the President, which had previously run misleading Facebook ads, was found to be promoting a photo that falsely claimed to show Bernie Sanders supporters holding signs with divisive slogans such as “Illegal Aliens Deserve the Same as Our Veterans”.

Facebook, Twitter, Google and other big tech companies have spent the past three years working to avoid a repeat of 2016, when their platforms were overrun by Russian trolls and used to amplify America’s partisan divide. The Internet giants have since collectively spent billions of dollars hiring staff, fortifying their systems and developing new policies to prevent election meddling.

But as the events of just one day — 12 February — at Facebook showed, although the companies are better equipped to deal with the types of interference they faced in 2016, they are struggling to handle the new challenges of 2020.

Their difficulties reflect how much online threats have evolved since the 2016 election. Russia and other foreign governments once conducted online influence operations in plain sight, buying Facebook ads in rubles and tweeting in broken English, but they are now using more sophisticated tactics such as bots that are nearly impossible to distinguish from hyperpartisan Americans.

More problematic, partisan groups in the United States have borrowed Russia’s 2016 playbook to create their own propaganda and disinformation campaigns, forcing the tech companies to make tough calls about restricting the speech of American citizens. Even well-funded presidential campaigns have pushed the limits of what the platforms will allow.

“They’ve built defences for past battles, but are they prepared for the next front in the war?” Laura Rosenberger, the director of the Alliance for Securing Democracy, a think tank that works to counter foreign interference campaigns, said of the tech companies. “Anytime you’re dealing with a sophisticated actor, they’re going to evolve their tactics as you evolve your defences.”

By most accounts, the big tech companies have gotten better at stopping certain types of election meddling, such as foreign trolling operations and posts containing inaccurate voting information. But they are reluctant to referee other kinds of social media electioneering for fear of appearing to tip the scales. And their policies, often created hastily while under pressure, have proved confusing and inadequate.

Mark Zuckerberg is no stranger to the mess that politics and election campaigns can cause on social media (Getty)

Adding to the companies’ troubles is the coronavirus pandemic, which is straining their technical infrastructure, unleashing a new misinformation wave and forcing their employees to coordinate a vast election effort spanning multiple teams and government agencies from their homes.

In interviews with two dozen executives and employees at Facebook, Google and Twitter over the past few months, many described a tense atmosphere of careening from crisis to crisis to handle the newest tactics being used to sow discord and influence votes. Many spoke on the condition of anonymity because they were not authorised to publicly discuss sensitive internal issues.

Some Facebook and Google employees said they feared being blamed by Democrats for a Trump re-election, while others said they did not want to be seen as acting in Democrats’ favour. Privately, some said, the best-case scenario for them in November would be a landslide victory by either party, with a margin too large to be pinned on any one tech platform.

Google declined to speak publicly for this article. Nathaniel Gleicher, Facebook’s head of cybersecurity policy, said the threats of 2016 were less effective now but “we’ve seen threat actors evolving and getting better”. Twitter also said the threats were a game of “cat and mouse”.

“We’re constantly trying to stay one step ahead,” said Carlos Monje Jr., Twitter’s director of public policy.

For Mark Zuckerberg, Facebook’s chief executive who once delegated the messy business of politics to his lieutenants, November’s election has become a personal fixation. In 2017, after the extent of Russia’s manipulation of the social network became clear, he vowed to prevent it from happening again.

“We won’t catch everyone immediately, but we can make it harder to try to interfere,” he said.

In the 2018 midterm elections, those efforts resulted in a relatively scandal-free election day. But 2020 is presenting different challenges.

Last year, lawmakers blasted Zuckerberg for refusing to fact-check Facebook posts or take down false ads placed by political candidates; he said it would be an affront to free speech. The laissez-faire approach has been embraced by some Republicans, including President Donald Trump, but has made Facebook unpopular among Democrats and civil rights groups.

Just after noon on 30 October last year, Jack Dorsey, Twitter’s chief executive, posted a string of 11 tweets to announce he was banning all political ads from the service.

Dorsey has tried his best to make Twitter a neutral space in terms of politics, but it is near-impossible to police it completely for propaganda (David Becker/Getty Images)

His zero-tolerance move was one action that Twitter and companies like Google have taken to stave off another election crisis — or at least to distance themselves from the partisan fray.

Over the past year, Twitter has introduced automated systems to detect bot activity and has taken down Russian, Chinese, Venezuelan and Saudi bots. The company also prohibited users from posting information illegally obtained through a security breach.

And this month, Twitter enforced new guidelines to label or remove deceptively edited videos from its site.

Google, which owns YouTube, also altered its policies to prevent foreign-backed disinformation campaigns and introduced transparency measures for political ads.

Yet gaps remain in the tech platforms’ armour.

Government officials and former employees said Twitter’s algorithms were not reliably distinguishing between bots and humans who simply tweet like bots. Election campaigns said its efforts to label manipulated media had been underwhelming. And some Twitter employees tracking election threats have been pulled away to triage misinformation about the coronavirus, such as false claims about miracle cures.

Trump has long been accused of encouraging the circulation of ‘deep fakes’ and harsh social media tactics (AFP)

The most divisive content this year may not come from Russian trolls or Macedonian teenagers peddling fake news for clicks, but from American politicians using many of the same tactics to push their own agendas.

One chief perpetrator? The White House.

Last month, Trump and other Republicans shared a video of Nancy Pelosi, the House speaker, during the president’s State of the Union address. Pelosi had ripped up a copy of Trump’s speech at the end of the address. But the video was edited so it appeared as if she had torn up the speech while he honoured a Tuskegee airman and military families.

A spokesman for Pelosi called for the video to be removed from Facebook and Twitter, saying it was “deliberately designed to mislead and lie to the American people”. But the companies said the video did not violate their policies on manipulated media.

This month, Dan Scavino, the White House social media director, shared another selectively edited video. It showed the former vice president, Joe Biden, appearing to say: “We can only re-elect Donald Trump”. In fact, the full video showed Biden saying Trump would only get re-elected if Democrats resorted to negative campaigning.

The current presidential race has already been dogged by questionable moves on social media (Reuters)

Facebook did not remove the video. By the time Twitter labelled it as manipulated, it had been viewed more than five million times. Because of a glitch, some Twitter users did not see the label at all.

Democrats have also pushed the envelope to get messages out on social media. Bloomberg’s presidential campaign, which he suspended this month, caused headaches for the tech platforms, even as they took in millions of dollars to run his ads.

Among his campaign’s innovations were buying sponsored posts from influential Instagram meme accounts and paying “digital organisers” $2,500 a month to post pro-Bloomberg messages on their social media accounts. The campaign also posted a video of Bloomberg’s presidential debate performance, which had been edited to create the impression of long, awkward silences by his opponents.

Some of the tactics seemed perilously close to violating the tech companies’ rules on undisclosed political ads, manipulated media and “coordinated inauthentic behaviour”, a term for networks of fake or suspicious accounts acting in concert.

Facebook and Twitter scrambled to react, hastily patching together responses that ranged from requiring more disclosure to taking no action at all.

By then, the Bloomberg campaign, which declined to comment, had set a new playbook for other campaigns to follow.

“We can’t blame Russia for all our troubles,” said Alex Stamos, Facebook’s former chief security officer who now researches disinformation at Stanford University. “The future of disinformation is going to be domestic.”

The New York Times
