Year in Technology 2019
Even as research documented a link between online speech and offline violence, internet companies struggled in 2019 to prioritize public safety over the freedom of their users to post extremist content.
At the same time, they dove ever more deeply into questions about whether politicians should have greater leeway than others to promote abusive—even racist—language and the same kind of demonizing falsehoods and memes often disseminated by far-right extremists.
Facebook, YouTube, Twitter and Google all announced new policies involving political content during a year that saw President Trump escalate his attacks on the industry, accusing social media companies of censoring conservative voices and threatening to regulate them in retaliation.
Trump’s threats came as he began to ramp up a re-election campaign that will undoubtedly feature a heavy dose of social media messages that vilify his opponents and rally a largely white base of support. And they came after several years in which the industry had attempted to curtail the use of its services by white nationalists and other extremists.
The strain that Trump’s own rhetoric put on the tech companies did not stop executives like Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey from meeting privately with him.
In June 2019, in response to mounting criticism, Twitter announced that politicians’ tweets containing threats or abusive language could be slapped with warning labels that require users to click before seeing the content. But the offending tweets will not be removed from the site under the policy, as a normal user’s might be. This shift, based on the idea that political speech is always a matter of public interest, effectively protects the speech of society’s most powerful figures even when it otherwise violates Twitter’s rules against abusive language. The policy applies to government officials, candidates and similar public figures with more than 100,000 followers. Twitter did say, however, that it would not use its algorithm to promote such tweets.
It wasn’t long before a Trump tweet tested the new policy. In July, Twitter said that the president’s tweet telling U.S. Reps. Alexandria Ocasio-Cortez, Ilhan Omar, Ayanna Pressley, and Rashida Tlaib—all women of color—to “go back” to their countries did not violate its rules against racism. The tweet did not, in fact, get a warning label.
It took a November tweet by Omar’s Republican challenger, Danielle Stella—suggesting that Omar “should be hanged”—for Twitter to take meaningful enforcement action against a political candidate. Twitter said Stella’s account was permanently disabled.
Like Twitter, YouTube struggled to draw the line between public interest and public harm. The video giant announced in September that politicians would be exempt from some of its content moderation rules.
Facebook took a similar tack. Nick Clegg, its vice president of global affairs and communications, announced that the company would exempt politicians from its third-party fact-checking program, which it uses to reduce the spread of false news and other forms of viral misinformation. In effect, Facebook decided to allow politicians to lie on its platform, both in their advertisements and in other forms of political speech.
On the advertising front, Twitter decided to ban political ads entirely, while Google opted to severely limit campaigns’ ability to target narrow groups of people, a practice known as “microtargeting.”
Research Links Online Speech to Offline Violence
The new policies came amid mounting evidence linking online speech to offline violence.
A study released by New York University researchers in June found a correlation between certain racist tweets and hate crimes in 100 U.S. cities. The research examined 532 million tweets across U.S. cities of varying geographies and populations and found that areas with more targeted, discriminatory speech had higher numbers of hate crimes.
Change the Terms, a coalition of more than 50 civil rights organizations, of which the SPLC is a founding member, is advocating for tech companies to adopt model policies that effectively combat hate and extremism. In September, the coalition convened a town hall in Atlanta that featured top leaders from Facebook, including chief operating officer Sheryl Sandberg.
“People in our communities are dying at the hands of white supremacy—the stakes are that high,” Jessica Gonzalez, vice president of strategy at Free Press, a member of Change the Terms, told Sandberg and other attendees. “The safety of users must be a priority on the platform.”
Social Media Platforms Function as Vectors for Hate
Social media platforms proved to be vectors for the spread of white supremacist ideologies both during and after acts of domestic terrorism in 2019.
On March 15, an extremist attacked two mosques in Christchurch, New Zealand, killing 51 people and injuring another 49. The perpetrator broadcast the attack on Facebook Live, and both the video of the attack and a 74-page manifesto went viral in the immediate aftermath. Facebook reported that it removed 1.5 million copies of the video in the first 24 hours, 1.2 million of which were blocked at the point of upload.
Six weeks later, a 19-year-old man in Poway, California, attacked the Chabad of Poway on the last day of Passover, killing one person and injuring three. The perpetrator posted a manifesto to the imageboard 8chan—notorious for its community of far-right extremists—in the moments before the attack. The post also included a link to a planned Facebook livestream, though the stream appears to have failed.
After the attacks, Facebook announced tighter restrictions on its Facebook Live platform, including temporary and permanent bans from live streaming for users who violate its most serious rules. It’s unclear, though, whether these new policies would have prevented the viral spread of videos showing the attacks.