Statement on YouTube’s decision to remove hateful content
YouTube announced plans today to clean up extremism and hate speech on its popular service by removing thousands of videos and channels that advocate neo-Nazism, white supremacy and other bigoted ideologies.
The decision by YouTube, which is owned by Google, makes it the latest Silicon Valley giant to remove content centered on extremist ideologies. Earlier this year, Facebook announced a ban on praise, support and representation of white nationalism and white separatism on its platform and on Instagram, which it owns. Twitter also bars violent extremist groups.
But as with other platforms before it, the success of YouTube’s decision to remove hateful content depends on its ability to enact and enforce policies that keep its service from serving as a global organizing tool for the radical right.
It has taken Silicon Valley years to acknowledge its role in allowing this toxic environment to exist online. A simple Google search led Dylann Roof down a hate-filled path toward killing nine people at the Emanuel African Methodist Episcopal Church in Charleston. YouTube’s own recommendation algorithm has, for years, pushed users toward increasingly extreme content.
Whether this rabbit hole of dangerous rhetoric results from a flaw in the algorithm or from an algorithm that is too easily manipulated, the end result is the same.
Tech companies must proactively tackle the problem of hateful content that is easily found on their platforms before it leads to more hate-inspired violence.