University of Notre Dame Law School
Silicon Valley has long been viewed as a full-throated champion of First Amendment values. The dominant online platforms, however, have recently adopted speech policies and processes that depart from the U.S. model. In an agreement with the European Commission, tech companies have pledged to respond to reports of hate speech within twenty-four hours, a hasty process that may trade valuable expression for speedy results. Plans have been announced for an industry database that will allow the same companies to share hashed images of banned extremist content for review and removal elsewhere.
These changes are less the result of voluntary market choices than of capitulation to governmental pressure. Private speech rules and policies governing extremist content have been altered to stave off threatened European regulation. Far more than illegal hate speech or violent terrorist imagery is in EU lawmakers’ sights; so too are online radicalization and “fake news.” Newsworthy content may end up being removed along with terrorist beheading videos, “kill lists” of U.S. servicemen, and instructions on how to blow up houses of worship.
The impact of extralegal coercion will be far-reaching. Unlike national laws, which are limited by geographic borders, terms-of-service agreements apply to platforms’ services on a global scale. Whereas local courts can only order platforms to block material viewed within their jurisdictions, a blacklist database raises the risk of total censorship. Companies should counter the serious potential for censorship creep with definitional clarity, robust accountability, detailed transparency, and ombudsman oversight.
Danielle K. Citron, Extremist Speech, Compelled Conformity, and Censorship Creep, Notre Dame Law Review
Available at: https://scholarship.law.bu.edu/faculty_scholarship/621