Speech and liability at the fragile core of the open internet

Chris Riley
7 min read · Mar 20, 2018


In December, I helped the Berkeley Center for Law & Technology put together an event to commemorate the 20th anniversary of the U.S. Supreme Court case Reno v. ACLU. The day’s sessions covered the past, present, and future of law and public policy in relation to speech online, and the role that online intermediaries (think social networks, search engines, community forums, and other sites and services that help people communicate with each other) play in that context. The event was extraordinarily timely: over the past couple of years, digital rights advocates, the tech industry, and governments have squared off over the role online intermediaries play in facilitating societally undesirable acts of speech, such as Facebook’s role in the spread of misinformation related to the 2016 U.S. presidential campaign. Public policy for online speech is tricky. The “correct” outcomes are often balanced on the blade of a knife: a fall to one side risks crippling the internet’s unique power to foster innovative new technologies and businesses, while a fall to the other risks a critical loss of the trust and safety the public needs to be willing to live life online.

You’re basically doing this whenever you change public policy for online speech.

When I wrote my original ‘CATS’ post, the ‘S’ stood for ‘Security’. That may not feel like an emerging topic, despite the context in which I placed it (i.e., the emerging policy fights that will be at the center of public discussion over the next 2–10 years). The first public policy discussions over encryption took place decades ago, and viruses and worms have made plenty of front-page headlines. The field does change frequently, however; conversations like today’s, on how governments should handle the security vulnerabilities they discover (from acquisition, to disclosure, to use), weren’t part of the gestalt in the past. There will certainly be high-profile security tech policy debates 5 years from today, though it’s impossible to predict now what those will be. (I personally hope they’re focused on improving defenses rather than on warfare and cyberattacks.)

Like security, online speech issues feel like they’ve been around forever, and yet they’re constantly changing. The history is well established, in some sense. Lawyers, political scientists, and technologists have been looking at the role online intermediaries play in facilitating speech for decades (as evidenced by the now-20-year-old Reno v. ACLU decision). Similarly, the study of computer systems security, and of the legal and policy issues surrounding it, is far from new, with large and successful institutions dedicated to it in both research and practice. But as technology and the internet become more and more embedded in our daily lives, the consequences of harm become more apparent and, in many cases, more severe. Maintaining a risk-tolerant industry and way of life thus becomes much more difficult.

Historically, most Western governments have gone to great lengths to defend speech online and the intermediaries that facilitate it (in contrast to governments with restrictive policies on speech both offline and online; to keep this post focused, I will omit discussion of those for now). Today, though, intermediaries are on the hot seat. The U.S. is pursuing legislation to make it easier to prosecute certain intermediaries for facilitating sex trafficking online, most notably the Senate bill known as SESTA. In Europe, an aggressive new law in Germany, NetzDG, authorizes fines of up to 50 million euros (with penalties tiered by the size of the platform) if an online platform does not remove “manifestly unlawful” content within 24 hours. Both measures have faced substantial opposition from the public interest community and the tech industry. And both focus on the same target that governments everywhere are scrutinizing: the role of technical intermediaries in facilitating, and moderating, online speech.

If only it were this simple.

The motivation behind these developments is, I believe, understandable and reasonable. There are bad things happening on the internet that we want to stop. The question is how, and how to do so without jeopardizing the good things about the internet. In practice, it’s impossible to target restrictions (whether legal or technical) so that they capture only harmful speech and activity; negative externalities are inevitable, including chilling effects that extend beyond both the intended and the actual scope of the restriction. (In other words: no, we can’t disconnect the internet ‘just for the bad guys’.)

In my job prior to Mozilla, I worked for the U.S. Department of State, as part of a team managing ‘Internet Freedom’ grants that supported technology development, deployment, and training in countries whose governments took action to repress speech online. What happens to that agenda, and that ideology, when there is no longer a clear line between repressive and non-repressive states among the governments imposing ex ante restrictions on technology? Are we headed for a future with fewer and fewer “green” countries in the Freedom House Freedom on the Net report?

I see a global shift in government focus away from celebrating and even expanding the internet’s openness, towards levying restrictions of all shapes and sizes on tech companies to limit the unwanted consequences of that openness. Now, I’m not saying these interventions are universally unwarranted. But the shift runs counter to one of the bedrock principles of the internet as a social and economic engine: the idea that intermediaries would not be held liable for the actions of their users, provided they engage in suitable processes to comply with legal requirements after the fact. This principle is under threat, as many believe these wealthy, successful businesses employing the best and brightest computer scientists in the world can do more than they are doing today to reduce the scale and impact of harm.

A common scene in 2017 (and 2018 so far)

In a few different contexts, I’ve described 2017 as the year of ‘everybody attacks tech’ (and I have some more colorful versions of that label as well…). There’s a huge pendulum swing at work in the Western world: a shift from governments and the public seeing tech as the most amazing, wonderful, awesome thing (look! You can play Angry Birds on your phone!) to seeing the internet as a locus of fear and distrust. This parallels the shift in government focus.

There are many dimensions to the challenges of speech online, as many as there are in the offline, analog world, where these questions are also messy (to say the least). There are national and regional legal differences regarding speech: acts that are illegal in France or Indonesia may be protected in the U.S. And even beyond what’s legal or illegal, there are harmful practices that we as a society would, on balance, like to be able to stop.

Generally speaking, industry wants to be part of the solution, in my opinion, and it’s hard to deny that many tech companies could do more. The question is, given the concomitant negative externalities, what role law should play in forcing those changes. This is a hugely complex space; because online services are woven throughout our economy and our lives, it’s about more than just governments restricting industry practices.

It’s easier to reach a balance when you don’t put everything on the scales.

I remember studying the theory of torts in law school. One school of thought holds that the optimal approach is to place the burden in the most efficient location; in other words, to hold responsible the party that could most easily have prevented the injury (the ‘least cost avoider’). It’s dangerous to extend that thinking to this context, however, because so many in government and among the public lack an understanding of what tech can actually do. What seems like an efficient location may in fact be an impossible one, because major technical limitations and impossibilities aren’t taken into consideration. As in the context of encryption, too often the view is ‘tech people, you’re smart and successful, just figure it out’. It’s not that simple, not by a long shot.

It’s also important to understand the effects of policy changes on companies of different sizes. A law that is reasonable for a company the size of Google or Facebook could well be nonsensical if imposed, without any variation, on an early-stage startup. A myopic focus on large companies is commonplace in discussions of this issue, and it represents one of the biggest risks going forward. The beauty of safe harbors, as they exist in current law, is how well they protect smaller and newer companies: they provide true flexibility and freedom to innovate, and the security outside investors need to fund further growth and success.

Everything looks better when you see it through a golden filter.

There is no silver bullet here (or perhaps a ‘golden filter’ would be the better analogy?). Bottom line: we’ll be having a lot of policy fights in this space, in countries all around the world, for many years to come.
