Section 230 and Interoperability

Chris Riley
7 min read · Sep 10, 2020

Perhaps the most de rigueur issue in tech policy in 2020 is antitrust. The European Union made market power a significant component of its Digital Services Act consultation, and the United Kingdom released a massive final report detailing competition challenges in digital advertising, search, and social media. In the U.S., the House of Representatives held a historic (virtual) hearing with the CEOs of Amazon, Apple, Facebook, and Google (Alphabet) on the same panel. As soon as the end of this month, the Department of Justice is expected to file a “case of the century” scale antitrust lawsuit against Google. One competition policy issue that I’ve written about extensively is interoperability, and, while we’ve already seen significant proposals to promote interoperability, notably the 2019 ACCESS Act, I want to throw another idea into the hopper: I think Congress should consider amending Section 230 of the Communications Act to condition its immunity for large online intermediaries on the provision of an open, raw feed for independent downstream presentation.

I know, I know. I can almost feel your fingers hovering over that big blue “Tweet” button or the “Leave a Comment” link — but please, hear me out first.

For those not already aware of (if not completely sick of) the active discussions around it: Section 230, originally passed as part of the Communications Decency Act, is an immunity provision in U.S. law intended to encourage internet services to engage in beneficial content moderation without fearing liability as a consequence of such action. Its central provision is famously just 26 words long, so I’ll paste that key text in full: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

I’ll attempt to summarize the political context. Section 230 has come under intense, bipartisan criticism over the past couple of years as a locus of animosity for a diverse range of concerns with the practices of a few large tech companies in particular. Some argue that the choices made by platform operators are biased against conservatives; others argue that the platforms aren’t responsible enough and aren’t held sufficiently accountable. Support for amending Section 230 is substantial, though far from universal. The current President has issued an executive order seeking to catalyze change in the law, and the Democratic nominee has in the past bluntly called for it to be revoked. Since the passage of FOSTA-SESTA in 2018, members of Congress have introduced several more bills that touch Section 230, such as the EARN IT Act, which would push internet companies to do more to respond to online child exploitation, to the point of undermining secure encryption. A perhaps more on-point proposal is the PACT Act, which focuses on specific platform content practices; I’ve called it the best starting point for Section 230 reform discussions.

Why is this one short section of law so frequently used as a political punching bag? The attention goes beyond its hard-law significance, revealing a deeper resonance in the modern-day notion of “publishing.” I believe this law in particular draws such fire because the centralization and siloing of our internet experience has produced a widespread feeling (or reality) that users lack meaningful agency. By definition, social media is a business of taking human input (user-generated content) and packaging it to produce output for humans, so questions of human agency arise on both sides of the exchange. The user agency gap spills over from the realm of competition, making it hard to evaluate content liability and privacy harms as entirely independent issues. In so many ways, the internet ecosystem is built on the idea of consumer mobility and freedom; in just as many ways, that idea is bankrupt today.

Yet debating whether online intermediaries for user content are “platforms” or “publishers” is a distraction. A more meaningful articulation of the underlying problem, I believe, is that we end users cannot sufficiently customize the way content is presented to us, because we are locked into a single experience.

Services like Facebook and YouTube operate powerful recommendation engines designed to sift through vast amounts of potentially desirable content and present users with what they value most. Recommendations are based on individual contextual factors, such as what the user has been watching, and on broader signals of desirability, such as engagement from other users. As many critics allege, the underlying business model of these companies benefits from keeping users as engaged as possible, spending as much time on the platform as possible. That means recommending content that draws high engagement, even though human behavior doesn’t equate high engagement with positive social value (that’s the understatement of the day, there!).
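To make that dynamic concrete, here’s a toy sketch of engagement-optimized ranking. To be clear, the signals, weights, and names below are my own invention for illustration, not any platform’s actual algorithm:

```python
# Toy illustration only: an engagement-maximizing recommender.
from dataclasses import dataclass

@dataclass
class Item:
    """A candidate piece of user-generated content."""
    id: str
    topic: str
    likes: int
    comments: int
    watch_time_s: float  # aggregate watch time across users, in seconds

def engagement_score(item, user_topics):
    """Blend aggregate engagement with a per-user context boost.
    The weights are invented for illustration."""
    popularity = item.likes + 2.0 * item.comments + 0.01 * item.watch_time_s
    context_boost = 2.0 if item.topic in user_topics else 1.0
    return popularity * context_boost

def recommend(candidates, user_topics, k=10):
    """Rank by predicted engagement: the platform's incentive,
    which is not the same thing as the user's well-being."""
    ranked = sorted(candidates, key=lambda i: engagement_score(i, user_topics), reverse=True)
    return ranked[:k]
```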

One of the interesting technical questions is how to design such systems to make them “better” from a social perspective. It’s the subject of academic research as well as ample industry investment. I’ve given YouTube credit in the past for offering some transparency into the changes it’s making (and the effects of those changes) to improve the social value of its recommendations, although I believe making that transparency more collaborative and systematic would help immensely. (I plan to expand on that in my next post!)

Recommendation engines remain, by and large, black boxes to the outside world, including to the users who receive their output. No matter how much credit you give individual companies for their efforts to properly balance business model demands, optimal user experience, and social value, there are fundamental limits on users’ ability to customize, or replace, the recommendation algorithm that mediates the lion’s share of their interaction with the social network and the user-generated content it hosts. And as things stand, the lack of effective interoperability forecloses innovation and experimentation with presentation algorithms.

And that’s why Section 230 gets so much attention — because we don’t have the freedom to experiment at scale with things like Ethan Zuckerman’s Gobo.social project and thus improve the quality of, and better control, our social media experiences. Yes, there are filters and settings that users can change to customize their experience to some degree, likely far more than most people know. Yet, by design, these settings do not provide enough control to affect the core functioning of the recommendation engine itself.
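For a flavor of what that downstream freedom could look like, here’s a minimal, hypothetical sketch in which the user, not the platform, chooses the presentation algorithm applied to a raw feed. The feed format and ranking rules here are assumptions for illustration:

```python
# Hypothetical downstream client: the user picks the presentation
# algorithm applied to a raw feed of posts (each a dict with
# "author_id" and "created_at" keys, an assumed format).
from functools import partial

def chronological(items):
    """Newest first, with no engagement optimization at all."""
    return sorted(items, key=lambda i: i["created_at"], reverse=True)

def friends_first(items, friend_ids):
    """Stable sort that surfaces posts from chosen accounts first."""
    return sorted(items, key=lambda i: i["author_id"] not in friend_ids)

# The user, not the platform, selects the engine:
my_engine = partial(friends_first, friend_ids={"alice", "bob"})
# feed = my_engine(raw_feed)  # raw_feed would come from an open API
```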

Thus, many users perceive the platforms to be packaging up third-party, user-generated content and making conscious choices about how to present it to us, choices that our limited downstream controls are insufficient to manage. That’s why it feels to some like the platforms are “publishing,” and doing a bad job of it at that. Despite massive investments by the service operators, it’s not hard to find evidence of poor recommendation outcomes; see, e.g., YouTube recommending videos about an upcoming civil war. And occasional news stories of willful actions that make things worse add more fuel to the fire.

So let’s create that space for empowerment by conditioning the Section 230 immunity on platforms providing more raw, open access to their content experience, so that users can better control how to “publish” it to themselves through an alternative recommendation engine. Here’s how to scale and design such an openness requirement properly (a rough sketch of what these design points could look like in code follows the list):

  • Apply an openness requirement only where the problems described above apply, which is for services that primarily host and present social, user generated content.
  • Limit an openness requirement to larger platforms, for example by borrowing the 100 million monthly active user (MAU) threshold from the Senate’s ACCESS Act.
  • Design the requirement to be variable across different services, and to engage platforms in the process. The kinds of APIs that Facebook and YouTube would set up to make this concept successful would be quite different.
  • Allow platforms to adopt reasonable security and privacy access controls for their provisioned APIs or other interoperability interfaces.
  • Preserve platform takedowns of content and accounts upstream of any provisioned APIs or other interoperability interfaces, to take advantage of scale in responding to Coordinated Inauthentic Behavior (CIB).
  • Encourage platform providers to allow small amounts of API/interoperability interface access for free, while permitting them to charge fair, reasonable, and nondiscriminatory rates to third parties operating at larger scale.
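To tie these design points together, here’s a rough, hypothetical sketch of what a provider-side open feed function might look like, with takedowns applied upstream and a free access tier before FRAND pricing kicks in. Every name and number here is an assumption for illustration, not a proposed standard:

```python
# Hypothetical provider-side sketch of an "open raw feed",
# reflecting the design points above: takedowns happen upstream,
# access is authenticated, and heavy callers move to FRAND contracts.
from dataclasses import dataclass, asdict

FREE_TIER_DAILY_CALLS = 1000  # assumed free allowance before FRAND pricing

@dataclass
class RawItem:
    id: str
    author_id: str
    text: str
    created_at: str  # ISO 8601 timestamp, sortable as a string

def open_feed(api_key, usage, store, removed_ids):
    """Return unranked, chronologically ordered items for independent
    downstream presentation; no recommendation engine is applied here."""
    if usage.get(api_key, 0) >= FREE_TIER_DAILY_CALLS:
        raise PermissionError("free tier exhausted; FRAND contract required")
    usage[api_key] = usage.get(api_key, 0) + 1
    # Takedowns (e.g., CIB removals) are applied before the feed is exposed.
    visible = [item for item in store if item.id not in removed_ids]
    return [asdict(item) for item in sorted(visible, key=lambda i: i.created_at)]
```

The essential property is that the feed comes out unranked: the recommendation engine, and the business model attached to it, moves downstream to where the user can choose it.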

Providing this kind of openness downstream would create opportunities for innovation and experimentation with recommendation engines at a scale never before seen. This is not just an evolutionary step forward in what we think of as internet infrastructure; it’s also a roadmap to sustainable alternative business models for the internet ecosystem. Even assuming that many users would stick with the platform’s default experience and the business model underlying it, those who chose to change would gain a true business model choice and a deep, meaningful user experience choice at the same time.

I recognize that this is a thumbnail sketch of a very complex idea, with much more analysis needed. I publish these thoughts to help illustrate the relationship between the agita over Section 230 and the concentrated tech ecosystem. The centralized power of a few companies and their recommendation engines doesn’t provide sufficient empowerment and interoperability, which limits users’ sense of meaningful agency and choice. Turning this open feed concept into a legal and technical requirement is not impossible, but I recognize it would carry risk. In an ideal world, we’d see the desired outcome (meaningful downstream interoperability, including user substitutability of recommendation engines) offered voluntarily. That would avoid the costs and complexities of regulation, put platforms in a position to strike the right balance, and release a political pressure relief valve to keep the central protections of Section 230 intact. Unfortunately, present-day market and political realities suggest that may not occur without substantial regulatory pressure.

[Note: This piece originally appeared on Techdirt.]

