Using interoperability for horizontal competition and data portability

Chris Riley
7 min read · May 24, 2018


My previous posts on interoperability via APIs have focused on vertical relationships — ensuring that digital platforms remain platforms on top of which independent, innovative products can be built. Vertical competition carries with it the potential of future horizontal competition, something many have examined in the context of the Facebook/WhatsApp merger. But interoperability via APIs has a more present impact on horizontal competition — consistent with the core principle that competition policy is meant to support competition, not competitors — and that’s what I will explore in this post. Specifically, effective APIs that offer core data and functionality on nondiscriminatory terms could enable data portability and horizontal competition in new and powerful ways.

GDPR and data portability scoping

This week, the General Data Protection Regulation takes effect in the European Union. One provision of the GDPR establishes a right to data portability. Many companies already offer data portability to a degree, but effective data portability is far from universal. It remains to be seen what companies will implement to comply with the new law, and even those with existing portability may go further. Google, for example, has expanded its existing data portability offerings and released some new open source code designed to make switching services easier. But data can be exported in a broad range of ways, from capturing rich formatting, context, and other metadata to enrich another service’s understanding of the user’s experience, to stripping all of that out and offering as little of the substance as can plausibly still be called portability. (Note that metadata has substantial privacy significance as well; I’m focusing in this post purely on the competition angle of data portability, not privacy.)

Let’s look at a couple of examples to unpack that distinction a bit, and see if it helps illustrate the notion of “core data” from my last post. When you use the “save as” feature in Microsoft Word, you’re effectively exporting the data. You can save the document as a different kind of office document, which tries to preserve a substantial amount of metadata (though how much survives varies based on what the alternative format supports); you can save it as a PDF, which translates the formatting and produces a file that looks and feels like the original but cannot be edited in the same way; or you can save it as plain text, stripping out all of the font and style information and retaining “only” the words and punctuation — which might be enough, or might be a pale imitation and insufficient substitute, depending on the use case. For many purposes, being able to extract just the text of a file would be enough. But this is offline software, not an online service (and on top of that, the DOCX format is open, so the original file can be easily manipulated, unlike the earlier closed DOC format); I’m using it purely to illustrate the concept of the metadata surrounding the core data, and the complexities of portability.
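
To make the plain-text end of that spectrum concrete, here is a minimal sketch of extracting just the core text of a Word document, assuming the third-party python-docx library is installed; the filename is a placeholder, and fonts, styles, and the rest of the metadata are simply left behind.

```python
# A minimal sketch of the plain-text end of the export spectrum, assuming the
# third-party python-docx library is installed (pip install python-docx).
# All font and style information is discarded; only the words survive.
from docx import Document

def export_core_text(path: str) -> str:
    doc = Document(path)
    return "\n".join(paragraph.text for paragraph in doc.paragraphs)

if __name__ == "__main__":
    print(export_core_text("report.docx"))  # "report.docx" is a placeholder
```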

Music playlists are another example. I love my Spotify playlists. They have a lot of metadata associated with them now — how many times I’ve played each of the songs in the list, for example. If I decided to leave Spotify for another music streaming service, I would expect and want to be able to take my playlists with me. For my personal experience with music streaming, I don’t need most of the metadata, but I very much need to be able to replicate my cultivated playlists. And there shouldn’t be anything proprietary about the core lists themselves, because they are very much my creation; I could write them down on paper and recreate them. For that matter, I suppose I could manually retype my Word documents into another file, if I had no digital data portability options. But that’s exactly the sort of switching cost that data portability is meant to avoid.

There are pieces of the streaming experience that require more than ‘hard’ data like my consciously curated song libraries and playlists. Replicating those portions of the experience in another service requires some information that may be directly user provided or may be generated based on user activity. For example, I just can’t stand the song “Umbrella”. I gather it’s popular with other folks, and I’m not trying to invite debate about my poor music taste here. I’m just saying that I would ideally like to be able to port signals like my dislike of that song to another service, to replicate a fuller version of the experience I have. That makes it tough to divide data and metadata into hard and fast categories.
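
To picture how those layers might separate in practice, here is a hypothetical playlist export format (not any real streaming service’s API) that keeps the core track list distinct from service-generated metadata and from user-provided preference signals like a dislike.

```python
# A hypothetical playlist export (not any real streaming service's format)
# separating the layers discussed above: the core list the user created,
# service-generated metadata, and explicit preference signals.
import json

portable_playlist = {
    "name": "Road Trip",
    # Core data: the user's own creation, enough to rebuild the playlist anywhere.
    "tracks": [
        {"title": "Take It Easy", "artist": "Eagles"},
        {"title": "Graceland", "artist": "Paul Simon"},
    ],
    # Metadata: service-generated and useful, but not essential for switching.
    "metadata": {"play_counts": {"Take It Easy": 42, "Graceland": 17}},
    # Preference signals: user-originated, blurring the data/metadata line.
    "signals": {"disliked_tracks": [{"title": "Umbrella"}]},
}

print(json.dumps(portable_playlist, indent=2))
```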

Using APIs to get data portability, plus bonus network effects

This is where the idea of APIs offering core data and functionality can come into play. Imagine the service provider has already done the work of identifying core data and functionality to enable effective interoperability, and built APIs to offer that to third parties. At that point, these scoping questions around data portability for competition purposes (again, not counting privacy goals!) have been addressed — and it seems trivial to allow the data made available through those APIs also to be exported by users. This level of data portability doesn’t unduly threaten the platform’s core ROI, if users must individually and affirmatively choose to extract their data and then use it in another service — it doesn’t require making available the valuable internal analytical information and metadata that reflects the platform’s technological sophistication.
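
As a sketch of that point, with entirely made-up endpoint names rather than any real platform’s API, a user-facing export could be little more than an authenticated call against the same core-data API that third-party developers use, written out to a portable file.

```python
# A sketch of reusing a platform's core-data API as the user's export path.
# The host, endpoint, and token handling are assumptions for illustration,
# not any real platform's API.
import json
import urllib.request

API_BASE = "https://api.example-platform.com/v1"  # hypothetical

def export_my_core_data(user_token: str, outfile: str = "my_data.json") -> None:
    request = urllib.request.Request(
        f"{API_BASE}/me/core-data",  # the same endpoint third parties call
        headers={"Authorization": f"Bearer {user_token}"},
    )
    with urllib.request.urlopen(request) as response:
        core_data = json.load(response)
    # The user affirmatively chooses to extract their data to a portable file;
    # internal analytics and derived metadata never leave the platform.
    with open(outfile, "w") as f:
        json.dump(core_data, f, indent=2)

# export_my_core_data("user-oauth-token")  # would write my_data.json
```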

But data portability as a sole mechanism for horizontal competition runs headlong into network effects. In order to compete with a large established platform solely through data portability, a competitor would need to individually recruit enough individuals to build up network effects of competing scale — an uphill battle to say the least. There’s a perpetual gravitational pull not to port, suppressing otherwise effective competitive pressure from a horizontal competitor that offers a better user experience, better user policies, or other desirable differences.

I can imagine a smoother on-ramp to horizontal competition by using core data and functionality APIs to unlock greater interoperability. With proper APIs, a competitor could allow its users to continue to benefit from the network effects of the platform by interfacing with the platform’s other users. The result is a path for users to switch and encourage their friends to switch as well — forcing the platform operator to compete to offer a better experience if it wants to keep its users, rather than holding them captive through their own data and relationships.
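
Here is a rough sketch of what that bridge might look like from the competing service’s side, again with hypothetical endpoints: when a user’s friend is still on the incumbent platform, the message is relayed through the platform’s interoperability API instead of being dropped.

```python
# A rough sketch of a competing service bridging to an incumbent platform's
# users through a hypothetical interoperability API. Endpoints and payloads
# are illustrative assumptions, not a real platform's API.
import json
import urllib.request

PLATFORM_API = "https://api.example-platform.com/v1"  # hypothetical

def deliver_locally(user_id: str, text: str) -> None:
    print(f"delivered to {user_id} on our own service: {text}")

def send_message(sender_token: str, recipient: dict, text: str) -> None:
    if recipient["home_service"] == "ours":
        deliver_locally(recipient["id"], text)  # normal internal path
        return
    # The recipient stayed on the incumbent platform: relay the message through
    # its API, so the sender keeps the benefit of the platform's network effects.
    payload = json.dumps({"to": recipient["id"], "text": text}).encode()
    request = urllib.request.Request(
        f"{PLATFORM_API}/messages",
        data=payload,
        headers={
            "Authorization": f"Bearer {sender_token}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(request)
```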

This approach stays consistent with the core principle that competition policy is meant to support competition, not competitors — users aren’t forced to leave the platform via a breakup or other aggressive intervention, nor is any uneven benefit conferred onto the competitors. And, because the interoperability principle requires only offering core data and functionality, the internal business intelligence derived from the platform’s users through proprietary analysis wouldn’t need to be included or offered through the APIs — and incentives to invest in improving the user experience would be preserved, in large part.

Business model complexities

The story of interoperability paints a pretty picture of a competitive market, but the reality is a little messier, and a few more details will need to be worked out. One is how to avoid undercutting the business models of platforms that depend on displaying advertising for revenue. Twitter addresses this problem through API policies that pass its advertisements downstream. Other businesses might choose to charge for access to the API rather than offering it for free. Different kinds of platforms might adopt different solutions, all reasonably operating within their own market segments. There’s some risk here that API policies create opportunities to charge monopoly rents or otherwise engage in unfair, anticompetitive activity. But we have well-established competition law mechanisms to oversee such a dynamic!
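
One way an ad pass-through policy could look in practice, sketched with a made-up response format rather than Twitter’s actual API: the interoperability endpoint interleaves ad entities with organic content, and the API terms require downstream clients to render them, so the advertising revenue model survives the interoperability.

```python
# A hypothetical API response illustrating an ad pass-through policy (a
# made-up format, not Twitter's actual API): ads ride along with organic
# items, and the API terms require downstream clients to display them.
feed_response = {
    "items": [
        {"type": "post", "author": "@friend", "text": "Concert was great!"},
        {"type": "ad", "sponsor": "ExampleCo", "text": "Try ExampleCo today",
         "must_display": True},
        {"type": "post", "author": "@colleague", "text": "New blog post is up."},
    ]
}

def render(items: list) -> None:
    for item in items:
        source = item.get("author") or item.get("sponsor")
        label = "[sponsored] " if item["type"] == "ad" else ""
        print(f"{label}{source}: {item['text']}")

render(feed_response["items"])
```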

One other complexity could, in some contexts, arise from use of the API itself: business intelligence might be derived from the ways in which the APIs are called. If a competitor to a platform is using the platform’s APIs in interesting, unique, and/or disruptive ways, the platform operator may be able to learn enough from those calls to replicate the behavior — essentially making it impossible to keep some kinds of trade secrets secret. It’s a catch-22 for the digital platform economy — you need the information, access to users, and network effects to succeed, but in taking advantage of them, you expose yourself to the possibility of revealing your business secrets. Depending on the context, it’s possible that competition principles and enforcement can provide a ready solution to this as well, though it may require more thought.
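
To see how that leakage could happen, here is a toy sketch, with a hypothetical log format, of a platform operator aggregating its own API access logs: even without seeing the competitor’s code, which endpoints are called and how often can hint at what the competitor is building.

```python
# A toy sketch of mining API access logs (hypothetical fields) for business
# intelligence about a competitor: the pattern of calls alone can reveal
# which features the competitor is building on top of the platform.
from collections import Counter

api_log = [
    {"client": "competitor-app", "endpoint": "/v1/me/core-data"},
    {"client": "competitor-app", "endpoint": "/v1/playlists"},
    {"client": "competitor-app", "endpoint": "/v1/playlists"},
    {"client": "competitor-app", "endpoint": "/v1/recommendations"},
    {"client": "other-app", "endpoint": "/v1/me/core-data"},
]

calls_by_client = {}
for entry in api_log:
    calls_by_client.setdefault(entry["client"], Counter())[entry["endpoint"]] += 1

for client, counts in calls_by_client.items():
    print(client, counts.most_common())
```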

The vision for data portability — at least in the context of competition — depends on having a number of compatible applications and services to choose from, all available through the same device and internet connection. What motivated me to start digging into competition policy a couple years ago was my worry that we won’t have those choices in the future, that the internet is headed towards a future of vertically integrated technology silos that don’t play with each other. In that world, data portability is no longer a competition concern — because there’s nothing to port the data to. To flip that story on its head, though: If we protect interoperability and access to APIs to ensure a future for competitive choices, I believe we can parlay that into solving the data portability problem at the same time.
