Tracking, choices, and business: the Brussels experiment
The third letter in my earlier CATS post on the future of internet and tech policy, ‘T’, stands for ‘tracking’. I use that term to mean the mechanisms that follow some or all of an individual internet user’s activity online — sometimes for beneficial and non-controversial purposes like security and fraud detection, but often in ways that make people uncomfortable. I highlight tracking, as opposed to privacy generally, because it represents a major tension point within the internet ecosystem today. On one side of the tension is the near-universal view that privacy is fundamental — reflected in the Mozilla Manifesto and treated as a fundamental right in the European Union, India, and elsewhere — coupled with a widely held perception that tracking infringes on privacy in unacceptable ways. On the other side is an internet ecosystem where one of the largest sources of revenue is targeted advertising powered in no small part by the customization of ads based on a user’s online activity.
There’s no easy answer here. Some gains can be made through education — helping separate the beneficial kinds of tracking from the kinds that make people uncomfortable. Others can be made through product innovation, giving people tools to block unwanted trackers. But privacy is not a monolithic concept, and there’s no such thing as a “one size fits all” privacy configuration. So I think more needs to be done to present users with fundamental privacy options across the ecosystem. And to get there, we need sustainable business models that don’t assume, in perpetuity, an ability to monetize user activity online the way it’s done today.
The European Union is trying to catalyze a massive change in thinking along these lines, and whether the ePrivacy Regulation it’s pursuing delivers incremental or transformative improvements in the near term, the EU — along with other regulators, and many, many internet users — won’t stop until privacy-invasive tracking is a matter of meaningful user choice. This is a good goal. The challenge, of course, is whether we can avoid breaking the internet as we know it in the process.
I think I can see a light at the end of the tunnel, though. And it starts with a better technical understanding of tracking, to focus regulatory intervention in more productive directions.
I’ll start with a caveat: although I’ll try to be careful with my terminology, when in doubt, assume I use “tracking” to mean potentially privacy-invasive tracking, not the tracking of activity for purposes of security, fraud detection, traffic measurement, and so forth. The discomfort associated with tracking centers on the maintenance of a persistent profile of an individual: collecting data associated with that individual across many different sites, feeding it all back into the profile, and often then using it to customize advertising. It’s not a universal discomfort, as studies have shown; in fact, nothing about privacy is really universal, given how much cross-cultural and individual variance there is. But that’s a piece for another writer at another time.
The kind of tracking I’m focused on follows user activity across multiple, unrelated sites, and sometimes encompasses physical, real-world actions like purchases in brick-and-mortar retail stores. Often, this kind of tracking is done by storing identifying information on a user’s device, such as a “cookie”. More advanced forms of user identification, for tracking or other purposes, can be powered by techniques like “fingerprinting”, which recognize distinctive features of a user’s system and can thereby identify many individuals without storing any data locally. Although fingerprinting is hardly universal, this kind of non-local, non-obvious identification across devices means users may ultimately have no way to see, control, or opt out of what’s going on.
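To make concrete why fingerprinting needs no local storage, here is a minimal sketch of the underlying idea: hash together enough quasi-stable attributes of a system, and the result is often distinctive enough to re-identify a visitor. The attribute names and values below are purely illustrative, not any real browser API.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Derive a stable identifier by hashing quasi-stable system attributes.

    Nothing is stored on the user's device; the same system tends to
    produce the same hash on every visit.
    """
    # Sort the keys so identical attributes always hash to the same value.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Two visits from the same (hypothetical) system yield the same identifier...
visit_1 = fingerprint({
    "user_agent": "ExampleBrowser/1.0",
    "screen": "2560x1440",
    "timezone": "Europe/Brussels",
    "fonts": "Arial,Helvetica,Verdana",
})
visit_2 = fingerprint({
    "user_agent": "ExampleBrowser/1.0",
    "screen": "2560x1440",
    "timezone": "Europe/Brussels",
    "fonts": "Arial,Helvetica,Verdana",
})
assert visit_1 == visit_2

# ...while a slightly different system yields a different one.
other = fingerprint({
    "user_agent": "ExampleBrowser/1.0",
    "screen": "1920x1080",
    "timezone": "Europe/Brussels",
    "fonts": "Arial,Helvetica,Verdana",
})
assert other != visit_1
```

This is exactly why device-level controls like clearing cookies don’t reach fingerprinting: there is nothing on the device to clear.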
So the EU institutions are working to replace the current law in this space, the ePrivacy Directive, with a new Regulation: a more aggressive and prescriptive instrument intended to give users the option to control all tracking-related information stored locally on their devices. Web browsers do offer that control these days; it would now be required of a broader range of end-user software. While this wouldn’t inherently stop all tracking, if the rules that emerge from this political process are worded well and avoid causing unintended harm, they will be a significant improvement on the Directive, and the principles behind them are certainly compelling.
Mozilla has released a position paper laying out our official position on this file, so I won’t go into the details of that here; you can read about it on our blog. But I want to highlight one key piece of the ePrivacy Regulation conversation and talk about why it matters in the broad, messy context of internet and tech policy — and why it merits the T in my CATS narrative.
One of the major ideas in the proposed Regulation is that users should have privacy options available to them. We’ve incorporated this principle into our core products at Mozilla. You can use regular Firefox, which stores cookies so that you don’t have to log in to a website every time, or you can use private browsing mode (or, on mobile, Firefox Focus), which doesn’t store cookies — and in fact goes a step further by enabling active tracking protection technology. There are trade-offs in choosing between the two. I like that I can type a few words of an article I read recently and Firefox will surface it from my browsing history, for example. But the important point, in the context of this policy conversation, is that we invest in making sure both options are excellent user experiences.
Let me switch gears a little. Some of the big tech companies have a really bad rap on privacy. They’ve been getting criticism over privacy concerns for years now, so they all offer privacy settings with varying levels of user control. For example, Google will show you what it thinks about you based on its tracking, and you can tell it if any of those guesses are wrong and effectively de-categorize yourself. Facebook gives you a setting to turn off “ads on apps and websites off of the Facebook companies” — but you can only access it if you have a Facebook account (Facebook tracks you even if you don’t), and turning off ads isn’t the same thing as turning off tracking, of course. Depending on your point of view, these settings are either a good first step towards user privacy or barely scratch the surface.
With many businesses in the internet ecosystem, there is a ceiling to how many privacy settings they will introduce, and how much control they will give to users. That ceiling exists because those businesses have revenue models derived in large part from targeted online advertising powered by tracking. Limiting privacy-invasive tracking limits monetization for business models built on it.
I view this as a deep tension lying underneath the surface of the ePrivacy Regulation process — not to mention privacy and the tech industry more broadly. And right now it feels like a war. Some policymakers want to end all privacy-invasive tracking immediately — that would threaten our internet economy in a fundamental way and take away valuable services. Some businesses want the government to just get out of the way and let them continue their practices and revenue models exactly as they are — that would lead to continued discomfort and distrust. If either of these positions wins outright, the internet will be worse off.
A better future for the internet is one where more businesses invest in parallel user experiences with respect to tracking, each backed by sufficient revenue pathways, and each available to the user as a meaningful privacy option: one ‘normal’, and one privacy-protective that allows users to escape from the discomforts and costs of invasive forms of tracking (although there may be other tradeoffs in exchange).
The privacy-protective experience may have less monetization potential than the normal experience; in particular, it probably can’t derive the same kinds of revenue from tailored advertising. Different businesses in different industries would try different approaches to compensate and make that experience sustainable in its own right. Some would charge users directly. Others would run more advertisements, or find other ways of categorizing users to tailor ads — for example, users could be asked to self-categorize, powering revenue-generating ad customization without tracking. These options probably can’t offer exactly the same user-facing experience, but all of them should offer the same core, non-personalized communications and content, regardless of which the user chooses. And importantly, users should be able to switch between the normal and privacy-protective options seamlessly, depending on their needs and wants in specific contexts — not just once in perpetuity, but in real time, at will.
Legal systems vary from one country to another, as do general cultural perspectives. But it’s expensive and inefficient to fragment services and tailor them to country-specific needs. And even that doesn’t account for individual differences within a country: privacy is deeply personal and subjective, different from person to person within a culture; there are no universals here. So setting a single privacy standard doesn’t always make sense, particularly in the context of tracking, where the standard has consequences for revenue and business sustainability.
That’s why the principle of privacy options is so compelling to me — not just in the context of the ePrivacy Regulation, which uses that phrase, but more broadly. I am inspired by the idea of having a meaningful choice among multiple experiences that all offer the same core network, content, and communications benefits, but with some tradeoffs around privacy and functionality. From a privacy perspective, neither businesses nor government policymakers know what every user wants. And even a single individual is different in different contexts and at different points in time. An individual’s valuation of the privacy and functionality at stake in these tradeoffs may fluctuate, and only that individual is in a position to say where it lands.
This certainly isn’t an easy ideal to realize. For some businesses, it may even seem to pose existential challenges. But we need to figure it out. I believe this is the long-term outcome that fundamental rights in the EU require, and the ultimate target for the EU institutions and many other governments around the world, who will continue reworking their privacy laws and regulations until it is achieved.