NB: This post is part of the “Skepticism About Information Fiduciaries” symposium. Other contributions can be found here.
Online platforms do different things for (and to) users. Some of these things are a good fit for fiduciary principles, some are not.
Perhaps most obviously, platforms collect data about users. Some of that data is inherently sensitive, like health records; some of it is sensitive in the aggregate, like months of Facebook likes. Either way, users could be harmed if their data fell into the wrong hands or were used against them.
Fiduciary principles are a good fit for platform data collection in two overlapping ways. First, the core fiduciary duty of confidentiality has long applied to knowledge professionals like doctors and lawyers when they receive information about their patients and clients. Like digital platforms, they need information to do their jobs; fiduciary law makes sure they use it only to do their jobs. Second, fiduciary duties of care and loyalty have long applied to parties who are entrusted with a thing of value. That’s what happens in a literal trust, the paradigmatic source of fiduciary duties. It is not difficult to extend those duties to parties who hold information, rather than money or other tangible property. Current U.S. information privacy law is patchy and hesitant, but the best version of itself would cash out fiduciary principles by specifying when and how platforms can use and share user data.
Platforms also make recommendations to users. Search engines recommend websites, advertising networks recommend ads, streaming services recommend music to listen to next, and so on. Good advice helps users find what they’re looking for, or didn’t know they wanted until they heard it; bad advice hides from them the thing they want most in the world, or manipulates them into making bad choices.
Here too, there is a natural-enough fit with fiduciary principles. Consider investment advisors and the on-again, off-again fiduciary rule. The case for loyalty rests on the investor’s need to be able to trust the advisor’s recommendations in a situation where the advisor enjoys an immense informational advantage. That schema fits the user-platform relationship reasonably well: it is hard to imagine a bigger informational asymmetry than the one between users and platforms. The primary duty here is to give unconflicted advice; duties of competence and diligence are a distant second.
But this is also where the first notes of caution start to appear. The difficulty is not that recommendation platforms ought to be loyal primarily to users: they should. The difficulty is that the problem of making recommendations is so complex that it is hard to flesh out the contours of disloyalty in an administrable way. It is easy enough to tell property-handling fiduciaries like trustees that they must not engage in self-dealing; doing so does not seriously restrict the freedom of action they need to manage trust assets appropriately. Applying the same concept of loyalty to traditional advice-giving fiduciaries similarly doesn’t take much off the table: most doctors do not have a financial stake in the medications they prescribe to patients.
Carry this reasoning over to online platforms, however, and it becomes clear that the rule against self-dealing is either absurdly under-inclusive, absurdly over-inclusive, or both. Platforms have subtle and complex financial interests in almost all of the advice they give. Spotify has different licensing deals with different music companies; Google’s paid advertising results and its unpaid organic results are always substitutes for each other; every Facebook News Feed recommendation affects how long users will keep on scrolling through Facebook. These conflicts are pervasive — so pervasive that prohibiting them entirely would effectively prohibit platforms from giving most of the advice that users turn to them for, or prohibit the platforms themselves. (The duty of loyalty is not a suicide pact.) But ignoring the conflicts, or ignoring them as long as they are disclosed in vague and general pro forma terms somewhere, eviscerates the duty of loyalty. Either way, users lose. Something more factually specific and context-sensitive is required.
I tried in Speech Engines to describe that “something” for search engines. The best I was able to come up with was subjective dishonesty. A search engine requires substantial discretion to determine what its users consider relevant, both because different users want different things and because the search engine itself has to guess at what users want. There is no baseline of ground truth about relevance: the legal system is not in a position to say that a given search result is “true” or “false.” Instead, a search engine acts wrongly when “it returns results other than the ones it believes users will find the most relevant” (emphasis added). Perhaps some bright-line prophylactic rules would be better than this hard-to-administer inquiry into a large entity’s subjective motivations. But these rules would not really be fiduciary ones; the fiduciary ideal at best helps motivate why some such rules are appropriate, but is not so useful in telling us what they should be.
There are other concerns about online platforms on which fiduciary principles have very little to say. Consider content moderation. As soon as we move away from services used by individuals in isolation (think Spotify) to genuinely social media (think YouTube), the fiduciary model collapses. It is one thing to say that YouTube has a fiduciary duty to safeguard viewing data. It is another to say that YouTube has a fiduciary duty when making content-moderation decisions. Does it have a fiduciary obligation toward video creators not to suppress their speech? Does it have a fiduciary obligation toward video viewers to protect them from unwanted content? Saying yes to either question rules out saying yes to the other, and if YouTube is supposed to balance its duties toward both, then neither duty is truly a fiduciary one.
Does eBay have a stronger fiduciary duty to buyers or to sellers? Fiduciary theory rejects the question: it would say that eBay should not simultaneously represent buyers and sellers, precisely because their interests inherently conflict. That’s a plausible answer for a fiduciary like an attorney, but it is not a plausible answer for an online marketplace like eBay, which means that eBay is not a fiduciary in this aspect of its operations. Online platforms deal with multiple interacting parties in a way that traditional fiduciaries simply do not; fiduciary concepts provide no useful traction in explaining how platforms should negotiate the conflicting desires of their different users.
Finally, consider concentration. If a local real-estate multiple listing service has overwhelming market share, there is a sense in which it owes that dominance to user data: brokers upload their listings there. But really it dominates simply because it is an online platform at the center of a two-sided market, and it is most convenient for all its users to converge on a single directory. This is an economic problem about the structure of a market. Trying to break down that problem in terms of fiduciary obligations to different users just wastes your time and annoys the pig.
The concept of an “information fiduciary” is an accurate and helpful way of describing the privacy interests that users have in data about them held by online platforms. It provides a good starting point for thinking about platforms’ recommendations, but few clear answers. And it simply has nothing useful to say about other urgent problems online platforms pose, such as content moderation and market concentration. I’m glad to have it in the regulatory toolbox, but not every online problem is a nail.