Partisanship is at an all-time high in Washington. But one issue policymakers on both sides seem to agree on is that something should be done to rein in the power of Big Tech. The American people also seem to agree: a recent Gallup poll found that majorities of both Republicans and Democrats favor increased regulation of the tech sector. Prominent conservatives are even abandoning their traditional rhetoric about limited government to join forces with progressives in calling for structural separation and expansive new regulations.
But how did we get here? Not long ago, tech companies were lauded by both Republicans and Democrats as champions of innovation that could make us happier and more productive, and export American values to the world—just like blue jeans and rock and roll. Now Big Tech is blamed for innumerable societal harms, including rampant censorship, promoting violent extremism, spreading false information, and unfairly stamping out competition. Even Sen. Mike Lee (R-UT), a reliable and principled free-marketer, recently commented that:
The actions of Big Tech continue to divide the nation…The Silicon Valley fairytale of innovation and technological progress sold to Americans has turned into a corporatist nightmare of censorship and hypocrisy.
Grievances against the tech industry are far-ranging, but when it comes down to the details, there’s less agreement about what to do. While both Republicans and Democrats have expressed concerns about the power of Big Tech, there are still underlying ideological differences on law and policy, as well as in framing the problem. In particular, the right has tended to focus on speech and political bias, while the left has focused on harmful content and economic power. Underlying both, however, is a general anxiety about the centralized power of tech platforms.
Two Philosophies of the Future
This discourse reinvigorates a longstanding debate over open versus closed systems, and more broadly over how design choices shape technology’s potential as a tool for human empowerment or oppression.
This is reflected in two opposed philosophies that have come to define the development of personal computing and the internet through its history. One is focused on building systems that are decentralized, open, and interoperable. The other is focused on building systems that are centralized, closed, and proprietary. The latter comes out of the enterprise-level business world and Silicon Valley’s roots in the defense sector. The former comes out of the reaction against these cultures that took hold in academia, hobbyist computing clubs, and the utopian thinking of the counterculture. As journalist John Markoff summarized in his book What the Dormouse Said, a major shift started to happen around the 1960s:
Computing went from being dismissed as a tool of bureaucratic control to being embraced as a symbol of individual expression and liberation. The evolution of the perception of the computer mirrored other changes in the world at large.
This narrative was later encapsulated and popularized in Apple’s famous “Big Brother” television commercial for the Macintosh, which proclaimed that it would show you “why 1984 won’t be like 1984.” The ad set Apple against IBM, the tech giant of its day. Brent Thomas, the art director for the ad’s production firm, told the New York Times in an interview that they wanted to “smash the old canard that the computer will enslave us,” though he added the caveat that this was “strictly a marketing position.”
Steve Jobs had a mantra that consumers want things that “just work.” For Jobs, achieving this meant asserting top-down, end-to-end control over the myriad products he would help create at Apple. And he was obsessive about this approach. As Dan Farber described it in a 2007 article for ZDNet:
[He was] a strong willed, elitist artist who doesn’t want to see his creations mutated inauspiciously by unworthy programmers. It would be as if someone off the street added some brush strokes to a Picasso painting or changed the lyrics to a Bob Dylan song.
Jobs famously disdained users who might tinker with his creations, leading to computer cases that couldn’t be opened with regular screwdrivers, components unnecessarily soldered or sealed in place, and hostility toward jailbreakers and repair shops.
Fast forward to today, and it’s no surprise that Apple is under fire for its closed-system approach, which places substantial limitations on third-party developers and users alike (although this has its benefits too). Unlike Android, iOS offers no simple way to load apps other than through the App Store. Apple even had its 1984 ad parodied and thrown back in its face by Epic Games as part of the lead-up to an ongoing antitrust lawsuit between the two companies. While Apple’s persnickety need for control may have looked quirky when it was the scrappy underdog, it comes off differently now that the company has a $2 trillion market capitalization.
Steve Jobs was the product and marketing genius behind Apple, and it was his closed-system philosophy that ultimately won out (and some would argue helped propel the company’s success). But co-founder Steve Wozniak, the engineering brains in the company’s early days, represented a much different view. Woz’s philosophy came out of the engineering culture of Silicon Valley in that era—a culture of hackers, tinkerers, and open systems. Still technically on the Apple payroll, Woz has consistently criticized the company for not being more open. He told Fortune in a 2015 interview that, “I don’t like being in the Apple ecosystem…. I don’t like being trapped.” Similarly, in a 2012 interview with TNW, he said, “I’m for more openness. I believe that you can create the best most innovative products even when they are open.” No doubt to Apple’s chagrin, he’s also a proponent of jailbreaking.
Fixing the Future
This leads us back to a potential path forward for the debate over Big Tech in Washington. For policymakers, there are two broad strategies for approaching the issue. One is to punish, regulate, and control the tech companies—which may be difficult given the lack of agreement about the desired outcomes or even the problem. The other is to try to make the innovation ecosystem more open and competitive, empowering end users and weakening central authorities. To frame this another way, per EFF’s Cory Doctorow, we can either try to fix Big Tech, or we can fix the internet. He writes:
Fixing Big Tech means demanding that these self-appointed dictators be benevolent dictators, that the kings be wise kings….By contrast, the fix-the-Internet people want a dynamic Internet, where there are lots of different ways of talking to your friends, organizing a political movement, attending virtual schools, exchanging money for goods and services, arguing about politics and sharing your art.
The problem isn’t (merely) that the CEOs of giant tech companies are not suited to making choices that govern the digital lives of billions of people. It’s that no one is suited to making those choices.
Proponents of this approach argue a more open and decentralized ecosystem could be achieved through policies promoting data portability, interoperability, and open protocols. Data portability is the concept that you should be able to export and transfer your data from one platform to another (a version of this is already part of CCPA and GDPR). The related idea of interoperability is more expansive, describing the ability of different systems to interconnect and work together. Finally, open protocols are standards anyone can use or work with, and are what the internet ecosystem was originally built on (and largely still uses as infrastructure).
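To make the portability concept concrete, here is a minimal sketch of the mechanics: one service exports a user’s data in a documented, platform-neutral schema, and a competing service imports it. All of the names and the schema here are hypothetical illustrations, not any platform’s real API or any standard mandated by CCPA or GDPR.

```python
import json

# Hypothetical internal record for a user on "Platform A".
platform_a_user = {
    "handle": "@alice",
    "contacts": ["@bob", "@carol"],
    "posts": [{"text": "hello world", "ts": 1609459200}],
}

def export_portable(user: dict) -> str:
    """Serialize a user's data into a documented, platform-neutral
    JSON schema that any competing service could parse."""
    portable = {
        "schema": "portable-social/v1",  # hypothetical open schema name
        "identity": user["handle"],
        "connections": user["contacts"],
        "content": [
            {"body": p["text"], "created_at": p["ts"]} for p in user["posts"]
        ],
    }
    return json.dumps(portable, indent=2)

def import_portable(blob: str) -> dict:
    """A second service ("Platform B") ingests the same schema
    into its own, differently shaped internal representation."""
    data = json.loads(blob)
    assert data["schema"] == "portable-social/v1"
    return {
        "username": data["identity"],
        "friends": data["connections"],
        "timeline": data["content"],
    }

blob = export_portable(platform_a_user)
migrated = import_portable(blob)
print(migrated["username"])  # the identity survives the move: @alice
```

The hard part in practice is not the serialization but agreeing on the shared schema in the middle, which is why portability mandates tend to lean on standards bodies or industry efforts rather than leaving each platform to invent its own export format.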
In the U.S., this policy approach found bipartisan support in the House Judiciary Committee’s year-long investigation into digital market competition, as well as in legislative proposals like the ACCESS Act. Outside the U.S., it has gained traction in the U.K. Competition and Markets Authority’s Digital Markets Taskforce report, the European Commission’s report on “Competition Policy for the Digital Era,” and the Commission’s proposed Digital Markets Act.
Speaking at CPAC this year, Rep. Darrell Issa (R-CA) made the case for robust data portability legislation. He argued, “we have to find a way to tear down the barriers to make sure that American innovators can get into this market,” adding that consumers should “own their own data…and demand that it be portable.” Rep. Ken Buck (R-CO), ranking member of the House Judiciary Committee’s Subcommittee on Antitrust, Commercial, and Administrative Law, similarly advocated for “consumer-oriented data portability and interoperability policies.” Democrats are also on board (but would like to pair it with other aggressive measures).
Proponents argue that, while this approach raises some significant technical questions, it has the potential to solve some of the hardest problems facing tech platforms, giving users more control over speech and harmful content, and circumventing the economic power of walled gardens. If successful, it could be a boon for competition, lowering switching costs between platforms, reducing barriers to entry for new firms, and empowering users with greater choice. On the other hand, critics argue a poorly designed mandate could inhibit innovation through technological lock-in, give incumbents a wider moat, and compromise privacy and security.
In short, the policy discussion around promoting interoperability entails a range of policy approaches that would make it easier for users to move between platforms, or even incentivize an architectural shift away from large centralized platforms entirely. But there are still a lot of details to be worked out, and good reasons for moving cautiously.
Towards a Theory of Interoperability
Interoperability plays an important role in our everyday lives that’s easy to overlook. In the analog world, it’s at work in the standardized threads that make it easy to screw a lightbulb into its socket, or use aftermarket parts to repair or upgrade your car. When drafting an article like this one, interoperability allows you to seamlessly export from Google Docs to Microsoft Word, and send an email from a Mac to a PC. It’s also what allows you to pick up a phone on the Verizon network and call someone on the AT&T network, or take your number with you if you change carriers.
In the digital world, interoperability requires functionality across many different layers, not all of them technical. In their 2012 book, Interop: The Promise and Perils of Highly Interconnected Systems, Urs Gasser and John Palfrey define four key layers of interoperability: (1) the technological layer, including hardware and software; (2) the data layer, involving informational content; (3) the human layer, involving how humans act and coordinate; and (4) the institutional layer, involving legal systems and other societal structures.
There are multiple ways of thinking about interoperability. It can be categorized as vertical, between a system and its users; or horizontal, between two similar systems that might compete in the market. It can also be framed in terms of permission. For instance, it can be adversarial, where a company is hostile to a system trying to connect with it (e.g. ripping a music CD or using an unauthorized iPhone repair shop). It can be permissioned, where a company gives approval or license to the interoperator (e.g. an API that connects Salesforce with Gmail). Or it can be permissionless, where a company is indifferent to it or collaboratively works on an open protocol (e.g. scraping a website to create an RSS feed or sending data over Wi-Fi).
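The permissionless case is worth making concrete. RSS works because the format is a published standard: any client can consume any site’s feed without asking the publisher’s permission or registering for an API key. A minimal sketch using only Python’s standard library (the feed content below is made up for illustration):

```python
import xml.etree.ElementTree as ET

# A tiny, made-up RSS 2.0 feed. Because the format is a public
# standard, any client can parse it without the publisher's consent.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Post one</title><link>https://example.com/1</link></item>
    <item><title>Post two</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def parse_feed(xml_text: str) -> list[tuple[str, str]]:
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
    ]

for title, link in parse_feed(FEED):
    print(title, link)
```

Contrast this with a permissioned API, where the same data would flow only through endpoints the platform operates, under terms it can change or revoke at will.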
Interoperability can arise as a result of different kinds of government actions, including as an antitrust remedy (e.g. the instant messaging interoperability conditions imposed on AOL in its merger with Time Warner), as a regulatory requirement in a certain sector (e.g. local number portability for phone carriers), or through deregulatory changes that make it easier to achieve compatibility without permission (e.g. reforming the Computer Fraud and Abuse Act or Digital Millennium Copyright Act). It can also result from government standard setting bodies, like the National Institute of Standards and Technology; or from R&D programs like DARPA and the National Science Foundation, which played an active role in establishing the internet protocol suite.
But interoperability doesn’t require government action. Companies often have a business interest in making their products interoperable, at least to a degree. This includes the many internet protocols developed by specialized standards setting bodies, developer APIs on operating systems like Android or Windows, third party tools like IFTTT, or even platforms that are designed from the start to be decentralized—like the protocol-based file system IPFS, or the peer-to-peer social media platform Mastodon. There are also significant efforts by the big tech companies, such as the Data Transfer Project, which seeks to create an open source data portability standard.
In evaluating government versus private sector solutions, more expansive government interventions typically bring higher risks of undesirable consequences, including technological lock-in, cyber vulnerabilities, and capture by incumbents. And while private sector solutions typically carry fewer risks, achieving optimal interoperability may sometimes require government action to address coordination problems or misaligned incentives, or to overcome existing regulatory barriers.
The Limits of Permissioned Openness
Among the large social media companies, Twitter has taken the lead in (at least rhetorically) embracing a shift to being more open and interoperable. In late 2019, Twitter CEO Jack Dorsey announced the creation of Project Bluesky, a team under CTO Parag Agrawal exploring decentralized models. In January, the team released its “Ecosystem Review” white paper, although there are few details about how it might actually be implemented.
In recent congressional testimony, Twitter CEO Jack Dorsey endorsed the more specific approach of giving users “algorithmic choice.” This concept, previously proposed by Stephen Wolfram, would open up its news feed to third party algorithms selected by its users, moving the platform to be more decentralized. This might look something like email, where there are different services users can choose from on the same common protocols (e.g. Gmail or ProtonMail), with each using a different algorithm to sort out spam. Or it could look more like browser extensions, where there’s still a central authority but added options for a customized experience.
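In code terms, algorithmic choice amounts to making the feed-ranking function a pluggable component selected by the user rather than fixed by the platform. The following is a hypothetical sketch of that architecture; none of these names correspond to Twitter’s actual interfaces, and a real third-party ranker would run against a far richer data model.

```python
from typing import Callable

# A post is modeled as (text, timestamp, engagement_score).
Post = tuple[str, int, float]

def chronological(posts: list[Post]) -> list[Post]:
    """Newest first; no editorial weighting by the platform."""
    return sorted(posts, key=lambda p: p[1], reverse=True)

def engagement_weighted(posts: list[Post]) -> list[Post]:
    """Rank by engagement, the default most platforms impose today."""
    return sorted(posts, key=lambda p: p[2], reverse=True)

# Under algorithmic choice, the user (not the platform) picks the
# ranker, and third parties could register new ones in this registry.
RANKERS: dict[str, Callable[[list[Post]], list[Post]]] = {
    "chronological": chronological,
    "engagement": engagement_weighted,
}

def build_feed(posts: list[Post], user_choice: str) -> list[Post]:
    """Assemble a feed using whichever algorithm the user selected."""
    return RANKERS[user_choice](posts)

posts = [("old but viral", 100, 9.5), ("brand new", 200, 1.0)]
print(build_feed(posts, "chronological")[0][0])  # brand new
print(build_feed(posts, "engagement")[0][0])     # old but viral
```

The design question the white papers gloss over is who operates this registry: if the platform still approves each entry, the model resembles browser extensions; if rankers interoperate over an open protocol, it looks more like email.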
Unlike Facebook, which is busy calling for new global regulations to appease critics and leveraging its new Oversight Board as a lightning rod, Twitter’s CEO seems to recognize that there are no good long-term solutions under the current governance paradigm. Writing about the hearing on his Substack Pirate Wires, Founders Fund’s Mike Solana gets at the key point:
Dorsey in particular came to the hearings with one goal: to make it clear, in the Gandalfian spirit of the One Ring, that the power to control speech is too great for anyone — is too great to exist.
It wasn’t long ago that Twitter touted itself (half-seriously) as the “free speech wing of the free speech party.” In its early years, the company genuinely embraced cyber-libertarian values.
In some ways Twitter has been a more open platform than its rivals, with APIs that facilitate a wide variety of third-party apps and services. These offer a repeatable way for computer systems to access, use, and interface with Twitter in near real time, increasing its utility. But what Jack gives, Jack can take away. Platforms like Twitter retain complete control over the operation of their APIs, and have tightened these controls at various points in their history. In 2018, the company announced new developer requirements and subsequently cut off access for more than 143,000 apps. The move came on the heels of the Facebook-Cambridge Analytica scandal, which involved third-party data access through Facebook’s API (and Facebook implemented its own restrictions around the same time).
Even if Twitter opens up its API further, such as to facilitate algorithmic choice, it could still require developers to apply for access and comply with its policies—thus failing to escape the gravity of central authority. On the other hand, if Twitter wants to fully commit to the path of decentralized protocols, it will have to crack the code of how to monetize the new system and get buy-in from its stockholders, employees, and users. Based on what we’ve seen so far, the company doesn’t seem close to being ready to do that.
The current debate over Big Tech is stuck in second gear. The most salient proposals on the table, such as revoking Section 230 or radically changing antitrust law, are unlikely to produce good results for speech or competition (even if they’re popular with activists and pundits). This is because they don’t address the underlying problem: the architectural shift to centralized platforms, rather than the particular companies or the people running them. As long as the current governance model remains, these platforms will always be an attractive target for states and activists to exert pressure and control, even if they’re broken up into smaller companies. And the structure of closed systems will tend to be less favorable to innovation and competition, leading to more concentration and less choice.
Promoting a more open digital ecosystem—through interoperability and open protocols—is a smarter approach than trying to punish or control particular platforms. As policymakers consider how best to incentivize it, they should start by focusing on the low-hanging fruit with relatively low risks of negative unintended consequences. This should include robust data portability, deregulation to facilitate adversarial interoperability, and perhaps basic forms of messaging and network interoperability for large platforms.
Despite their limitations, we should also encourage the development of private-sector-led open protocols and APIs, including decentralized web efforts, as well as Twitter’s Bluesky and algorithmic choice initiatives. Who knows, Jack might even surprise us.
Zach Graves is head of policy at Lincoln Network.
This article was supported by the Ewing Marion Kauffman Foundation. The contents of this publication are solely the responsibility of the authors.