Who's actually in charge? (Spoiler: not governments)
Written by Nhiyc
Editor: Lucija Ajvazoski
There is a version of the internet origin story that goes like this: a decentralised network, built on open standards, designed to survive nuclear attack by routing around damage, democratising access to information for everyone. No single point of control. No single point of failure.
That version is historically accurate and almost completely irrelevant to the internet we actually live with.
What we have instead is a small number of very large private companies with more economic power than most nation states, controlling the infrastructure that the rest of the world runs on, governed by opaque terms of service that nobody reads, accountable to shareholders and, depending on jurisdiction, to almost nobody else.
This did not happen by accident.
How did we get here?
The early internet was genuinely built on open standards. Anyone could run a server. Anyone could publish. The protocols were designed for interoperability, meaning different systems could talk to each other without one owning the other.
The shift happened when the business model shifted. "Free" services funded by advertising required scale, and scale required lock-in. Big Tech companies systematically weakened interoperability: they made it difficult to export your data, impossible to communicate across platforms, and costly to leave. Network effects did the rest. If everyone you know is on one platform, leaving that platform means losing access to everyone you know. That's not a free choice. That's a structural trap.
Proprietary formats, deliberately incompatible systems, and aggressive acquisition of potential competitors consolidated what had been a distributed network into a small number of centralised platforms. The open architecture of the internet became the delivery mechanism for a handful of closed ecosystems.
By the time governments started asking serious questions about tech power, those companies had already become the infrastructure governments depended on. That dependency removed most of the available leverage.
So what scale are we talking about?
According to Synergy Research Group, Amazon Web Services, Microsoft Azure, and Google Cloud together control around 63% of global cloud infrastructure spending. That includes vast amounts of government data, scientific research, health records, financial systems, and critical infrastructure. The data underpinning climate science, environmental monitoring, and public health sits, in large part, on servers owned by three US corporations.
This concentration has a geographic dimension worth being precise about. Data centres require massive amounts of land, water for cooling, and energy. They are concentrated in the Global North and increasingly in China. The undersea cables that carry data between continents are largely owned by US, European, and Chinese corporations. The satellites providing global internet coverage and Earth observation capabilities are predominantly controlled by the same actors.
The result is that the physical infrastructure of the information economy resembles the old colonial trade systems: built by and for the centres of power, extracting value from the periphery, leaving less powerful countries with limited ability to govern or exit the system.
The "Big Tech vs. everyone" framing obscures what's happening
It is tempting to frame this as Big Tech versus governments, or Silicon Valley versus everyone else. That framing is too simple and it obscures something important.
There are multiple tech empires, not one. They have different governance models, different relationships to the state, and different extractive mechanisms. But they share the same underlying logic.
US Big Tech operates on surveillance capitalism: the model where user data and attention are the product, monetised through advertising and increasingly through AI development. The companies are nominally private and nominally independent of government, though the relationship between Silicon Valley and the US national security apparatus is considerably closer than the "move fast and break things" mythology suggests.
Chinese tech giants operate on a state-integrated model: data serves state goals, companies operate with explicit government involvement, and the boundary between corporate and state data collection is deliberately blurred. This is a different model from surveillance capitalism but not a less extractive one.
Both extract from the Global South. Both concentrate power. Both resist sovereignty claims from communities and governments outside their home jurisdictions. The framing of US Big Tech as the villain and Chinese tech as the alternative, or vice versa depending on your political starting point, is a distraction from the fact that both are versions of the same problem: infrastructure controlled by distant powers, extracting value from communities with no meaningful say in how that infrastructure operates.
Emerging alternatives exist. India's Data Empowerment and Protection Architecture, Brazil's digital sovereignty initiatives, and the EU's digital sovereignty rhetoric all represent attempts to assert some degree of control. Whether any of these amount to genuine sovereignty or simply a different set of dependencies is an open and important question.
Surveillance problems
One of the less discussed consequences of data concentration is what happens when the same infrastructure serves multiple purposes simultaneously.
Environmental monitoring technology is a useful example. Satellites used to track deforestation use the same underlying capabilities as satellites used to track human movement. Machine learning algorithms trained to identify forest clearance can be retrained to identify border crossings. Earth observation companies with conservation contracts also hold defence contracts. Planet Labs, one of the largest commercial satellite operators and a significant provider of environmental monitoring, holds contracts with NATO, the US Department of Defense, the US Navy, and the US National Geospatial-Intelligence Agency alongside its conservation work. The data is the same. The application depends on who is asking and what they are paying for.
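A minimal, purely illustrative sketch of that point, using synthetic data as a stand-in for features extracted from satellite imagery: the training pipeline is identical, and only the labels supplied by the customer change.

```python
# Hypothetical sketch: the same generic pipeline serves both applications.
# The "features" here are random noise standing in for per-tile imagery features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
tile_features = rng.normal(size=(500, 16))       # stand-in for features from 500 image tiles

deforestation_labels = rng.integers(0, 2, 500)   # "cleared forest" vs "intact forest"
movement_labels = rng.integers(0, 2, 500)        # "vehicle or person present" vs "absent"

# Same code path, same infrastructure, same capability; the application is decided
# entirely by whoever supplies the labels and pays for the output.
forest_model = RandomForestClassifier(random_state=0).fit(tile_features, deforestation_labels)
movement_model = RandomForestClassifier(random_state=0).fit(tile_features, movement_labels)
```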
This dual-use reality has real consequences. In documented cases across multiple countries, environmental and protest monitoring infrastructure has been repurposed for political surveillance. Amnesty International's research found that Dutch police have used drones, video surveillance cars, and bodycams to conduct mass surveillance of peaceful protesters across climate protests, pro-Palestine protests, and other movements, in ways that Amnesty says violate the right to privacy and have a chilling effect on the right to peaceful assembly. The infrastructure built to watch the planet watches the people on it.
The implication for climate data specifically is significant. Expanding environmental monitoring, which is genuinely necessary for climate action, also expands the surveillance apparatus available to states and corporations. Open climate data enables justice movements and enables oppression, depending entirely on who controls access and under what conditions.
This is not an argument against environmental monitoring. It is an argument for being precise about who controls it and under what governance conditions it operates.
Regulation: is it working?
The standard response to tech power concentration is regulation: passing better laws, enforcing antitrust, requiring data portability, mandating algorithmic transparency. These are not wrong answers. They are insufficient ones, and understanding why matters.
Tech companies' lobbying expenditure in Washington DC and Brussels is substantial and growing. In the US, tech giants combined to spend $61.5 million on lobbying in 2024, employing one lobbyist for every two members of Congress. In Brussels, the tech industry now spends around €151 million annually on lobbying, a rise of more than 50% in four years, with the top 10 digital companies spending three times more than the top 10 spenders in the pharmaceutical, financial, and automotive industries combined. The revolving door between tech companies and regulatory agencies is well documented: senior regulators move into industry roles, and industry veterans fill regulatory positions. The institutional knowledge required to oversee these complex technical systems increasingly sits with the companies being regulated rather than the agencies doing the regulating.
The result is a regulatory environment shaped substantially by the entities it is supposed to constrain. GDPR, the most substantive data protection framework, is genuinely significant in establishing that individuals have rights over their personal data. Yet as academic research has documented, it has also generated a multibillion-dollar compliance industry, absorbed enormous institutional resources, and left the core business model of surveillance capitalism largely intact. Individual consent mechanisms designed to be bypassed do not constitute meaningful data governance.
The deeper problem is structural. Effective regulation of global tech giants that operate across borders requires international coordination. International coordination requires political will from governments that are simultaneously dependent on the infrastructure being regulated and receiving significant lobbying investment from the companies operating it. That is a structural conflict of interest that procedural fixes do not resolve.
What genuinely sovereign infrastructure would require
Sovereignty over data requires sovereignty over infrastructure. This is not a technical point; it is a political one.
Community mesh networks exist and work: NYC Mesh in New York, Guifi.net in Catalonia, the Detroit Community Technology Project, Indigenous community networks in Canada and Australia. Worker-owned cooperative cloud hosting exists: May First Movement Technology operates globally, Autonomic Cooperative operates in the UK. Federated social media protocols like ActivityPub and Matrix demonstrate that decentralised communication infrastructure is technically viable.
These are not marginal experiments. They are proof of concept that infrastructure does not have to be owned and controlled by a small number of extractive corporations. They are also not currently at a scale that constitutes a genuine alternative for most of the world's population. Getting from proof of concept to viable alternative requires public funding, political will, and sustained organised effort over a long period. None of those things are currently present at the required scale.
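To make the federation point concrete, here is a minimal sketch of why protocols like ActivityPub can interoperate without a central owner: activities are plain, openly specified JSON documents that any compliant server can parse and deliver. The server names below are hypothetical placeholders.

```python
# Hypothetical sketch of an ActivityPub "Follow" activity, expressed with the
# ActivityStreams vocabulary the protocol is built on. Server names are made up.
import json

follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    "actor": "https://server-a.example/users/alice",   # Alice's account on one server
    "object": "https://server-b.example/users/bob",    # Bob's account on a different server
}

# Alice's server delivers this document to Bob's inbox over HTTPS. Neither server needs
# the other's permission, ownership, or proprietary SDK to interoperate.
print(json.dumps(follow_activity, indent=2))
```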
The argument that this is too hard or too slow is, again, a description of current conditions rather than an argument about what is possible or necessary. The current conditions were themselves the result of decades of deliberate political and economic choices. Different choices are available.
The accountability gap
When an algorithm makes a discriminatory decision, it is worth asking where accountability sits. The company that built the model will point to the training data. The organisation that deployed it will point to the vendor. The regulator will note that the decision-making process is proprietary and not subject to disclosure requirements. The community affected by the decision has limited legal recourse and even more limited practical ability to challenge a process they cannot see inside.
This is not an edge case. Algorithmic systems making consequential decisions about credit, housing, healthcare, policing, and increasingly climate resource allocation operate largely as black boxes, with accountability structures that are deliberately unclear. When the training data encodes historical discrimination, the algorithm reproduces and scales that discrimination. When the communities most affected are those with the least political and economic power to challenge it, the discrimination compounds.
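A toy sketch of that mechanism, with entirely synthetic data: if past decisions encoded a neighbourhood-based bias, a model trained to imitate those decisions reproduces the bias, at scale and behind a technical veneer. The lending scenario and every number below are invented for illustration.

```python
# Synthetic illustration: historical approvals depended on neighbourhood, not ability to repay.
import random
from sklearn.tree import DecisionTreeClassifier

random.seed(0)
X, y = [], []
for _ in range(1000):
    income = random.uniform(20, 100)                     # income in thousands (synthetic)
    neighbourhood = random.randint(0, 1)                 # 1 = historically redlined area
    approved = int(income > 40 and neighbourhood == 0)   # past decisions encode the bias
    X.append([income, neighbourhood])
    y.append(approved)

model = DecisionTreeClassifier().fit(X, y)

# Identical income, different neighbourhood: the model has learned the historical pattern
# and denies the applicant from the redlined area. Prints [1 0].
print(model.predict([[90, 0], [90, 1]]))
```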
Who is accountable when the algorithm redlines a climate-vulnerable neighbourhood out of insurance coverage? When a flood risk model trained on incomplete data from informal settlements produces inaccurate results that affect disaster response? When an emissions trading system built on corporate-reported data consistently underestimates actual emissions from the most powerful emitters?
The answer, currently, is: largely nobody. And that is not an oversight. It is an outcome that serves specific interests.
So what now?
The concentration of data infrastructure in a small number of private hands, operating beyond effective democratic accountability, with extractive relationships to the communities generating the most valuable data, is not a temporary market condition that will self-correct. It is a structural feature of how the digital economy was built.
Changing it requires the same things changing any structural concentration of power requires: legal challenges, alternative institution building, coordinated political pressure, and sustained organising by the communities with the most at stake. The next article in this series looks at where that is actually happening and what it looks like when it works.
This article is part of a series expanding on the conversations from Panic with a Purpose, a podcast exploring data, AI, and justice for people and the planet. Episode 1 on Data Sovereignty, Democracy, and Ownership is out now.