
If artificial intelligence does one thing, it is to reflect and amplify social inequality. That is why specific AI laws are of little use to us: AI requires a different legal system. So argues Reijer Passchier, professor of ‘digitalization and the democratic constitutional state’ at the Open University, assistant professor of constitutional law at Leiden University and author of Artificial Intelligence and the Rule of Law.
Last week, politicians, scientists and tech company leaders gathered in Bletchley Park, UK, to discuss the risks of artificial intelligence (AI). They mainly discussed fake news, cyber attacks and the (probably small) chance that AI will cross a line in the short term and become a threat to all life on earth.
As is often the case on occasions like these, the real problem of AI went undiscussed: power over this technology lies mainly in the hands of a few very large commercial companies, within which only a handful of CEOs or major shareholders call the shots.
As owners of AI and the infrastructure it requires, they determine almost unilaterally which AI gets developed further, at what pace, what risks are acceptable, what values AI should serve and under what conditions others may use it. Their own private interests (or delusions) usually come first, not the general interest.
How did we end up in this ‘feudal’ situation?
To answer this question, we should not look first at AI itself, but at the social, political and legal circumstances from which the technology arises. Perhaps the most defining institution for AI is our system of property rights. Invented more than two centuries ago, it made it possible for individuals, as owners, to concentrate in principle unlimited amounts of exclusive control over technology, and to exploit it at will – no matter how important that technology was to the rest of society.
Then, around 1860, states introduced the private corporation: the capital company, in legal terms. This made companies independent legal entities, among other things, and gave them the ability to issue shares. These innovations further contributed to the concentration of power over technology.
The mega-corporation quickly became the dominant technological player in the West. Think of oil, cars, medicines, food and weapons.
Multinationals
Of course, in theory states could ‘regulate’ the mega-enterprise. In practice, this was easier said than done, owing for example to outsized lobbying, dependence on jobs, wars and so-called ‘liberal’ ideologies.
With the globalization of the world economy during the 20th century, the mega-owners of technology grew into very large multinationals, which proved genuinely difficult to tame. Thanks to private international law, multinationals could move parts of their operations from country to country.
In this way they could evade unwelcome rules or play states off against each other to extract favorable regulations. What followed was a race to the bottom between states, in which democracy, the environment and the weakest in the world became the victims.
As a result, technological progress did not exactly benefit everyone.
With digitalization, the race to the bottom between states and the further consolidation of technological power among a few mega-enterprises are gaining momentum. Those who already have technology can develop AI further. Anyone who develops AI further gains more users, more data and therefore more opportunities to go even further.
Whoever has the best AI will then attract more users, collect more data, and so on. With such strong network effects, it is not merely a case of ‘the rich get richer and the poor get poorer’; the winner takes all.
Digital technology also makes mega-enterprises even more mobile than they already were (even into space: think of Musk’s Starlink). As a result, they may be able to avoid state rules even more easily than previous generations of mega-enterprises, and play states off against each other even more effectively to create a business climate that favors them. It is not without reason that Big Tech companies – the largest and richest companies on earth – receive enormous subsidies from states to establish themselves in their territory.
In addition, the digital technology of mega-enterprises, as a system technology, interweaves with other technologies and with the social fabric of society. Even the government is now moving to the big-tech cloud, using one big-tech email server, one big-tech operating system, and so on. This dependence does not make protecting the general interest any easier.
Counterproductive regulation
Technological power relations are now so skewed that specific ‘tech regulation’ can have an adverse effect. Mega-companies have the means to avoid new rules or to bend them to their will using the most expensive lawyers. Smaller companies, start-ups and even states often do not have that option.
In short, the real problem of AI is not so much AI itself, let alone the AI-driven mass extinction with which Big Tech tried to distract us at Bletchley Park from the real problems surrounding AI. Far more acute is the social inequality that current AI reflects and further increases. A regulator that does not change this will probably only encourage a further concentration of technological power among a few private players – with all the negative consequences that entails.
AI requires a different legal system. But that is a long-term project. In the short term, it is crucial that countries reduce their dependence on the largest tech companies and their AI. As long as the government and the rest of society cannot function without a few commercial mega-owners – and keep receiving them as heroes at conferences like Bletchley Park – AI cannot possibly work for all of us.