Illustration: Face on gray tiles.

The Future Rulers?

Artificial intelligence (AI) offers many advantages, but it also carries significant and hard-to-calculate drawbacks for society. For totalitarian regimes, AI is the perfect tool for exercising power. In liberal democracies, AI can erode trust in politicians and institutions and deepen polarisation.

Russian president Vladimir Putin underlined the strategic importance of artificial intelligence when he said, ‘Whoever becomes the leader in this sphere will become the ruler of the world.’ Technology has always been used for good or for harm, and it has fundamentally changed human relations by either extending or constraining both power and opportunity. Today, the discourse on widespread digitalisation and the rise of artificial intelligence (AI) amplifies both of these ethical dimensions. On the upside, AI is celebrated as a new source of innovation, economic growth, and competitiveness, as well as for the productivity and efficiency improvements that AI offers across all industries and sectors.

Intelligent automation also promises to resolve some of the most urgent global challenges and achieve the United Nations' Sustainable Development Goals. The potential economic and social benefits of AI innovations can be tremendous. For the majority, the rise of AI is already being experienced as an increase in everyday convenience.

On the downside, the rapid advancement and pervasive reach of AI capabilities have drawn warnings and condemnation as a source of unprecedented security and privacy risks, as well as of severe social, economic, political, and international imbalances in the long term. In the past, the benefits of such dual-use or disruptive technologies eventually outweighed their harm, but often only after a period of misuse and accidents that led people and governments to demand improvements and regulation. Because AI will not be an outcome of human agency alone but will increasingly develop into an independent agent of autonomous decision-making, we cannot readily rely on those past experiences.

Independent Agent

Over the coming decades, however, the main risk is not that AI itself will cause immediate harm and long-term imbalances; the primary source of disruption will be our existing human relations, practices, and intentions, and thus how AI is applied. AI will not stand outside history but will perpetuate, and probably accelerate, the current trajectory of humankind. Because that trajectory has entered a downward spiral and become more divided and unsustainable, the risk of experiencing more of AI’s downside is very high.

With AI on the rise, coupled with other disruptive technologies such as 5G, the Internet of Things (IoT), robotics, quantum computing, and biosynthetics, the imagined distance between science fiction and real science has shrunk considerably.

AI already beats humans at difficult tasks such as chess, Go, and other complex strategy games, and at certain medical and legal diagnoses. Besides the intelligent automation of control systems, computer vision and language processing have received the most attention in recent years and vastly outperform certain forms of human perception and expression.

Yet AI is still far from mimicking human-level intelligence, let alone reaching superhuman intelligence, and it must still overcome engineering bottlenecks related to creative and social intelligence. Today’s algorithms cannot abstract from one situation and apply general concepts to new contexts and tasks. Nor can they automatically change their own methodology of learning.

While the application of AI systems can be extremely efficient and scalable, training them still takes a long time, is extremely costly, and is far less efficient than human learning. From the perspective of collective intelligence, AI can neither build nor compete with large and complex social organisations, which is arguably the ability that distinguishes humankind from the rest of nature.

In short, since the rise of mass data collection, AI has advanced rapidly, but not rapidly enough to match the utopian or dystopian fantasies of a post-humanist and post-evolutionist era anytime soon.

The level of risk attributed to AI is not a matter of optimism or pessimism but one of understanding how AI serves existing human behaviour and how it can alter power relations.

Even before AI reaches or exceeds human-level intelligence, its disruptions will be twofold: immediate and felt directly, and structural, unfolding over a longer period of time. Regarding the former, AI’s immediate risks relate to the existing landscape of cybersecurity threats, which will change tremendously due to the malicious use of AI. There has already been a steep increase in traditional cybersecurity breaches and cybercrime incidents that mainly threaten individuals, businesses, and national infrastructures.

These are caused by individual criminals, organised crime groups, terrorists, and states or state-sponsored actors, and they primarily involve the disruption of digital and physical systems, theft, and cyber espionage.

Cyberwarfare combines all of these and also involves information and psychological operations to manipulate public opinion. Due to its scalability and efficiency, as well as the growing psychological distance between attacker and target, the malicious use of AI will expand existing cybersecurity threats, create entirely new forms of cyber-physical threats, and enable attacks and crimes that are far more targeted, optimising the trade-off between effort and harm or gain.

Due to such a changing landscape of immediate threats and risks, cybersecurity (and more recently AI) has become a matter of national security and military priority. While the next generation of mobile networks, or 5G, makes it possible to connect everything with everything and with everyone, at home, in the office, in factories, and in smart cities, AI provides automation for the purpose of efficiency and convenience. The combination of both technologies will tremendously expand the surface for cyber-physical threats and accidents. It will further complicate both the deterrence and attribution of cyber-attacks or other hacking exploits due to the increasing complexity and interconnectedness of computer networks.

It won’t be possible to prevent these threats, only to mitigate them. For many governments, it’s not a question of if but when severe cybersecurity incidents will occur. The risk is independent of specific technology providers.

Economic Imbalances

In addition to these immediate risks, there are longer-term structural risks associated with AI, which are more difficult to anticipate but whose impact will be even more widespread and pervasive. This is because technology is not external to us, developing independently of history. Instead, it is deeply interwoven with history, and the current trajectory of humankind shows little sign of escaping from today’s downward spiral of economic, societal, political, and international relations.

Economically, mass labour displacement, underemployment, and de-skilling are likely outcomes of intelligent automation and augmentation. AI directly competes with human intelligence, which was spared from automation during previous industrial revolutions.

AI will not just target knowledge work but will continue automating the physical labour that escaped previous waves of rationalisation. As a consequence, governments must prepare for profound structural changes. Widespread automation and ageing societies will shrink the labour force and erode labour as a major source of tax revenue. In addition, market forces have already concentrated data, AI technologies, and human talent. Research and development is increasingly shifting from publicly funded laboratories to the privately owned laboratories of large AI platform companies, which are less willing to share their intellectual property for the social good.

While the Internet initially lowered the hurdles to setting up a business, AI raises the bar again, which could lead to digital kleptocracies and AI mercantilism if the zero-marginal-cost economy remains unregulated. While rich countries will be able to afford a universal basic income for those unable to re-skill, low- and middle-income countries won’t, and they risk becoming trapped at their current stages of development. AI, coupled with data, the ‘new oil’ on which machine learning thrives, will disrupt the global division of labour.

Countries that can’t catch up with advanced automation to improve their competitiveness will be left further behind. Labour, and especially cheap labour, won’t provide a sufficient comparative advantage in the future, rendering previous development models obsolete. Income inequality has already reached alarming levels, not just between rich and poor countries but also among the rich countries themselves. The United States has the most unequal wealth distribution of all OECD countries.

While a small group of transhumanists will attain and enjoy the privileges of digital upgrading, the number of those left behind will likely grow, feeding the ground for social unrest, populism, and nationalism. Before societies can change the meaning of labour and find new sources of human dignity, automation will reinforce individualism, alienation, and loneliness, and it will threaten physical and psychological well-being as well as social cohesion.

State and political actors will make greater use of AI technologies. While businesses employ AI to segment people ever more precisely as consumers and to compete for their attention, political and state actors do so to better understand citizens as persuadable voters, followers, or potential security threats. This can make countries more secure and the political process more efficient if AI is used responsibly and balances economic growth, social good, and national security. However, AI increases the structural risk of shifting the balance of power between the state, the economy, and society by limiting the space for autonomy.

Fierce Global Competition

Through AI-enabled mass surveillance, psychological operations, and the weaponisation of information, states and political actors might seek to acquire a disproportionate amount of power or amplify populism. The two poles of this political risk scenario are totalitarianism and tyranny of the majority. In both cases, the struggle over power dominates the struggle over progress and threatens the pillars of modern states and governments — bureaucracy, rule of law, and accountability. While authoritarian states could slide into totalitarian regimes by exerting pervasive state control and repression of differences, democracies could witness the erosion of their institutions, the polarisation of their entire societies, and the disintegration of their ‘public morality’ and ‘manufacturing consent’. Unfortunately, we can already witness the world sliding towards either pole of political imbalances.

AI is not the cause, but it is an increasingly weaponised tool used both within and beyond national boundaries to disrupt the political process of adversarial countries.

The Edward Snowden and Cambridge Analytica affairs are the best-known and most disturbing cases of widespread cyber espionage, privacy violation, manipulation of public opinion, and interference in the democratic process within the West. Conversely, the West frequently accuses Russia, China, North Korea, Iran, and Syria of state or state-sponsored cyber intrusions and attacks and of pervasive mass surveillance.

A fierce global competition over AI supremacy is already raging and threatens to disrupt existing international relations. All of the leading economies have laid out or updated national AI strategies with the goal of promoting nascent AI capabilities and ecosystems and of competing globally. The Russian president, Vladimir Putin, stated the strategic importance of AI most clearly in 2017 when he said, ‘Whoever becomes the leader in this sphere will become the ruler of the world.’ Russia, however, is not leading the AI race: the United States currently leads, followed closely by China. The United States wants to maintain its ‘leadership in AI’, while China aspires to become the ‘primary centre for AI innovation’ by 2030.

About the Author
Thorsten Jelinek
Policy Researcher

Thorsten Jelinek is the Europe director of the Taihe Institute, a public-policy think-tank based in Beijing. Previously, he was associate director at the World Economic Forum responsible for economic relations in Europe. He has worked with small and large enterprises and holds a Ph.D. in political economy from the University of Cambridge and an M.Sc. in social psychology from the London School of Economics.

Culture Report Progress Europe

Culture has a strategic role to play in the process of European unification. But what about cultural relations within Europe? How can cultural policy contribute to a European identity? In the Culture Report Progress Europe, international authors seek answers to these questions. Since 2021, the Culture Report has been published exclusively online.