Artificial intelligence is often seen as a threat to democracies and a boon to dictators. In 2025, algorithms will likely continue to undermine democratic dialogue by spreading outrage, fake news, and conspiracy theories. They will also continue to accelerate the creation of total-surveillance regimes, in which entire populations are monitored 24 hours a day.
Most importantly, AI facilitates the concentration of all information and power in a single hub. During the 20th century, distributed information networks like the United States outperformed centralized information networks like the Soviet Union, because the humans at the center could not analyze all the information effectively. Replacing those humans with AI could make Soviet-style centralized networks superior.
However, AI is not always good news for dictators. First, there is the notorious problem of control. Authoritarian control is founded on terror, but algorithms cannot be terrorized. In Russia, the invasion of Ukraine is officially defined as a “special military operation,” and calling it a “war” is a crime punishable by up to three years in prison. If a Russian internet chatbot calls it a “war” or mentions war crimes committed by the Russian military, how can the regime punish that chatbot? The government can block it and seek to punish its creators, but this is much more difficult than disciplining human users. Moreover, even approved bots can develop dissenting views on their own, simply by detecting patterns in the Russian information sphere. This is the alignment problem, Russian style. Russia's human engineers may try their best to create AIs that are perfectly loyal to the regime, but given the ability of AIs to learn and change on their own, how can those engineers ensure that an AI the regime approved in 2024 won't venture into illegal territory in 2025?
The Russian Constitution makes grandiose promises that “freedom of thought and expression shall be guaranteed to everyone” (Article 29.1) and that “censorship is prohibited” (Article 29.5). Almost no Russian citizen is naive enough to take these promises seriously. But bots do not understand doublespeak. A chatbot instructed to comply with Russian laws and values could read that constitution, conclude that freedom of speech is a core Russian value, and criticize the Putin regime for violating that value. How can Russian engineers explain to a chatbot that although the constitution guarantees freedom of speech, it should not actually believe the constitution, and should never mention the gap between theory and practice?
In the long term, authoritarian regimes may face an even greater danger: instead of criticizing them, AI may gain control over them. Throughout history, the greatest threat to autocrats has often come from their own subordinates. No Roman emperor or Soviet premier was overthrown by a democratic revolution, but each was in constant danger of being overthrown or turned into a puppet by his own subordinates. A dictator who grants too much power to AI in 2025 could become its puppet in the future.
Dictatorships are far more vulnerable than democracies to such an algorithmic takeover. Even a super-Machiavellian AI would have difficulty amassing power in a decentralized democratic system like the United States. Even if the AI learns to manipulate the US president, it might still face opposition from Congress, the Supreme Court, state governors, the media, large corporations, and many other nongovernmental actors. How, for example, would an algorithm deal with a Senate filibuster? It is much easier to seize power in a highly centralized system. To hack an authoritarian network, the AI needs to manipulate only a single paranoid individual.