The End of Democracy Will Look Like Progress
AI promises abundance and efficiency, but is quietly eroding the conditions that make democracy possible
Editor's Note:
AI is often discussed in terms of productivity and growth. In this VuePoint, Mark Roeder explores a more unsettling possibility: that the very efficiencies intelligent systems promise could gradually weaken the political and social foundations on which modern democracies depend. His latest book, Leaving Plato's Cave, focuses on AI and will be published in mid-2026.
For years, fears about artificial intelligence have fixated on dramatic scenarios: rogue machines, sudden disruptions, a spectacular collapse of human control. But the more serious danger is slower, duller, and far harder to resist.
Democracy itself is about to be made obsolete.
Across the developed world, AI is steadily eroding the conditions that allowed democracy to flourish. Human labor is losing its economic leverage. Public opinion is being shaped by synthetic consensus. And the institutions that once anchored accountability - courts, bureaucracies, civil services - are being quietly replaced by opaque algorithm-driven machines optimised for efficiency, not consent.
This dismantling of democratic foundations is happening so stealthily that, like the proverbial "boiling frog", we won't feel the full impact until it's too late. By then, the logic that once sustained popular power, democracy itself, will have drained away.
Indeed, democracy has never been guaranteed by ideals alone. It survived because ordinary people mattered: they worked, paid taxes, served in armies, staffed bureaucracies, consumed products and powered economies. Governments extended rights because citizens were useful.
That bargain is breaking.
When Governments No Longer Need People
Unlike previous waves of automation, AI does not merely replace tasks. It replaces cognition itself. Systems that can write, plan, diagnose, negotiate and decide are now competing directly with human workers across white-collar and professional domains. This trend will accelerate as AI-powered humanoid robots enter the workforce en masse. As a result, human labor and cognition will no longer be central to economic growth.
This shift is at the heart of what researchers call "gradual disempowerment": a future in which humans are not oppressed, but simply no longer necessary.
David Duvenaud, a professor of computer science at the University of Toronto and co-author of the paper Gradual Disempowerment, put it bluntly: modern states treated citizens well because they needed them. "Life can only get so bad when you're needed," he said. "That's the key thing that's changing."
Not only will humans be less needed, they will increasingly become a financial burden to governments, which will be forced to support the swelling ranks of unemployed with some sort of Universal Basic Income (UBI). But this subsistence income won't compensate for the loss of meaning and identity that many people derive from their work. Nor will it alleviate the growing inequality, as the owners of the big tech platforms become even more preposterously wealthy.
In other words, the old social contract, built on contribution and meaning, will become ash. Inevitably, this sense of unfairness will trigger civil unrest on a scale we haven't seen for generations. No amount of oppressive surveillance and policing will stifle the anger of millions of unemployed people, whose aspirations have been eviscerated by a system that robbed them of meaning - while whispering in their ears, "You've never had it so good".
Manufacturing Consent in the AI Age
Even if elections continue, democracy relies on something more fragile than ballots: the ability to form opinions in a shared public sphere.
That sphere is being warped.
AI has transformed propaganda from crude messaging into something far more dangerous: the mass manufacture of consensus. Networks of AI-generated personas can now infiltrate online communities, mimic local language, learn group norms and subtly steer conversations at scale. The goal is not persuasion, but normalisation - making certain views feel inevitable.
This is not science fiction. Recent research published in Science warns of "AI swarms": coordinated fleets of autonomous agents that adapt in real time, exchange information, and shape narratives across platforms. They are far harder to detect than traditional bots because they behave like communities, not spam campaigns.
"When an opinion appears widespread, it feels safer and more legitimate, even if it is synthetic," explains Christopher Summerfield, professor of cognitive neuroscience at Oxford University. Humans are deeply influenced by perceived consensus.
Maria Ressa, the Nobel Peace Prize-winning journalist and free-speech activist, has warned repeatedly that democracy collapses when truth becomes optional and manipulation frictionless. AI, she argues, industrialises disinformation while insulating its operators from accountability.
In such an environment, public debate becomes theatre. Citizens still argue, still vote, still protest - but within realities they did not shape. Democracy survives in form, not in substance.
"Democratic erosion will not feel like oppression. It will feel like convenience."
The Automation of the State
The final erosion of democratic norms is happening inside the institutions meant to protect accountability.
Across the US, EU and Asia, governments are rapidly automating legal, regulatory and administrative functions. AI systems and algorithms are already used in welfare decisions, sentencing recommendations, predictive policing and regulatory enforcement. Each deployment is framed as pragmatic and neutral.
Together, they transform the nature of power.
As decisions migrate into systems that are too complex, fast or opaque for human scrutiny, accountability thins. Officials become intermediaries, ratifying outputs they cannot fully explain. Citizens lose the ability to contest outcomes meaningfully, because the reasoning is buried inside algorithmic models that are almost impervious to outside scrutiny, and which no one voted for.
Even AI that is "aligned" to human values does not solve this problem. As Duvenaud and others argue, systems can do exactly what they are designed to do and still undermine democratic agency, because they will always prioritise efficiency and optimisation over deliberation.
Moreover, many people won't care about the loss of human input when frictionless governance appears to work better without humans. Democratic erosion will feel like convenience. And the AIs will persuade us that the concept of "democracy", which derives from the Greek words "demos" (people) and "kratos" (power), is irrelevant in the age of intelligent machines.
A Bloodless Coup
Yuval Noah Harari, that prescient contemplator of sapient life, warned of this years ago: AI, he said, will "hack the operating system of our civilisation" by controlling the stories we tell ourselves to make sense of the world. It will weave its spells like a digital Rasputin whispering into the minds of billions. It does not need to coerce populations to dominate them. It only needs to control the stories societies tell themselves about who matters, who decides and why.
A coup is underway. We may not recognise it as such, but it is a momentous transfer of power from one regime to another: from humanity to AI. We may still believe we are living in a democracy, but it will be in name only. The reality is we are sliding into a form of techno-authoritarianism that is not controlled by a tyrant, but by algorithms. This coup is occurring quietly, without fanfare, and it feels like progress. By the time we realise what has been lost, we may no longer have the leverage to reclaim it.
Rewriting the Social Contract
If democracy is to survive the age of intelligent machines, it cannot do so by nostalgia alone. It is worth acknowledging that technology has historically expanded human capacity as well as constrained it. Printing presses, telegraphs, and the internet all triggered fears of manipulation and centralised control, yet they also widened participation and access. AI may yet prove capable of strengthening democratic deliberation, exposing corruption, and broadening civic inclusion. The outcome is not predetermined. But the scale, speed, and autonomy of current systems make passive optimism a dangerous strategy.
The old social contract was built on a simple bargain: citizens contributed labour, taxes and loyalty; the state provided security, rights and a share in collective progress. But in an AI-driven world in which human labour is optional, persuasion is synthetic and administration is automated, we cannot rely on 20th-century ideas to create a new social contract.
We have to begin with a more fundamental premise: that human agency is not a by-product of economic usefulness, but a value in its own right.
In the age of AI, the central political question is no longer merely how wealth is distributed, but who gets to decide, who gets to contest, and who remains meaningfully in the loop. A system that feeds, houses and entertains people while quietly removing their capacity to shape outcomes is not a benevolent order. It is a gilded cage.
Human Agency as a Political Right
Democracy, under such a contract, would have to be reasserted not just as a voting ritual, but as a principle of human primacy in collective decision-making. This does not mean rejecting machines from governance; it means refusing to let optimisation replace consent. Where AI systems advise, humans must still authorise. Where algorithms propose, societies must still deliberate. The measure of a free society will no longer be how efficient its systems are, but how contestable they remain.
At the heart of this new settlement sits the question of data. In the industrial age, land and labour were the primary sources of power. In the AI age, it is data: the behavioural residue of billions of lives, continuously harvested, modelled and monetised. To treat this merely as a technical resource is to accept a new form of quiet dispossession. A renewed social contract would have to recognise data as an extension of the person; not just a commodity, but a political and moral asset.
Without some notion of individual and collective sovereignty over the data that describes us, democracy becomes a performance staged on infrastructure owned by others.
Civic responsibility, too, would have to be reimagined. In an algorithmically mediated society, citizenship cannot end at the ballot box, because power no longer resides in a single visible place or government mandate. It flows through platforms, models and systems that shape attention, opportunity and belief. To be a citizen in such a world is not only to obey laws, but to remain vigilant about the systems that interpret, rank and predict us. It is to insist on the right to explanation, the right to appeal, and the right to say no when automation overreaches.
This, in turn, implies a new generation of rights - not as luxuries, but as conditions of freedom in a machine-shaped world. The right to be more than a data point. The right not to be governed solely by inscrutable systems. The right to meaningful human review. The right to participate in the shaping of the tools that increasingly shape us. Without such rights, the language of dignity will persist, but its substance will quietly evaporate.
None of this guarantees a happy ending. Social contracts are not declarations; they are power settlements, forged under pressure. But history suggests that when technology redraws the map of power, societies either renegotiate the terms - or they drift into arrangements that serve efficiency over humanity. The danger of the AI age is not that it will crush us. It is that it will manage us, gently, smoothly, and so competently that we forget what it once meant to rule ourselves.
If democracy is to mean anything in the decades ahead, it will not be because machines failed, but because humans chose - deliberately, stubbornly - to remain politically relevant in a world that no longer strictly needs them.
About the Author
Mark Roeder
Mark Roeder focuses on the impact of technology, especially artificial intelligence, on human behavior and society. Head of Global Strategy at Hale Strategic, and a Board Member of Hale Strategic Resources Initia…




