How to build AI for democracy
Good governance of artificial intelligence requires providing for trustworthy AI.
By Bruce Schneier and Nathan E. Sanders
Technology will not solve democracy’s problems, but it already has a powerful influence over democracy. Governments must grapple with artificial intelligence (AI) and not simply consign its development and application to corporate entities.
We need structures that incentivize and laws that enforce the trustworthiness of AI. We need transparency into how AI systems are built, when they are used, and what biases they encode in democratic processes. We need AI—and robotic—safety regulations that encompass all the ways in which AI affects the physical world and alters the speed, scale, scope, and sophistication of human activities.
Though the European Union has passed comprehensive AI regulations, the U.S. Congress appears too gridlocked to undertake any meaningful legislation on AI, and the Trump administration has vocally decried AI regulation. The United Kingdom has stepped back from regulation as well, a shift the EU is being pressured to follow.
To be effective, regulators must shift their focus from the technological minutiae (e.g., how big the models are, whether the code is open source, what capabilities a model has) to the real source of risk in the world of AI today: the choices made by people and organizations. Remember that AIs are not people; they don’t have agency beyond that with which we choose to endow them. That means AI regulations must meaningfully constrain the behavior of corporate AI developers. Regulations on the behavior of AI users—including political campaigners, police, attorneys, and others—should also be reformed to confront the enhanced speed, scale, scope, and sophistication conferred by AI. Penalties for violations must be large enough to deter lawbreaking, and there must be resources for enforcement.
There is much to be done in addition to AI regulation. Every actor in the AI ecosystem has a role to play and can independently take positive action. Any one of the leading AI companies could voluntarily allow public input to help steer its own work. Industry associations could adopt standards and ethical norms for AI development, and mandate systems of oversight and penalties for misconduct. Researchers could continue to aggressively oversee the activities of industry and its products, resisting pressure from political factions opposed to government oversight. Governments could leverage their buying power, restricting procurement from vendors who do not adhere to trustworthy AI principles. And, although this isn’t a problem that should fall on individuals, we all can shift our own spending toward those companies that demonstrate good practices.
The more AI is adopted by individuals and institutions, the more the concentration of its control threatens democracy. We need to find ways to distribute control of this technology. This does not mean eliminating corporate AI; rather, it requires robust noncorporate alternatives. Ultimately, AI that works for democracy needs to actually be democratic, and that requires something more than today’s corporate AI systems.
One of the most exciting opportunities we see to distribute control over AI and improve its trustworthiness is to spur development of AI from noncorporate actors, particularly governments. Many national governments have the resources to develop their own AI models and could provide these free or at cost as public goods. Two governments investing heavily in this already are Singapore and Switzerland.
The compelling difference between public and private AI development lies in incentives. In a democracy, government entities have an obligation to serve the public and need not be directed by the profit motive. An emerging category of products called public AI, meaning AI developed under democratic control and for public benefit, can more easily adhere to democratic principles, both because its constituents require it and because its developers have no shareholders to placate.
This kind of AI could represent a new, universal economic infrastructure for the 21st century, much like the public schools and highway systems built by past generations. Democratic control is a fundamentally different basis for trust than corporate ownership, but it’s no panacea.
Public AI could be nationalistic, built by one nation according to its own values and flaws, for the benefit or oppression of its own citizens. Americans are not likely to trust public AI from China, and vice versa. You might not trust your own government to control the development of AI, let alone the AI developed by another government. But democracies give us more say in how they operate than corporations do, especially in noncompetitive markets. Governments can fulfill use cases for trustworthy AI that industrial players cannot. The goal is for public AI to coexist with private AI, providing a bulwark against monopoly and setting a competitive baseline for transparency, availability, and responsiveness that commercial competitors must meet to succeed.
Moreover, public AI models would have a more justified place in democratic contexts than corporate AI models. Democratic governments should not employ secret and proprietary technologies in making civic decisions and taking civic actions; they need an alternative to corporate systems when using AI to assist in and automate functions of democratic decision-making.
These alternatives need not be limited to national governments. They can be developed at the state and local levels. NGOs and nonprofits can also build AI models, although the substantial funding required for AI development makes them highly vulnerable to corporate influence. The saga of OpenAI, founded with a public-interest mission before it took billions of investment dollars from Microsoft and abandoned its nonprofit model, demonstrates that private entities can be tempted by profit to undermine their values and reverse their public commitments. Public entities in a democracy can be corrupted too, of course, but they at least offer electoral means of public control.
We urge democratic governments to take seriously the need to actively shape the AI ecosystem to produce public benefit. That should include, but not stop at, regulating the companies developing the technology. Good governance of AI requires providing for trustworthy AI. That should compel governments to support an alternative to corporate AI that is built with incentives aligned to the public interest.
Excerpted from “Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship” by Bruce Schneier and Nathan E. Sanders. Reprinted with permission from The MIT Press. Copyright 2025.
Bruce Schneier is an internationally renowned security technologist and the New York Times bestselling author of 14 books, including “Data and Goliath” and “A Hacker’s Mind.” He is a lecturer at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation, and chief of security architecture at Inrupt, Inc. Find him on X (@schneierblog) and at his blog (schneier.com). Nathan E. Sanders is a data scientist focused on making policymaking more participatory. His research spans machine learning, astrophysics, public health, environmental justice, and more. He has held fellowships with the Massachusetts Legislature and the Berkman Klein Center at Harvard University. You can find his writing on AI and democracy in The New York Times and The Atlantic, among other outlets, and at his website, nsanders.me.

