Toward Resisting AI-Enabled Authoritarianism

May 28, 2025

Fazl Barez, Isaac Friend, Keir Reid, Igor Krawczuk, Vincent Wang, Jakob Mökander, Philip Torr, Julia Morse and Robert Trager

Artificial-intelligence systems built with statistical machine learning have become the operating system of contemporary surveillance and information control, spanning both physical and online spaces. City-scale face-recognition grids, real-time social-media takedown engines, and predictive “pre-crime” dashboards share four politically relevant technical features: massive data ingestion, black-box inference, automated decision-making, and the absence of a human in the loop. These features now amplify authoritarian power and erode liberal-democratic norms across many political regimes.

Yet mainstream machine-learning research still devotes only limited attention to technical safeguards such as differential privacy, federated-learning security, and large-model interpretability, or to adversarial methods that can help the public resist AI-enabled domination.
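To make the first of these safeguards concrete, below is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy. The function, data, and parameter choices are illustrative only and are not drawn from the paper.

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Release a count under epsilon-differential privacy.

    A count query has L1 sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for the epsilon-DP guarantee.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative query: how many records exceed a threshold, at epsilon = 0.5.
records = [3, 7, 1, 9, 4, 8]
noisy = laplace_count(records, lambda r: r > 5, epsilon=0.5)
```

Smaller values of epsilon add more noise and give a stronger privacy guarantee; the privacy–utility trade-off this creates at scale is exactly the kind of open problem discussed below.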

We identify four resulting gaps: an evidence gap (little empirical measurement of how widely safeguards are actually deployed), a capability gap (open problems such as privacy–utility trade-offs in billion-parameter models, causal explanations for multimodal models, and Byzantine-resilient federated learning; see the sketch below), a deployment gap (public-sector AI systems almost never ship with safeguards enabled by default), and an asymmetry gap (authoritarian actors already enjoy a “power surplus,” so even incremental defensive advances matter).
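As one example of what progress on the capability gap might look like, here is a minimal sketch of a Byzantine-resilient aggregation rule for federated learning, using the coordinate-wise median. The setup and values are illustrative assumptions, not a result from the paper.

```python
import numpy as np

def coordinate_wise_median(client_updates):
    """Aggregate client model updates robustly.

    With n clients and strictly fewer than n/2 Byzantine (arbitrarily
    corrupted) participants, the coordinate-wise median bounds the
    influence any single client has on the aggregate, unlike a plain mean.
    """
    stacked = np.stack(client_updates)  # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)

# Three honest clients and one sending an extreme (poisoned) update.
updates = [np.array([0.10, -0.20]), np.array([0.12, -0.18]),
           np.array([0.09, -0.21]), np.array([100.0, 100.0])]
robust_update = coordinate_wise_median(updates)  # stays near [0.1, -0.2]
```

A plain averaging rule would let the single poisoned client drag the aggregate far off course; the median absorbs it, which is the intuition behind Byzantine resilience.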

We propose redirecting the field toward a triad of safeguards: privacy preservation, formal interpretability, and adversarial user tooling. We outline concrete research directions that fit within standard ML practice. Shifting community priorities toward Explainable-by-Design, Privacy-by-Default systems is a precondition for any durable defense of liberal democracy.
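One way to read “Privacy-by-Default” in practice is differentially private training. Below is a minimal sketch of the core DP-SGD update step (per-example gradient clipping plus calibrated Gaussian noise); the function name and hyperparameters are illustrative assumptions, not the paper’s method.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params):
    """One DP-SGD update: clip each example's gradient, then add noise.

    Clipping bounds each example's contribution to clip_norm; Gaussian
    noise scaled to that bound makes the released update differentially
    private, so no single training record dominates the model update.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=params.shape)
    mean_grad = (np.sum(clipped, axis=0) + noise) / len(per_example_grads)
    return params - lr * mean_grad
```

Enabling a step like this by default, rather than as an opt-in extra, is what distinguishes Privacy-by-Default from privacy as an afterthought.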
