Key Facts
- Tobias Osborne is a professor of theoretical physics at Leibniz Universität Hannover and a cofounder of the scientific communication firm Innovailia.
- The European Union's AI Act is a sweeping regulatory framework that will phase in stricter rules through 2026.
- Osborne argues that debates about superintelligent machines and a hypothetical "singularity" have become a dangerous distraction from present-day harms.
- He identifies the exploitation of low-paid data labelers and the mass scraping of copyrighted work as key overlooked risks.
- Osborne describes claims of a runaway intelligence explosion as "a religious eschatology dressed up in scientific language."
The Dystopia Is Already Here
While the tech industry and policymakers debate whether artificial intelligence could one day threaten humanity's survival, a physics professor argues that this fixation on future catastrophes is allowing companies to evade accountability for the very real harms their technology is already causing.
In a recent essay, Tobias Osborne, a professor of theoretical physics at Leibniz Universität Hannover and cofounder of the scientific communication firm Innovailia, contends that debates about superintelligent machines and a hypothetical "singularity" have become a dangerous distraction.
"The apocalypse isn't coming," Osborne writes. "Instead, the dystopia is already here."
Osborne asserts that while technologists argue over existential risks, the industry is inflicting "real harm right now. Today. Measurably."
How Doomsday Narratives Weaken Oversight
The AI debate has increasingly been shaped by doomsday scenarios, including warnings that superintelligent systems could wipe out humanity by design or by accident. These fears are amplified by prominent AI researchers, tech leaders, and government reports.
Osborne argues that this fixation has a concrete effect on regulation and accountability. By framing themselves as guardians against civilizational catastrophe, AI firms are treated like national-security actors rather than product vendors, which dilutes liability and discourages ordinary regulation.
This shift allows companies to externalize harm while benefiting from regulatory deference, secrecy, and public subsidies. Apocalypse-style narratives persist, Osborne says, because they are easy to market, difficult to falsify, and help shift corporate risk onto the public.
"The apocalypse isn't coming. Instead, the dystopia is already here."
— Tobias Osborne, Professor of Theoretical Physics, Leibniz Universität Hannover
Overlooked Harms of Today
Osborne's essay lays out a long list of present-day harms he believes are being sidelined by futuristic debates. These include:
- Exploitation of low-paid workers who label AI training data
- Mass scraping of artists' and writers' work without consent
- Environmental costs of energy-hungry data centers
- Psychological harm linked to chatbot use
- A flood of AI-generated content that makes finding trustworthy information harder
He also takes aim at the popular idea that AI is racing toward a runaway intelligence explosion, describing such claims in the essay as "a religious eschatology dressed up in scientific language" and arguing that these scenarios collapse when confronted with physical limits. "These aren't engineering problems waiting for clever solutions," he writes. "They're consequences of physics."
Regulatory Divergence
Regulators are moving in opposite directions. While the European Union has begun rolling out the AI Act, a sweeping regulatory framework that will phase in stricter rules through 2026, the United States is taking a different approach.
Federal efforts in the U.S. are focused on limiting state-level AI regulation and keeping national standards "minimally burdensome." The divergence highlights the global challenge of balancing innovation against accountability.
Osborne's critique suggests that without a shift in focus, regulatory frameworks may continue to prioritize speculative risks over measurable, present-day harms.
A Call for Accountability
Osborne is not opposed to AI itself. In his essay, he highlights the genuine benefits large language models can offer, especially for people with disabilities who struggle with written communication.
However, he warns that without accountability, those benefits risk being overshadowed by systemic harms. Rather than focusing on speculative future threats, Osborne says policymakers should apply existing product liability and duty-of-care laws to AI systems.
As he puts it, "The real problems are the very ordinary, very human problems of power, accountability, and who gets to decide how these systems are built and deployed." By forcing companies to take responsibility for the real-world impacts of their tools, he argues, society can ensure that the benefits of AI are not lost to unchecked corporate power.
"By framing themselves as guardians against civilizational catastrophe, AI firms are treated like national-security actors rather than product vendors, which dilutes liability and discourages ordinary regulation."
— Tobias Osborne, Professor of Theoretical Physics, Leibniz Universität Hannover
"These aren't engineering problems waiting for clever solutions. They're consequences of physics."
— Tobias Osborne, Professor of Theoretical Physics, Leibniz Universität Hannover
"The real problems are the very ordinary, very human problems of power, accountability, and who gets to decide how these systems are built and deployed."
— Tobias Osborne, Professor of Theoretical Physics, Leibniz Universität Hannover