Atlas of Unfulfilled Warnings: When Warnings Become Specifications
I. Everyone Read the Same Stories
Science fiction has entertained us for longer than its fantastical worlds have been technically possible. Hundreds of futures have been lived, experienced, and talked about, from classrooms to political speeches. The genre builds worlds and systems that encourage us to look to the future, and places that shock us to our core. And by the time these systems began to appear in the world, the stories were already familiar.
We shared references, describing the things we found abhorrent in terms of what we had read in the stories. Surveillance states, predictive systems, behavioural sorting, human optimisation - all of these were lived first in science fiction, and none of them arrived as a surprise. When the future arrived, it felt legible and understandable.
For a long time the science fiction community was maligned, yet it found a home in the digital world, working on the very systems the stories described. Its members read the books, watched the films, and absorbed the same dystopian images as everyone else. The collective archive was not hidden from them; if anything, they were more fluent in it - more likely to cite it, reference it, or code-name systems after what had come before in science fiction worlds.
Given this fluency with the literature, it is difficult to explain how these systems came to be replicated in our world, especially when the consequences had already been imagined. The problem cannot be that the warning was missed.
Dystopian systems are explored through characters who live inside them, adapt to them, and attempt to resist them. The reader learns how the system works, but only by enduring the environment: through survival, navigation, and endurance.
This mode of engagement leaves a residue.
Once a warning has shifted from an ethical boundary to a reference point, it changes how we evaluate the threat. When we recognise the dangers casually, the warning becomes a way to assess progress, a point of comparison. The future was imagined, and familiar enough for us to work with.
II. The Translation Problem
Systems in stories are built so that characters can explore and expose their effects. They are speculative, constructed within the confines of a smaller world to reveal pressures, contradictions, or limits. These structural antagonists are readily used as warnings in our discussions, yet once they cross domains, the form they take demands implementation.
Engineering does not work this way.
Each phase must be explained; it has to become an action. To move an idea into design, it must be broken into components, constraints, and objectives. What cannot be specified cannot be built. An idea is defined as individual tasks, assigned to members of a team, and completed as steps toward a larger goal. Meaning cannot be assessed at this task level, because meaning cannot be defined as a checkable item.
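As a minimal sketch of that decomposition, consider a hypothetical backlog for a system that, taken whole, would read as a dystopian warning. Every task ID, description, and acceptance criterion below is invented for illustration; the point is only that each item is individually defensible and individually checkable.

```python
# A hypothetical sketch of how a morally loaded system dissolves into
# checkable tasks. All names and criteria here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Task:
    id: str
    description: str
    acceptance_criterion: str  # the only thing reviewed at this level

# No single task mentions the system's total effect; each is individually
# defensible, and each has a criterion that can be checked off.
backlog = [
    Task("T-101", "Ingest location events from mobile clients",
         "Events appear in the store within 5 seconds"),
    Task("T-102", "Cluster locations into frequently visited places",
         "Clusters match labelled test data with >90% accuracy"),
    Task("T-103", "Flag deviations from a user's routine",
         "Deviation score computed for every active user daily"),
    Task("T-104", "Expose deviation scores via an internal API",
         "Endpoint returns scores in under 100 ms"),
]

for task in backlog:
    # Review happens here, one checkable item at a time. The question
    # "what does the whole system do to the people inside it?" has no
    # task ID, so it never appears at the level where work is signed off.
    print(f"{task.id}: {task.description} -> done when: {task.acceptance_criterion}")
```

No item on this board asks what the assembled system does; that question has no acceptance criterion, so it has nowhere to live.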
The dystopian condition is no longer treated as a moral boundary, but as a design challenge.
Fiction uses these systems as literary devices: it shows what happens when an idea is followed to its end. Engineering encounters systems incrementally, each component justified, each decision framed as reasonable within a narrowed context. The total system only emerges later, once it is built and shipped.
Building systems in this way extracts the warning: each step is defended, each feature is framed as necessary, and the ethical weight is distributed across tasks small enough to be managed. What remains is insight without injunction, knowledge without veto. The future is visible, just repurposed as a destination of design.
Once the process is shifted to individual tasks, the moral idea has been displaced. The warning is distributed among the functions of a design specification: conditions to be managed, edge cases to consider, risks to mitigate.
The system continues to be built because the warning was understood in a way that allowed construction to continue.
At this point, the decisive question is no longer ethical, but methodological. It is not “should this exist?”, but “how do we build this responsibly?”
III. Instrumental Reason at Work
Engineering shares a practical approach to managing projects, one that is incremental, methodical, and oriented toward results. Questions are asked in terms of feasibility, efficiency, and risk. This way of thinking does not grapple with the future; it asks what needs to be done to make it work.
In this mode, the ends are not debated; they are fixed at the point where the project is complete. Attention is directed toward means: optimising performance, reducing errors, and scaling safely. The future is a problem space, not a moral horizon, and progress is measured in function, not reflection.
This form of logic is what Max Weber described when he wrote of the increasing dominance of means over ends.
“The fate of our times is characterized by rationalization and intellectualization and, above all, by the ‘disenchantment of the world.’ Precisely the ultimate and most sublime values have retreated from public life either into the transcendental realm of mystic life or into the brotherliness of direct and personal human relations. It is not accidental that our greatest art is intimate and not monumental.” [1]
When Weber described the “disenchantment of the world,” he was not mourning the loss of religion so much as observing a deeper shift: meaning itself was being removed from public systems. What remained were mechanisms indifferent to questions of purpose.
Max Horkheimer made this narrowing of reason more explicit. In Eclipse of Reason, he writes:
“Reason has become an instrument. It is useful, perhaps indispensable, in dealing with external objects, but it has nothing to say about the ends of human life.” [2]
Reason does not disappear; it becomes operational. Once an objective is accepted as given, critique gives way to optimisation, and ethical questions are reformatted as requirements.
On the pathway from science fiction to engineering, dystopian warnings function as scenarios to account for, not as questions about what the system will do to the people it serves. We ask what the best route is to get the system built and released.
Jacques Ellul pushed this logic further, arguing that technique does not serve human goals but defines them. Once efficiency becomes the primary criterion, it begins to justify itself.
“Our civilization is first and foremost a civilization of means; in the reality of modern life, the means, it would seem, are more important than the ends.” [3]
Systems no longer require ethical discussion when their continued operation legitimises them. What works continues to exist. What fails is discarded.
At no point does anyone need to decide to build a dystopia.
The system emerges from a sequence of optimisations, each defensible in isolation, each justified by reference to safety, efficiency, or necessity. By the time the project is released, construction is already complete, and ongoing operation counts as legitimacy.
IV. Safeguards Without Limits
Once a system is established, attention turns to how it should be governed; the question did not arise before, because the system did not exist. Here, ethical concern becomes more explicit: principles are drafted, oversight is named. This is often viewed as a sign of restraint.
Contemporary governance starts by affirming fairness, accountability, and transparency, then moves to questions of implementation. The system is running, so ethics is no longer about direction; it becomes an operational framework.
In the European Commission’s Ethics Guidelines for Trustworthy AI[4], the opening definition is precise and revealing: “Trustworthy AI should be lawful, ethical, and robust.” These are conditions imposed on an existing system, not an assessment of whether it should exist. The system is already there. The task is to ensure it behaves appropriately.
Moral concern arrives downstream. It regulates effects rather than origins. The central question is no longer “should this be built?” but “how do we minimise harm once it is?”
The same logic appears in the OECD’s AI Principles[5], which frame ethical responsibility in terms of stewardship, accountability, and risk management: obligations that govern how systems are developed and deployed, not whether they should be built at all. These are governance requirements, taking effect once the system is operational.
Safeguards, in this sense, function as limits within a system, not limits on it.
In opposition to this, we can look to science fiction where ethical concern was built into the design of the system. In Isaac Asimov’s robot stories, the Three Laws of Robotics formalise morality as behaviour. Automation is not refused; instead, the laws specify the conditions under which robots can operate. Harm is defined in advance, ethics is bound to rules, and responsibility is embedded. Once encoded, the system can persist, with the moral questions settled as design constraints before the system existed.
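As a minimal sketch of what “ethics bound to rules” could look like in code, the snippet below encodes Asimov-style laws as predicates that must all pass before an action runs. Everything here - the law functions, the action fields, the thresholds - is an invented illustration, not a real safety architecture; Asimov’s laws are also ordered by precedence, which this sketch flattens into a simple conjunction.

```python
# Illustrative only: ethics encoded as design constraints that gate every
# action before execution. All names and fields are hypothetical.

from typing import Callable

Action = dict                       # a proposed action, described by fields
Law = Callable[[Action], bool]      # a law is a predicate that can veto it

def no_harm_to_humans(action: Action) -> bool:
    # First Law analogue: veto anything with expected harm to humans.
    return action.get("expected_harm_to_humans", 0) == 0

def obey_human_orders(action: Action) -> bool:
    # Second Law analogue: only act on a human order.
    return action.get("ordered_by_human", False)

def preserve_self(action: Action) -> bool:
    # Third Law analogue: avoid high risk to the machine itself.
    return action.get("risk_to_self", 0.0) < 0.5

LAWS: list[Law] = [no_harm_to_humans, obey_human_orders, preserve_self]

def permitted(action: Action) -> bool:
    """Harm is defined in advance; the check runs before anything executes."""
    return all(law(action) for law in LAWS)

proposal = {"name": "open_door", "ordered_by_human": True,
            "expected_harm_to_humans": 0, "risk_to_self": 0.1}
print(permitted(proposal))  # True: every embedded constraint is satisfied
```

Even in this toy, the design choice the essay points at is visible: the constraints are written before the system runs, so objection is part of the specification rather than an afterthought.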
This approach is compelling in hindsight, because it does not deny risk, and it supports delivering the system in a form that is safe to use. Within this framework, ethical oversight is reassuring rather than disruptive. It demonstrates responsibility and absorbs critique. Raising concerns no longer sounds like a question of existence; objection is welcomed, provided it can be expressed as a design element.
A further effect is that the more comprehensive the safeguards become, the harder it is to oppose the system at all. The risks have been identified. The mitigations are in place. Oversight exists. This is ethics as insulation. Moral concern is real, but it operates entirely within a system whose direction has already been settled. The system advances, not despite ethical awareness, but with ethical awareness built in.
V. The Quiet Success of the Dystopia
All things considered, it can be difficult to describe what happened as failure. From an engineering perspective, it would be classed as success.
The systems were delivered to set specifications, and they did not spiral into chaos upon release. On the contrary, many of them function well, performing as designed. They scale. They are maintained, audited, and improved.
Dystopia, in these cases, does not arrive as catastrophe. It shows itself as stability: the effects are measurable, and most of them are useful enough to be integrated into society.
The original warnings were implemented with care and attention. Those imagined futures where systems dominate human life, constrain agency, and reshape behaviour were built carefully, with attention paid to safety, governance, and user experience. The warning actually informed the development.
This is because the warning was taken seriously, just not in the way it was intended.
When dystopia is translated into design, it loses its force as prohibition. It becomes a specification to meet, and a collection of edge cases to manage. The system moves forward responsibly, guided by the very insights meant to stop it. In this sense, the dystopia succeeds precisely because it is well understood.
The future arrives without difficulty because it has already been rehearsed, optimised, and justified. There is no single moment at which one can point and say: this is where it went wrong.
What exists instead is a continuous chain of reasonable decisions.
If dystopia emerges through competence, then accountability cannot be located in ignorance or malice. It must be located elsewhere. Maybe it is in the decision to treat known futures as problems to solve rather than outcomes to refuse.
Maybe dystopia is not an accident of progress, but one of progress’s most reliable outputs: it fits so well within the logic we use to build the systems around us.
[1] Weber, Max. From Max Weber: Essays in Sociology. United Kingdom: Routledge, 1991. ↩︎
[2] Horkheimer, Max. Eclipse of Reason. United Kingdom: Read Books Limited, 2011. ↩︎
[3] Ellul, Jacques. Presence in the Modern World. United States: Cascade Books, 2016. ↩︎
[4] European Commission. Ethics Guidelines for Trustworthy AI. Original link: Ethics guidelines for trustworthy AI. Snapshot: Internet Archive. ↩︎
[5] OECD. OECD AI Principles. Original link: OECD AI Principles. Snapshot: Internet Archive. ↩︎