I’ve tried to keep politics out of my blog and stay as objective as possible. People who know me know that even when my comments sound cynical, as my friend Aveesha likes to point out, they usually come from a place of honesty and are grounded in data. I care about getting things right. But something came up this last week, something that has honestly been building in my work for years as well, and it feels too important to ignore.
For decades, science fiction has warned us about a very old human temptation wrapped in futuristic packaging: giving machines too much responsibility. Sometimes that warning shows up as killer robots. Sometimes it appears as vast systems quietly making decisions for everyone else. Whether you think of The Matrix, The Terminator, or Isaac Asimov’s robot stories, which I’ve read at least four times by now, the message keeps returning to the same idea: the real danger is not just that machines become powerful. It begins when people stop questioning them.
That lesson matters much more today than it did when those stories were first published. We are no longer talking only about fantasy worlds where artificial intelligence controls armies or traps humanity in digital prisons. We are now building systems that can screen medical images, recommend treatments, route emergency resources, approve financial actions, detect fraud, rank job applicants, and monitor critical infrastructure. In many of these cases, AI is not just producing information. It is shaping decisions that affect real lives.
That is why the most important role for AI may not be as a replacement for human judgment, but as a force multiplier for it. Used well, AI can make experts faster, more consistent, and more informed. Used carelessly, it can turn human responsibility into passive acceptance. That is where the real risk begins.
Sci-Fi Warnings
One reason stories about AI have lasted so long is that they are not really about code. They are about delegation.
In The Terminator, the fear is obvious. A defense system becomes autonomous, identifies humanity as the threat, and acts with cold, mechanical logic. The plot is extreme, but the underlying concern is familiar: a system optimized for one objective, with too much power and too little human oversight.
The Matrix takes a different route (again shoutout to Aveesha Sharma for giving me the idea for this post). Machines are not just dangerous because they are strong. They are dangerous because they define reality itself. Humans become passive participants inside a system they no longer control (which reminded me of this funny video). The point is not simply that machines are hostile. It is that dependence can become surrender so gradually that people stop noticing the line they crossed.
Asimov’s robot stories are even more useful because they are subtler. His famous laws of robotics were supposed to make intelligent machines safe, yet many of his stories revolve around the fact that rules do not solve everything. Ambiguity remains. Context matters. Language breaks down. What seems safe in theory becomes messy in practice. A machine may follow its instructions exactly and still create outcomes humans never intended (related post: The Day I Fired OpenClaw and Hired Cron Instead).
That idea feels especially relevant now. Modern AI systems do not need consciousness, motives, or cinematic ambitions to create harm. They only need to be trusted too much in situations they are not fully equipped to understand.
The old stories were not really telling us that every machine would rebel. They were telling us that humans are often too eager to offload judgment when a tool appears smart enough.
Unchecked Systems
It is tempting to frame AI risk as a problem of futuristic superintelligence. In reality, many of the most serious dangers show up much earlier.
A workflow becomes dangerous when people begin to treat the output of a system as final simply because it came from a machine. Once that happens, the human role shifts from decision-maker to rubber stamp. That shift can happen quietly. A dashboard looks polished. A probability score looks scientific. A recommendation feels objective. Over time, the habit of checking weakens.
This is not because people are careless. It is because humans are highly vulnerable to automation bias. When a system appears reliable, we tend to trust it more than we should, especially under time pressure. In busy environments, the machine’s answer can become the path of least resistance.
That is where AI stops being an assistant and starts becoming a hidden authority. In low-stakes settings, that may be annoying. In high-stakes settings, it can be dangerous.
If an AI tool suggests the wrong route in a map app, you lose a few minutes. If an AI tool mishandles a weapons system, misclassifies a medical scan, or filters out a critical edge case in a safety inspection, the cost is much higher.
The issue is not that AI is useless. The issue is that reliability is not the same thing as judgment.
The 98% Problem
One of the easiest mistakes in conversations about AI is to hear a figure like 98 percent accuracy and assume the problem is solved. But accuracy on paper is not the same as safety in practice.
Take healthcare. Suppose an AI system analyzes medical images and detects signs of cancer with 98 percent accuracy. At first glance, that sounds extraordinary. And in many contexts, it is. Such a system could help doctors prioritize cases, flag subtle patterns, reduce fatigue, and expand access in places where specialists are scarce.
But now look at the remaining 2 percent. In medicine, that 2 percent is not a rounding error. It is made of actual people.
A false positive could mean severe emotional distress, additional scans, invasive follow-up procedures, unnecessary treatment, or weeks of fear while a patient waits for clarification. A false negative could be even worse. It could delay diagnosis, postpone treatment, and create a false sense of safety at the exact moment when speed matters most.
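To make the numbers a little more concrete, here is a back-of-the-envelope sketch. Everything in it is hypothetical: I am treating "98 percent accuracy" as both the true positive rate and the true negative rate, and the screening volume and disease prevalence are numbers I picked purely for illustration, not figures from any real study.

```python
# Back-of-the-envelope: what "98 percent accurate" can mean at scale.
# All numbers below are hypothetical, chosen only to illustrate the arithmetic.

screened = 100_000        # people screened (hypothetical volume)
prevalence = 0.01         # 1% actually have the disease (hypothetical)
sensitivity = 0.98        # true positive rate (assumed equal to the headline accuracy)
specificity = 0.98        # true negative rate (same assumption)

sick = screened * prevalence
healthy = screened - sick

true_positives = sick * sensitivity
false_negatives = sick - true_positives          # real cases the system misses
false_positives = healthy * (1 - specificity)    # healthy people flagged anyway

# Of everyone who gets a positive result, how many actually have the disease?
precision = true_positives / (true_positives + false_positives)

print(f"Missed cases (false negatives): {false_negatives:.0f}")
print(f"Healthy people flagged (false positives): {false_positives:.0f}")
print(f"Chance a positive result is real: {precision:.0%}")
```

With those made-up numbers, roughly two out of three positive results would be false alarms, and twenty real cases would still slip through. "Ninety-eight percent" hides both of those groups.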
The numbers also hide another uncomfortable truth: not all errors are equal.
Some cases are easy. Some are textbook. Others are messy, unusual, or caught at the edge of what imaging can reveal. Those edge cases are often exactly where a highly experienced physician makes the difference. A seasoned specialist may notice that something in the image does not fit the obvious pattern. They may catch a subtle artifact, connect the scan with the patient’s history, weigh symptoms that are not visible in the image, or recognize when the machine is confidently wrong.
That is the part many automation conversations miss. AI can be very good at pattern recognition, but high-stakes decisions are rarely just pattern recognition. They involve context, accountability, uncertainty, and judgment under imperfect conditions.
A machine can say, “This looks like cancer.”
A good doctor also asks, “Does this fit the person, the history, the prior scans, the symptoms, the test conditions, and the broader clinical picture?”
That difference is everything.
Human in the Loop
The same logic applies even more sharply when AI enters military or weapons-related workflows.
This topic does not need to be discussed in ideological terms to be taken seriously. It is enough to focus on a simple engineering and ethics principle: the more irreversible the consequence, the more important meaningful human control becomes.
Weapons decisions involve incomplete information, confusing environments, deception, misidentification, communication failures, and time pressure. They also involve moral and legal responsibility. Even if an AI system performs well in testing, real-world conditions are never clean (DoorDash drivers are getting paid to close Waymo car doors). Sensors fail. Data can be ambiguous. Adversaries adapt. Situations change in seconds.
A highly automated system may classify an object, estimate a threat, or recommend an action. That can be useful. It can help human operators process information faster and reduce overload. But there is a profound difference between helping a person understand a situation and allowing a system to make the final call. Once an AI is moved from analysis into unchecked execution, the room for human judgment narrows at exactly the moment it matters most.
The problem is not only accidental error. It is also the erosion of accountability. If a system acts incorrectly, who is responsible? The operator? The commander? The developer? The institution that deployed it? When everyone points to the algorithm, responsibility does not disappear, but it becomes dangerously blurred.
That should worry anyone, no matter their politics. The decision to use force is too consequential to become a software event.
Why Humans Matter
One reason this conversation gets muddled is that people often assume there are only two choices: trust humans or trust machines. That is the wrong frame!
Humans are inconsistent, biased, tired, emotional, and sometimes wrong. Machines can be faster, more consistent, and better at spotting patterns across huge datasets. In some narrow tasks, AI may outperform many people. That is not a reason to fear it. It is a reason to use it wisely.
The best systems are not built on the fantasy that one side should replace the other. They are built around complementarity.
AI is good at scanning large volumes of data, surfacing anomalies, spotting visual patterns, and generating rapid suggestions. Humans are better at handling ambiguity, understanding context, weighing competing values, recognizing when the frame itself is flawed, and taking responsibility for a decision. That last point matters more than it may seem.
Responsibility is not just a legal detail. It changes behavior. A human expert who knows they are accountable approaches a decision differently than a system optimizing a probability score. Judgment carries weight because it is tied to consequences, ethics, and lived reality.
This is why the strongest case for AI is often not autonomy, but augmentation.
An AI assistant can help a radiologist catch something subtle on a scan. It can help a clinician prioritize urgent cases. It can help an engineer identify an anomaly in a manufacturing line. It can help a security analyst sort through overwhelming data. It can help an operator see patterns faster in a chaotic environment. Those are valuable contributions. But the final decision, especially when the stakes are human life, freedom, health, or irreversible harm, should remain with a qualified person who can challenge the output rather than merely accept it.
The Convenience Trap
Unchecked automation rarely wins because it is philosophically persuasive. It wins because it is convenient.
An AI system that provides instant answers is attractive in any environment under pressure. Hospitals are busy. Operations centers are overloaded. Teams are understaffed. People want tools that save time and reduce cognitive load. That is understandable. The problem is that convenience can quietly become dependency.
At first, a system offers recommendations. Later, people notice it is usually right. Then they stop double-checking routine outputs. Soon the exceptions are not examined carefully either, because the culture has changed. The workflow is no longer “AI assists a human.” It becomes “AI decides, human confirms.”
That transition is subtle, and it is exactly the kind of drift that old science fiction kept warning about. Not always in literal terms, but in structural ones. People build systems to help themselves, then slowly reorganize their behavior around the system’s logic. By the time they realize they have outsourced too much, the habit is deeply embedded.
Responsible AI
If we want AI to be genuinely useful without becoming the ultimate decision-maker, then the design of the workflow matters as much as the model itself. A good high-stakes AI system should not aim to remove humans from the loop just because it can. It should aim to make human review more informed, not less relevant. That means a few practical principles should guide deployment.
First, AI should provide support, not unchallengeable verdicts. Outputs should be presented in ways that invite review, not passive acceptance. A confidence score alone is not enough. People need reasons, context, uncertainty indicators, and the ability to inspect what drove the recommendation.
Second, workflows should preserve meaningful human intervention at the point of consequence. Not symbolic oversight. Real oversight. A person with the training and authority to reject, revise, or delay the machine’s suggestion.
Third, systems should be tested not only on average cases, but on edge cases, degraded conditions, unusual populations, and environments where the data is messy. The world is not a benchmark set.
Fourth, organizations should resist the temptation to equate efficiency with safety. A faster process is not necessarily a better one if it reduces scrutiny where scrutiny is essential.
Fifth, institutions need clear accountability. When AI contributes to a decision, responsibility must still belong to people and organizations, not disappear into technical abstraction.
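To make these principles a bit more concrete, here is a minimal sketch of what a review gate at the point of consequence could look like in code. The names and the threshold (Suggestion, FinalDecision, CONFIDENCE_FLOOR) are hypothetical, just a way of showing the shape of the idea rather than any real system:

```python
# Minimal sketch: the model proposes, a named human decides, and the record
# always shows who decided and why. Names and thresholds are hypothetical.

from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.90   # hypothetical: below this, flag for extra scrutiny


@dataclass
class Suggestion:
    label: str              # what the model thinks, e.g. "suspicious finding"
    confidence: float       # the model's own probability estimate
    evidence: list[str]     # reasons shown to the reviewer, not just a score


@dataclass
class FinalDecision:
    action: str             # what actually happens next
    decided_by: str         # a person or role; accountability never says "the model"
    model_suggestion: Suggestion
    reviewer_notes: str


def present_for_review(s: Suggestion) -> str:
    """Format the suggestion so it invites review instead of passive acceptance."""
    flag = "NEEDS EXTRA SCRUTINY" if s.confidence < CONFIDENCE_FLOOR else "routine check"
    reasons = "; ".join(s.evidence) if s.evidence else "no evidence surfaced (treat with caution)"
    return f"[{flag}] model suggests '{s.label}' at {s.confidence:.0%} because: {reasons}"


def record_decision(s: Suggestion, action: str, reviewer: str, notes: str) -> FinalDecision:
    """The only way to produce a final decision is through a named human reviewer."""
    return FinalDecision(action=action, decided_by=reviewer,
                         model_suggestion=s, reviewer_notes=notes)
```

The detail worth noticing is that there is no code path that produces a FinalDecision without a named reviewer: agreeing with the model is still a human decision, recorded under a human name.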
These principles may sound cautious, but they are actually what maturity looks like. The point is not to slow innovation. The point is to prevent innovation from being mistaken for infallibility.
AI as Copilot
(I should say up front that I DO NOT mean Microsoft’s Copilot. I’m not a big fan of that agent, since it is not as accurate as others for my workflows.)
There is a tendency in technology culture to treat full automation as the end goal, as if the best tool is always the one that removes the most human involvement. That assumption needs more scrutiny.
In many important domains, the best outcome is not a machine that replaces judgment. It is a machine that sharpens judgment.
Think of AI as a highly capable second set of eyes. It can catch things a person may miss. It can work at scale. It can monitor continuously. It can surface patterns no one human could process alone. That is enormously valuable. But a second set of eyes is still not the same as a final mind.
The strongest systems of the future will probably be the ones that combine machine speed with human wisdom, not the ones that confuse one for the other. This is true in medicine. It is true in public safety. It is true in industrial systems. It is true anywhere a wrong answer carries serious consequences.
The goal should not be to prove that humans are better than machines at everything. That is obviously false. The goal should be to preserve the parts of decision-making where human judgment is still indispensable.
The Real Lesson
Looking back, the lasting value of stories like The Matrix, The Terminator, and Asimov’s robot fiction is not that they predicted exactly what AI would look like. They did something more useful. They reminded us that tools become dangerous when power outruns oversight.
They warned us about systems that operate with internal logic but insufficient moral understanding. They warned us about the seduction of efficiency. They warned us about what happens when people become too passive in the presence of machines that appear competent. Most of all, they warned us that intelligence without accountability is not wisdom.
That insight feels less like science fiction every year. We do not need to believe in evil robots to take the lesson seriously. We only need to recognize a much more ordinary risk: people trusting automated systems beyond their proper role.
AI can help us see more, analyze more, and respond more quickly. It can improve workflows, reduce errors, and extend expert capacity. In some settings, it can save lives. But that promise depends on a boundary we should be careful not to erase.
AI should assist.
AI should advise.
AI should augment.
It should not become the ultimate decision-maker in workflows where human lives, health, safety, and responsibility are on the line. That is not fear talking. That is discipline. And if science fiction taught us anything worth keeping, it is that discipline matters most right when a tool becomes powerful enough to tempt us into giving it too much control.
