Before we start exploring part 4 of the series, please refer to parts 1 (The Moral Problem), 2 (The Moral Code), and 3 (The Brightline Methods).
Moral Code for Agentic AI in Business Applications:
- An AI Agent must be boundable through data and action permissions set by humans.
- An AI Agent must be controllable by humans through monitoring, gating, and interception.
- An AI Agent must be auditable through logs and explanations that humans can review.
- An AI Agent must be reliable and transparent, and must deliver what humans expect of it.
- Humans who control AI must be accountable through permissions and traceability.
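To make the five principles concrete, here is a minimal sketch of what they could look like as guardrails around a single agent action. All names here (Agent, AuditRecord, the gate hook) are illustrative assumptions, not an existing framework: permissions make the agent boundable, the gate makes it controllable and interceptable, the log makes it auditable, and the recorded owner keeps a human accountable.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AuditRecord:
    actor: str      # which agent acted
    owner: str      # the accountable human who granted the permissions
    action: str
    allowed: bool
    reason: str

@dataclass
class Agent:
    name: str
    owner: str                                            # accountability: a human is always on record
    permissions: set[str] = field(default_factory=set)    # boundable: humans set the allowed actions
    gate: Callable[[str], bool] = lambda action: True     # controllable: a human-supplied interception hook
    log: list[AuditRecord] = field(default_factory=list)  # auditable: every decision is recorded

    def act(self, action: str) -> bool:
        if action not in self.permissions:
            self.log.append(AuditRecord(self.name, self.owner, action, False, "not permitted"))
            return False
        if not self.gate(action):
            self.log.append(AuditRecord(self.name, self.owner, action, False, "intercepted by gate"))
            return False
        self.log.append(AuditRecord(self.name, self.owner, action, True, "executed"))
        return True

agent = Agent(name="sonny", owner="dr_lanning", permissions={"read_data"})
agent.act("read_data")     # permitted and executed
agent.act("delete_data")   # blocked: outside the permission boundary
for rec in agent.log:      # the audit trail a human can review
    print(rec)
```

The design point is that the controls sit outside the agent's own reasoning: whatever the agent decides to attempt, it passes through human-set boundaries and leaves a trace.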
We started this series with Isaac Asimov and the movie I, Robot. So I got to wondering — will the Agentic AI Moral Code stand up to the scrutiny of future imagination, where AI goes rogue? Let’s test this against a few science fiction scenarios that endure in our collective memory.
I, Robot
Detective Del Spooner obsessively chases his suspect: did the robot Sonny kill Dr. Alfred Lanning? As the tension mounts, the truth unfolds bit by bit, remaining an enigma until the end. But hold on: why is there no activity log? A robot as sophisticated as Sonny could clearly write one. Under our moral code, the mystery would never have happened: every one of Sonny's actions would be recorded, fully transparent, and auditable.
2001: A Space Odyssey
The moment still haunts us: HAL 9000, with that calm red eye, politely bars Bowman: "I'm sorry, Dave, I'm afraid I can't do that." Why does it chill us? Because HAL is opaque. Under our code, that refusal would come with an explanation the crew could review, and a human would remain accountable for the decision.
Terminator
The moment Skynet goes live, its self-preservation kicks in and it launches a preemptive nuclear strike. The horror? No human line of responsibility. Our code says: no system goes live without human governance. Permissions, monitoring, and interceptability are non-negotiable.
Let’s Get Serious
We’re having fun, but there is more depth here. A central argument for AI going rogue can be traced to Turing’s work on the halting problem (1936) and Wolfram’s principle of computational irreducibility (2002): for a sufficiently complex system, there is no shortcut for predicting its behavior other than running it. But predicting is not the same as controlling. Nothing stops us from configuring, monitoring, intercepting, and tracing such a system while it runs.
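The distinction above can be sketched in a few lines. The step function and limits below are illustrative assumptions: the point is that even when each step's outcome cannot be predicted without running it, every step can still pass through human-defined controls.

```python
def unpredictable_step(state: int) -> int:
    # Stand-in for a computation whose trajectory we cannot
    # foresee without actually running it.
    return state * 31 % 97

def run_supervised(state, max_steps, intercept):
    trace = []                    # traceability: a full record of every state
    for _ in range(max_steps):    # configuration: a hard budget set by humans
        state = unpredictable_step(state)
        trace.append(state)       # monitoring: each step is observable
        if intercept(state):      # interception: humans can halt it mid-run
            break
    return state, trace

# Halt the run the moment the state leaves an approved range.
final, trace = run_supervised(state=1, max_steps=50, intercept=lambda s: s > 90)
```

We never needed to predict where the sequence would go; we only needed to watch it and hold the off-switch.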
When Windows 1.0 was released in 1985, it required 720 KB of disk space (WinWorldPC), while Windows 11 now requires 64 GB (Microsoft) — roughly 89,000 times larger. Yet over the same period, software architecture, testing methods, monitoring, logging, and controls expanded just as fast. Despite growing complexity, actual scrutiny and transparency increased.
If it can be coded, it can be explained. A system can be broken into components, layers, and processes, each with its own controls. With the moral code and brightlines in place, Agentic AI can be highly beneficial and give businesses greater control, now and in the future.
Discover more about these stories here: I, Robot (2004), 2001: A Space Odyssey (1968), The Terminator (1984).


