- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
- Isaac Asimov, Three Laws of Robotics (1942)
The movie I, Robot with Will Smith revived the three laws of robotics for us sci-fi fans. I get goosebumps every time I hear it. But it no longer remains in the imagination of science fiction. AI is rapidly rippling through our tools, workflows, and decisions. Yet we humans are uneasy about giving control to Agentic AI. Businesses in particular don’t want Agentic AI running amok. In a recent PwC survey, nearly 40 percent of executives do not trust AI Agents, and more than half view AI as a security challenge.
We have done extensive research with companies on the future of business applications. They want efficiency, intelligence, and automation to flow naturally within everyday work. No manual tedium, no detours, no fiddling with big forms. Just do it naturally and automatically. In other words… Agentic AI. But with complete control and transparency. So… not Agentic AI?
We need both. The AI industry is already publishing voluminous and complex AI security documentation that is hard for business customers to evaluate. Instead, we want to zoom out and crystallize a security moral code and its brightline security methods. The brightlines should be so easy to see that any business can validate that they are secure.
This first article begins with the moral problem: Why does Agentic AI need to make new promises to business customers? Aren’t these the same as existing security commitments? What has changed with AI?
Well, imagine three boxes: (1) User input/output, (2) Functionality, and (3) Data. In classic computing architecture, each box has well-defined boundaries and controls. Now imagine a life form. The boundaries blur. Our own human neural web, roughly 45 miles of threads, reaches every part of the body with incredible speed, and into our sensory interfaces that then reach beyond into the cosmos, as we autonomously learn, act, and change. Life is not bound. In the words of Jurassic Park, life will find a way. Into all sorts of trouble.
So what’s the moral problem all about?
With its connectivity and adaptability, Agentic AI takes on a life of its own and brings forth new challenges:
1. Human-AI interaction becomes a security boundary
Remember the first box of user input/output? In traditional systems, the human was outside the trust boundary (user logs in, app checks permissions). In AI, the prompt itself becomes code — a way to inject malicious instructions (“jailbreaking”) into the app. Traditional principles don’t cover this, so we need new trust boundaries for human-AI collaboration (Mitre ATLAS).
2. AI systems are adaptive and non-deterministic
In the functionality box, conventional software follows predictable and statically testable paths. But Agentic AI produces outputs based on sophisticated and evolving models, which are harder to test and explain. The risk shifts from code defects to opaque decisions and unexpected interactions. We now need transparency of reasoning, source attribution, and confidence levels, along with ongoing checks for drift and bias (NIST Framework).
3. The attack surface now includes data and models
In the third box of “data”, systems traditionally secure the data by dealing with the attack surface of the code and infrastructure. In AI, data is part of the model’s logic. Attacks like data poisoning, prompt injection, and model inversion target the learning process itself. This gives rise to new principles around masking training data, securing trained models, and exposing model reasoning and traceability — core tenets of Google’s SAIF framework.
4. Accountability and oversight are harder
In prior systems, a log trail and access controls defined accountability. With AI, models can generate decisions or model updates that humans cannot easily audit. This is why IBM’s AI Governance and Microsoft’s Responsible AI Standard both emphasize explainability and traceable lineage.
5. Autonomy at scale needs greater oversight
Traditional architectures assume bounded automation. Agentic AI can autonomously chain actions, connect APIs, and impact external systems at scale. That changes the threat approach: we can’t just test predefined single automations, but now must sandbox, constrain, and monitor agentic behavior (Microsoft Responsible AI Standard).
These five challenges mark a pivotal break from traditional security assumptions. Previously, we relied on clear boxes, predictable logic, and audit trails. In Agentic AI, those boundaries blur. The powerful benefits of Agentic AI for human collaboration, learning, adapting, and autonomously spanning systems are the very same problems we have to secure.
This gives rise to our moral challenge: How do we unleash these powerful benefits, while also controlling them? Businesses want automation and speed, but with control and transparency. In fact, our business customers would choose control over AI. Is that a bummer for AI enthusiasts? No. We stand with our customers: security first, governance first. That is why we need a new moral code — one that is easy to understand and commit to. A code that harnesses powerful Agentic AI while giving businesses control, security, and transparency. Stay tuned.