- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
– Isaac Asimov, Three Laws of Robotics (1942)
When Isaac Asimov introduced his laws of robotics, he cast a moral frame for a future where machines could act with autonomy. Those laws asked us to imagine a world where technology had not just power, but responsibility. Today, as we enter the era of Agentic AI, we face a similar need.
In the first article of this series, we looked at why a new moral code is necessary for Agentic AI in business applications. Adaptive AI creates new challenges in controlling autonomy, dynamic behavior, and data model manipulation. The more we want to harness AI, the more we need to govern that power.
Some voices accept AI as unpredictable and unbridled, and suggest we fight AI with AI. Others argue AI security is separate from traditional security. I disagree. So do my business customers. Agentic AI is not above the law. Across decades of cybersecurity evolution, from ISO 27001 to the NIST Cybersecurity Framework, security systems have rested on three enduring pillars: Visibility, Control, and Governance. Agentic AI must remain accountable to these same principles.
The good news is that the outcomes we expect of security have not changed: our data must be protected, and every action must follow permission rules. What is different is that we can no longer isolate system security from user security. An AI Agent is both a part of the system and a user empowered to act autonomously. The moral code needs to govern both spheres. Let’s begin.
Moral code for Agentic AI in business applications
1. An AI Agent must be boundable through data and action permissions set by humans.
Good business applications have security layers that prevent intrusion. Great business applications also have granular data and action permissions by role. It works: employees could potentially reach any data or action in the system, and it is the permissions that limit and control that power.
Likewise, Agentic AI must follow permission rules that are transparent to and configurable by the business. It is not above the law. Of course, new cases will emerge from AI that should be added to the permission controls, including the following (a minimal sketch of such a check follows the list):
- Manipulative natural language requests for data
- Temporary AI access for a specific task
- Protecting private data embedded in the trained AI model
- Sharing of data with third-party AI tools
- Permissible AI Agent actions via APIs into third-party applications
- Rules that were never written down because they were common sense for humans but are not for AI
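To make this concrete, here is a minimal sketch of how such permission rules might be evaluated before an AI Agent reads data or takes an action. The `Permission` and `PolicyEngine` names are illustrative assumptions, not a real product API; in practice, the rules would be loaded from the business's own role and policy configuration.

```python
from dataclasses import dataclass

# Illustrative only: a business-configurable rule that bounds an AI Agent,
# covering both the data it may read and the actions it may take.
@dataclass(frozen=True)
class Permission:
    agent_role: str                    # e.g. "invoice_agent"
    resource: str                      # e.g. "customer_records"
    action: str                        # e.g. "read", "update", "share_external"
    allowed: bool
    expires_at: float | None = None    # supports temporary, task-scoped access

class PolicyEngine:
    """Evaluates every AI Agent request against human-configured permissions."""

    def __init__(self, permissions: list[Permission]):
        self._permissions = permissions

    def is_allowed(self, agent_role: str, resource: str, action: str, now: float) -> bool:
        for p in self._permissions:
            if (p.agent_role, p.resource, p.action) == (agent_role, resource, action):
                if p.expires_at is not None and now > p.expires_at:
                    return False       # temporary grant has lapsed
                return p.allowed
        return False                   # default deny: no rule means no access

# Example: the agent may read customer records but may not share them externally.
engine = PolicyEngine([
    Permission("invoice_agent", "customer_records", "read", allowed=True),
    Permission("invoice_agent", "customer_records", "share_external", allowed=False),
])
assert engine.is_allowed("invoice_agent", "customer_records", "read", now=0.0)
assert not engine.is_allowed("invoice_agent", "customer_records", "share_external", now=0.0)
```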
2. An AI Agent must be controllable through monitoring, gating, and interception by humans.
Imagine a first-time automation in a factory. It would initially be gated through testing and gradual rollout phases. In full production, it would still be monitored, and fail-safe brakes could be applied at any time.
The same must be true for AI Agents, given their potential for speed and scale. Businesses must be able to gate Agentic AI in phased rollouts and rollbacks, monitor its activities, and intercept through human-in-the-loop approvals and on/off switches.
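As a hedged sketch of what these controls might look like in code, the wrapper below gates every proposed agent action behind a rollout flag, a kill switch, and a human-in-the-loop approval callback. The names (`AgentGate`, `require_human_approval`) are assumptions for illustration, not a specific vendor API.

```python
from typing import Callable

class AgentGate:
    """Wraps AI Agent actions with rollout gating, a kill switch,
    and human-in-the-loop interception."""

    def __init__(self, rollout_enabled: bool,
                 kill_switch_engaged: Callable[[], bool],
                 require_human_approval: Callable[[str], bool]):
        self.rollout_enabled = rollout_enabled
        self.kill_switch_engaged = kill_switch_engaged
        self.require_human_approval = require_human_approval

    def execute(self, action_description: str, action: Callable[[], object]) -> object | None:
        # Phased rollout: outside approved phases, the agent simply does nothing.
        if not self.rollout_enabled:
            return None
        # On/off switch: a human can halt all agent activity at any time.
        if self.kill_switch_engaged():
            return None
        # Human-in-the-loop: sensitive actions wait for explicit approval.
        if not self.require_human_approval(action_description):
            return None
        return action()

# Example: the approval callback could open a ticket or prompt a reviewer.
gate = AgentGate(
    rollout_enabled=True,
    kill_switch_engaged=lambda: False,
    require_human_approval=lambda desc: desc.startswith("read"),
)
result = gate.execute("read monthly sales report", lambda: "report contents")
```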
3. An AI Agent must be auditable through logs and explanations that humans can review.
Every action taken by an AI Agent must leave behind a clear log that humans can review. These logs form the evidence businesses need to verify compliance, investigate errors, and hold the software vendor accountable for the AI operating within its permissions.
Where an action depends on decision logic or applied rules, the reasoning behind that choice must also be included as part of the log. Without this context, audits are incomplete — businesses would see what happened, but not why. Together, logs and explanations make AI behavior reviewable and traceable.
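One hedged way to picture such a log is a structured record that captures not only the action and its outcome but also the permission that allowed it and the reasoning behind it. The field names below are illustrative assumptions, not a standard schema.

```python
import json
import time
from dataclasses import dataclass, asdict, field

# Illustrative audit record: what the agent did, under which permission,
# and why, so reviewers see both the "what" and the "why".
@dataclass
class AuditEntry:
    agent_id: str
    action: str
    resource: str
    permission_applied: str
    outcome: str
    reasoning: str                                   # the explanation behind the decision
    timestamp: float = field(default_factory=time.time)

    def to_log_line(self) -> str:
        # Append-only JSON lines keep the trail machine-readable and reviewable.
        return json.dumps(asdict(self))

entry = AuditEntry(
    agent_id="invoice_agent",
    action="flag_invoice",
    resource="invoice_10023",
    permission_applied="invoice_agent:invoices:update",
    outcome="flagged_for_review",
    reasoning="Amount exceeded the configured approval threshold of 10,000.",
)
print(entry.to_log_line())
```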
4. An AI Agent must be reliable through transparency and delivery as expected by humans.
AI is inherently probabilistic, and that is its strength. It can convert ad hoc data into insight and automate tasks built on complex rules. The challenge is not the range of outputs, but whether the AI performs reliably to human expectations. Transparency makes this possible, since expectations cannot be met if they are not first made clear.
Reliability therefore requires both accuracy and honesty. Accuracy risks such as bias and drift must be actively governed through bias scanning and multi-perspective testing. At the same time, AI must declare its confidence and limits so humans can correctly interpret outputs. It can even show its reasoning as it works, allowing users to confirm along the way. A reliable AI does not bluff. It delivers to the expected standard while making its limits clear.
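A hedged sketch of what declaring confidence and limits could look like in practice: the agent wraps its answer in a structured result, and outputs below a business-set confidence threshold are escalated to a human rather than asserted. The threshold and field names are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AgentAnswer:
    content: str
    confidence: float        # 0.0 to 1.0, as estimated by the model or an evaluator
    limits: str              # declared assumptions and known gaps

def deliver(answer: AgentAnswer, confidence_threshold: float = 0.8) -> str:
    # A reliable agent does not bluff: below the threshold it defers to a human.
    if answer.confidence < confidence_threshold:
        return (f"Low confidence ({answer.confidence:.2f}): escalating to a human. "
                f"Known limits: {answer.limits}")
    return f"{answer.content} (confidence {answer.confidence:.2f}; limits: {answer.limits})"

print(deliver(AgentAnswer("Q3 churn is driven mainly by onboarding delays.",
                          confidence=0.62,
                          limits="Based on support tickets only; no survey data.")))
```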
5. Humans who control AI must be accountable through permissions and traceability.
“Quis custodiet ipsos custodes?”
“Who will guard the guards themselves?”
– Juvenal, first-century Roman poet
The first four laws have empowered humans to control AI, but that power requires accountability. Business users who control AI must themselves be bound by permissions and a traceable history, including AI configuration changes. Likewise, the software vendor must be accountable to itself and to its customers through logs of AI reasoning and actions.
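As with the agent's own actions, here is a hedged sketch of how configuration changes made by human controllers could be made traceable: every change records who made it, what changed, and who approved it. The field names are illustrative, not a prescribed schema.

```python
import time
from dataclasses import dataclass, field

# Illustrative record of a human change to an AI Agent's configuration,
# so the controllers of AI are themselves traceable.
@dataclass(frozen=True)
class ConfigChange:
    changed_by: str           # the business user making the change
    approved_by: str          # a second pair of eyes, where policy requires it
    setting: str
    old_value: str
    new_value: str
    timestamp: float = field(default_factory=time.time)

change = ConfigChange(
    changed_by="ops_admin_anna",
    approved_by="security_lead_raj",
    setting="invoice_agent.confidence_threshold",
    old_value="0.80",
    new_value="0.70",
)
print(change)
```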
A moral code must be intuitive enough to grasp and strong enough to be purposeful. These five laws aim to do both, giving us Visibility through settings and logs, Control through real-time monitoring and interception, and Governance through permissions and transparency. But how can a business know these promises are being met? That is the topic of the upcoming third paper: The Brightline Methods. The Brightlines should be so easy to see that a business can know it is secure and in control.