

The B2B Agentic AI Manifesto: Part 2, The Moral Code

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

     – Isaac Asimov, Three Laws of Robotics (1942)

When Isaac Asimov introduced his laws of robotics, he cast a moral frame for a future where machines could act with autonomy. Those laws asked us to imagine a world where technology had not just power, but responsibility. Today, as we enter the era of Agentic AI, we face a similar need.

In the first article of this series, we looked at why a new moral code is necessary for Agentic AI in business applications. Adaptive AI creates new challenges in controlling autonomy, dynamic behavior, and data model manipulation. The more we want to harness AI, the more we need to govern that power.

Some voices resign themselves to AI as unpredictable and unbridled, and suggest we fight AI with AI. Others argue AI security is separate from traditional security. I disagree. So do my business customers. Agentic AI is not above the law. Across decades of cybersecurity evolution — from ISO 27001 to the NIST Cybersecurity Framework — security systems have rested on three enduring pillars: Visibility, Control, and Governance. Agentic AI must remain accountable to these same principles.

The good news is that the outcomes we expect of security have not changed: our data must be protected, and every action must follow permission rules. What is different is that we can no longer isolate system security from user security. An AI Agent is both a part of the system and a user empowered to act autonomously. The moral code needs to govern both spheres. Let’s begin. 

Moral code for Agentic AI in business applications

1. An AI Agent must be boundable through data and action permissions set by humans.

Good business applications have security layers that prevent intrusion. Great business applications have granular data and action permissions by role. This works because employees could potentially reach any data or action in the system; it is the permissions that limit and control that power.

Likewise, Agentic AI must follow permission rules that are transparent and configurable by the business. It is not above the law. Of course, new cases will emerge from AI that should be added to the permission controls, including:

  • Manipulative natural language requests for data
  • Temporary AI access for a specific task
  • Protecting private data that is embedded in the AI trained model
  • Sharing of data with third-party AI tools
  • AI Agent permissible actions via API into third-party applications
  • Missing rules that were common sense for humans but not for AI
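
To make this concrete, below is a minimal Python sketch of what deny-by-default data and action permissions for an AI Agent could look like, covering scoped access, temporary grants, and third-party sharing. The names (AgentPermissions, check_permission, and the example scopes) are illustrative assumptions, not the schema of any particular product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical permission model: every agent identity carries
# human-configured scopes, optional expiry, and third-party sharing rules.
@dataclass
class AgentPermissions:
    allowed_data: set[str]                      # data domains the agent may read
    allowed_actions: set[str]                   # actions the agent may perform
    expires_at: datetime | None = None          # temporary, task-scoped access
    third_party_sharing: set[str] = field(default_factory=set)  # external tools data may flow to

def check_permission(perms: AgentPermissions, action: str, data_domain: str,
                     destination: str | None = None) -> bool:
    """Deny by default; the agent is never above the permission rules."""
    if perms.expires_at and datetime.utcnow() > perms.expires_at:
        return False                            # temporary access has lapsed
    if data_domain not in perms.allowed_data:
        return False                            # data is out of scope
    if action not in perms.allowed_actions:
        return False                            # action is out of scope
    if destination and destination not in perms.third_party_sharing:
        return False                            # no silent sharing with third-party AI tools
    return True

# Example: a procurement agent granted read access for one week only.
perms = AgentPermissions(
    allowed_data={"purchase_orders"},
    allowed_actions={"read", "summarize"},
    expires_at=datetime.utcnow() + timedelta(days=7),
)
print(check_permission(perms, "read", "purchase_orders"))             # True
print(check_permission(perms, "read", "employee_salaries"))           # False: out of scope
print(check_permission(perms, "read", "purchase_orders", "ext_llm"))  # False: no sharing rule
```

The design choice worth noting is deny-by-default: anything a human has not explicitly granted is refused, which also covers the "missing rules that were common sense for humans but not for AI" case.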

2. An AI Agent must be controllable through monitoring, gating, and interception by humans.

Imagine a first-time automation in a factory. It would initially be gated through testing and gradual rollout phases. In full production, it would still be monitored, and fail-safe brakes could be applied at any time.

The same must be true for AI Agents, given their potential for speed and scale. Businesses must be able to gate Agentic AI in phased rollouts and rollbacks, monitor its activities, and intercept through human-in-the-loop approvals and on/off switches.
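
As an illustration only, here is a small sketch of a gate that wraps every agent action with a rollout phase, a human-in-the-loop approval hook, and an on/off switch. The class and function names are hypothetical, not a reference to any specific product.

```python
from enum import Enum
from typing import Callable

class RolloutPhase(Enum):
    SHADOW = "shadow"      # agent proposes actions, nothing is executed
    CANARY = "canary"      # executes only with explicit human approval
    FULL = "full"          # executes autonomously, still monitored

class AgentGate:
    """Hypothetical control wrapper: gate, monitor, and intercept agent actions."""

    def __init__(self, phase: RolloutPhase, approve: Callable[[str], bool]):
        self.phase = phase
        self.approve = approve          # human-in-the-loop approval hook
        self.enabled = True             # the on/off switch

    def run(self, description: str, action: Callable[[], object]):
        if not self.enabled:
            return "blocked: agent is switched off"
        print(f"monitor: proposed action -> {description}")   # every proposal is observable
        if self.phase is RolloutPhase.SHADOW:
            return "logged only: shadow phase"
        if self.phase is RolloutPhase.CANARY and not self.approve(description):
            return "blocked: human approval not granted"
        return action()                 # the fail-safe brake remains available via `enabled`

# Example: canary rollout where a human approves only drafting actions.
gate = AgentGate(RolloutPhase.CANARY, approve=lambda desc: desc.startswith("draft"))
print(gate.run("draft purchase order summary", lambda: "summary created"))
print(gate.run("delete 500 asset records", lambda: "deleted"))
gate.enabled = False                    # emergency stop
print(gate.run("draft another summary", lambda: "summary created"))
```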

3. An AI Agent must be auditable through logs and explanations that humans can review.

Every action taken by an AI Agent must leave behind a clear log that humans can review. These logs form the evidence businesses need to verify compliance, investigate errors, and hold the software vendor accountable for the AI operating within its permissions.

Where an action depends on decision logic or applied rules, the reasoning behind that choice must also be included as part of the log. Without this context, audits are incomplete — businesses would see what happened, but not why. Together, logs and explanations make AI behavior reviewable and traceable.
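
A minimal sketch of such an audit record might look like the following. The field names and the example license renewal are assumptions, chosen only to show the what and the why recorded side by side.

```python
import json
from datetime import datetime, timezone

AUDIT_TRAIL: list[dict] = []   # stand-in for an append-only, tamper-evident store

def audit(agent_id: str, action: str, target: str, permission_used: str,
          reasoning: str, outcome: str) -> dict:
    """Record not only what the agent did, but why it decided to do it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "target": target,
        "permission_used": permission_used,   # which human-set rule allowed this
        "reasoning": reasoning,               # the "why" that makes the audit complete
        "outcome": outcome,
    }
    AUDIT_TRAIL.append(entry)
    return entry

# Example: an agent drafts a license renewal and leaves reviewable evidence.
audit(
    agent_id="renewals-agent-01",
    action="extend_license",
    target="license:design-suite-2031",
    permission_used="licenses:write (granted by IT admin role)",
    reasoning="License expires in 14 days and usage exceeds the renewal threshold rule.",
    outcome="renewal drafted for human approval",
)
print(json.dumps(AUDIT_TRAIL[-1], indent=2))
```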

4. An AI Agent must be reliable through transparency and delivery as expected by humans.

AI is inherently probabilistic, and that is its strength. It can convert ad hoc data into insight and automate tasks built on complex rules. The challenge is not the range of outputs, but whether the AI performs reliably to human expectations. Transparency makes this possible, since expectations cannot be met if they are not first made clear.

Reliability therefore requires both accuracy and honesty. Accuracy risks such as bias and drift must be actively governed through bias scanning and multi-perspective testing. At the same time, AI must declare its confidence and limits so humans can correctly interpret outputs. It can even show its reasoning as it works, allowing users to confirm along the way. A reliable AI does not bluff. It delivers to the expected standard while making its limits clear.
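
To illustrate declared confidence and limits, here is a small hypothetical sketch in which low-confidence or gap-ridden outputs are escalated instead of being presented as fact. The threshold and field names are assumptions a business would set for itself, not fixed standards.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAnswer:
    """Output that declares its own confidence and limits instead of bluffing."""
    answer: str
    confidence: float                 # 0.0 to 1.0, reported by the agent
    assumptions: list[str] = field(default_factory=list)
    data_gaps: list[str] = field(default_factory=list)

CONFIDENCE_FLOOR = 0.8                # business-configured expectation, not a universal constant

def deliver(result: AgentAnswer) -> str:
    """Low-confidence or gap-ridden answers are escalated, not presented as fact."""
    if result.confidence < CONFIDENCE_FLOOR or result.data_gaps:
        return (f"NEEDS HUMAN REVIEW (confidence={result.confidence:.2f}, "
                f"gaps={result.data_gaps}): {result.answer}")
    return f"{result.answer} (confidence={result.confidence:.2f}, assumptions={result.assumptions})"

print(deliver(AgentAnswer("Q3 laptop refresh will cost ~$42k.", 0.92,
                          assumptions=["current vendor pricing holds"])))
print(deliver(AgentAnswer("Forecast for the new office is ~$18k.", 0.55,
                          data_gaps=["no headcount plan for new office"])))
```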

5. Humans who control AI must be accountable through permissions and traceability.

“Quis custodiet ipsos custodes?” 

“Who will guard the guards themselves?”

     – Juvenal, first-century Roman poet

The first four laws have empowered humans to control AI, but that power requires accountability. Business users who control AI must themselves be bound by permissions and a traceable history, including AI configuration changes. Likewise, the software vendor must be self-accountable and customer-accountable through logs of AI reasoning and actions.
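
Purely as a sketch, the same permission-and-trail discipline can be expressed in code for the humans who configure the AI. The user names, settings, and permission map below are hypothetical.

```python
from datetime import datetime, timezone

CONFIG_HISTORY: list[dict] = []    # traceable history of who changed which AI setting

ADMIN_PERMISSIONS = {"alice": {"ai.permissions", "ai.rollout_phase"},
                     "bob": {"ai.rollout_phase"}}

def change_ai_setting(user: str, setting: str, old, new, justification: str) -> bool:
    """Humans who control the AI are themselves bound by permissions and leave a trail."""
    if setting not in ADMIN_PERMISSIONS.get(user, set()):
        CONFIG_HISTORY.append({"timestamp": datetime.now(timezone.utc).isoformat(),
                               "user": user, "setting": setting, "result": "denied"})
        return False
    CONFIG_HISTORY.append({"timestamp": datetime.now(timezone.utc).isoformat(),
                           "user": user, "setting": setting, "old": old, "new": new,
                           "justification": justification, "result": "applied"})
    return True

# Example: Bob may change the rollout phase, but not widen the agent's permissions.
print(change_ai_setting("bob", "ai.rollout_phase", "canary", "full", "pilot KPIs met"))   # True
print(change_ai_setting("bob", "ai.permissions", "read-only", "read-write", "faster"))    # False
```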

A moral code must be intuitive enough to grasp and strong enough to be purposeful. These five laws aim to do both, giving us Visibility through settings and logs, Control through real-time monitoring and interception, and Governance through permissions and transparency. But how can a business know these promises are being met? That is the topic of the upcoming third paper: The Brightline Methods. The Brightlines should be so easy to see that a business can know it is secure and in control.

Zubair Murtaza
Vice President/CPO of Product Management, Ezo.io
Philadelphia, Global
Zubair Murtaza is a leader in product and business innovation. He is Vice President of Product Management at Ezo.io, disrupting ERP software with SaaS products. Previously he was Vice President of eCommerce for Staples America, an $8 billion annual business, where he transformed eCommerce toward an AI-driven, personalized solutions experience. Zubair came to Staples from Microsoft, where he developed and grew six technology businesses, ranging from $10 million to $20 billion in size, including transforming Microsoft toward online services and the Azure Cloud. Zubair holds two Engineering Master’s Degrees and an MBA from the University of Chicago.

Frequently Asked Questions

  • How do I evaluate whether a vendor’s “B2B agentic AI” is actually safe—or just marketing?

    Ask for a live demo of permission boundaries, approval gates, real-time monitoring, a kill switch/rollback, and exportable audit logs. If they can’t show how you prevent, intercept, and prove actions, it’s autonomy without accountability.

  • What are the most common failure modes of B2B agentic AI in real deployments?

    Most issues come from weak guardrails: over-permissioning, prompt injection/social engineering, data leakage to third-party tools, and runaway action loops. Agents move fast, so small governance gaps turn into big operational damage.

  • What security frameworks map best to governing B2B agentic AI (ISO 27001, NIST, SOC 2, etc.)?

    Use existing frameworks but treat the agent as both software and a user identity. Focus on access control, change management, logging/monitoring, incident response, and vendor risk—then ensure you can produce audit evidence.

  • What should “human-in-the-loop” actually mean for agentic workflows?

    It should be a defined policy: the agent can suggest and draft freely, but needs approval for high-risk “write” actions such as exports, access changes, deletions, payments, or bulk updates. Good HITL also includes escalation on uncertainty and clear approval trails (see the sketch after these FAQs).

  • Who is liable when B2B agentic AI causes harm—customer, vendor, or both?

    In practice it’s shared: customers control roles and policies, while vendors must provide enforceable guardrails and transparency. The deciding factor is traceability. Can you prove what the agent did, why, under which permissions, and who approved it?
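
Following up on the human-in-the-loop question above, here is a minimal sketch of such a policy: drafts and suggestions flow freely, while high-risk write actions, or low-confidence ones, are queued for a human. The action categories and threshold are illustrative assumptions.

```python
# Hypothetical HITL policy: agents may draft freely, but high-risk "write"
# actions must be approved by a human before execution.
HIGH_RISK_ACTIONS = {"export_data", "change_access", "delete_records",
                     "make_payment", "bulk_update"}

def requires_approval(action: str, confidence: float, uncertainty_floor: float = 0.7) -> bool:
    """Approval is required for high-risk writes, or whenever the agent is unsure."""
    return action in HIGH_RISK_ACTIONS or confidence < uncertainty_floor

def handle(action: str, confidence: float) -> str:
    if requires_approval(action, confidence):
        return f"queued for human approval: {action} (confidence={confidence:.2f})"
    return f"executed autonomously: {action}"

print(handle("draft_reply", 0.95))        # low-risk: runs on its own
print(handle("bulk_update", 0.95))        # high-risk write: needs a human
print(handle("draft_reply", 0.40))        # escalation on uncertainty
```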
