
The B2B Agentic AI Manifesto (Five-Part Series)

Part 1: The Moral Problem

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The movie I, Robot with Will Smith revived the three laws of robotics for us sci-fi fans. I get goosebumps every time I hear them. But this no longer lives only in the imagination of science fiction. AI is rapidly rippling through our tools, workflows, and decisions. Yet we humans are uneasy about giving control to Agentic AI. Businesses in particular don’t want Agentic AI running amok. In a recent PwC survey, nearly 40 percent of executives said they do not trust AI Agents, and more than half view AI as a security challenge.

We have done extensive research with companies on the future of business applications. They want efficiency, intelligence, and automation to flow naturally within everyday work. No manual tedium, no detours, no fiddling with big forms. Just do it naturally and automatically. In other words… Agentic AI. But with complete control and transparency. So… not Agentic AI?

We need both. The AI industry is already publishing voluminous and complex AI security documentation that is hard for business customers to evaluate. Instead, we want to zoom out and crystallize a security moral code and its brightline security methods. The brightlines should be so easy to see that any business can validate that they are secure.

This first article begins with the moral problem: Why does Agentic AI need to make new promises to business customers? Aren’t these the same as existing security commitments? What has changed with AI?

Well, imagine three boxes: (1) User input/output, (2) Functionality, and (3) Data. In classic computing architecture, each box has well-defined boundaries and controls. Now imagine a life form. The boundaries blur. Our own human neural web, roughly 45 miles of threads, reaches every part of the body with incredible speed, and into our sensory interfaces that then reach beyond into the cosmos, as we autonomously learn, act, and change. Life is not bound. In the words of Jurassic Park, life will find a way. Into all sorts of trouble.

With its connectivity and adaptability, Agentic AI takes on a life of its own and brings forth new challenges: 

  1. Human-AI Interaction Becomes a Security Boundary: Remember the first box of user input/output? In traditional systems, the human was outside the trust boundary (user logs in, app checks permissions). In AI, the prompt itself becomes code — a way to inject malicious instructions (“jailbreaking”) into the app. Traditional principles don’t cover this, so we need new trust boundaries for human-AI collaboration (MITRE ATLAS).
  2. AI Systems are Adaptive and Non-Deterministic: In the functionality box, conventional software follows predictable and statically testable paths. But Agentic AI produces outputs based on sophisticated and evolving models, which are harder to test and explain. The risk shifts from code defects to opaque decisions and unexpected interactions. We now need transparency of reasoning, source attribution, and confidence levels, along with ongoing checks for drift and bias (NIST AI Risk Management Framework).
  3. The Attack Surface Now Includes Data and Models: In the third box of “data”, systems traditionally secure the data by dealing with the attack surface of the code and infrastructure. In AI, data is part of the model’s logic. Attacks like data poisoning, prompt injection, and model inversion target the learning process itself. This gives rise to new principles around masking training data, securing trained models, and exposing model reasoning and traceability — core tenets of Google’s SAIF framework.
  4. Accountability and Oversight Are Harder:  In prior systems, a log trail and access controls defined accountability. With AI, models can generate decisions or model updates that humans cannot easily audit. This is why IBM’s AI Governance and Microsoft’s Responsible AI Standard both emphasize explainability and traceable lineage.
  5. Autonomy at Scale Needs Greater Oversight: Traditional architectures assume bounded automation. Agentic AI can autonomously chain actions, connect APIs, and impact external systems at scale. That changes the threat approach: we can’t just test predefined single automations, but now must sandbox, constrain, and monitor agentic behavior (Microsoft Responsible AI Standard). 

These five challenges mark a pivotal break from traditional security assumptions. Previously, we relied on clear boxes, predictable logic, and audit trails. In Agentic AI, those boundaries blur. The very capabilities that make Agentic AI so beneficial, human collaboration, learning, adapting, and autonomously spanning systems, are the same ones we must secure.

This gives rise to our moral challenge: How do we unleash these powerful benefits, while also controlling them? Businesses want automation and speed, but with control and transparency. In fact, our business customers would choose control over AI. Is that a bummer for AI enthusiasts? No. We stand with our customers: security first, governance first. That is why we need a new moral code — one that is easy to understand and commit to. A code that harnesses powerful Agentic AI while giving businesses control, security, and transparency. Stay tuned.

Part 2: The Moral Code

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

When Isaac Asimov introduced his laws of robotics, he cast a moral frame for a future where machines could act with autonomy. Those laws asked us to imagine a world where technology had not just power, but responsibility. Today, as we enter the era of Agentic AI, we face a similar need.

In the first article of this series, we looked at why a new moral code is necessary for Agentic AI in business applications. Adaptive AI creates new challenges in controlling autonomy, dynamic behavior, and data model manipulation. The more we want to harness AI, the more we need to govern that power.

Some voices resign themselves to AI being unpredictable and unbridled, and suggest we fight AI with AI. Others argue AI security is separate from traditional security. I disagree. So do my business customers. Agentic AI is not above the law. Across decades of cybersecurity evolution — from ISO 27001 to the NIST Cybersecurity Framework — security systems have rested on three enduring pillars: Visibility, Control, and Governance. Agentic AI must remain accountable to these same principles.

The good news is that the outcomes we expect of security have not changed: our data must be protected, and every action must follow permission rules. What is different is that we can no longer isolate system security from user security. An AI Agent is both a part of the system and a user empowered to act autonomously. The moral code needs to govern both spheres. Let’s begin. 

Moral Code for Agentic AI in Business Applications:

  1. An AI Agent must be boundable through data and action permissions set by humans.

Good business applications have security layers that prevent intrusion. Great business applications have granular data and action permissions by role. This model works: employees could potentially reach any data or action in the system, and it is the permissions that limit and control that power.

Likewise, Agentic AI must follow permission rules, which are transparent and configurable by the business. It is not above the law. Of course, new cases will emerge from AI that should be added to the permission controls (a minimal configuration sketch follows the list below), including:

  • Manipulative natural language requests for data
  • Temporary AI access for a specific task
  • Protecting private data that is embedded in the AI trained model
  • Sharing of data with third-party AI tools
  • AI Agent permissible actions via API into third-party applications
  • Missing rules that were common sense for humans but not for AI
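
To make this tangible, here is a minimal sketch of what an AI Agent permission set covering a few of these new cases might look like. The structure, scope names, and fields are illustrative assumptions, not a product specification.

```python
# Hypothetical permission set for an AI Agent role -- illustrative only.
AGENT_PERMISSIONS = {
    "agent_id": "zoe-helpdesk-01",          # assumed identifier
    "owner": "jane.doe@example.com",        # human accountable for the agent
    "data_scopes": ["tickets:read", "kb:read"],
    "action_scopes": ["tickets:update"],
    "temporary_grants": [                    # time-boxed access for a specific task
        {"scope": "assets:read", "expires": "2025-01-31T00:00:00Z"},
    ],
    "third_party_sharing": {"allowed": False, "mask_pii": True},
    "external_api_calls": ["jira:create_issue"],   # explicit allowlist for API actions
}

def is_allowed(permissions: dict, scope: str) -> bool:
    """Check whether a requested scope is inside the agent's declared permissions."""
    return scope in permissions["data_scopes"] or scope in permissions["action_scopes"]

if __name__ == "__main__":
    print(is_allowed(AGENT_PERMISSIONS, "payroll:read"))  # False -- never granted
```
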
  2. An AI Agent must be controllable through monitoring, gating, and interception by humans.

Imagine a first-time automation in a factory. It would initially be gated through testing and gradual rollout phases. In full production, it would still be monitored, and fail-safe brakes could be applied at any time.

The same must be true for AI Agents, given their potential for speed and scale. Businesses must be able to gate Agentic AI in phased rollouts and rollbacks, monitor its activities, and intercept through human-in-the-loop approvals and on/off switches.
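
As one rough illustration (an assumed design, not a prescription), the gating and kill-switch idea can be as simple as this:

```python
# Hypothetical rollout gate for an AI Agent -- names and percentages are illustrative.
ROLLOUT = {"enabled": True, "phase_percent": 10}   # 10% of eligible requests use the agent
KILL_SWITCH = {"paused": False}                    # flipped by a human operator at any time

def agent_may_run(request_id: int) -> bool:
    """Gate agent execution: the global pause wins, then phased-rollout sampling applies."""
    if KILL_SWITCH["paused"] or not ROLLOUT["enabled"]:
        return False
    return (request_id % 100) < ROLLOUT["phase_percent"]

if __name__ == "__main__":
    sample = [agent_may_run(i) for i in range(1000)]
    print(f"Agent handled {sum(sample)} of 1000 requests")  # 100 of 1000 in a 10% phase
```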

  3. An AI Agent must be auditable, through logs and explanations that humans can review.

Every action taken by an AI Agent must leave behind a clear log that humans can review. These logs form the evidence businesses need to verify compliance, investigate errors, and hold the software vendor accountable for the AI operating within its permissions.

Where an action depends on decision logic or applied rules, the reasoning behind that choice must also be included as part of the log. Without this context, audits are incomplete — businesses would see what happened, but not why. Together, logs and explanations make AI behavior reviewable and traceable.

  4. An AI Agent must be reliable, through transparency and delivery as expected by humans.

AI is inherently probabilistic, and that is its strength. It can convert ad hoc data into insight and automate tasks built on complex rules. The challenge is not the range of outputs, but whether the AI performs reliably to human expectations. Transparency makes this possible, since expectations cannot be met if they are not first made clear.

Reliability therefore requires both accuracy and honesty. Accuracy risks such as bias and drift must be actively governed through bias scanning and multi-perspective testing. At the same time, AI must declare its confidence and limits so humans can correctly interpret outputs. It can even show its reasoning as it works, allowing users to confirm along the way. A reliable AI does not bluff — it delivers to the expected standard while making its limits clear.

  5. Humans who control AI must be accountable through permissions and traceability.

“Quis custodiet ipsos custodes?” 

Who will guard the guards themselves?

     – Juvenal, first-century Roman poet

The first four laws have empowered humans to control AI. But that power requires accountability. Business users who control AI must themselves be bound by permissions and traceable history, including AI configuration changes. Likewise, the software vendor must be self-accountable and customer-accountable through logs of AI reasoning and actions.

A moral code must be intuitive enough to grasp and strong enough to be purposeful. These five laws aim to do both, giving us Visibility through settings and logs, Control through real-time monitoring and interception, and Governance through permissions and transparency. But how can a business know these promises are being met? That is the topic of the upcoming third paper: The Brightline Methods. The Brightlines should be so easy to see that a business can know they are secure and in control.

Part 3: The Brightline Methods

Moral Code for Agentic AI in Business Applications:

  1. An AI Agent must be boundable through data and action permissions set by humans.
  2. An AI Agent must be controllable through monitoring, gating, and interception by humans.
  3. An AI Agent must be auditable, through logs and explanations that humans can review.
  4. An AI Agent must be reliable, through transparency and delivery as expected by humans.
  5. Humans who control AI must be accountable through permissions and traceability.

Inspired by Isaac Asimov, our prior paper derived a moral code for Agentic AI — a promise that should be made by business application providers. My intent is to crystallize a security moral code and its brightline methods. The moral promise must be intuitive, while the brightlines must be easy to see and validate.

Emerging AI security frameworks are often complex and published separately from the application’s standard security framework, making it hard for business customers to intuitively feel secure. The good news is that the objectives of security have not changed. However, their methods need to cover new AI cases. Business customers do not want separate security worlds. We will augment one framework, and AI will be subject to it.

Identity & Access Management (IAM)

What is the role of the AI Agent and can it be controlled?

The promise of IAM is simple: Only the right people get into the application through authentication, and each person is only authorized for data and actions within their role permissions. AI should follow the same principle.

  1. Brightline: Every Autonomous AI Agent is given an identity and is bound by role permissions.

Since an AI Agent behaves partly like a person, it must be assigned an identity and role, and thereby be bound by data and action permissions. There should also be a human owner for every AI Agent identity. This gives the business control over what each individual Agent can do, and provides visibility to manage the sprawl of Agents over time.
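
A minimal sketch of what such an identity registration might look like is shown below; the field names are hypothetical, and the key point is that registration fails without a named human owner.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch: every autonomous agent gets an identity, a role, and a human owner.
@dataclass
class AgentIdentity:
    agent_id: str
    role: str                      # the role determines data and action permissions
    owner: str                     # the human accountable for this agent
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

REGISTRY: dict[str, AgentIdentity] = {}

def register_agent(agent_id: str, role: str, owner: str) -> AgentIdentity:
    """Register an agent; refusing registration without a human owner prevents orphan agents."""
    if not owner:
        raise ValueError("Every AI Agent must have a human owner")
    identity = AgentIdentity(agent_id, role, owner)
    REGISTRY[agent_id] = identity
    return identity

if __name__ == "__main__":
    register_agent("zoe-itsm-01", role="helpdesk_agent", owner="jane.doe@example.com")
    print(REGISTRY["zoe-itsm-01"])
```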

  2. Brightline: An AI Agent cannot exceed the permissions of its owner and its users.

During setup, the AI Agent cannot be given more access than the owner of the Agent. When acting on behalf of a user during runtime, the AI Agent should be further restricted by that user’s permissions. If an employee attempts an unauthorized action through prompt injection, the action will still be rejected because the permission is enforced at the data source and action level.
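
In code terms, this amounts to intersecting permission sets before any data or action check. The scope names below are illustrative assumptions, not our schema.

```python
# Illustrative: an agent acting for a user is limited to the intersection of permissions.
AGENT_SCOPES = {"tickets:read", "tickets:update", "kb:read"}      # set by the agent's owner
USER_SCOPES  = {"tickets:read", "kb:read"}                        # the requesting user's role

def effective_scopes(agent: set[str], user: set[str]) -> set[str]:
    """The agent can never exceed its owner-granted scopes or the current user's scopes."""
    return agent & user

def authorize(scope: str) -> bool:
    # Enforced at the data source / action level, so a prompt-injected request
    # for an out-of-scope action is rejected regardless of how it was phrased.
    return scope in effective_scopes(AGENT_SCOPES, USER_SCOPES)

if __name__ == "__main__":
    print(authorize("tickets:update"))   # False -- the user lacks this scope
    print(authorize("tickets:read"))     # True
```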

  3. Brightline: AI Agent permission boundaries are enforced across all integrations.

AI Agents may call APIs directly, multiplying their reach across connected systems. Within the business application, permissions should be defined for each API to specify which calls for data or action the Agent (and users of the Agent) can make. Looking ahead, industry standards are evolving toward portable authorization, where delegated permissions travel between applications (OAuth 2.0) or are managed through centralized non-human identities (workload identities, Okta agent identity).  

Data Security

Will AI expose or misuse my sensitive data — and do it at massive scale?

Data security already protects information through encryption, permissions, tenant isolation, and recovery. However, AI adds new risks (PwC, 2024) because its reach spans multiple systems, its models contain embedded data, and it lacks human judgment.

  4. Brightline: AI Agents are restricted to least privilege access for each task.

The principle of least privilege grants data and action access only to the scope of the task. After all, an AI Agent troubleshooting a customer issue should not also dip into payroll records. Through the first three brightlines, an AI Agent already has durable permission boundaries. Now we add a dynamic layer of control that limits access according to the specific task and user being served.
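
One way to picture this dynamic layer (a sketch under assumed scope names, not our production design) is a task-scoped grant that can never exceed the agent's durable boundary:

```python
from contextlib import contextmanager

# Illustrative task-scoped least privilege: the agent's durable permissions are narrowed
# further to only the scopes a specific task needs, and access is released afterwards.
DURABLE_SCOPES = {"tickets:read", "tickets:update", "kb:read", "assets:read"}

@contextmanager
def task_scope(required: set[str]):
    """Grant only the scopes the task needs, provided they sit inside the durable boundary."""
    missing = required - DURABLE_SCOPES
    if missing:
        raise PermissionError(f"Task requests scopes outside the agent boundary: {missing}")
    granted = required & DURABLE_SCOPES
    try:
        yield granted
    finally:
        granted.clear()   # access does not outlive the task

if __name__ == "__main__":
    with task_scope({"tickets:read", "kb:read"}) as scopes:
        print("Active for this task only:", scopes)
```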

  5. Brightline: Business data in trained AI models is masked or otherwise secured through permissions.

A trained AI model is data. If left unprotected, hackers can back-solve the underlying information by asking multiple questions (model inversion) or use prompt injection to cleverly ask for private data. The safest approach is to exclude or mask private data during model training. When a user genuinely needs to access sensitive information through AI, retrieval-augmented generation (RAG) keeps the data outside the model and retrieves only the slices the user is authorized to see. If the model still must be trained on private data, then it should be subject to the same security and permission controls as direct business data.
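
Here is a deliberately simplified sketch of permission-filtered retrieval; the documents, ACLs, and matching logic are stand-ins for a real retrieval pipeline.

```python
# Illustrative permission-filtered retrieval (RAG): private data stays outside the model,
# and only documents the requesting user is authorized to see are passed to the LLM.
DOCUMENTS = [
    {"id": 1, "text": "Printer troubleshooting guide", "acl": {"helpdesk", "it_admin"}},
    {"id": 2, "text": "Executive payroll summary",      "acl": {"finance"}},
]

def retrieve(query: str, user_roles: set[str]) -> list[dict]:
    """Return only documents the user may see; relevance ranking is elided in this sketch."""
    return [d for d in DOCUMENTS
            if d["acl"] & user_roles and query.lower() in d["text"].lower()]

def answer(query: str, user_roles: set[str]) -> str:
    context = retrieve(query, user_roles)
    if not context:
        return "No authorized sources found for this question."
    # In a real system the context would be sent to the LLM; here we just echo the sources.
    return f"Answering from {len(context)} authorized source(s): " + ", ".join(d["text"] for d in context)

if __name__ == "__main__":
    print(answer("payroll", {"helpdesk"}))   # blocked -- the user lacks the finance role
```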

  6. Brightline: All AI Agent output is screened by guardrails.

While a human knows not to share their password broadly, an AI Agent does not. Permission constraints should already prevent such behavior. But what if a permission was missed? Guardrails add a safety layer to filter or mask sensitive data in AI output and can even reject disallowed prompts. Guardrails rely on natural language interpretation, and thus do not replace permissions which are definitively enforced at the data source and action level.
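
A guardrail of this kind can be as plain as a pattern-based filter on outbound text. The patterns below are illustrative; real guardrails typically combine classifiers, policies, and masking.

```python
import re

# Illustrative output guardrail: a last-line filter that masks obviously sensitive patterns
# before any agent response leaves the system. It complements, not replaces, permissions.
PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_output(text: str) -> str:
    """Mask sensitive matches in agent output; audit logging of the event is elided here."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    print(screen_output("Contact jane@example.com, SSN 123-45-6789."))
```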

  7. Brightline: Exchange of business data with third-party AI models requires strict opt-in and is masked or expungable.

This principle is not new. Any third-party data sharing from a business application has always required opt-in and secure handling. However, business applications are increasingly utilizing third-party hosted AI tools, making this principle even more essential. It is best for private data to be masked or not shared at all. If sharing is required, the third-party AI tool should avoid storing private data beyond the session, or follow recognized standards such as tenant isolation and data-expunge capability.

Operational Monitoring & Control

Can the AI go rogue and take runaway actions I don’t want?

Traditional automation and workflow features already give businesses control through configurability, monitoring, approvals, and pause toggles. Agentic AI must extend these same principles.

  8. Brightline: Every AI Agent is throttleable, enabling phased release, real-time monitoring, and human-in-the-loop control.

Throttling allows a business to expand an AI Agent’s scope gradually — ideally through dialing its IAM permissions up or down as needed. The AI Agent should also be monitorable through a view showing all active instances, with the ability to pause or stop activity when required.

Human-in-the-loop controls ensure that significant AI actions remain visible and confirmable before they occur. In copilots that work alongside users, this often takes the form of a preview to confirm a sensitive action. For autonomous agents, controls should already exist in the downstream actions and workflows they call. Action controls are set through IAM role permissions, which can require certain actions to send an alert and wait for approval. Workflow controls are configured by including approval steps in the workflow itself, as seen in Microsoft Copilot’s Agent Flows. Automation features also include rate limits on volume of actions in a given time, which can be extended to AI.
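
To show how approval gates and rate limits might compose, consider this sketch; the action names and thresholds are assumptions for illustration.

```python
import time

# Illustrative action gate: sensitive actions wait for human approval, and a simple
# rate limit caps how many actions an agent may take in a time window.
SENSITIVE_ACTIONS = {"delete_asset", "bulk_update"}
RATE_LIMIT = 5                 # max actions per window
WINDOW_SECONDS = 60
_action_times: list[float] = []

def request_action(action: str, approved_by: str | None = None) -> str:
    now = time.time()
    _action_times[:] = [t for t in _action_times if now - t < WINDOW_SECONDS]
    if len(_action_times) >= RATE_LIMIT:
        return "rejected: rate limit reached"
    if action in SENSITIVE_ACTIONS and not approved_by:
        return "pending: waiting for human-in-the-loop approval"
    _action_times.append(now)
    return f"executed: {action}" + (f" (approved by {approved_by})" if approved_by else "")

if __name__ == "__main__":
    print(request_action("delete_asset"))                       # pending approval
    print(request_action("delete_asset", approved_by="jane"))   # executed
```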

Accountability

If something goes wrong, can I see why and who’s responsible?

Traditional audit trails record what happened and who did it. AI requires the same discipline, extended to autonomous actions and configuration changes.

  9. Brightline: Every AI Agent is auditable through logs of its actions, reasoning, and human authorizations.

AI Agents should generate logs of every action and why it was taken, including when initiated or authorized by a human. They should also record non-execution changes such as configuration and permission updates. Every autonomous agent should be traceable to the human who authorized it and turned it on, particularly to avoid orphan agents when employees move on. Together, these logs give the business visibility to verify compliance, investigate errors, and tune how AI operates.
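
A minimal shape for such an audit record, with assumed field names, might look like this:

```python
import json
from datetime import datetime, timezone

# Illustrative audit record: every agent action is logged with its reasoning and the
# human who authorized the agent, so actions stay traceable after employees move on.
AUDIT_LOG: list[dict] = []

def log_action(agent_id: str, action: str, reasoning: str, authorized_by: str) -> None:
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "reasoning": reasoning,          # why the agent chose this action
        "authorized_by": authorized_by,  # the human owner or approver
    })

if __name__ == "__main__":
    log_action("zoe-itsm-01", "close_ticket:4821",
               "Matched a known-resolved pattern from 12 similar tickets",
               authorized_by="jane.doe@example.com")
    print(json.dumps(AUDIT_LOG, indent=2))
```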

Reliability & Integrity

Can I trust what AI tells me?

Integrity has always been part of security, ensuring data is accurate and unaltered. In AI, integrity expands to mean that information remains truthful in how it is presented.

  10. Brightline: Every AI Agent is reliable through transparency of confidence levels, reasoning, and model controls.

AI Agents produce results that can range from factual retrieval to probabilistic reasoning. The issue is not the range, but knowing how to rely on the output. Transparency helps users interpret results through clear source citations, reasoning, and confidence levels. Vendors, in turn, should continuously test models and share their bias and drift controls with customers, as exemplified by Microsoft. A one-click way for a user to flag questionable results to the vendor is especially effective for tuning models on real-world data.
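
One possible shape for such a transparent response, including the one-click flag, is sketched below; the field names are assumptions, not a published schema.

```python
from dataclasses import dataclass, field

# Illustrative response envelope: every answer carries its sources and a confidence level,
# and a user can flag questionable output with one call for the vendor to review.
@dataclass
class AgentResponse:
    answer: str
    sources: list[str]
    confidence: float                 # 0.0 - 1.0, as declared by the model pipeline
    flagged: bool = field(default=False)

    def flag(self) -> None:
        """One-click user feedback that routes this response to the vendor for review."""
        self.flagged = True

if __name__ == "__main__":
    resp = AgentResponse(
        answer="Reorder 40 units of Part #A-113 to avoid a stockout next week.",
        sources=["purchase_history_2024.csv", "consumption_rates_Q4"],
        confidence=0.72,
    )
    print(resp)
    resp.flag()   # the user disagrees; flagged responses help tune the model
```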

Collectively, these brightlines fulfill the Agentic AI moral promise, enabling businesses to govern AI Agents through familiar principles. AI Agents cannot exceed user permissions, escape data rules, or act without transparency. For businesses to embrace Agentic AI with confidence, safeguards must be as obvious as locks on a door. Brightlines make that promise real. 

Part 4: Empowering Business Breakthrough with AI

I was captivated by the feedback of our business customers. They love automations, but the jury was still out on the promise of AI. I was intrigued, because in my mind, autonomous AI is the next level of automation. It turns out, my customers nailed it. Automations are configurable, monitorable, interceptable, and traceable. AI did not yet clearly promise the same. But it can and should.

Our prior papers derived a moral code for AI and brightline methods that are easy to see. In this paper, we will apply those brightlines organized by business value. At EZO.IO, we are building the power of AI while giving customers control over it.

1. Business Value: Speed and Productivity

Our first layer of value is speed. Business applications are rich in features, which require looking for, navigating to, and drilling into the right view or action. What if instead you could say, “I want X” and get X from anywhere in the app?

That is what we are doing with our natural language copilot, Zoe. From anywhere, ask any question about the app, and Zoe searches hundreds of knowledge base articles to respond naturally. Ask any question about your data, and Zoe comes back with the right report, insight, and graph. Need to enter a paper invoice into the system? Why type — just take a picture. Want to take an action? Soon you will be able to just type it or say it, and Zoe will bring it up. Zoe also understands context, bringing relevant insights and actions to your fingertips.

Natural language only changes the interface — not what a user is permitted to do. Per our brightline methods, granular user permissions remain enforced at the data source and action level. In addition, third-party LLM calls do not share business data or use it to train external models. With Zoe, a user’s data question is simply parsed through an LLM, while we apply retrieval-augmented generation to fetch only the data the user has permission to see within the customer’s closed EZO.IO account. The emphasis of control is on data and action permissions, with LLM guardrails as an additional safety net.

Once assured of these brightlines, the decision for a business is whether they want the natural language interface for speed and productivity. 

2. Business Value: Intelligent and Proactive Solutions

Business applications have long been effective at organizing work, but not at connecting the dots. The result is that businesses remain largely reactive, acting only once a problem becomes visible. Intelligent solutions shift businesses from reactive to proactive.

At EZO for Construction, we understand that downtime is the kiss of death. But manually analyzing purchase and consumption rates to forecast stockouts is difficult. With AI, our Catalog Optimizer forecasts stockouts, flags waste, and recommends reorder amounts to keep projects flowing. Another costly example is equipment breakdown. To shorten downtime, Zoe instantly fetches guides and checklists from the manufacturer, recommends preventive maintenance schedules, and proactively monitors and learns from meter readings to get ahead of breakdowns.

In IT Service Management (ITSM), similar issues often reappear and get reworked because their patterns go unseen. We are building Zoe into our IT management suite, EZO AssetSonar, to take an incoming ticket, summarize the pattern in similar past tickets, recommend a resolution for the immediate ticket, and determine root causes. Technicians not only avoid rework but also shift to proactive improvement.

As you can see, each Intelligent Solution is an AI model available through our copilot. Access is still controlled by role-based permissions, and another brightline is applied: every AI model is transparent in its reasoning. These models are native to our application and private business data is not used in their training.

Businesses should expect their software vendors to demonstrate similar brightlines for their AI models, including access control, masking or securing data in model training and in third-party tools, and transparency of reasoning. With these safeguards in place, the business decision is about which intelligent solutions to enable and for which users.

3. Business Value: Automating Repetitive Tasks

So far, actions have been user-driven. With AI, users can move faster and with deeper, proactive insight. But for tasks that are repetitive or rules-based, should users even take them on? 

At EZO AssetSonar, we are building an autonomous agent that continuously listens for tickets, updates the knowledge base, and resolves L0 tickets automatically or determines whether a technician is needed. Every action or automation it can call sits behind a permission control, including whether a human-in-the-loop approval or alert is required. The autonomous agent will be monitorable and interceptable, and will have toggles for phased rollout. It will also maintain a complete log of its actions and human authorizations.

Likewise for any autonomous agent application, businesses should require the enforcement of least-privilege access, throttling and human-in-the-loop controls, and complete history logs. After that, the decision for a business is not all-or-none. Rather, it is about what repetitive tasks to automate in gradual phases, as employees in tandem shift to higher-value work.

We began with a question: how do we unleash the power of AI and autonomy, yet still control it? Businesses want the benefits of AI, but 70% cite governance and accountability as top concerns (PwC, 2024). At EZO.IO, we stand with our customers. We will enforce security and governance in one clear framework across the application, and AI will be subject to it while providing powerful, breakthrough benefits.

Part 5: Future Proofing Agentic AI

Moral Code for Agentic AI in Business Applications:

  1. An AI Agent must be boundable through data and action permissions set by humans.
  2. An AI Agent must be controllable through monitoring, gating, and interception by humans.
  3. An AI Agent must be auditable, through logs and explanations that humans can review.
  4. An AI Agent must be reliable, through transparency and delivery as expected by humans.
  5. Humans who control AI must be accountable through permissions and traceability.

We started this series with Isaac Asimov and the movie I, Robot. So I got to wondering — will the Agentic AI Moral Code stand up to the scrutiny of future imagination, where AI goes rogue? Let’s test this against a few science fiction scenarios that endure in our collective memory.

I, Robot

Detective Del Spooner obsessively chases his suspect: did the robot Sonny kill Dr. Alfred Lanning? As the tension mounts, the truth unfolds bit by bit, and remains an enigma till the end. Hold on. Why is there no activity log? With all the sophisticated technology in Sonny, clearly it could write activity logs. Under our moral code, the mystery would never have happened. Every action of Sonny would be recorded, fully transparent and auditable.

2001: A Space Odyssey

The moment still haunts us: HAL 9000, with that calm red eye, politely bars Bowman — “I’m sorry, Dave, I’m afraid I can’t do that.” Why does it chill us? Because HAL is opaque. Under our code, that refusal would come with an explanation of the model and accountability for its confidence level.

Terminator

On launch, Skynet self-preserves by launching nukes as a preemptive move. The horror? No human line of responsibility. Our code says: no system goes live without human governance. Permissions, monitoring, and interceptability are non-negotiable. 

Let’s Get Serious

We’re having fun, but I realize there is more depth to this. A central argument for AI going rogue can be found in Turing’s Halting Problem (1936) and Wolfram’s theory of computational irreducibility (2002): A system can get so complex that you cannot predict its behavior other than to run it. But predicting is different from controlling. There is no reason we cannot configure, monitor, intercept, and trace a system.

When Windows 1.0 was released in 1985, it needed 720 KB of disk space (WinWorldPC) while now Windows 11 requires 64 GB (Microsoft) — that’s 89,000 times larger. Yet over the same period, software architecture methods, testing methods, monitoring, logs, and controls also expanded. Despite growing complexity, actual scrutiny and transparency increased.

If it can be coded, it can be explained. It can be broken into components, layers, processes — each with its own controls. With the moral code and brightlines in place, Agentic AI can be powerfully beneficial and give control to businesses, now and in the future.

Discover more about these stories here: I, Robot (2004), 2001: A Space Odyssey (1968), The Terminator (1984).

Zubair Murtaza
Vice President/CPO of Product Management, Ezo.io
Philadelphia, Global
Zubair Murtaza is a leader in product and business innovation. He is Vice President of Product Management at Ezo.io, disrupting ERP software with SaaS products. Recently he was Vice President of eCommerce for Staples America, an $8 billion annual business, where he transformed eCommerce towards an AI-driven personalized solutions experience. Zubair came to Staples from Microsoft, where he developed and grew six technology businesses, ranging from $10 million to $20 billion in size, including transforming Microsoft toward online services and the Azure Cloud. Zubair holds two Engineering Master’s Degrees and an MBA from the University of Chicago.
