
The B2B Agentic AI Manifesto: Part 3, The Brightline Methods


Before we start exploring part 3 of the series, please refer to parts 1 (The Moral Problem) and 2 (The Moral Code).

Moral Code for Agentic AI in Business Applications:

  1. An AI Agent must be boundable through data and action permissions set by humans.
  2. An AI Agent must be controllable through monitoring, gating, and interception by humans.
  3. An AI Agent must be auditable through logs and explanations that humans can review.
  4. An AI Agent must be reliable, transparent, and deliver as expected by humans.
  5. Humans who control AI must be accountable through permissions and traceability.

Inspired by Isaac Asimov, our prior paper proposed a moral code for Agentic AI—a promise that should be made by business application providers. My intent is to crystallize a security moral code and its brightline methods. The moral promise must be intuitive, while the brightlines must be easy to see and validate.

Emerging AI security frameworks are often complex and published separately from the application’s standard security framework, making it hard for business customers to intuitively feel secure. The good news is that the objectives of security have not changed. However, their methods need to cover new AI cases. Business customers do not want separate security worlds. We will augment one framework, and AI will be subject to it.

Identity & access management (IAM)

What is the role of the AI Agent, and can it be controlled?

The promise of IAM is simple: Only the right people get into the application through authentication, and each person is only authorized for data and actions within their role permissions. AI should follow the same principle.

1. Brightline: Every Autonomous AI Agent is given an identity and is bound by role permissions.

    Since an AI Agent behaves partly like a person, it must be assigned an identity and role, and thereby be bound by data and action permissions. There should also be a human owner for every AI Agent identity. This gives the business control over what each individual Agent can do, and provides visibility to manage the sprawl of Agents over time.
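
As a minimal sketch of this brightline, an agent's identity, human owner, and role permissions can be modeled as a single record. The names and the string-based permission scheme below are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """An AI Agent's identity: a human owner and a permission-bound role."""
    agent_id: str
    owner: str            # the accountable human behind this agent
    role: str
    permissions: frozenset = field(default_factory=frozenset)

    def can(self, action: str) -> bool:
        """An action is allowed only if its permission is in the role."""
        return action in self.permissions

# A hypothetical ticket-triage agent owned by a named employee
triage_agent = AgentIdentity(
    agent_id="agent-triage-01",
    owner="jane.doe",
    role="support-triage",
    permissions=frozenset({"read:tickets", "update:ticket-status"}),
)
```

Because every agent carries an owner field, an inventory view over these records is enough to spot orphaned agents and manage sprawl.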

2. Brightline: An AI Agent cannot exceed the permissions of its owner and its users.

    During setup, the AI Agent cannot be given more access than the owner of the Agent. When acting on behalf of a user during runtime, the AI Agent should be further restricted by that user’s permissions. If an employee attempts an unauthorized action through prompt injection, the action will still be rejected because the permission is enforced at the data source and action level.

3. Brightline: AI Agent permission boundaries are enforced across all integrations.

    AI Agents may call APIs directly, multiplying their reach across connected systems. Within the business application, permissions should be defined for each API to specify which calls for data or action the Agent (and users of the Agent) can make. Looking ahead, industry standards are evolving toward portable authorization, where delegated permissions travel between applications (OAuth 2.0) or are managed through centralized non-human identities (workload identities, Okta agent identity).

Data security

Will AI expose or misuse my sensitive data, and do it at scale?

Data security already protects information through encryption, permissions, tenant isolation, and recovery. However, AI adds new risks (PwC, 2024) because its reach spans multiple systems, its models contain embedded data, and it lacks human judgment.

4. Brightline: AI Agents are restricted to least privilege access for each task.

    The principle of least privilege grants data and action access only to the scope of the task. After all, an AI Agent troubleshooting a customer issue should not also dip into payroll records. Through the first three brightlines, an AI Agent already has durable permission boundaries. Now we add a dynamic layer of control that limits access according to the specific task and user being served.
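
One way to sketch that dynamic layer is a short-lived grant that intersects the agent's durable permissions with what the task at hand actually needs. The sixty-second time-to-live is an assumed illustration, not a recommendation.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopedGrant:
    """A short-lived, task-scoped grant: least privilege with an expiry."""
    permissions: frozenset
    expires_at: float

    def allows(self, action: str) -> bool:
        return action in self.permissions and time.monotonic() < self.expires_at

def grant_for_task(agent_perms: frozenset, task_needs: frozenset,
                   ttl_seconds: float = 60.0) -> ScopedGrant:
    # Never more than the agent holds, never more than the task needs.
    return ScopedGrant(frozenset(agent_perms) & frozenset(task_needs),
                       time.monotonic() + ttl_seconds)

# A troubleshooting task gets ticket access but never payroll
grant = grant_for_task(
    agent_perms=frozenset({"read:tickets", "read:payroll"}),
    task_needs=frozenset({"read:tickets"}),
)
```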

5. Brightline: Business data in trained AI models is masked or otherwise secured through permissions.

    A trained AI model is data. If left unprotected, hackers can back-solve the underlying information by asking multiple questions (model inversion) or use prompt injection to cleverly ask for private data. The safest approach is to exclude or mask private data during model training. When a user genuinely needs to access sensitive information through AI, retrieval-augmented generation (RAG) keeps the data outside the model and retrieves only the slices the user is authorized to see. If the model still must be trained on private data, then it should be subject to the same security and permission controls as direct business data.
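
The permission-filtered retrieval step of RAG can be sketched as follows. The naive keyword ranking and the corpus layout are illustrative stand-ins for a real vector search; the point is that the filter runs before the model sees anything.

```python
def retrieve(query: str, corpus: list, user_perms: set, k: int = 3) -> list:
    """Filter by the user's permissions BEFORE ranking, so the model
    only ever sees document slices the user is authorized to read."""
    visible = [d for d in corpus if d["required_perm"] in user_perms]
    terms = set(query.lower().split())
    ranked = sorted(
        visible,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return [d["text"] for d in ranked[:k]]

corpus = [
    {"text": "laptop warranty expires in June", "required_perm": "read:assets"},
    {"text": "employee salary bands for 2024",  "required_perm": "read:payroll"},
]
# A user without payroll access can never retrieve the salary document
hits = retrieve("when does the laptop warranty expire", corpus, {"read:assets"})
```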

6. Brightline: All AI Agent output is screened by guardrails.

    While a human knows not to share their password broadly, an AI Agent does not. Permission constraints should already prevent such behavior. But what if a permission was missed? Guardrails add a safety layer to filter or mask sensitive data in AI output and can even reject disallowed prompts. Guardrails rely on natural language interpretation, and thus do not replace permissions, which are definitively enforced at the data source and action level.
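
A last-line output guardrail can be as simple as pattern-based masking. The two patterns below (a US SSN shape and a card-like digit run) are illustrative only; a production filter would be far broader and likely model-assisted.

```python
import re

SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),
]

def screen_output(text: str) -> str:
    """Mask sensitive patterns in agent output before it leaves the app.

    This is a safety net on top of permissions, not a replacement for
    enforcement at the data source and action level.
    """
    for pattern, mask in SENSITIVE_PATTERNS:
        text = pattern.sub(mask, text)
    return text

safe = screen_output("The customer's SSN is 123-45-6789.")
```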

7. Brightline: Exchange of business data with third-party AI models requires strict opt-in and is masked or expunged.

    This principle is not new. Any third-party data sharing from a business application has always required opt-in and secure handling. However, business applications are increasingly utilizing third-party hosted AI tools, making this principle even more essential. It is best for private data to be masked or not shared at all. If sharing is required, the third-party AI tool should avoid storing private data beyond the session or follow recognized standards such as tenant isolation and data-expunge capability.

Operational monitoring & control

Can the AI go rogue and take runaway actions I don’t want?

Traditional automation and workflow features already give businesses control through configurability, monitoring, approvals, and pause toggles. Agentic AI must extend these same principles.

8. Brightline: Every AI Agent is throttleable, enabling phased release, real-time monitoring, and human-in-the-loop control.

    Throttling allows a business to expand an AI Agent’s scope gradually, ideally through dialing its IAM permissions up or down as needed. The AI Agent should also be monitorable through a view showing all active instances, with the ability to pause or stop activity when required.

    Human-in-the-loop controls ensure that significant AI actions remain visible and confirmable before they occur. In copilots that work alongside users, this often takes the form of a preview to confirm a sensitive action. For autonomous agents, controls should already exist in the downstream actions and workflows they call. Action controls are set through IAM role permissions, which can require certain actions to send an alert and wait for approval. Workflow controls are configured by including approval steps in the workflow itself, as seen in Microsoft Copilot’s Agent Flows. Automation features also include rate limits on the volume of actions in a given time, which can be extended to AI.

Accountability

If something goes wrong, can I see why and who’s responsible?

Traditional audit trails record what happened and who did it. AI requires the same discipline, extended to autonomous actions and configuration changes.

9. Brightline: Every AI Agent is auditable through logs of its actions, reasoning, and human authorizations.

    AI Agents should generate logs of every action and why it was taken, including when initiated or authorized by a human. They should also record non-execution changes, such as configuration and permission updates. Every autonomous agent should be traceable to the human who authorized it and turned it on, particularly to avoid orphan agents when employees move on. Together, these logs give the business visibility to verify compliance, investigate errors, and tune how AI operates.
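
One hedged sketch of such a log record, with JSON serialization so each entry can ship to an append-only store. The field names are illustrative, not a prescribed schema.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AuditEntry:
    """One audit record: what the agent did, why, and which human
    authorized it."""
    agent_id: str
    action: str
    reasoning: str
    authorized_by: str      # the accountable human, never blank
    timestamp: float

def log_action(log: list, agent_id: str, action: str,
               reasoning: str, authorized_by: str) -> str:
    """Append an entry and return its JSON line for the immutable store."""
    entry = AuditEntry(agent_id, action, reasoning, authorized_by, time.time())
    log.append(entry)
    return json.dumps(asdict(entry))

audit_log: list = []
line = log_action(audit_log, "agent-triage-01", "update:ticket-status",
                  "ticket duplicates an existing incident", "jane.doe")
```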

Reliability & integrity

Can I trust what AI tells me?

Integrity has always been part of security, ensuring data is accurate and unaltered. In AI, integrity expands to mean that information remains truthful in how it is presented.

10. Brightline: Every AI Agent is reliable through transparency of confidence levels, reasoning, and model controls.

    AI Agents produce results that can range from factual retrieval to probabilistic reasoning. The issue is not the range, but knowing how to rely on the output. Transparency helps users interpret results through clear source citations, reasoning, and confidence levels. Vendors, in turn, should continuously test models and share their bias and drift controls with customers, as exemplified by Microsoft. A one-click way for a user to flag questionable results to the vendor is especially effective for tuning models on real-world data.
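
A transparent result can carry its confidence and citations alongside the text. This dataclass and its rendering format are illustrative assumptions, not any vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAnswer:
    """An answer users can calibrate: text plus confidence and sources."""
    text: str
    confidence: float               # 0.0-1.0, reported by the model
    citations: list = field(default_factory=list)

    def render(self) -> str:
        sources = "; ".join(self.citations) if self.citations else "no sources"
        return f"{self.text} (confidence {self.confidence:.0%}; sources: {sources})"

answer = AgentAnswer(
    text="3 software licenses expire in May",
    confidence=0.92,
    citations=["license-contracts.csv"],
)
```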

Collectively, these brightlines fulfill the Agentic AI moral promise, enabling businesses to govern AI Agents through familiar principles. AI Agents cannot exceed user permissions, escape data rules, or act without transparency. For businesses to embrace Agentic AI with confidence, safeguards must be as obvious as locks on a door. Brightlines make that promise real.

Zubair Murtaza
Vice President/CPO of Product Management, Ezo.io
Philadelphia, Global
Zubair Murtaza is a leader in product and business innovation. He is Vice President of Product Management at Ezo.io, disrupting ERP software with SaaS products. Recently he was Vice President of eCommerce for Staples America, an $8 billion annual business, where he transformed eCommerce towards an AI-driven personalized solutions experience. Zubair came to Staples from Microsoft, where he developed and grew six technology businesses, ranging from $10 million to $20 billion in size, including transforming Microsoft toward online services and the Azure Cloud. Zubair holds two engineering master’s degrees and an MBA from the University of Chicago.

Frequently Asked Questions

• What does “agentic AI” mean in a business context?

  Agentic AI refers to artificial intelligence systems that perform tasks autonomously but within human-defined boundaries. These systems have the capacity to act on behalf of humans, but their behavior is governed by strict security, permission, and transparency protocols to ensure accountability and compliance. This approach helps businesses trust AI actions while maintaining full control over sensitive data.

• How can my organization keep AI actions within our security policies?

  To ensure AI actions align with security policies, businesses can implement role-based permissions, real-time monitoring, and audit logs. Solutions like AssetSonar provide transparent permissions and monitoring features, which ensure that AI agents operate within pre-approved actions, preventing them from exceeding their limits.

• What controls should be in place so AI doesn’t access data it shouldn’t?

  The most effective control is the principle of least privilege, where AI agents are only granted access to the data necessary for their task. This can be enforced through dynamic access controls and integrated permission systems, ensuring AI cannot access or misuse sensitive information without explicit authorization.

• How do you assign and manage identities for autonomous AI systems?

  Autonomous AI systems need clear identities and role assignments, similar to human users. By using Identity and Access Management (IAM) solutions, businesses can bind AI agents to specific roles and permissions, ensuring that each AI agent acts within its defined capabilities and is accountable to human oversight.

• Can AI systems be restricted so they can’t exceed human user permissions?

  Yes, AI systems can be restricted by linking them to human user roles and permissions. With integrated IAM controls, businesses can enforce rules that prevent AI agents from exceeding the permissions of the users they represent, ensuring compliance and minimizing risk of unauthorized actions.

• What are practical ways to prevent AI from misusing sensitive company data?

  Businesses can prevent AI from misusing data by enforcing data masking, encryption, and guardrails that filter out sensitive information in outputs. Advanced AI systems, like AssetSonar, integrate these techniques, ensuring that sensitive data is either excluded from training models or securely masked during AI interactions.

• How do you stop AI tools from leaking private information through responses?

  To prevent AI tools from leaking private data, businesses can implement output filtering mechanisms and natural language guardrails. These measures ensure that even if an AI model unintentionally accesses sensitive data, its output will be screened to avoid exposure. It’s also essential to use retrieval-augmented generation (RAG) to keep sensitive data outside the model itself.

• Is it possible to trace which human authorized a specific AI action?

  Yes, each AI agent’s actions can be logged with clear attribution to the human who authorized or configured it. With solutions like AssetSonar, every autonomous action performed by the AI is auditable, allowing businesses to track who initiated or approved specific tasks, providing transparency and accountability.

• How can businesses monitor AI activity in real time?

  Businesses can monitor AI activity through real-time dashboards that display AI operations and performance. Throttling mechanisms allow businesses to control the scope of AI activities dynamically. Additionally, integrated monitoring tools ensure that AI operations can be paused or adjusted as needed, offering full control over AI-driven processes.

• What methods help prevent rogue or runaway AI behavior?

  Preventing rogue AI behavior involves implementing real-time monitoring, throttling capabilities, and human-in-the-loop (HITL) controls. These methods allow businesses to step in and halt operations if an AI agent begins performing actions outside its permitted scope, ensuring all actions remain within organizational guidelines.

• How do companies make AI output more transparent and trustworthy?

  Transparency in AI output is achieved by providing clear confidence scores, reasoning, and source citations for each AI-driven action. AssetSonar enables businesses to track AI results, giving users the transparency needed to verify that decisions are made based on accurate and reliable data.

• What do teams do to make sure third‑party AI services don’t store confidential data?

  Companies can ensure third-party AI services don’t store confidential data by enforcing opt-in consent for data sharing and ensuring that all data exchanged is either masked or expunged after the session. Using secure integrations and tenant isolation can further protect sensitive information from being exposed.

• How should audit logs for autonomous AI be structured?

  Audit logs for AI systems should be detailed, capturing every action the AI performs, the reasoning behind its decisions, and any human authorizations involved. These logs should be immutable, easily accessible, and linked to the specific AI agent and its user identity for accountability.

• What role does human oversight play in AI workflows?

  Human oversight remains essential in AI workflows to ensure compliance and mitigate risks. Human-in-the-loop controls enable critical AI actions to be reviewed and approved before execution. Additionally, approval workflows ensure that sensitive actions are visible and confirmable, minimizing the risk of errors or unintended consequences.

• Can AI systems be throttled or paused if they start acting unpredictably?

  Yes, AI systems can be throttled or paused through integrated control mechanisms. Businesses can gradually expand an AI system’s capabilities while monitoring its behavior. If the AI begins acting unpredictably, businesses can halt operations instantly, offering a safeguard to avoid potential harm or inefficiencies.
