Over the last 25 years, I have seen organizations go through several waves of unmanaged technology.
First, it was hardware that moved faster than the inventory system could keep track of. Then came software installed outside approved channels. Then SaaS arrived, and employees could buy business-critical tools with a credit card before IT even knew they existed.
Each wave created visibility, governance, and compliance challenges. But AI is different.
AI is not just another tool employees are using. It is becoming an operational actor inside the enterprise. It can access data, summarize it, move it, make recommendations, trigger workflows, and, in some cases, take action across systems.
That makes AI the newest unmanaged asset in your environment.
And for many organizations, it is already there.
Shadow AI is not coming. It is already here.
One of the most valuable parts of my work is hearing directly from customers and IT leaders about what they are seeing inside their organizations.
Recently, I heard about a 1,500-employee company that blocked ChatGPT at the DNS level. From a policy perspective, the risk looked contained. The tool was blocked. The dashboard looked clean. Everyone could say the issue had been handled.
But an internal audit later found that employees were still using AI tools on personal devices to support company work.
In another mid-market company, someone in the finance department had started running AI-assisted automation workflows using tools outside IT’s visibility. The employee was smart. The workflow was useful. The problem was that nobody in IT knew it existed. It was sitting on a laptop, quietly becoming part of the business process.
This is the pattern we are seeing.
Shadow AI does not disappear when you block one channel. It redistributes.
It moves to personal devices. It moves into browser extensions. It moves into mobile apps and approved SaaS platforms that quietly add AI features. It moves into automation workflows. Visibility drops, but accountability stays with IT.
That is why this problem cannot be treated as a simple policy issue.
Shadow SaaS was a control problem. Shadow AI is a blast radius problem.
Shadow SaaS was difficult, but the risk was usually bounded by the application. Someone bought a tool without approval. Procurement did not know. Security had not reviewed it. ITAM did not have it in the inventory.
That was a control problem.
Shadow AI is different because it can create a blast radius.
A single prompt can extract sensitive information, cross-reference it with other data, generate a structured output, and feed that output into another process. An AI workflow can connect to a CRM, analyze customer records, draft emails, update fields, or trigger downstream actions.
The issue is no longer just, “Do we know this tool exists?”
The better question is, “What can this AI system cause to happen across our environment?”
That is a very different governance challenge.
AI is a new asset class
Every previous asset class was largely passive.
Hardware runs workloads. Software processes inputs. SaaS centralizes data and workflows. But in most cases, a human still initiates the action.
AI changes that model.
AI can decide, act, and execute on your behalf. It can behave like an operator inside your systems. It can work faster than any human, operate across systems simultaneously, and turn simple inputs into system-wide actions.
This does not mean AI is human. But operationally, the risk profile starts to look less like that of a static tool and more like delegated authority.
That is why I believe we need to treat AI as a new asset class.
More specifically, we need a discipline for AI Asset Governance: the practice of discovering, classifying, controlling, and continuously governing AI systems as both assets and actors across the enterprise.
This is a natural extension of IT asset management.
ITAM already thinks in terms of discovery, ownership, classification, lifecycle, procurement, compliance, cost, and retirement. AI needs that same discipline, but adapted for systems that can act.
The acting asset changes the governance model
Traditional governance was built around predictable systems.
You assign permissions. You define ownership. You track usage. You document access. You monitor compliance.
AI introduces three complications.
First, AI can be non-deterministic. The same input may not always produce the same output. Behavior can shift as the model, context, or connected data changes.
Second, AI can execute. It no longer only recommends; increasingly, it can take action through APIs, connectors, plugins, agents, and automation tools.
Third, AI can operate across systems. Once connected via OAuth, API keys, service accounts, or plugins, an AI system can access multiple enterprise applications.
Permissions tell you what AI can access. They do not fully control what AI can do once connected.
That distinction matters.
For AI, governance has to move from access alone to scope and containment.
Access asks, “What can this system touch?”
Scope asks, “What is this system allowed to do?”
Containment asks, “If something goes wrong, how far can the impact travel?”
A mature AI governance model should be able to say:
- This AI can read, but not write.
- It can summarize, but not send.
- It can recommend, but not execute.
- It can act only below a defined threshold.
- Its access can be revoked quickly if behavior changes.
That is the new governance model.
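One way to make scope and containment concrete is to express them declaratively, so they can be reviewed and revoked like any other control. Here is a minimal sketch in Python; the field names and the example tool are illustrative assumptions, not tied to any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class AIScopePolicy:
    """Declarative scope and containment rules for one AI system.

    Field names are illustrative; map them to whatever your identity
    provider or connector platform can actually enforce.
    """
    name: str
    can_read: bool = True           # may retrieve and summarize data
    can_write: bool = False         # may create or update records
    can_send: bool = False          # may send email or messages outward
    can_execute: bool = False       # may trigger workflows on its own
    action_threshold: float = 0.0   # e.g., max transaction value it may act on
    revocable_grants: list[str] = field(default_factory=list)  # grants to pull if behavior changes

# Example: a copilot that can read and summarize, but never sends or executes.
summarizer = AIScopePolicy(
    name="crm-summarizer",
    revocable_grants=["oauth-grant-crm-readonly"],
)
```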
AI risk operates across three planes
Most organizations talk about AI risk as if it were one problem. It is not.
AI risk operates across three planes.
Plane 1: Interaction
This is the risk most people understand.
An employee pastes sensitive data into a public AI tool. Someone uploads a confidential document. A user types customer information into a prompt.
This is human-to-AI interaction.
The controls here are familiar: training, acceptable-use policies, browser DLP, CASB rules, and restrictions on certain tools.
These controls matter. But they are not enough.
Plane 2: Execution
This is where the risk compounds.
AI tools are increasingly connected to enterprise systems through APIs, OAuth grants, plugins, skills, and workflow automations. The AI is no longer waiting for a user to paste information into a prompt. It may be pulling data directly from systems and acting on it.
When an employee connects an AI tool to Salesforce, HubSpot, Google Workspace, Jira, Slack, NetSuite, or another enterprise system, the risk changes.
Now the question is not only what data users are giving to AI.
The question is what AI can access and do on its own.
This is where identity, permissions, OAuth scopes, API audit trails, and workflow controls become critical.
Plane 3: Inference
This is the plane that many organizations understand the least.
What happens after the AI processes your data? Is the vendor retaining prompts? Are inputs used for model training? Can you opt out? What contractual protections exist? How are outputs generated, stored, or reused?
Most organizations have limited runtime visibility into this plane. That means governance depends heavily on vendor terms, model policies, data handling agreements, and contractual controls.
AI risk always exists across all three planes.
If you are blind in one, you are exposed in all.
Most organizations are solving the most visible problem
The uncomfortable truth is that many organizations are over-invested in Plane 1 and underprepared for Planes 2 and 3.
They focus on prompts, browser usage, and acceptable-use policies. Those are important, but they mainly address the most visible part of the problem.
The bigger risk often sits elsewhere.
- It sits in what AI can access through connectors.
- It sits in what workflows AI can trigger.
- It sits in what model vendors retain.
- It sits in what agents can do once they have credentials.
The most dangerous AI systems are not always the ones that can see your data.
They are the ones that can act on it.
The AI identity explosion is coming
There is another problem that most ITAM programs have yet to fully account for: non-human identity sprawl.
Every AI tool that connects to your environment may create or depend on non-human identities — API keys, OAuth tokens, service accounts, agent credentials, and workflow permissions.
These identities do not behave like employees, but they often have access to the same systems. Sometimes, they have even broader access.
And unlike employees, they do not follow normal lifecycle processes. They do not go through onboarding. They do not always appear in access reviews. They are not always revoked when a project ends. They may not be removed when an employee leaves.
This is where ITAM and IAM begin to converge.
You are no longer just managing tools. You are managing non-human actors with access to your systems.
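As a small illustration of that convergence, one useful control is to cross-reference non-human credentials against the employee roster and project register. This is a hedged sketch; the inventory structures are assumptions standing in for whatever your IAM and ITAM systems export.

```python
from datetime import date

# Hypothetical exports: non-human identities from IAM, rosters from HR/ITAM.
nonhuman_identities = [
    {"id": "svc-ai-crm-sync", "owner": "jdoe", "project": "q3-pipeline", "last_used": date(2025, 1, 4)},
    {"id": "oauth-token-notetaker", "owner": "asmith", "project": None, "last_used": date(2024, 6, 2)},
]
active_employees = {"jdoe"}          # asmith has left the company
active_projects = {"q3-pipeline"}

def lifecycle_gaps(identity: dict) -> list[str]:
    """Flag the gaps described above: departed owner, ended project, stale usage."""
    reasons = []
    if identity["owner"] not in active_employees:
        reasons.append("owner no longer active")
    if identity["project"] not in active_projects:
        reasons.append("no active project")
    if (date.today() - identity["last_used"]).days > 90:
        reasons.append("unused for 90+ days")
    return reasons

for nhi in nonhuman_identities:
    if reasons := lifecycle_gaps(nhi):
        print(f"{nhi['id']}: review ({', '.join(reasons)})")
```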
Copilot versus agent is not a technical distinction. It is a governance distinction.
One of the most important distinctions organizations need to make is the difference between copilots and agents.
A copilot assists a human. It drafts, summarizes, recommends, or answers. The human remains in the loop.
An agent is different. An agent has credentials. It can execute workflows. It can make decisions or take action across systems without human review at every step.
The governance model should scale with authority.
A writing assistant that helps an employee draft a paragraph is one kind of risk. An AI agent that can access CRM data, update records, send emails, or trigger workflows is a very different kind of risk.
The question is not only, “What AI tools are employees using?”
The better question is, “What is each AI system allowed to do?”
Five assumptions organizations need to challenge
Before organizations can govern AI well, they need to challenge a few common assumptions.
1. Blocking reduces risk
Not always.
Blocking can reduce visible usage, but it can also push AI usage into channels IT cannot see: personal devices, mobile apps, unmanaged browsers, and personal accounts.
If blocking is the entire strategy, the organization may feel safer while becoming less informed.
2. AI risk is about data leakage
Data leakage is part of the risk, but it is not the whole risk.
AI risk is also about decision-making and action at scale.
An AI system that reads CRM data and sends emails does not just pose a data leakage risk. It poses an operational risk.
3. If the SaaS app is approved, the AI is approved too
This is a dangerous assumption.
A company may have approved a SaaS platform years ago. But when that vendor adds AI features, the implications for data handling, retention, training, and execution may change.
Approving the platform is not the same as approving every AI capability the vendor adds later.
4. Free versus paid is a pricing decision
It is also a governance decision.
Free-tier AI tools often lack SSO, audit trails, enterprise data agreements, admin controls, and clear retention protections. Paid or enterprise tiers may significantly change the governance posture.
The question is not only, “What does this cost?”
The question is, “What control do we gain?”
5. Discovery is the hardest part
Discovery is hard, but it is not the hardest part.
The harder question is what AI can do once discovered.
Many organizations stop at, “We found shadow AI.”
The next question should be, “What does it have access to, and what actions can it take?”
You cannot block your way out of AI
AI is too useful, too accessible, and too embedded in everyday work for blocking to be the only strategy.
If the official answer is always “no,” employees will find unofficial ways to get the productivity benefits.
The better strategy is to outpace shadow AI.
Provide governed tools before employees go looking for their own. Make approved tools easier to access than rogue alternatives. Put them under SSO. Apply DLP where appropriate. Negotiate the right terms. Give employees a safe, practical path.
Governance should not feel like a wall.
It should feel like the faster, safer path.
That is how the culture begins to shift. Instead of “everyone is using whatever AI tool they can find,” the norm becomes, “we have approved tools, and if you need something else, come to IT first.”
But even that will not solve the whole problem. You can standardize the top layer of AI usage: general-purpose copilots, widely used productivity tools, and common enterprise platforms. The long tail will remain fragmented. There will be niche design tools, coding assistants, research tools, departmental platforms, embedded SaaS AI, and LLM-powered workflows.
You cannot eliminate all shadow AI, but you can prioritize and contain its impact.
That is where the SAFE framework comes in.
The SAFE framework for AI Asset Governance
SAFE is a practical operating model for AI Asset Governance.
It stands for:
- Spot — discover AI across the environment.
- Assess — evaluate risk across multiple planes.
- Fix — remediate based on risk and business value.
- Enforce — sustain governance over time.
SAFE is not a one-time audit. It is a continuous loop. AI tools change, SaaS vendors add AI features, employees adopt new tools, agents gain new capabilities, policies become outdated, and contracts change. The loop never fully closes, and that is the point.
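To underline the loop, here is a skeletal sketch of SAFE as a recurring cycle rather than a one-time audit. The stage functions are placeholders for your own processes, and the weekly cadence is just an example.

```python
import time

def spot() -> list:
    """Discover AI tools, connectors, and agents across the environment."""
    return []  # placeholder: DNS logs, OAuth grants, expense reports, ...

def assess(findings: list) -> list:
    """Evaluate each finding across the interaction, execution, and inference planes."""
    return findings  # placeholder: attach risk and business-value scores

def fix(assessed: list) -> None:
    """Remediate based on risk and business value: block, remediate, monitor, or approve."""

def enforce() -> None:
    """Sustain governance: policy, technical controls, contracts, culture."""

# The loop never fully closes, and that is the point.
while True:
    fix(assess(spot()))
    enforce()
    time.sleep(7 * 24 * 3600)  # e.g., re-run the cycle weekly
```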
Spot: Discover what is already there
Discovery is the starting point, but not every organization will begin at the same maturity level.
Some organizations are still blind. They have no formal visibility into AI usage and no process for asking the right questions.
Others are at surface discovery. They review SSO apps, OAuth grants, expense reports, and known vendor tools.
Some have moved into behavioral detection through DNS logs, CASB policies, browser extension audits, and endpoint signals.
More mature organizations begin looking at execution visibility: API traffic, connector permissions, OAuth scopes, and workflow automations.
The future state is agent awareness: tracking AI agents as non-human identities, reviewing their credentials, monitoring their access patterns, and auditing their actions.
Most organizations today are somewhere between surface discovery and behavioral detection.
The next milestone should be execution visibility.
- What has AI been connected to?
- What scopes does it have?
- What APIs does it call?
- What systems can it act inside?
That is where the real risk becomes visible.
Assess: Evaluate risk and actionability
Once an AI tool is discovered, it should be assessed across several dimensions.
Data exposure: What data can flow into or out of the tool? Is it public, internal, confidential, regulated, customer-related, financial, PII, or PHI?
Retention and training: Does the vendor retain prompts or outputs? Are inputs used for model training? Can the organization opt out? Are there enterprise terms?
Identity posture: Is the tool accessed through SSO or personal accounts? Is MFA enforced? Is there an audit trail? What happens when an employee leaves?
Tier posture: Is the tool free, paid, or enterprise-grade? What controls does each tier provide?
But the most important assessment dimension is actionability: what can the AI actually do?
- Can it only read?
- Can it write?
- Can it update records?
- Can it send emails?
- Can it trigger workflows?
- Can it make recommendations that automatically influence decisions?
Actionability defines blast radius. The more an AI system can do, the stronger the governance needs to be.
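Pulling these dimensions together, an assessment record might look like the sketch below. The fields, tiers, and scoring are illustrative assumptions rather than a standard; the point is that the actionability flags are what set the governance tier.

```python
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    name: str
    data_exposure: str            # "public" | "internal" | "confidential" | "regulated"
    vendor_retains_prompts: bool
    trains_on_inputs: bool
    sso_enforced: bool
    tier: str                     # "free" | "paid" | "enterprise"
    # Actionability flags: these define blast radius.
    can_read: bool
    can_write: bool
    can_send: bool
    can_trigger_workflows: bool

    def blast_radius(self) -> str:
        """Rough tiering: the more the tool can do, the stronger the governance."""
        if self.can_trigger_workflows or self.can_send:
            return "high"    # acts outward across systems on its own
        if self.can_write:
            return "medium"  # changes records; human likely still in the loop
        return "low"         # read and summarize only

notetaker = AIToolAssessment(
    name="meeting-notetaker", data_exposure="internal",
    vendor_retains_prompts=True, trains_on_inputs=False,
    sso_enforced=False, tier="free",
    can_read=True, can_write=False, can_send=True, can_trigger_workflows=False,
)
print(notetaker.name, "blast radius:", notetaker.blast_radius())  # -> high (it can send)
```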
Fix: Right-size the response
Not every AI tool needs to be blocked.
This is important. A binary approve-or-block model will not work for AI.
The better approach is to plot each AI tool across two dimensions: risk and business value.
High-risk, low-value: Block it. Use DNS controls, extension removal, app blocklists, or other restrictions.
High-risk, high-value: Remediate it. Do not automatically block a tool that the business genuinely needs. Upgrade the tier, enforce SSO, reduce OAuth scopes, apply DLP, negotiate data terms, add approval workflows, and define ownership.
Low-risk, low-value: Monitor it. Periodic review and logging may be enough.
Low-risk, high-value: Approve it. Add it to the ITAM inventory, procure the right tier, define ownership, and govern it like any other asset.
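The quadrant logic above reduces to a small decision function, sketched here; the labels and the binary high/low split are assumptions you would calibrate against your own risk and value scoring.

```python
def disposition(risk: str, value: str) -> str:
    """Map a tool's (risk, business value) quadrant to a governance action."""
    quadrants = {
        ("high", "low"):  "block",      # DNS controls, blocklists, extension removal
        ("high", "high"): "remediate",  # upgrade tier, enforce SSO, reduce scopes, DLP
        ("low",  "low"):  "monitor",    # periodic review and logging
        ("low",  "high"): "approve",    # inventory it, procure the right tier, assign an owner
    }
    return quadrants[(risk, value)]

print(disposition("high", "high"))  # -> "remediate"
```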
This is where ITAM becomes strategic.
Governance is not about saying yes to everything or no to everything. It is about prioritized control. The goal is controlled acceleration: enable the AI usage that creates value, and contain the AI usage that creates unacceptable exposure.
Enforce: Make governance sustainable
Enforcement is where many programs fail. They run an audit, produce a report, create a policy, and then the environment changes again.
AI governance has to become an operating model. There are four enforcement layers that matter.
- Policy: Keep it short, specific, and updated frequently. A one-page AI acceptable-use policy that names approved tools and prohibited actions is more useful than a long document nobody reads.
- Technical controls: Convert discovery signals into workflows. If a new AI tool appears in DNS logs, OAuth grants, endpoint activity, or expense reports, route it for review. Do not depend entirely on manual triage; a sketch of this routing follows at the end of this section.
- Contractual governance: Vendors should disclose AI features, data usage, retention terms, and changes to AI capabilities. If an approved vendor adds AI functionality, the organization should know.
- Culture: This is the hardest layer and the most important. Employees should not see AI governance as a restriction. They should see it as enablement.
The message should be simple: come to us first. We will help you get the right tool, with the right terms, faster than you can set it up yourself.
That is how governance becomes sustainable.
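To make the technical-controls layer concrete, here is a minimal sketch of that signal-to-workflow routing. The signal sources, the approved-tool list, and the ticketing function are assumptions standing in for whatever your stack actually provides.

```python
# Hypothetical discovery signals and an assumed ticketing hand-off.
AI_SIGNAL_SOURCES = {"dns_logs", "oauth_grants", "endpoint_activity", "expense_reports"}
KNOWN_APPROVED = {"approved-copilot", "approved-notetaker"}

def open_review_ticket(tool_name: str, source: str) -> None:
    # Placeholder: in practice, call your ITSM or ticketing API here.
    print(f"Review requested: '{tool_name}' observed via {source}")

def route_signal(source: str, tool_name: str) -> None:
    """Route newly observed AI tools for review instead of manual triage."""
    if source not in AI_SIGNAL_SOURCES:
        return  # not a discovery signal we track
    if tool_name in KNOWN_APPROVED:
        return  # already governed; nothing to do
    open_review_ticket(tool_name, source)

route_signal("oauth_grants", "unknown-ai-scheduler")
```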
Who should own AI Asset Governance?
AI governance will not remain unowned.
Security will have a point of view. Engineering will have a point of view. Legal, procurement, finance, and compliance will all have legitimate roles.
But if every function governs a slice of AI, nobody governs AI as a lifecycle.
That is why ITAM plays a critical role.
ITAM already owns many of the disciplines this problem requires: discovery, classification, ownership, procurement visibility, vendor accountability, lifecycle governance, compliance, and retirement.
AI is an asset. It is an acting asset, but it is still an asset.
This does not mean ITAM should govern AI alone. It means ITAM should help anchor the operating model so ownership does not fragment across the organization.
The playbook needs to evolve. The org chart does not necessarily need to be reinvented.
The window is closing
AI adoption has a point of no return.
It arrives when workflows depend on AI outputs, when decisions are shaped by AI analysis, when agents are embedded into operational processes, and when employees can no longer imagine doing their work without these tools.
After that point, you cannot remove AI from the environment.
You can only govern it.
Many organizations are closer to this threshold than they realize.
The practical place to start is not a massive transformation program. It is visibility.
Run an OAuth consent audit across your identity provider. Export third-party app grants and flag AI-connected tools.
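As one concrete way to start, here is a hedged sketch of that audit against Microsoft Graph (Entra ID as the example identity provider). It assumes you already hold an access token authorized to read grants, ignores result paging for brevity, and uses a deliberately naive keyword list to flag AI-connected apps.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # acquire via your usual auth flow; placeholder here
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
AI_KEYWORDS = ("ai", "gpt", "copilot", "assistant", "llm")  # naive illustration

def app_display_name(service_principal_id: str) -> str:
    """Resolve the client app's display name from its service principal."""
    r = requests.get(f"{GRAPH}/servicePrincipals/{service_principal_id}", headers=HEADERS)
    r.raise_for_status()
    return r.json().get("displayName", "")

# Delegated permission grants consented in the tenant (paging omitted).
resp = requests.get(f"{GRAPH}/oauth2PermissionGrants", headers=HEADERS)
resp.raise_for_status()
for grant in resp.json().get("value", []):
    name = app_display_name(grant["clientId"])
    if any(k in name.lower() for k in AI_KEYWORDS):
        # 'scope' is the space-separated list of delegated permissions granted.
        print(f"{name}: scopes = {grant.get('scope', '').strip()}")
```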
Review expense reports and credit card statements for AI subscriptions purchased outside formal procurement.
Identify the three highest-risk AI tools or workflows and assess them across data exposure, retention, identity posture, tier posture, and actionability.
That will give you a defensible starting point. Whatever tooling you use to get there, the discipline matters now.
AI is not entering your organization. It already has.
Every unmanaged AI system is not just a tool. It is an actor with access, influence, and potential authority inside systems your organization is accountable for.
This is the first asset class that can outpace your governance.
If you do not define how AI is managed, it will define how your systems behave.