
92% of Enterprises Lack Visibility Into AI Identities — Here Is How to Fix It
Saviynt's 2026 CISO AI Risk Report finds that 92% of security leaders lack full visibility into the AI identities in their environments and 95% doubt they could detect misuse. The research maps the governance gaps and offers a practical framework to close them.
A study published on April 21, 2026, by Cybersecurity Insiders, in collaboration with identity security firm Saviynt, documents the scale of AI identity governance gaps across large enterprises and delivers a practical action framework for closing them. The 2026 CISO AI Risk Report surveyed 235 CISOs, CIOs, and senior security leaders in the United States and United Kingdom at enterprises with 5,000+ employees, spanning technology, financial services, healthcare, and manufacturing.
The headline figures are stark: 92% of respondents lack full visibility into the AI identities operating within their environments, and 95% doubt their ability to detect or contain misuse if it occurred. These are not theoretical gaps: the report found that 71% of CISOs confirm AI tools already have access to core enterprise systems, including Salesforce and SAP, while only 16% report that this access is effectively governed. Additionally, 75% of organizations have already identified unsanctioned AI tools running within their environments.
Understanding the AI Identity Problem
The challenge is structural. Traditional identity and access management (IAM) was built for human employees: users authenticate, receive provisioned access, and have that access revoked when they leave. AI agents are a fundamentally different kind of identity.
An AI agent might be provisioned with service account credentials at deployment and then run autonomously for months — accessing APIs, querying databases, transferring data across systems — without generating the authentication events that traditional IAM monitors. When security teams audit their identity inventory, AI agents are frequently invisible, folded into a service account count without distinction.
The Saviynt report identifies this as the non-human identity problem: AI agents, automated workflows, and machine accounts now account for a significant and growing share of identity activity inside enterprise systems, yet the governance models built for human users were never designed to accommodate them.
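To make the mechanism concrete, here is a minimal sketch in Python (every record field, identifier, and threshold is an illustrative assumption, not data from the report) of why a lifecycle check keyed to interactive logins says nothing about AI agents: they authenticate with static credentials, never log in interactively, and are indistinguishable from any other service account in a typical inventory.

```python
# Minimal sketch: a human-centric lifecycle check silently skips AI agents.
# All record fields, identifiers, and thresholds are illustrative assumptions.
from datetime import datetime, timedelta

identities = [
    {"id": "jdoe", "type": "human", "last_interactive_login": datetime(2025, 11, 3)},
    {"id": "svc-crm-sync", "type": "service_account", "last_interactive_login": None},
    # An AI agent provisioned with service-account credentials looks identical
    # to any other machine account; there is no "ai_agent" type to filter on.
    {"id": "svc-sales-agent", "type": "service_account", "last_interactive_login": None},
]

TODAY = datetime(2026, 4, 21)
STALE = timedelta(days=90)

for ident in identities:
    last = ident["last_interactive_login"]
    if last is None:
        continue  # machine identities never log in interactively, so the check skips them
    if TODAY - last > STALE:
        print(f"flag for review: {ident['id']}")  # only human accounts are ever flagged
```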
The Three Areas Where Governance Is Failing
Discovery and inventory: Before you can govern AI identities, you must know they exist. The finding that 75% of organizations have unsanctioned AI tools running inside their environments is, at bottom, a discovery failure: security teams cannot audit access they do not know exists.
Access governance: Even for known AI agents, only 16% of CISOs report effective governance despite 71% acknowledging access to core systems. Broad, permanent access grants — the path of least resistance at deployment — create the same over-provisioning problem that human IAM spent two decades solving. AI agents are inheriting that same bad habit.
Incident detection and response: That 95% of leaders doubt their ability to detect AI identity misuse reflects a gap in behavioral monitoring. Traditional user behavior analytics (UBA) tools generate baselines from human activity patterns; they do not inherently flag abnormal behavior from AI agents because normal AI agent behavior has never been defined for them.
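As a hedged illustration of the missing piece (the metric, thresholds, and data shapes below are assumptions for the sketch, not prescriptions from the report), even a simple per-agent baseline on API call volume would surface the deviations the report describes:

```python
# Hedged sketch: per-agent behavioral baseline on API call volume.
# Metric names, thresholds, and data shapes are illustrative assumptions.
from statistics import mean, stdev

# Hourly API call counts observed per agent over a training window.
history = {
    "svc-sales-agent": [120, 115, 130, 118, 125, 122, 119, 127],
    "svc-hr-bot":      [40, 38, 42, 41, 39, 43, 40, 38],
}

# Counts observed in the most recent hour.
current = {"svc-sales-agent": 124, "svc-hr-bot": 310}

SIGMAS = 3  # alert threshold: N standard deviations above baseline

for agent, counts in history.items():
    mu, sigma = mean(counts), stdev(counts)
    observed = current[agent]
    if observed > mu + SIGMAS * sigma:
        # In practice this would raise a SIEM alert, not print.
        print(f"ALERT {agent}: {observed} calls/hr vs baseline {mu:.0f} +/- {sigma:.0f}")
```

Real deployments would feed these baselines from SIEM telemetry and track multiple signals per agent, but the principle is the same: normal must be defined per agent before abnormal can be detected.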
The Five-Step Governance Framework
The Saviynt report translates its findings into a practical action plan that security teams can apply now:
1. Build an AI identity inventory: Create a dedicated registry for AI agents and automated systems, distinct from generic service account lists. Each entry should include the agent's purpose, access scope, owner, and deployment date (a minimal registry sketch follows this list).
2. Apply least-privilege from day one: AI agents should be provisioned with the minimum access required for their specific function, not broad access for convenience. Task-scoped permissions that expire on completion are the governance goal (see the grant-issuance sketch after this list).
3. Define behavioral baselines for AI agents: Work with SIEM and UBA vendors to establish what normal looks like for each AI agent type. Deviations from baseline — unexpected data access, off-hours activity, anomalous API call volumes — should generate alerts just as they would for human accounts.
4. Assign human owners to every AI agent: Every AI agent deployed in production should have a named human accountable for its behavior and access scope. Named accountability closes the governance gap that anonymous service accounts create.
5. Update incident response playbooks: Add AI-specific scenarios to existing IR procedures — what to do when an AI agent accesses unauthorized data, exhibits anomalous behavior, or shows signs of manipulation via prompt injection or tool poisoning.
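To ground steps 1 and 4, here is a minimal registry sketch in Python (the field names and the orphan check are illustrative assumptions, not a schema from the report): each agent carries its purpose, access scope, owner, and deployment date, and any agent without a named owner surfaces immediately.

```python
# Hedged sketch of a dedicated AI identity registry (steps 1 and 4).
# Field names and the ownership check are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIAgentRecord:
    agent_id: str            # distinct from generic service account IDs
    purpose: str             # what the agent is for
    access_scope: list[str]  # systems/APIs it may touch
    owner: str | None        # named human accountable for the agent
    deployed_on: date

registry: list[AIAgentRecord] = [
    AIAgentRecord("sales-summarizer", "summarize CRM opportunities",
                  ["salesforce:read"], "a.chen", date(2026, 1, 12)),
    AIAgentRecord("invoice-agent", "reconcile invoices",
                  ["sap:read", "sap:write"], None, date(2026, 3, 2)),
]

# Named accountability check: every production agent needs a human owner.
orphans = [r.agent_id for r in registry if r.owner is None]
if orphans:
    print(f"governance gap: agents without owners: {orphans}")
```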
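And for step 2, a hedged sketch of task-scoped, expiring grants (the grant model, scope strings, and TTL are assumptions for illustration): access is issued per task with a short lifetime, so permissions die with the work rather than persisting indefinitely.

```python
# Hedged sketch of task-scoped, expiring access grants (step 2).
# The grant model, scope strings, and TTL are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Grant:
    agent_id: str
    scope: str            # a single permission, not a broad role
    expires_at: datetime  # access dies with the task, not with the agent

def issue_task_grant(agent_id: str, scope: str, ttl_minutes: int = 30) -> Grant:
    """Issue the minimum scope needed for one task, with a short expiry."""
    return Grant(agent_id, scope,
                 datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes))

def is_authorized(grant: Grant, scope: str) -> bool:
    """Deny anything outside the granted scope or past its expiry."""
    return grant.scope == scope and datetime.now(timezone.utc) < grant.expires_at

grant = issue_task_grant("sales-summarizer", "salesforce:read")
assert is_authorized(grant, "salesforce:read")
assert not is_authorized(grant, "salesforce:write")  # least privilege holds
```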
The 2026 CISO AI Risk Report is available in full through Saviynt and Cybersecurity Insiders. For security teams navigating AI deployment at enterprise scale, the research provides both the evidence base and the practical roadmap to turn an ungoverned risk into a managed asset.
Sources: GlobeNewswire / Cybersecurity Insiders (April 21, 2026), Saviynt CISO AI Risk Report (April 2026), HackRead (April 21, 2026), NextBigFuture (April 21, 2026), Tech Startups (April 21, 2026)
