Well, 2025 has been an extremely interesting year, marked by what can only be described as a massive explosion of agentic AI. And while agentic AI holds enormous potential to address deficiencies the industry has struggled with for years, with great power comes great responsibility.
For the optimists among us, agentic AI represents the "silver bullet" for one of cybersecurity's most persistent failures: managing user entitlements and permissions and moving toward Zero Trust. The sheer volume of permissions in modern cloud environments has made manual governance nearly impossible, leading to a sprawling attack surface.
According to OWASP, "Broken Access Control" has consistently ranked as the number one web application security risk, with a staggering 94% of tested applications exhibiting some form of access control failure[1]. Humans simply cannot keep up with the velocity of role changes and permission requests. An autonomous AI agent, however, never sleeps. It has the potential to tirelessly monitor those applications, identify toxic combinations of permissions, and revoke unused access in real time. In this scenario, agentic AI becomes the ultimate enforcer of "Least Privilege," closing gaps faster than any human admin could ever hope to. Fortunately, this year we saw several vendors announce very promising technology to do just that.
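To make the "tireless enforcer" idea concrete, here is a minimal sketch of the kind of scheduled policy check such an agent might run. The Entitlement structure, the TOXIC_PAIRS list, and the 90-day staleness threshold are illustrative assumptions for this article, not any particular vendor's product or API.

```python
# Illustrative sketch only: a simplified least-privilege review an agent might run.
# The data model, toxic-pair list, and 90-day threshold are hypothetical assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Entitlement:
    user: str
    permission: str
    last_used: datetime | None  # None means the permission has never been exercised

# A "toxic combination": permissions that are individually benign but dangerous
# when held together by a single identity.
TOXIC_PAIRS = {
    ("create_payment", "approve_payment"),
    ("modify_iam_policy", "assume_any_role"),
}

UNUSED_THRESHOLD = timedelta(days=90)

def review_entitlements(entitlements: list[Entitlement], now: datetime) -> dict:
    """Return entitlements an agent would flag for revocation or escalation."""
    findings = {"toxic_combinations": [], "stale_access": []}

    # Group permissions per user to spot toxic combinations.
    by_user: dict[str, set[str]] = {}
    for e in entitlements:
        by_user.setdefault(e.user, set()).add(e.permission)

    for user, perms in by_user.items():
        for a, b in TOXIC_PAIRS:
            if a in perms and b in perms:
                findings["toxic_combinations"].append((user, a, b))

    # Flag access that has not been exercised within the threshold.
    for e in entitlements:
        if e.last_used is None or now - e.last_used > UNUSED_THRESHOLD:
            findings["stale_access"].append((e.user, e.permission))

    return findings
```

In a real deployment the "revoke" step would feed an IGA workflow rather than act unilaterally, but even this toy loop shows why machines scale where manual access reviews do not.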
However, the pessimists, or perhaps the realists, see a darker potential. If we empower AI agents everywhere, as the industry appears poised to do, we are effectively creating a new class of "super-users." These agents become high-value targets for threat actors. If an attacker can compromise or "trick" an agent, they no longer need to steal a credential; they can simply instruct the agent to grant them the keys to the kingdom. Will we start seeing requirements that agents complete anti-phishing training just like human employees?
The stakes for getting this wrong are astronomically high. We are already bleeding revenue from digital crime at an alarming rate. The FBI’s Internet Crime Complaint Center (IC3) reported that potential losses from cybercrime surged to over $16 billion in 2024, a massive 33% increase from the previous year[2].
This $16 billion figure was largely driven by traditional attack vectors. Now, imagine the financial devastation if threat actors begin exploiting autonomous agents. We aren't just risking a data breach; we are risking the automation of the breach itself. If an AI agent is manipulated into facilitating "Broken Access Control" at machine speed, that 33% year-over-year increase in losses could look like a rounding error by the time we reach 2027.
As we move deeper into 2026, the deployment of Agentic AI cannot be a "set it and forget it" strategy. Organizations must establish rigid governance frameworks for the agents themselves. We need to verify the verifiers. Unless we treat these AI agents as non-human identities requiring the same strict monitoring and Zero Trust verification as any other user, they risk becoming the most dangerous insider threat we have ever faced.
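What "verifying the verifiers" might look like in practice is easier to see with a small example. The sketch below treats an agent as a non-human identity whose credentials are short-lived and narrowly scoped, and whose every action is checked and audit-logged. The class name, policy format, and 15-minute token lifetime are assumptions made for illustration, not an established standard.

```python
# Minimal sketch: an agent as a non-human identity under per-request Zero Trust checks.
# Names, scopes, and token lifetime here are illustrative assumptions.
import logging
from datetime import datetime, timedelta, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-governance")

class AgentIdentity:
    def __init__(self, agent_id: str, allowed_actions: set[str], token_ttl_minutes: int = 15):
        self.agent_id = agent_id
        self.allowed_actions = allowed_actions  # narrowly scoped, no wildcards
        self.token_expires = datetime.now(timezone.utc) + timedelta(minutes=token_ttl_minutes)

    def authorize(self, action: str, resource: str) -> bool:
        """Verify every request: no standing trust, even for the agent itself."""
        now = datetime.now(timezone.utc)
        if now >= self.token_expires:
            log.warning("%s: credential expired; %s on %s denied", self.agent_id, action, resource)
            return False
        if action not in self.allowed_actions:
            log.warning("%s: %s on %s is outside scope; denied and escalated", self.agent_id, action, resource)
            return False
        log.info("%s: %s on %s permitted", self.agent_id, action, resource)
        return True

# Usage: the agent holds only the permissions it needs, for a short window.
agent = AgentIdentity("iga-review-agent", {"read_entitlements", "open_revocation_ticket"})
agent.authorize("read_entitlements", "hr-app")        # permitted and logged
agent.authorize("modify_iam_policy", "prod-account")  # denied: outside scope
```

The design choice worth noting is that the agent is governed exactly like any other identity: scoped entitlements, expiring credentials, and an audit trail, rather than a standing super-user account.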
EMA™ released research on agentic AI identities in 2025 and is scheduled to release its report “Navigating the Identity Crisis: A Data-Driven Analysis of IGA Maturity, Risks, and Future Roadmaps” in 2026, which will explore this topic further.
[1] OWASP - A01:2021 – Broken Access Control - https://owasp.org/Top10/2021/A01_2021-Broken_Access_Control/
[2] FBI Releases Annual Internet Crime Report - https://www.fbi.gov/news/press-releases/fbi-releases-annual-internet-crime-report