This year at RSAC, we spoke a lot about artificial intelligence, agentic AI, and the power of community. We spoke about sharing knowledge and moving the industry forward through connection and human networking. Somewhere among all the shiny new AI objects and all the discussions on sharing the latest threat intelligence data and methods, we may have lost track of the core mission: enabling the business to operate securely.
Many years ago, I saw the security measures in place on a highly sensitive government computer system completely bypassed, not through state-of-the-art hacking tools or exploitation of the latest vulnerabilities, but by using a pen cap. You see, the computer in question was a shared workstation for capturing daily log activities, essentially a mandate from government bean counters to justify the station's existence and show "look how busy we are" to those in charge. The station itself was responsible for processing important electronic messages, often time-sensitive and high-volume, meaning every minute spent at the daily log terminal was a minute less spent on the actual important work.
The computer itself was a Windows machine, not known for being fast at logging in, even when just unlocking from the screensaver. It had significant security lockdowns in place, including mandatory password entry with Active Directory authentication on every unlock and no local caching of credentials. While secure, this significantly increased the login delay: a 30-second log entry took several minutes, negatively impacting the staff's ability to conduct work in a timely manner.
There were plenty of potential solutions to this problem, such as requesting a longer timeout for the screensaver requirement, or even setting up the daily log terminal as a standalone machine without internet access, using a local account. However, all of these options required approval from an extremely inflexible security officer, so a simpler solution was devised. A pen cap was shoved into the keyboard between the edge of the keyboard and the left shift key. Similar to the "mouse jiggler" devices of today, this ingenious "hack" kept the computer from ever reaching the screensaver, letting the station staff use the daily log terminal as needed without waiting for login. Of course, this meant anyone with physical access to the room could get onto the computer unrestricted 24/7, but there was no specific security control preventing the use of a pen cap to bypass all other security restrictions. In any auditor's eyes, the computer had to be begrudgingly marked as compliant.
The "pen cap" trick wasn't a matter of staff being lazy or willfully breaking security. Their jobs were so time-sensitive that they bypassed every control just to keep a log showing how busy they were; that fact could likely have been proven by simply looking at the metrics for incoming and outgoing messages, eliminating the need for the daily log altogether. The station spent so much time complying with security mandates that everyone lost sight of the business goals those mandates were supposed to enable.
This year at RSAC, I saw the exact same pattern playing out, except the pen cap has been replaced by something far more expensive: AI.
The early-stage expo floor was filled with vendors whose products were, in many cases, little more than a thin wrapper around ChatGPT or another large language model. Same marketing slides, same buzzwords, same breathless claims of "revolutionary AI-powered" threat detection, patching, configuration management, or whatever other vertical they felt they could jam an LLM into with vibe-coded tools and a fancy interface. What was missing was any meaningful differentiation or, in some cases, any real security value at all.
Worse, venture capital firms are still pouring money into these solutions. They see AI in the pitch deck and write checks, often without understanding the underlying technology or asking the hard questions: Does this actually reduce risk? Does it create more work for already overloaded security teams? Or is it just another tool that will sit on the shelf collecting dust?
This is the modern version of security theater, and it’s more dangerous than a pen cap jammed in a keyboard. Every dollar spent on vaporware is a dollar not spent on solutions that genuinely enable the business to move faster and safer. Every hour a security team spends evaluating or managing these tools is an hour they’re not helping developers ship code securely or helping the business adopt new technology with confidence.
The real mission of cybersecurity has never changed: to protect the business without getting in its way. If we’re not careful, we’re going to repeat the same mistake we made decades ago, except this time, the cost won’t be measured in wasted minutes at a log terminal; it will be measured in wasted millions and slowed innovation.
So, what should we do about it?
First, stop rewarding hype. When evaluating new security tools, ban the phrase “AI-powered” from the conversation. Force vendors to talk in outcomes: How much faster can my team respond? How many fewer false positives will I see? How does this reduce friction for the business?
Second, demand transparency. If a vendor can’t clearly explain what their model was trained on, how it makes decisions, or where the human stays in the loop, walk away. Real security tools should be understandable, not magical.
Third, measure what matters. The best security teams I’ve seen aren’t the ones with the most AI tools. They’re the ones whose metrics focus on business velocity, not just alert volume. Track how quickly new applications get securely deployed. Track how many incidents actually impact revenue. That’s the scoreboard that counts.
The pen cap trick was a symptom of a deeper problem: when security stops serving the business, people will find a way around it. Twenty years later, we’re still making the same mistake, only now it’s dressed up in shiny AI packaging and costs millions of dollars.
It's time to get back to basics. Build security that enables the business instead of slowing it down. The industry doesn't need more AI; it needs more honesty, more focus, and far less theater.
EMA will explore AI business enablement, ROI, and measurable security outcomes in upcoming research. Stay tuned!