
AI Was Never Really Coming For Your Job – But Your Job Will Be Different

Apr 8, 2026 2:54:42 PM

Having just returned from the annual RSAC conference, I find myself surprised to be writing this blog about…you guessed it: AI. Yes, believe it or not, AI was the most talked-about topic at the conference, and it is certainly on the mind of nearly everyone, regardless of their position within their organization.

An interesting side conversation I had with folks at the conference centered on the idea that AI was coming for their jobs. As someone who has lived through multiple technology evolution cycles and has decades of experience as an IT professional, all I could do on the spot was assure them that they are too valuable to be easily replaced by AI, and that AI is still an immature tool. I also promised that I would share some deeper thoughts after the conference.

So here we are…

I’ve spent the better part of three decades watching the "death of the IT professional" be predicted every ten years. I remember the absolute panic when x86 virtualization hit the mainstream. I was evangelizing for Microsoft at the time and sat in rooms full of terrified sysadmins who were convinced they were going to be "virtualized out of a position." They thought that if you didn't have to physically rack a server, you didn't have a job. Instead, it turns out that virtualization just meant they could manage 500 servers instead of five. We saw the same anxiety with the rise of the cloud and DevOps. Each time, the eulogy was premature. AI is simply the latest "force multiplier."

Those who continued to learn, adapt, and evolve with their careers saw relief from the mundane. Those experienced professionals – the guys and gals with all of the letters behind their names – saw a shift in their workloads: different, yes, but certainly not less, and arguably more fulfilling than hardening servers.

The idea of machines doing our work isn't new; it is the literal history of IT. We’ve been automating for decades to keep our sanity. Whether it was writing cron jobs to handle repetitive system maintenance or creating scripts to parse billions of firewall logs, we have always looked for ways to offload the mundane and reduce our “brain damage” (yes, it’s a technical term).
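To make that concrete, here is a minimal sketch of the kind of chore script we have always written: one you might schedule from cron to summarize denied connections in a firewall log. The log path and line format here are hypothetical; real firewall logs vary by vendor, so treat this as an illustration rather than a drop-in tool.

```python
#!/usr/bin/env python3
"""Toy firewall-log summarizer: the kind of chore script one might run from cron.

Assumes a hypothetical log format such as:
    2026-04-08T14:02:11Z DENY src=203.0.113.7 dst=10.0.0.5 port=22
Real firewall logs vary by vendor; adjust the regex to match yours.
"""
import re
from collections import Counter
from pathlib import Path

LOG_PATH = Path("/var/log/firewall.log")  # hypothetical location
DENY_RE = re.compile(r"\bDENY\b.*?src=(?P<src>[\d.]+)")

def top_denied_sources(log_path: Path, limit: int = 10) -> list[tuple[str, int]]:
    """Count DENY lines per source IP and return the noisiest offenders."""
    counts: Counter[str] = Counter()
    with log_path.open(encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = DENY_RE.search(line)
            if match:
                counts[match.group("src")] += 1
    return counts.most_common(limit)

if __name__ == "__main__":
    for src, hits in top_denied_sources(LOG_PATH):
        print(f"{src:<15} {hits} denied connections")
```

A crontab entry such as `0 * * * * /usr/local/bin/fw_summary.py` would run a script like this hourly – exactly the sort of drudgery we have been handing off to machines for decades.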

The AI of today is just taking over a different class of drudge work: basic data entry, entry-level unit testing, and standard monitoring. To a junior tech, this feels like displacement. To a veteran, it looks like liberation. We are shedding the low-value tasks that have historically acted as a bottleneck to real innovation.

At the end of the day, the harsh reality holds as true today as it did decades ago: garbage in, garbage out still applies to agentic AI solutions. They will only be as good as their training data and the people who trained them. These solutions thrive on the average, the documented, and the frequent.

What agentic AI lacks, and what it will likely never master, is "tribal knowledge." AI cannot access the corner-case information that only comes from living in the trenches. It doesn't know that the legacy config file exists to work around a memory leak in a codebase that was never really patched, or that a specific 10-year-old line of code in your proprietary stack is the only thing keeping the billing system alive, and also the thing that makes you the hero of the finance department (which rightly believes you are some kind of Jedi miracle worker).

Experience brings a level of intuition and context that an LLM simply cannot simulate. You are the curator of the exceptions, and in IT, the exceptions are where the real work happens.

That said, I also recognize the growing pains associated with the rapid expansion of AI. In a recent interview, a network tools architect at a major bank noted a disturbing trend: a visible degradation in the quality of major software releases from some of the biggest observability vendors. The suspicion? Overreliance on junior developers using AI coding assistants. When we lean too hard on the "intern" (AI) without enough "adult supervision" (experienced engineers), we see a rise in code vulnerabilities and stability issues. Maybe this is due to release velocity (something else that has shifted dramatically because of AI), but it also shows that AI is rarely a "set-it-and-forget-it" solution: it requires human expertise to ensure that what is "generated" is actually functional and secure.
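To illustrate the "adult supervision" point, here is a contrived sketch of the kind of flaw reviewers routinely catch in generated code. The article doesn't cite a specific vulnerability, and the function and schema here are invented; string-built SQL is simply one of the most common patterns an experienced engineer will flag.

```python
import sqlite3

# What an unsupervised assistant might produce: string-built SQL, which is
# injectable whenever `username` is attacker-controlled.
def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list[tuple]:
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()  # username = "x' OR '1'='1" dumps every row

# What the reviewer insists on: a parameterized query, so the driver binds
# the value instead of splicing it into the SQL text.
def find_user_safe(conn: sqlite3.Connection, username: str) -> list[tuple]:
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    print(find_user_safe(conn, "alice"))  # [(1, 'alice@example.com')]
```

The unsafe version works in every demo and fails only when an attacker shows up, which is precisely why the human in the loop still matters.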

The IT industry has always frowned on stagnation. Those who view AI as a threat and refuse to touch it may wither into irrelevance. But for the professional who treats AI as a sophisticated, if occasionally error-prone, support tool, the future is bright. The focus is shifting from "how do we build it?" to "what should we build?"

Don't fight the tool. Master it. The AI revolution is just the latest chapter in our long history of getting better at what we do.


Written by Chris Steffen

Christopher Steffen, CISSP, CISA, is the vice president of research at EMA, covering information security, risk, and compliance management. Before EMA, he served as the CIO for a financial services firm, focusing on FedRAMP compliance and security. He has also served in executive and leadership roles in numerous industry verticals. Steffen has presented at numerous industry conferences and has been interviewed by multiple online and print media sources. Steffen holds over a dozen technical certifications, including CISSP and CISA.
