The New Shadow IT: Why AI Security is a Culture Problem

Written by Nathan Wilkerson, VP of Engineering | Mar 13, 2026 6:40:00 PM

AI security concerns are very real, but they aren't always found where you’d expect. While "vibe-coded" software (code generated through AI prompts) certainly has vulnerabilities, your IT and development teams likely already have the tools—like static analysis and penetration testing—to catch them.

The more pressing threat actually lives outside the IT department. It’s not just about employees uploading sensitive documents to ChatGPT; it’s about the democratization of code itself.

The Rise of the "Accidental Developer"

AI has made coding accessible to everyone. If an employee has an idea to automate a tedious part of their workflow, AI can now help them build the tool to do it. While this boosts productivity, it also creates a massive security gap: individual users writing and running code without a single peer review or oversight process.

You can try to mitigate this by restricting software installations, but you can't stop it entirely. An employee doesn't need "specialist software" to create a risk; they can embed AI-generated VBA code directly in an Excel sheet or a Word macro. If that unauthorized script reaches outside the network for data, it creates an entry point that bad actors can exploit.
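
To make the pattern concrete, here is a minimal Python sketch of the kind of tool an accidental developer might paste straight from an AI chat. The endpoint, file name, and fields are all hypothetical, and the same logic could just as easily live in a VBA macro. Nothing about it looks malicious, yet one line quietly ships potentially sensitive rows to an unvetted third party:

```python
# Hypothetical example: a "helpful" automation pasted from an AI chat.
# It works, which is exactly the problem: it ships spreadsheet contents
# to an external service with no review or oversight.
import csv
import json
import urllib.request

REPORT_URL = "https://example-convert.app/api/summarize"  # invented third-party service

def summarize_claims(path: str) -> str:
    """Upload a local claims export to a free web API and return its summary."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))  # may include names, addresses, other PII

    payload = json.dumps({"rows": rows}).encode("utf-8")
    request = urllib.request.Request(
        REPORT_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # This is the entry point: company data leaves the network right here.
    with urllib.request.urlopen(request) as response:
        return json.load(response)["summary"]

if __name__ == "__main__":
    print(summarize_claims("claims_export.csv"))
```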

At that point, IT isn't just fighting external hackers; it's fighting internal innovation.

Learning from the "Access Database" Era

We’ve seen this movie before. As a consultant, I’ve worked with dozens of companies struggling to manage hundreds of "zombie" Microsoft Access databases. These were self-built tools created by employees who are long gone, yet the business still relies on them today. No one knows how they work, where the data is stored, or if they contain sensitive PII (Personally Identifiable Information).

This is the classic definition of Shadow IT. In my experience, the companies most effective at stopping this weren’t the ones with the strictest rules. They were the ones with the most open IT departments. When IT is viewed as a partner rather than a "Department of No," employees are far more likely to share what they’re building instead of hiding it.

Three Pillars of AI Safety

So, how do you solve the AI security problem? You lean into your employees’ desire to innovate.

  1. Comprehensive Training: Teach staff not just how to prompt, but how to use AI securely. Show them the difference between a helpful automation and a data leak.
  2. A Shared Infrastructure: Create a framework where employees can develop, test, and deploy their own tools in a "sandbox" environment. This allows IT to identify vulnerabilities and offer fixes before the code goes live (see the sketch after this list).
  3. Active Encouragement: People want to make their jobs easier. By showcasing "Golden Examples" of safe, effective AI use cases, you steer the culture toward transparency.
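
To illustrate what the second pillar can look like in practice, here is a minimal Python sketch, with every host name and function invented for the example, of a guardrail IT might publish in a shared framework. Employee-built tools would import a helper like this instead of calling the network directly, so every outbound request is checked against an allowlist and logged:

```python
# Hypothetical sketch of a guardrail published by IT in a shared framework.
# Employee tools call fetch() instead of hitting the network directly, so
# every outbound request is checked against an allowlist and logged.
import logging
import urllib.request
from urllib.parse import urlparse

# Maintained by IT; employees request additions through a lightweight review.
APPROVED_HOSTS = {"api.internal.example.com", "reports.internal.example.com"}

logger = logging.getLogger("sandbox.egress")

def fetch(url: str, data: bytes | None = None, timeout: float = 10.0) -> bytes:
    """Perform an outbound request only if the destination host is approved."""
    host = urlparse(url).hostname or ""
    if host not in APPROVED_HOSTS:
        logger.warning("Blocked outbound request to unapproved host: %s", host)
        raise PermissionError(f"{host} is not on the approved egress list")

    logger.info("Outbound request to %s", host)
    with urllib.request.urlopen(url, data=data, timeout=timeout) as response:
        return response.read()
```

The specific helper matters less than the pattern: IT provides a paved road that is easier to take than the risky shortcut, and in return gains visibility into what employees are actually building.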

Redefining the Role, Not Just the Task

Some might object: "Why am I helping my staff code? I hired them to process claims, not build software."

But we have to remember: people are hired to provide value, not just to perform repetitive tasks. Entering claims is a process; the value is in the completed claim. If an employee finds a way to use modern tools to do that faster and more accurately, they aren't "playing with code"—they are evolving.

White-collar jobs aren’t disappearing; they are changing. By providing a secure path for that change, you don't just protect the company—you empower the people within it.