Uptycs Blog | Cloud Security Insights for Linux and Containers

AI in Security: Efficiency or Blind Spots?

Written by Umesh Sirsiwal | 3/31/26 3:10 AM

How security teams are using Juno to bring visibility back

AI is making software engineers dumber.

In a 2026 study, Anthropic found that developers using AI assistance scored 17 percentage points lower when tested on code they had just written. In another study, researchers found that programmers using generative AI completed more tasks but showed no improvement in their understanding of the codebase. And in a randomized trial by METR, experienced open-source developers believed AI made them 24% faster. When measured, they were actually 19% slower.

The common thread across these studies is that developers stopped building a mental model of the systems they were working on. The AI handled the reasoning and they accepted the output, without building a picture of how things actually fit together. Researchers are calling this epistemic debt: you're productive today, but you're borrowing against understanding you'll need tomorrow.

A similar dynamic is playing out in security.

Security teams are adopting AI faster than almost any other function, and for good reason. Alert volumes are unmanageable, experienced analysts are hard to find, and every major vendor now offers some form of AI-powered assistant.

But the underlying pattern is the same. The AI processes an alert, produces a disposition, and the analyst accepts it without ever investigating what actually happened. Over time, the team loses its understanding of the environment it's supposed to be protecting.

AI’s Visibility Problem

The main culprit here is hidden reasoning. You see the output, never the process. And you can't learn from a process you never see.

In most AI-assisted workflows, the model ingests an alert, correlates some data internally, and returns a verdict. What you don't see is which data sources it checked, what it ruled out, or what assumptions it made along the way. The analyst has no way to tell whether the AI did a thorough investigation or a shallow pattern match that happened to produce a plausible-sounding answer. And this matters more in security than almost anywhere else. 

The obvious solution is to make the AI's reasoning visible.

If an analyst can see how the AI reached its conclusion, they get the speed without losing the understanding. Instead of just receiving a disposition, they follow an investigation. And every investigation they follow builds their mental model of the environment.

How Security Teams Are Using Juno

Instead of receiving a verdict from a black box, security teams using Juno follow the full investigation as it happens. Every query, every data source, every hypothesis tested and ruled out. At Uptycs, we call this the Glass Box.

Consider three recent cases:

A security team gets a GuardDuty alert flagging unauthorized access on an IAM user. With Juno, they don't get a one-line disposition. They watch Juno run eight SQL queries across four data sources in 90 seconds. It checks CloudTrail and IAM activity, compares the flagged IP against peers on the same role, and pulls the hourly timeline. By the end, the analyst knows the IP landed on the threat list by mistake and that its behavior matched normal automated agent traffic.
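The peer-comparison step above can be sketched in a few lines. This is not Juno's implementation, just a minimal illustration of the idea: group CloudTrail-style events by source IP for one IAM role, then check whether the flagged IP performs any API calls that its peers on the same role never make. The event fields (`role`, `source_ip`, `event_name`) are assumptions for the sake of the example.

```python
from collections import defaultdict

def peer_comparison(events, flagged_ip, role):
    """Compare a flagged IP's API-call mix against peer IPs on the same role.

    events: list of dicts with hypothetical keys "role", "source_ip", "event_name".
    Returns the flagged IP's actions, the actions no peer performs, and a
    boolean indicating whether its behavior matches peer traffic.
    """
    actions_by_ip = defaultdict(set)
    for e in events:
        if e["role"] == role:
            actions_by_ip[e["source_ip"]].add(e["event_name"])

    flagged = actions_by_ip.get(flagged_ip, set())
    # Union of everything the peers do; empty if the flagged IP has no peers.
    peers = set().union(*(a for ip, a in actions_by_ip.items() if ip != flagged_ip))
    novel = flagged - peers  # actions no peer performs: worth a closer look
    return {"flagged_actions": flagged,
            "novel_actions": novel,
            "matches_peers": not novel}
```

If `novel_actions` is empty, the flagged IP is doing nothing its peers don't already do, which is exactly the "matched normal automated agent traffic" conclusion in the example above. Real investigations would weigh volume, timing, and geography too; this sketch covers only the action-set comparison.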

When an AWS cost spike needs explaining, the team watches Juno test multiple hypotheses in parallel and can trace every step of the reasoning. But Juno also surfaces exposures in the expanded infrastructure that nobody asked about. The team comes out of that investigation knowing more about their environment than they did going in.

And when two teams in different industries face the same technical finding, they see Juno build completely different investigations based on their business context. A fintech team sees findings framed around SOC 2 controls. A software company sees the same finding framed around supply chain risk. The reasoning is visible and tied directly to the context each team provided.

In each case, the analyst is building a clearer picture of their environment.

The Question Worth Asking

The research on epistemic debt points to a simple question: Is your AI making your team more capable, or more dependent?

Your metrics won’t answer it. Alert closure rates go up either way. Mean time to respond drops either way. A team that’s getting better and a team that’s losing its edge can look the same on a dashboard.

You see the difference when something new shows up. A threat that doesn’t match a playbook. An alert that actually needs investigation. That’s when it becomes clear whether your team understands the system or is just moving outputs along.

Making the reasoning visible helps. It gives people something to follow and learn from. But that alone isn’t enough. They also need to trust it. That becomes harder, and more important, as AI systems start making decisions without a human in the loop.