Why security teams are still doing the hardest work themselves – and how they can get off the treadmill
Type a question. Get an answer.
That’s the pitch from almost every security tool with a chatbot bolted on. But here’s the dirty little secret: the chatbot is not really answering your question. It’s just pulling up whatever the platform already knows.
If your question fits inside the pre-built framework – you’re in luck. If not? You’re on your own.
Ask anything that requires real analysis – cause and effect, correlation, reasoning – and most tools fall flat. Because they’re not built to think. They’re built to search.
And in security, that’s not enough.
Most “AI for Security” Doesn’t Investigate Anything
“Our AWS bill just spiked 40%. Is it crypto mining, compromised creds, or legitimate CI/CD growth?”
This is a real question security teams face. But most AI tools can't answer it.
Why? Because they’re not built to investigate. They search. They summarize. They retrieve conclusions someone else already baked in.
Every major vendor – whether it’s Wiz, SentinelOne, Upwind, or anyone else – now offers a natural language interface. They have friendly names like AskAI, Mika, or Choppy. The demos look slick. You type a question, it spits out an answer. But what it’s really doing is querying a set of pre-computed findings.
It’s not figuring anything out. It’s merely helping you navigate the stuff the system already knows.
If your question fits inside the product’s framework – great. “Show me critical vulns in us-east-1” works fine. But the second you need to correlate across assets, dig into cause and effect, or follow a thread the vendor didn’t hard-code… you’re in trouble.
Ask something like “What type of workload are our EC2s in us-east running?” and you’re out of luck. There’s no canned answer for that. It requires digging into process data, analyzing traffic, understanding intent. And most so-called “AI for security” tools are stumped.
What Actual Investigation Looks Like
Let’s take the question we’ve already touched on:
"Our AWS bill spiked 40% last month. Is this a security issue or legitimate growth?"
This isn't a security alert. No rule fires on "bill spike." But a security engineer needs to answer it because the spike could mean:
- Cryptomining attack
- Compromised credentials spinning up resources
- Data exfiltration (transfer costs)
- Shadow IT or unauthorized provisioning
- Or just legitimate growth
Here's what the investigation actually requires:
You check for cryptomining alerts. You examine provisioning patterns. You analyze CloudTrail for unusual API activity. You identify cost anomalies by account and service. You synthesize: security incident, policy violation, or legitimate growth?
This takes hours.
Context-switching between Cost Explorer, CloudTrail, GuardDuty, EC2 console. Correlating manually. Writing up findings.
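The correlation step alone is tedious to do by hand. As a rough sketch, here's what "identify cost anomalies by account and service" might look like once billing data has been pulled out of Cost Explorer. The function name, the tuple format, and the 1.4x threshold are illustrative assumptions, not a prescribed workflow:

```python
from collections import defaultdict

def flag_cost_anomalies(rows, threshold=1.4):
    """Flag (account, service) pairs whose latest-month spend grew past
    `threshold` times the prior month's spend.

    rows: iterable of (month, account, service, usd) tuples, e.g. a
    flattened export of Cost Explorer's GetCostAndUsage results.
    """
    totals = defaultdict(float)
    for month, account, service, usd in rows:
        totals[(month, account, service)] += usd

    months = sorted({m for m, _, _ in totals})
    if len(months) < 2:
        return {}
    prev, curr = months[-2], months[-1]

    flagged = {}
    for (month, account, service), usd in totals.items():
        baseline = totals.get((prev, account, service), 0.0)
        if month == curr and baseline and usd / baseline >= threshold:
            flagged[(account, service)] = round(usd / baseline, 2)
    return flagged

rows = [
    ("2024-05", "prod", "EC2", 1000.0),
    ("2024-06", "prod", "EC2", 1500.0),  # 1.5x growth: flagged
    ("2024-05", "prod", "S3",   200.0),
    ("2024-06", "prod", "S3",   210.0),  # stable: ignored
]
flag_cost_anomalies(rows)  # {("prod", "EC2"): 1.5}
```

And that only tells you *where* the money went. You still have to cross-reference the flagged services against CloudTrail and GuardDuty to learn *why*.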
Notice what's happening here: the path isn't fixed. Each step depends on what you find. You're testing hypotheses, ruling things out, following threads. You reach a conclusion through reasoning.
If AI is going to do this — actually investigate, not just search — it needs to work the same way. Generate a plan. Execute it. Observe results. Adjust. Reason toward a conclusion. And so on.
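That loop can be sketched in a few lines. To be clear, everything below — the tool names, the toy planner, the verdict strings — is a hypothetical illustration of the pattern, not Juno's actual implementation:

```python
def investigate(question, plan_fn, tools, max_steps=10):
    """Minimal agentic loop: plan the next step, execute it, fold the
    observation back into the context, repeat until a conclusion."""
    context = {"question": question, "observations": []}
    for _ in range(max_steps):
        step = plan_fn(context)  # reason about what to check next
        if step["action"] == "conclude":
            return step["verdict"]
        result = tools[step["action"]](**step.get("args", {}))
        context["observations"].append((step["action"], result))
    return "inconclusive"

def demo_planner(ctx):
    """Stub planner: rule out hypotheses in order, then conclude."""
    seen = [action for action, _ in ctx["observations"]]
    if "check_mining" not in seen:
        return {"action": "check_mining"}
    if "check_provisioning" not in seen:
        return {"action": "check_provisioning"}
    any_hits = any(result for _, result in ctx["observations"])
    return {"action": "conclude",
            "verdict": "security incident" if any_hits else "legitimate growth"}

tools = {
    "check_mining": lambda: False,        # no miner indicators found
    "check_provisioning": lambda: False,  # provisioning looks normal
}

investigate("Why did the AWS bill spike 40%?", demo_planner, tools)
# "legitimate growth"
```

The key property: the planner sees every prior observation before choosing the next action, so the path through the investigation is decided at runtime, not hard-coded in advance.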
This is Dynamic Reasoning. Or as we like to call it here at Uptycs, Agentic Investigation.
Agentic Investigation in Practice
Here's how Juno, our AI Assistant, approaches that same question.

[Screenshot: Juno's dynamically generated investigation plan for the AWS cost-spike assessment, with each step verifiable against your own data]
Juno generated this plan by reasoning about what was needed – not by following pre-programmed steps. Multiple hypotheses were tested in parallel: CloudTrail analysis, crypto-mining checks, provisioning patterns, and cost anomalies by account.
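Running independent hypothesis checks concurrently, rather than one at a time, is a simple pattern to sketch. The check functions below are placeholders standing in for real queries (they are not Uptycs APIs); each returns `None` when its hypothesis is ruled out:

```python
from concurrent.futures import ThreadPoolExecutor

def run_hypotheses(checks):
    """Run independent hypothesis checks concurrently.

    checks: {name: zero-arg callable}; a check returns None if the
    hypothesis is ruled out, or a finding describing the evidence.
    Returns {name: finding_or_None}.
    """
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn) for name, fn in checks.items()}
        return {name: future.result() for name, future in futures.items()}

# Placeholder checks for the cost-spike question
checks = {
    "crypto_mining": lambda: None,                        # no miner processes
    "api_anomalies": lambda: None,                        # CloudTrail clean
    "provisioning":  lambda: "Jenkins/EKS auto-scaling",  # root cause found
}

results = run_hypotheses(checks)
ruled_out = [name for name, finding in results.items() if finding is None]
```

In this toy run, `ruled_out` contains the mining and API-anomaly hypotheses, while the provisioning check carries the explanation forward.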
Every step is visible. Every query is inspectable.
The verdict:

[Screenshot: Juno's verdict of legitimate growth, with accompanying security recommendations]
"Legitimate Growth, Not Security Breach."
Juno ruled out: crypto mining, unauthorized provisioning, anomalous API patterns. It identified the root cause: Jenkins CI/CD and EKS auto-scaling driving infrastructure expansion.
Juno also surfaced additional exposures within the expanded resource footprint:
- Unencrypted AMIs
- IMDSv2 not enforced
- Unencrypted EBS snapshots
- Stale IAM keys
The connection: "The 40% cost increase means 40% more resources with these vulnerabilities."
A rule-based system alerts on a cost spike (if that rule exists), or alerts on unencrypted AMIs separately. It doesn't expand the scope of the investigation based on what it finds along the way.
This is thinking like an analyst. Hours of manual investigation, reduced to minutes.
This is VerifiableAI: dynamic reasoning you can verify, because the work is auditable.
AI That Can’t Reason Won’t Help You
We hope this little tour through the facade and the reality of investigative AI has been helpful.
Because here’s the thing: The next threat won’t come with a playbook. And the next question your team needs to answer won’t be something your vendor anticipated.
If your AI can only surface pre-baked findings, it’s not helping you investigate. It’s just making the dashboard slightly easier to use.
Search is useful – but it’s only a starting point. Real investigation means reasoning. Following evidence. Shifting direction based on what you uncover. And doing it all transparently.
This is what Juno does. It doesn’t just answer questions. It figures things out – step by step, query by query – with every part of the process visible and verifiable.


