I watched all 11 main stage keynotes at RSAC 2026
and less of my time was wasted than you might guess
A different vibe
When I think of RSAC keynotes, I think of buzzword-laden vendor execs confidently, expertly leading you towards their company’s next big product release.
I was an industry analyst at one point, so you’ll have to forgive my cynicism. I’ve sat for a LOT of vendor briefings over the years.
The buzzwords were there for sure — if you plan on watching these keynotes, don’t base a drinking game on machine speed, agentic, real-time, or human-in-the-loop. The confidence and the thinly-disguised product pitches were there as well.
What I wasn’t expecting was the admission that we don’t really know how to protect this latest technology. Everyone agreed that AI agents need to be secured and that this work has to begin immediately. Everyone has thoughts on what some of the key ingredients should be. But no one claimed to have the solution.
I had the same experience talking to attendees at the conference. I interviewed the founder of an AI governance startup, who told me that none of his customers were using any sort of enforcement or guardrails yet. Everything was in ‘monitor mode’.
In a way, this is unsurprising - the quickest way for a security team to get in trouble has always been impacting availability. At a time when businesses are terrified of being left behind, security had BEST not get in the way.
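To make the 'monitor mode' point concrete, here's a minimal sketch of what a guardrail with a monitor/enforce toggle might look like. The tool names and policy are entirely hypothetical - this is an illustration of the tradeoff, not any vendor's actual product.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

# Hypothetical policy: tools an agent may never invoke on its own.
BLOCKED_TOOLS = {"delete_records", "wire_transfer"}

def check_agent_action(tool: str, mode: str = "monitor") -> Verdict:
    violation = tool in BLOCKED_TOOLS
    if violation and mode == "enforce":
        # In-line enforcement: block the action outright (and risk availability).
        return Verdict(False, f"blocked: {tool}")
    if violation:
        # Monitor mode: record the violation but never get in the way.
        print(f"ALERT: policy violation observed: {tool}")
    return Verdict(True, "ok")
```

Notice that every customer stuck in monitor mode is effectively running only the second branch.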
Like most of the 43,000+ RSAC attendees, I was running around all week and didn’t get to attend as many talks as I would have liked. I attended Security Tinkerer events, Cybersecurity Canon events (including working a shift at the excellent RSAC Bookstore!), and recorded interviews for CyberRisk TV.
Luckily for me and the rest of us, I'm told that all of the talks at RSAC Conference 2026 were recorded (check out the one I gave with Adam Shostack). Before flying home, I downloaded the main stage keynotes playlist so I could start watching and taking notes on the trip.
Fun fact: 43,000 is 0.78% of all cybersecurity professionals, if we take ISC2’s word that there are 5.5 million of us, globally. This stat is probably off, given that a lot of the 43,000 attendees are vendors. Surely there are some ISC2 members working at vendors, right? I digress.
Here’s what I learned from watching all 11 main stage keynotes.
Securing AI Agents
Everyone agrees that we must protect AI agents — and that we're not sure how.
There does seem to be agreement on many details.
Asset management for AI agents: discovering, ownership, responsibility
Data permissions patterned after users (à la Microsoft Copilot) are too broad, and user data hygiene is too poor
Visibility into AI actions and reasoning. This was often referred to as auditability or traceability.
Validation of output
Integrity becomes a real challenge — George Kurtz shared several examples of AI inventing the solution to a problem. Did it just retrieve real company/customer data that solves your problem? Or did it fabricate that data? How would you know?
AI agents can’t be trusted with intent. Feed them a social contract or ethics and they modify it or break it in order to complete a task.
Compliance with existing regulations could be challenging. How does GDPR’s right to be forgotten work with new AI tech stacks? Does AI memory need to be purged? Will AI agents actually remove data, or just say they’ve done so?
Agents will scale to a point where manual, human-driven security controls can’t work (we’re probably already there in many cases).
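Several of these points — visibility into actions and reasoning, validation of output — boil down to append-only audit records of what an agent did and why. A minimal sketch, with hypothetical agent and field names (real deployments would ship these events to a SIEM):

```python
import json
import time
import uuid

def audit_event(agent_id: str, action: str, reasoning: str, output: str) -> str:
    """Serialize one agent action as an append-only audit record."""
    return json.dumps({
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "reasoning": reasoning,  # the agent's stated rationale (traceability)
        "output": output,        # captured so output can be validated later
    })
```

The hard part isn't the record — it's getting every agent framework to emit one for every action.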
The Characterization of AI Agents
Digital Co-Workers
Several speakers characterized AI agents as ‘Digital Co-Workers’. From what I’ve seen, assistant agents might feel like this, but most enterprise agents won’t. The ephemeral agent that exists for the 12 seconds it takes to enrich a phishing alert won’t feel like someone you’d like to have a drink with. You’re unlikely to even interact with the majority of these agents — a SOAR trigger or an orchestration agent will.
Human-in-the-Loop or Not?
Some were saying that keeping a human in the loop is essential - a non-negotiable point. Others were saying that human-in-the-loop is a temporary stopgap that won’t scale — and that we’ll outgrow it very quickly. There were mentions of human-on-the-loop and agent-in-the-loop. Basically, the difference between in-line enforcement and out-of-band monitoring. Where have we had to make that tradeoff before?
Disagreements on how AI agents will work
Some describe AI agents as ephemeral. Just-in-time agents with just enough access that are destroyed as soon as their task is complete. Analogous to containers or perhaps actually running within containers.
Others, especially those describing agents as digital co-workers, imagined long-lived agents that get smarter over time. Agents that learn and improve as they ‘gain experience’. Perhaps this is possible through the concept of decentralized memory, though it seems like the agents themselves will still be ephemeral, even if memory is persistent.
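The ephemeral model maps neatly onto a scoped, self-destructing resource. A sketch of that idea, with hypothetical task and scope names — just-in-time access that is revoked the moment the task completes:

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_agent(task: str, scopes: set):
    """Spin up a just-in-time agent with only the access its task needs."""
    agent = {"task": task, "scopes": set(scopes), "alive": True}
    try:
        yield agent
    finally:
        # Task done (or failed): revoke all access and destroy the agent.
        agent["scopes"].clear()
        agent["alive"] = False

# Usage: the agent only exists, and only holds access, inside the block.
# with ephemeral_agent("enrich phishing alert", {"read:alerts"}) as agent:
#     ...do the work...
```

Persistent memory would live outside this block — which is exactly why the agent itself can stay ephemeral even if its memory isn't.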
Thousands of agents per person
Several imagined that, just a few years into the future, we’d each have thousands of agents running around doing stuff for us. I have a few questions:
Will the planet be able to generate enough power for each person to have hundreds or thousands of agents burning tokens 24/7?
What exactly are we going to do with thousands of agents?
Since automation has been possible on personal computers for decades, why don’t we already have thousands of automated jobs doing work for us today? Zapier, IFTTT, n8n, and Power Automate all existed before ChatGPT was released.
Acting like automation didn’t exist before LLMs
This one really triggers me.
Computer-based automation has been replacing jobs for as long as computers have been commonplace in the enterprise. Even email is an automation, replacing the task of an internal courier physically carrying a message from one employee in the office to another.
There were lines like, “Attacks are now faster than a human can respond.” Girl, that was the case back when Dennis Nedry was screwing over all of Jurassic Park to make a quick buck. Jurassic Park was written in the ’80s. Dennis used SHELL SCRIPTS.
Big Concern: Navel Gazing
At one point, one of the speakers asked, “How many of you here went to the GTC conference last week? Or watched Jensen’s keynote?”
Silence.
“Anyone?”
Nothing.
“There’s a complete Venn diagram with no intersection.”
We’re making this huge deal about AI in our industry, but cybersecurity isn’t paying attention to the industry making AI our problem? Maybe one of the reasons that AI lacks functional guardrails is because we’re not there — we’re not part of the conversation. And look — I get it, I don’t particularly enjoy Jensen’s keynotes, but the AI industry is hanging on his every word. What Jensen says or introduces today is something we have to secure tomorrow.
Aren’t we the industry that made a big deal about getting security “baked in” as opposed to “bolted on?” Where did that all go?
“We can’t let AI happen to us, we have to make it work for us” — Hugh Thompson
This doesn’t just apply to the AI industry, but the larger tech industry as well. What conferences are the CTOs and CIOs going to? What podcasts and blogs are the DevOps folks consuming?
We don’t need to worry about just keeping up with AI, we need to keep up with the folks deploying AI.
Threats are getting faster
Threats are getting faster and more automated. The fastest breakout time is now measured in seconds, and the fastest transition from the first stage of an attack to the second also takes only seconds.
The speakers all seemed to agree that detect and respond need to effectively become a single step. That means automation. No human in the loop.
This also means that we’re going to need permission from the business to break some stuff. Most of us won’t get that permission.
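What would detect-and-respond as a single step actually look like? A minimal sketch, with hypothetical host names and a made-up confidence threshold — the point is the absence of an approval gate on the high-confidence path:

```python
def handle_detection(host: str, confidence: float, threshold: float = 0.9) -> str:
    """Detection and response collapsed into one automated step."""
    if confidence < threshold:
        # Out-of-band path: low-confidence hits still get human review.
        return f"queued {host} for analyst review"
    # High confidence: contain at machine speed, with no human in the loop.
    return f"isolated {host}"
```

The threshold is where the "permission to break some stuff" conversation lives: every false positive above it is an availability incident the business signed off on.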
Another common conclusion is that we need to prioritize hardening and prevention (the pendulum has swung back). As I’ve often said, we need to build systems as if everything has a zero day and the patch is never coming. We also need to reduce attack surface — something I have suggested a strategy for.
Fundamentals and Magical Defense
The fundamentals are difficult because enterprise infrastructure, identity, and data are complex and sprawling. Applying security controls across all of it takes huge effort, and some of that effort must be maintained indefinitely as these controls drift over time.
Now we’re talking about doing it faster? In real time? Zero Trust on steroids? Words like comprehensive, correlated, and unified are thrown around. Magical defense that requires perfect knowledge and control over the environments we protect.
It’s as if we can’t remember why NAC failed. Or the early attempts at application control — remember how we declared malware a thing of the past? NDR that learns from traffic over time and gets better at detecting and stopping attacks. Deception designed to trap attackers in a hall of mirrors.
These are vendor-delivered keynotes however, so hyperbole is to be expected, I guess.
They’re right though — fundamentals are more important than ever, and some of them now need to be adapted for AI agents.
Particular standouts
Tomer Weingarten/SentinelOne - Securing Human Potential and Freedom in the Age of Agentic AI
This one was surprisingly equal parts tender, passionate, and urgent regarding the future of the human mind
Tomer focused on the dangers of becoming complicit in a world of AI agents eager to do your thinking for you.
“The moment we stop exercising judgement on AI output, we start to suffer cognitive atrophy”
Sandra Joyce/Google Security - Activate Industry! Moving Beyond Defense to Disruption and Active Defense
Not about AI - about threat intel sharing and disrupting threat actors
I loved this one because there was no magical thinking, no hand-waving about defenders needing a cohesive platform. There was a clear plan and evidence that this plan is working.
She shared several examples of how civil legal action and public disclosure have been successful in disrupting attackers’ infrastructure and tools, setting them back months or years.
The CTA for defenders was less clear, however, and I really wanted to hear more about what she described as Technical Takedowns — creating a hostile environment for attackers on the targets they’re hacking into. Is she talking about things like deception? I can’t be sure.
Jeetu Patel/Cisco - Reimagining Security for the Agentic Workforce
You don’t have to watch the talk, but it’s worth checking out the open source AI defense tools that Cisco released.
It seems like a lot: AI BOM, Skill Scanner, MCP Scanner, A2A Scanner, CodeGuard, DefenseClaw
Definitely the only talk where OSS was praised (unless you count OpenClaw)
My favorite quotes
Here are some quotes I found funny and/or interesting, provided here, out of context, on purpose.
“The fundamentals are not basic”
“Easy to declare, hard to prove”
“In a world where every company is an AI company, trust will be the only currency that survives.” (huh?)
“It’s like PACMAN from hell”
“We’re building the biggest flat network of all”
“This is going… nuclear, really”
“Within 24 months, the smartest employee in your organization will be a machine”
“AI is the new operating system”
“AI is now the biggest insider threat”
“Using identity as a control plane, that’s not different - we’ve got to do it at runtime, it’s probably going to make things like Zero Trust today look soft.”
“Show me where customers are entrusting their data, and I’ll show you where hackers are focusing”
Conclusion
I found this a useful exercise and I think I’ll try to do it more in the future. Let me know if you also found this useful. I’m considering watching all the Innovation Sandbox contestants and doing something similar with those videos.
It seems like all this uncertainty should leave me with some dread around the lack of security for AI agents, but it doesn’t. While generative AI has evolved much more quickly than other technological breakthroughs, the reactive role of security remains the same. Technology changes and we do our best to keep up.
There’s some solace in the fact that breaches don’t kill companies, but failing to keep up in competitive markets does.