Your Users Aren’t the Weak Link. They’re the Wrong Metric.


I’ve spent most of my career trying to solve the same problem: how do you protect the human in the loop when the threats move faster than humans can think? For years, we answered that question with training. Then with awareness. Then with simulations. And for a while, those answers were good enough.

They’re not anymore.

We’ve entered a phase where AI isn’t just accelerating phishing; it’s fundamentally changing the nature of social engineering. Agentic AI systems can now research a target, craft a pretext, generate a convincing email, adapt when ignored, and follow up with a new angle, all without a human operator. These aren’t attacks designed by people and delivered at scale. They’re attacks designed by machines and delivered with precision.

And here’s what really bothers me: we’re still measuring “human risk” with frameworks built for a world where the attacker was human too.

I’ve talked with hundreds of CISOs over the past two years, and the ones who are thinking clearly about this recognize something uncomfortable: the industry’s entire approach to human risk was built on the assumption that attacks have tells. That if you look closely enough, you’ll see the seams. Agentic AI doesn’t leave seams. It learns what seams look like and eliminates them.

The Detection Gap No One Wants to Talk About

Every CISO I talk to understands this at a gut level, even if they haven’t named it yet. There’s a growing gap between what AI-generated attacks look like and what humans are trained to spot. The old tells (bad grammar, suspicious URLs, urgency without context) are disappearing. Today’s AI-crafted emails are grammatically perfect, contextually aware, and timed to land at the exact moment a target is most likely to act.

Let me be direct: you can’t train a human to detect what a well-tuned language model has been optimized to make undetectable. That’s not a failure of your people. It’s a failure of the paradigm.

The traditional security awareness model was designed around a straightforward premise: teach people what to look for, test them periodically, and measure click rates. That model assumes the threat is static enough that pattern recognition works. But agentic AI doesn’t repeat patterns. It generates novel patterns. Every attack is bespoke. Every pretext is tailored. And the feedback loop is instant: if one approach doesn’t work, the system pivots autonomously.

Think about what that means practically. A finance manager gets an email that references a real invoice number, uses the CFO’s actual writing style, mentions a project discussed in last week’s all-hands, and arrives at 4:47 PM on a Thursday when the target is most likely rushing to close out their week. No training module prepares someone for that. The signal-to-noise ratio has been weaponized.

From “Human Error” to “Human Exposure”

We need to stop framing this as a human error problem and start framing it as a human exposure problem. The distinction matters.

“Human error” implies your people failed. It puts the burden of detection on the individual and assumes that with enough training, they’ll get it right. “Human exposure” acknowledges that your people are operating in an environment where the adversary has a structural advantage and that the system around them needs to adapt, not just the person.

And let’s be honest about how we got here. For years, email security vendors have used security awareness training as a crutch, a convenient rationale for why threats still get through. The implicit message was: “Our gateway caught what it could. The rest is a people problem. Go train your users.” That framing let vendors off the hook for detection failures and shifted accountability to the CISO’s training budget. It was always a deflection, but it was a tolerable one when the threats were simpler. Now that attackers have AI generating flawless pretexts at scale, the “just train your people harder” argument isn’t just inadequate. It’s negligent.

This is what CISOs with limited budgets and stretched teams need to hear. The goal isn’t to build a perfect human firewall. The goal is to reduce the surface area where human judgment is the last line of defense against machine-scale attacks. That means moving from a posture of “train and hope” to one of “detect, augment, and intervene in real time.”

Here’s a simple analogy: we don’t hand passengers a checklist and ask them to screen their own luggage. We built the scanner into the process. People walk through it, the system does the detection, and only exceptions get escalated to a human. That’s what managing human exposure means, building the detection into the workflow so your people aren’t left making split-second judgment calls on every message that hits their inbox.

What Agentic AI Actually Means for the Human Layer

There’s a lot of noise around “agentic AI” right now, and most of it is marketing. Let me cut through it.

On the attack side, agentic AI means autonomous systems that can execute multi-step social engineering campaigns without human intervention. They don’t just generate a phishing email, they research the target on LinkedIn, identify the reporting structure, find recent company announcements to reference, craft a context-perfect pretext, and adapt their approach based on whether the target engages or ignores the first attempt. This is not theoretical. This is happening now.

On the defense side, agentic AI means something equally important: the ability to reason about intent, not just detect payloads. Traditional email security asks, “Does this URL match a known bad list?” Agentic defense asks, “Why is this message trying to move this person to take this action at this moment?” That’s a fundamentally different question, and it’s the one that matters when the payload looks perfectly clean.
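To make that contrast concrete, here is a toy sketch of the two questions in Python. Nothing here is a real product API: the blocklist, field names, and heuristic are all invented for illustration.

```python
# Toy illustration of payload matching vs. intent reasoning. The blocklist,
# message fields, and heuristic are hypothetical, not any vendor's API.

KNOWN_BAD_URLS = {"http://malicious.example/login"}  # hypothetical blocklist

def payload_check(urls):
    """Traditional question: does any URL match a known-bad list?"""
    return any(u in KNOWN_BAD_URLS for u in urls)

def intent_check(message):
    """Agentic question: why is this message pushing this person to act now?
    Toy heuristic: a first-seen sender asking for a financial action under
    time pressure is suspicious even when every URL is clean."""
    return (
        message["requests_financial_action"]
        and message["applies_time_pressure"]
        and not message["sender_previously_seen"]
    )

# A clean-payload, bad-intent message: payload matching passes it,
# intent reasoning flags it.
msg = {
    "requests_financial_action": True,
    "applies_time_pressure": True,
    "sender_previously_seen": False,
}
print(payload_check([]), intent_check(msg))  # -> False True
```

The point is the shape of the question, not the particular heuristic: intent-level reasoning evaluates the action being requested, not the artifacts attached to it.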

For the CISO evaluating tools and strategies, the critical question isn’t “Does your AI look for bad payloads?” It’s “Does your AI reason about bad intent?” If a vendor can’t explain how their system understands attacker intent at a semantic level, not just matching signatures, they’re selling you yesterday’s architecture with a new label.

I’ll go further. If someone hands you a risk score without being able to explain the reasoning behind it in plain language, they’re selling you a black box. And black boxes don’t survive board conversations or audit cycles. CISOs need explainability, not just for compliance, but because the humans in the loop need to understand why a message was flagged to actually learn from the intervention.

A New Framework: The Human Exposure Maturity Model

I’ve been working on a way to help security leaders assess where they stand and what needs to change. I’m calling it the Human Exposure Maturity Model: four levels that map how organizations think about, measure, and protect the human layer.

Level 1, Reactive: annual training, phish-click rates, and a blame-the-user culture.

Level 2, Aware: better simulations and reporting tools, but still fundamentally reactive to novel threats.

Level 3, Adaptive: AI that reasons about intent at the point of delivery, surfaces real-time context to the user, and measures risk per person rather than per campaign.

Level 4, Autonomous: agentic defense that reasons about threats before they reach the inbox, with the human layer protected rather than relied upon as the last sensor.

This isn’t about perfection. It’s about knowing where you are so you can take the right next step.

Most organizations I talk to are somewhere between Level 1 and Level 2. They’ve invested in awareness but haven’t yet shifted to a posture where the system protects the human, rather than relying on the human to protect the system. The jump to Level 3 is where the ROI inflection point lives, and it’s where AI becomes a force multiplier instead of a buzzword.

Quick gut check: If you still measure human risk primarily by click rates, you’re at Level 1 or 2. If your email security can’t explain why it flagged a message in plain business language, you haven’t reached Level 3. And if your defense doesn’t understand the target’s role, communication patterns, and organizational context, Level 4 is still on the horizon.
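For readers who like their gut checks mechanical, the diagnostics above can be sketched as a tiny self-assessment. This is an informal illustration of the model, not an official scoring tool; the function name and inputs are mine.

```python
def human_exposure_level(measures_beyond_click_rates,
                         explains_flags_in_plain_language,
                         models_role_and_context):
    """Map the gut-check questions to a rough maturity level (illustrative)."""
    if not measures_beyond_click_rates:
        return "Level 1 or 2"   # human risk still measured by click rates
    if not explains_flags_in_plain_language:
        return "Level 2"        # detection works, but it's a black box
    if not models_role_and_context:
        return "Level 3"        # intent reasoning, not yet pre-delivery context
    return "Level 4"            # context-aware, autonomous defense

# Example: click rates are still the primary metric.
print(human_exposure_level(False, False, False))  # -> Level 1 or 2
```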

How to Use This Model

If you’re a CISO reading this, here’s what I’d suggest. First, be honest about where you are. If your human risk strategy is still anchored in annual training and phish-click rates, you’re at Level 1, and that’s okay, but you need to move. Second, evaluate your vendors against this framework. Ask them not just what they detect, but how they reason about threats and how they reduce the cognitive burden on your people. Third, reframe the conversation with your board. Stop reporting click rates and start reporting human exposure: how many of your people were targeted by AI-generated attacks, how many were protected by automated intervention, and how that ratio is changing over time.

The board doesn’t need to know your click rate. They need to know your exposure rate, and whether it’s going up or down.
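As a sketch of the arithmetic, the exposure rate a board could track might look like the following. The metric definition here is my own simplification, not a standard.

```python
def exposure_rate(targeted, auto_protected):
    """Share of targeted people NOT covered by automated intervention.
    Both inputs are counts over the same reporting period (illustrative)."""
    if targeted == 0:
        return 0.0  # nobody targeted, nothing exposed
    return (targeted - auto_protected) / targeted

# 200 people targeted this quarter, 180 protected automatically:
print(exposure_rate(200, 180))  # -> 0.1
```

Trending that number down quarter over quarter is the “exposure rate going down” conversation the board actually needs.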

Final Thoughts

We built the security awareness industry on a belief that if you educate people, they’ll make better decisions. That belief wasn’t wrong, it was incomplete. In an era where the adversary is a machine that never tires, never repeats itself, and adapts in real time, the human can’t be the primary sensor. They have to be the protected asset.

The organizations that figure this out first won’t just reduce their phishing risk. They’ll redefine what it means to manage human risk in the age of AI. And that’s not just a security win, it’s a strategic one.

The question isn’t whether AI will change how we think about the human layer. It already has. The question is whether your defense has caught up.

If you’re ready to assess where your organization stands on the Human Exposure Maturity Model or want to continue this conversation, I’d welcome the dialogue. This is the defining challenge of our field right now, and the leaders who lean in early will set the standard for everyone else.

Frequently Asked Questions (FAQs)

Q1: What is the difference between "human error" and "human exposure" in cybersecurity?

Human error puts the blame on the individual — the implication being that better training would have prevented the breach. Human exposure is a more accurate framing: it acknowledges that your people are operating in an environment where the adversary has a structural advantage. When an AI system has researched your CFO's writing style, knows what projects your team discussed last week, and has timed its message to arrive when the target is most distracted — that is not a failure of awareness. That is a failure of the system around the human. The distinction matters because it changes where you invest: in training people to do the impossible, or in building detection that removes the burden from them entirely.

Q2: Why isn't security awareness training enough anymore?

Traditional awareness training was built on a reasonable assumption: threats have patterns, and if you teach people what patterns to look for, they'll catch them. That assumption no longer holds. Agentic AI doesn't repeat patterns; it generates novel ones. Every attack is bespoke. Every pretext is tailored to the specific target, organization, and moment. You cannot train a human to detect what a well-tuned language model has been optimized to make undetectable. That's not a criticism of your people. It's a recognition that the paradigm has been overtaken by the technology it was designed to address.

Q3: What is agentic AI, and how is it different from regular AI-assisted phishing?

Most people have heard about AI being used to write more convincing phishing emails. Agentic AI goes significantly further. An agentic system doesn't just generate a message — it researches the target on LinkedIn, maps the reporting structure, finds recent company announcements to reference, crafts a contextually accurate pretext, sends the message, and then autonomously adapts its approach if the first attempt is ignored. There is no human operator directing each step. The entire campaign — research, crafting, delivery, follow-up, adaptation — runs without human intervention. This is not a future capability. It is operational now.

Q4: What is the Human Exposure Maturity Model?

It is a four-level framework for assessing how an organization thinks about and protects the human layer against AI-generated threats. Level 1 is reactive: annual training, phish-click rates, the blame-the-user culture. Level 2 is aware: better simulations and reporting tools, but still fundamentally reactive to novel threats. Level 3 is adaptive: AI reasoning about intent at the point of delivery, real-time context surfaced to the user, risk measured per person rather than per campaign. Level 4 is autonomous: agentic defense that reasons about threats before they reach the inbox, with the human layer protected rather than relied upon as the last sensor. Most organizations today are between Level 1 and Level 2. The ROI inflection point is the move to Level 3.

Q5: How do I know what level my organisation is at?

Two quick diagnostics. First: how do you primarily measure human risk? If the answer is phish-click rates, you are at Level 1 or Level 2. Second: can your email security explain why it flagged a specific message in plain business language (not a confidence score, but an actual explanation of intent and context)? If not, you have not reached Level 3. The test for Level 4 is whether your defense understands the target's role, communication patterns, and organizational context well enough to reason about risk before delivery, not just after a user clicks.

Q6: What should I be reporting to the board instead of click rates?

Report human exposure: the proportion of your people who were targeted by AI-generated attacks, how many were protected by automated intervention, and whether that ratio is improving over time. The board does not need to know your click rate. They need to know your exposure rate and whether it is going up or down. Click rates measure how well your training worked. Exposure rates measure how much risk your organization is actually carrying. Those are fundamentally different conversations, and only one of them is strategically meaningful at the board level.

Q7: Is the goal to remove humans from email security decisions entirely?

No, and that framing misses the point. The goal is to stop asking humans to be the primary sensor for machine-scale attacks. Your people are not weak links. They are being asked to do something structurally unreasonable: make accurate threat assessments in seconds, at scale, against adversaries that have spent compute cycles specifically optimizing to fool them. The right model is the airport security analogy: we do not hand passengers a checklist and ask them to screen their own luggage. We built the scanner into the process. Human judgment is still essential, but it should be applied to exceptions and decisions, not to every email that hits every inbox.
