Legacy with Lipstick: Why Most "AI Security" Will Fail in 2026 (Feat. CTO Riz)


The bad guys aren't writing scripts anymore; they are training agents.

As we enter 2026, the gap between "secure" and "breached" is no longer about who has the best firewall—it’s about who has the best reasoning.

We sat down with Riz Muhammad Rizwan, CTO of StrongestLayer, to decode the noise. He didn't hold back. From the rise of "Calendar Hijacking" to the hard truth about why your vendor's "AI feature" might just be a marketing sticker, here is his unvarnished take on what's coming for your inbox this year.

1. The Era of "Agentic AI" Attacks

Q: Everyone talks about AI text, but what about AI agents? Are we going to see autonomous bots that can "think" and adapt mid-attack in 2026?

⚡ The Short Answer 

Yes. We are moving from AI-assisted to AI-autonomous attacks. Attackers are deploying agents that can spend weeks building trust and adapting their tactics in real-time without human intervention.

The Deep Dive

"The real question isn't 'will we see this?'," says Riz. "It's 'how do we defend when attackers use AI that learns faster than we can write rules?'"

Riz explains that an AI agent can now run thousands of parallel campaigns simultaneously at near-zero cost. Unlike a static phishing script, these agents adapt. If a target asks a question, the agent replies with context.

"You fight AI with AI. We built our detection to ask 'what is this trying to accomplish?' not 'have we seen this before?' Because when every attack is novel, pattern matching is already dead."

2. Deepfakes: Why "Identity" is Dead

Q: If deepfakes can now fool voice and video checks, how do we prove identity? Is "behavior" the only thing left we can trust?

⚡ The Short Answer 

Identity verification is failing. Security must shift to Contextual Risk Assessment. Even if the voice is real, the request (e.g., a $500k wire transfer) might be the attack.

The Deep Dive

"We need to reframe the question," Riz argues. "Instead of asking 'is this really Bob?', ask 'even if this IS Bob, should he be requesting a $500K wire transfer to a new vendor account?'"

StrongestLayer verifies using four dimensions:

  1. Sender Trust
  2. Content Analysis
  3. Recipient Risk
  4. Action Legitimacy

🛑 The Key Takeaway: A deepfake might pass a biometric check, but it fails the Action Legitimacy check. If the urgency is manufactured or the recipient lacks authority, the request is blocked—regardless of whose face is on the video.
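To make the idea concrete, here is a minimal sketch of how a four-dimension contextual check could veto a request. The four dimension names come from the interview; the scores, threshold, and function names are illustrative assumptions, not StrongestLayer's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Request:
    sender_trust: float    # 0-1: reputation/history of the sender
    content_risk: float    # 0-1: manufactured urgency, payment language
    recipient_risk: float  # 0-1: does the recipient control money/credentials?
    action_risk: float     # 0-1: how unusual is the requested action?

def assess(req: Request, block_threshold: float = 0.6) -> str:
    """Score a request across four dimensions. Action legitimacy is a
    hard gate: a perfect deepfake (high sender trust) still fails here."""
    # A $500K wire to a brand-new vendor account is blocked even if
    # the voice and face are genuinely "Bob's".
    if req.action_risk >= block_threshold:
        return "BLOCK"
    # Otherwise, high content/recipient risk is discounted by sender trust.
    combined = max(req.content_risk, req.recipient_risk) * (1 - req.sender_trust)
    return "BLOCK" if combined >= block_threshold else "ALLOW"

# A convincing deepfake: trusted sender, but an illegitimate action.
deepfake = Request(sender_trust=0.9, content_risk=0.8,
                   recipient_risk=0.7, action_risk=0.95)
print(assess(deepfake))  # BLOCK: fails the action-legitimacy gate
```

The design point is the hard gate: no amount of identity confidence can compensate for an out-of-policy action, which is exactly why a flawless deepfake still gets stopped.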

3. The Hidden "Blind Spot": Supply Chains

Q: Beyond standard email phishing, what is the biggest communication security gap companies are ignoring right now?

⚡ The Short Answer 

Supply chain communications. Companies trust their vendors implicitly, but attackers are compromising vendor accounts to bypass perimeter defenses.

The Deep Dive

"Companies spend millions securing their perimeter but treat vendor communications as inherently trusted," Riz notes.

He shares a chilling example from a recent customer audit:

"We caught this at a customer – 347 threats in 10 days, several from legitimate vendor accounts that were compromised."

These attacks passed traditional security gateways because the sender was trusted. The fix? Verify every request. Ask "Is this normal for this relationship?" rather than just "Is this sender legitimate?"
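The "is this normal for this relationship?" check boils down to keeping a per-vendor baseline of observed behavior. Here is a toy sketch of that idea; the class, vendor names, and action labels are hypothetical, not a real product API.

```python
from collections import defaultdict

class RelationshipBaseline:
    """Track what 'normal' looks like per vendor relationship, so a
    compromised-but-legitimate vendor account still raises a flag when
    it requests something it has never requested before."""

    def __init__(self):
        self.seen_actions = defaultdict(set)  # vendor -> actions observed

    def observe(self, vendor: str, action: str) -> None:
        self.seen_actions[vendor].add(action)

    def is_normal(self, vendor: str, action: str) -> bool:
        return action in self.seen_actions[vendor]

baseline = RelationshipBaseline()
baseline.observe("acme-supplies", "invoice")        # monthly invoices: normal
baseline.observe("acme-supplies", "order-confirm")  # also routine

# The sender really is acme-supplies, but the *request* is new.
print(baseline.is_normal("acme-supplies", "change-bank-details"))  # False
```

A sender-reputation gateway would wave this message through; a relationship baseline flags it because the request, not the sender, is the anomaly.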

4. The Sleeper Threat: Calendar Hijacking

Q: What is one dangerous attack vector that nobody is talking about yet, but will be huge by the end of the year?

⚡ The Short Answer 

Calendar Hijacking. Attackers are modifying legitimate meeting invites to insert malicious Zoom links or weaponized attachments (SVGs).

The Deep Dive

This is the prediction that should keep CISOs up at night. Attackers are bypassing email filters by targeting the calendar directly.

"Trust is assumed in a calendar invite," Riz explains. "Hybrid work has normalized clicking meeting links."

Attackers are now:

  • Changing Zoom/Teams links to phishing sites.
  • Adding malicious dial-in numbers.
  • Embedding exploits in meeting attachments (like SVGs).

"We're already catching malicious SVGs in meeting invites. Nobody's scanning for this yet."
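A first line of defense is simply inspecting invites before they reach the calendar. The sketch below scans raw iCalendar text for links outside a trusted-host allowlist and for SVG attachments; the allowlist, regex, and sample invite are illustrative assumptions, not StrongestLayer's detection logic.

```python
import re

# Hosts your organization actually uses for meetings (assumed allowlist).
TRUSTED_MEETING_HOSTS = {"zoom.us", "teams.microsoft.com", "meet.google.com"}

def scan_invite(ics_text: str) -> list[str]:
    """Flag suspicious content in a raw iCalendar invite:
    meeting links on untrusted hosts, and SVG attachments
    (SVGs can carry embedded scripts)."""
    findings = []
    # Pull the host out of every URL in the invite body.
    for host in re.findall(r"https?://([^\s/\"'>]+)", ics_text):
        host = host.lower()
        if not any(host == t or host.endswith("." + t)
                   for t in TRUSTED_MEETING_HOSTS):
            findings.append(f"untrusted link host: {host}")
    # ATTACH properties reference inline or linked files.
    for line in ics_text.splitlines():
        if line.upper().startswith("ATTACH") and ".svg" in line.lower():
            findings.append(f"SVG attachment: {line.strip()}")
    return findings

invite = """BEGIN:VEVENT
SUMMARY:Q3 Planning
DESCRIPTION:Join here: https://zoom-us.meeting-login.net/j/123
ATTACH;FMTTYPE=image/svg+xml:https://cdn.example.net/agenda.svg
END:VEVENT"""
for finding in scan_invite(invite):
    print(finding)
```

Note the lookalike domain in the example: `zoom-us.meeting-login.net` is not a subdomain of `zoom.us`, so an exact-or-subdomain match catches it where a naive substring check would not.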

5. The Hot Take: "Legacy with Lipstick"

Q: What is one cybersecurity buzzword that is overrated and needs to die in 2026?

⚡ The Short Answer 

"'AI-Powered'. Most vendors are just using 2018 machine learning with new labels. If it can't reason, it's not AI."

The Deep Dive 

Riz doesn't mince words here.

"Real AI-native security uses reasoning models that understand context and intent, not pattern matching," he says. "If your product was built before GPT-4 and you added an AI module, you're not AI-native — you're legacy with lipstick."

Riz’s Challenge to Vendors: Ask them, "Show me how your AI handles a novel attack it's never seen." Most can't answer, because they are still just matching patterns with extra steps.

Final Thoughts: Don't Rely on "Legacy" Defense

Riz’s warnings—Agentic AI, Calendar Hijacking, and Deepfakes—aren't sci-fi. They are showing up in server logs right now.

As Riz put it: "When every attack is novel, pattern matching is dead."

Talk To Us

Don’t let legacy tools leave you exposed.

Tomorrow's Threats. Stopped Today.
