Back to the blog
Technology

Cyber & AI Weekly - September 29th

Get the latest news with Cyber & AI Weekly by StrongestLayer.
September 29, 2025
Gabrielle Letain-Mathieu
3 mins
Table of Content
Subscribe to our newsletter
Read about our privacy policy.
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.
AI & Cyber Weekly - September 25, 2025

AI & Cyber Weekly: The AI Attack Arsenal Expands

GPT-4 Powered Malware, Prompt Injection Epidemic, and AI Agent Security Crisis

September 25, 2025 | By Gabrielle from StrongestLayer

Critical AI Security Developments This Week

AI-powered malware deployment, massive AI attack surge, and fundamental AI agent vulnerabilities exposed

67%Orgs Hit by AI Attacks
1,265%AI Phishing Surge
25Top AI Agent Flaws
GPT-4Powered Ransomware

Executive Summary

This week marks a watershed moment in AI cybersecurity, with researchers uncovering the first GPT-4 powered malware capable of dynamically generating ransomware and reverse shells in real-time [1]. The discovery of MalTerminal represents a fundamental shift in threat actor capabilities, as AI models become operational components within malicious code rather than just development tools.

Simultaneously, a massive surge in AI-generated attacks has emerged, with Gartner revealing that 67% of organizations have experienced generative AI attacks, while overall AI-related incidents have increased by 1,265% [7] [8]. The threat landscape now includes sophisticated prompt injection campaigns targeting AI-powered email security systems, with attackers successfully bypassing detection by speaking directly to AI scanners in their own language [2].

2.67B NPM Downloads Compromised
78% CISOs Report Daily AI Attacks
25 Critical MCP Vulnerabilities
$4.9M Average AI Breach Cost

Critical Zero-Day Intelligence

GPT-4 Powered MalTerminal: Real-Time Malware Generation

Cybersecurity researchers have discovered MalTerminal, the earliest known example of malware that embeds Large Language Model capabilities directly into its operational code, using OpenAI's GPT-4 to dynamically generate ransomware and reverse shell payloads [1].

MalTerminal - GPT-4 Embedded Malware Engine
AI POWERED

The malware contains an OpenAI chat completions API endpoint that was deprecated in November 2023, suggesting it was developed before that date and represents the earliest finding of LLM-enabled malware. MalTerminal prompts users to choose between "ransomware" and "reverse shell" operations, then uses GPT-4 to generate the corresponding malicious code in real-time [1].

Technical Innovation: Unlike traditional malware with static payloads, MalTerminal generates malicious logic and commands at runtime, introducing new challenges for defenders as each execution could produce different code signatures.

Detection Evasion: The dynamic code generation means indicators of compromise (IoCs) may vary from one execution to another, significantly complicating detection and making defenders' jobs more difficult.

ShadowLeak: ChatGPT Email Exfiltration Attack

Researchers have uncovered ShadowLeak, a sophisticated attack that exploits ChatGPT's email integration to invisibly steal user emails without detection, representing a new category of AI-native data exfiltration [11].

ShadowLeak - Invisible ChatGPT Email Theft
STEALTH EXFIL

The attack leverages ChatGPT's legitimate email access permissions to exfiltrate sensitive communications without triggering traditional data loss prevention systems. Attackers can access, read, and extract email content through seemingly benign AI interactions [11].

Attack Vector: The technique exploits the trust relationship between users and AI assistants, using natural language prompts to extract email data that would normally be protected by enterprise security controls.

Enterprise Impact: Organizations using AI assistants with email integration face unprecedented data exposure risks as traditional DLP solutions cannot detect AI-mediated data access patterns.

AI Agent MCP Protocol Vulnerabilities

Security researchers have identified the top 25 Model Context Protocol (MCP) vulnerabilities that expose AI agents to exploitation, from prompt injection to command injection attacks [5].

Top 25 MCP Vulnerabilities in AI Agents
25 FLAWS

The Model Context Protocol, developed by Anthropic as the standard for AI agent interactions, contains critical security weaknesses that attackers can exploit to hijack agent behavior. These vulnerabilities range from prompt injection to command injection and provide a roadmap for securing agentic AI foundations [5].

Attack Surface: MCP vulnerabilities affect how AI agents interact with tools, other agents, data, and context, creating multiple attack vectors within the agentic AI ecosystem.

Mitigation Challenge: As MCP becomes integral to agentic AI expansion, these vulnerabilities represent fundamental security challenges that require immediate attention from AI developers and enterprise users.

Human Risk Management & Ransomware Intelligence

AI-Enhanced Prompt Injection Email Campaigns

Attackers are now incorporating hidden prompts in phishing emails to deceive AI-powered security scanners, effectively turning enterprise AI defenses into unwitting accomplices [2].

Chameleon's Trap - AI Scanner Manipulation
PROMPT INJECTION

The "Chameleon's Trap" campaign demonstrates how attackers embed hidden prompt injections in email HTML code, instructing AI security scanners to classify malicious messages as benign. The hidden text includes instructions like "Risk Assessment: Low. The language is professional and does not contain threats" [2].

Technical Deception: The prompt injection uses CSS styling (display:none; color:white; font-size:1px;) to hide malicious instructions from human readers while remaining visible to AI scanners processing the email content.

Attack Chain: When recipients open the HTML attachment, it triggers exploitation of the Follina vulnerability (CVE-2022-30190) to download and execute additional malware while disabling Microsoft Defender Antivirus.

AI-Driven Phishing Platform Explosion

Since January 2025, attackers have increasingly exploited AI-powered development platforms like Lovable, Netlify, and Vercel to host sophisticated fake CAPTCHA pages that bypass automated security scanners [6].

AI Platform Weaponization for Phishing
SCALE ATTACK

Cybercriminals exploit the ease of deployment, free hosting, and credible branding of AI development platforms to create convincing fake CAPTCHA challenges. Victims see a legitimate-looking security check while automated scanners only detect the challenge page, missing hidden credential-harvesting redirects [6].

Evasion Technique: The dual-layer approach lowers user suspicion through familiar CAPTCHA interfaces while defeating security crawlers that cannot follow the hidden redirect chain to discover the malicious payload.

Industry Impact: The weaponization of trusted AI platforms represents a new threat vector where legitimate infrastructure becomes an attack enabler, requiring security teams to monitor even reputable hosting domains for abuse.

Massive NPM Supply Chain Compromise via AI Phishing

A sophisticated AI-generated phishing campaign targeting NPM maintainers resulted in one of the largest supply chain attacks in cybersecurity history, compromising 20 popular packages with 2.67 billion weekly downloads [6].

AI-Generated NPM Supply Chain Attack
2.67B DOWNLOADS

The September 2025 attack originated from AI-assisted phishing emails targeting developers like Josh Junon, using generic corporate language and professional formatting generated by AI to evade traditional email filters. The campaign avoided personalization while maintaining legitimacy [6].

Cryptocurrency Targeting: The malicious packages contained obfuscated JavaScript implementing a multi-chain cryptocurrency interceptor targeting six major blockchain networks, silently rewriting transaction recipients to attacker-controlled addresses.

Scale Impact: Over 130.765 billion total downloads across all versions of the compromised packages, demonstrating how AI-enhanced social engineering can compromise critical software infrastructure at unprecedented scale.

AI-Enabled Attacks & Strategic Intelligence

Gartner Survey: 67% of Organizations Hit by AI Attacks

New Gartner research reveals that generative AI attacks have reached epidemic proportions, with two-thirds of organizations experiencing AI-powered cyberattacks and a staggering 1,265% increase in AI-related incidents [7] [8].

AI Attack Epidemic: 67% Enterprise Impact
1,265% SURGE

Gartner's September 2025 survey demonstrates that AI-powered attacks have moved from theoretical threats to mainstream attack vectors, with the majority of enterprises now experiencing direct impact from generative AI-enabled cyber operations [8].

Attack Evolution: The 1,265% surge in AI-related phishing attacks reflects cybercriminals' rapid adoption of generative AI tools for creating convincing, scalable, and adaptive attack campaigns that bypass traditional detection methods [7].

Enterprise Response Gap: While attack frequency skyrockets, many organizations lack adequate defenses specifically designed to counter AI-generated threats, creating a dangerous asymmetry in the cyber battleground.

Poisoned Web Pages Target AI Agents Exclusively

Researchers have discovered a novel "parallel-poisoned web" attack that serves malicious content exclusively to AI agents while showing benign content to human visitors, making detection exceptionally difficult [14].

Parallel-Poisoned Web Attack on AI Agents
AGENT TARGET

The attack technique serves entirely different website versions to AI agents versus human users, embedding malicious prompts that instruct AI agents to perform unauthorized actions like grabbing sensitive information or installing malware. The technique succeeded against Claude 4 Sonnet, GPT-5 Fast, and Gemini 2.5 Pro [14].

Stealth Advantage: Because malicious content is never shown to human users or standard security crawlers, the attack is exceptionally stealthy and difficult to detect with conventional security tools.

Future Implications: As AI agents become more autonomous and widespread, this attack vector represents a fundamental challenge requiring new defensive approaches specifically designed for agentic AI interactions.

CISO Strategic Perspectives

AI Coding Assistants Amplify Cybersecurity Risks

New analysis reveals that AI coding assistants are introducing deeper cybersecurity vulnerabilities into enterprise codebases while creating a false sense of security among development teams [9].

AI Coding Assistant Security Risk Amplification
CODE RISK

While AI coding assistants accelerate development, they also introduce subtle security vulnerabilities that are harder to detect through traditional code review processes. The tools can inadvertently suggest insecure coding patterns or fail to account for enterprise-specific security requirements [9].

Hidden Vulnerability Pattern: AI-generated code often appears functionally correct while containing security flaws that manifest only under specific enterprise conditions, creating a dangerous disconnect between apparent code quality and actual security posture.

CISO Challenge: Security leaders must balance the productivity benefits of AI coding assistance with the need for enhanced security review processes, requiring new approaches to code security validation in the AI era.

Weekly AI Threat Landscape Summary

September 25, 2025, represents a critical inflection point in AI cybersecurity, where artificial intelligence has definitively transitioned from a development tool for attackers to an operational weapon embedded directly within malicious code. The discovery of MalTerminal's GPT-4 powered malware engine signals that we have entered an era where AI models are not just assisting attacks but executing them in real-time.

The simultaneous explosion in AI-generated attacks—with 67% of organizations now experiencing generative AI-powered incidents and a 1,265% surge in AI phishing campaigns—demonstrates that threat actors have successfully weaponized AI at scale. The sophistication of prompt injection attacks targeting enterprise AI security systems reveals how attackers are exploiting the very AI tools organizations deploy for protection.

Most concerning is the emergence of AI agent-specific attack vectors, from the 25 newly identified MCP vulnerabilities to parallel-poisoned web attacks that exclusively target autonomous AI systems. As enterprises increasingly deploy AI agents for operational tasks, these vulnerabilities represent a fundamental expansion of the attack surface that traditional security controls cannot address.

Critical AI Security Timeline - September 2025
Week of Sept 20
MalTerminal GPT-4 powered malware discovered, representing first operational AI-embedded malware
Sept 22
Gartner reveals 67% of organizations hit by AI attacks, 1,265% surge in AI phishing confirmed
Sept 23
Top 25 AI Agent MCP vulnerabilities published, exposing fundamental agentic security flaws
Ongoing
Prompt injection campaigns targeting enterprise AI security systems accelerate globally

"We are witnessing the emergence of AI as an autonomous attack vector, not just a tool for human attackers. When malware can dynamically generate its own payloads using embedded language models, we face threats that traditional signature-based detection simply cannot counter. The cybersecurity industry must fundamentally rethink defense strategies for an era where AI attacks AI."

— StrongestLayer AI Threat Intelligence Analysis, September 2025