The Stryker attack on 14 March will be studied for a long time — not because of how it was executed, but because of what it made visible.
No malware. No zero-day. No novel technique. Pro-Iranian hacktivist group Handala gained access to Stryker’s Microsoft cloud management plane and used it to wipe every device it could reach. Tens of thousands of endpoints. Manufacturing disrupted. Electronic ordering offline. 56,000 employees told to stay disconnected.
The lesson most security commentators have drawn: cloud environments are vulnerable to living-off-the-land attacks. True. Also incomplete.
The more important lesson: Stryker spent years centralising on Microsoft’s cloud ecosystem. That centralisation was a deliberate, rational, well-governed decision. It reduced complexity. It lowered operating cost. It gave the IT team unified visibility and control.
It also created a single administrative plane that, once accessed, could touch every endpoint simultaneously.
The blast radius was not a failure of security controls. It was a consequence of architecture.
What Cloud Centralisation Actually Buys
Cloud transformation programmes have been sold — accurately — on efficiency, agility, and cost. Fewer silos. Unified management. Reduced technical debt. Faster deployment cycles. These are real benefits, and organisations that achieved them paid real costs to get there.
What the transformation narratives consistently underestimated was the shift in security topology that came with it.
On-premises infrastructure has always been inherently fragmented. Active Directory forests that don’t talk to each other. Subnets with different administrative controls. Physical segmentation that limits lateral movement. Some of this was intentional security architecture; most of it was the by-product of organic, messy growth. But that messiness had a property that centralised cloud environments don’t: an attacker who compromised one segment couldn’t automatically reach everything else.
Centralised cloud management inverts this. The administrative plane — the layer that provisions devices, pushes configuration, manages identity — is now unified, accessible, and in the case of Microsoft Intune or similar platforms, capable of executing mass actions across the entire device fleet in minutes.
This is, again, a feature. It is also a concentrated point of failure.
The Stryker attack was not a sophisticated exploitation of this architecture. It was a straightforward use of it. Whoever held the access could reach every device. They chose to wipe rather than exfiltrate or encrypt. The choice reveals a different adversary calculus — disruption over ransom — but the capability was built into the environment long before Handala arrived.
The AI Acceleration Layer
Simultaneously, Microsoft published threat intelligence this week that frames a separate but convergent problem.
Generative AI is now embedded across the attack lifecycle — reconnaissance, phishing, malware generation, evasion. This is not a warning about what might happen. It is a description of what is already happening. The report identifies specific examples: AI-summarised OSINT for targeted reconnaissance, hyper-personalised phishing lures that evade standard detection, AI-generated and obfuscated malware code, AI-assisted infrastructure setup.
The consensus response to this is some version of “defenders need AI too.” That is true and misses the point.
The more important observation is structural: the cost of mounting sophisticated, targeted attacks has collapsed.
A threat actor who previously needed months of specialist training and a capable team to execute a well-crafted spear-phishing campaign against a specific executive can now approximate that capability via a custom GPT. The skill floor has lowered. The volume ceiling has risen. And critically, the gap between commodity-level attackers and nation-state-level techniques has narrowed in ways that don’t show up in traditional threat modelling.
This matters for how you calibrate your threat model. Most enterprise security programmes are calibrated against adversaries with realistic capability constraints. Those constraints are eroding. Updating your threat model is not optional — it is overdue.
Shadow AI: The Blind Spot You Built Yourself
While the external threat environment has shifted, the internal visibility problem has quietly compounded.
Shadow AI — the unauthorised AI tools employees are using outside sanctioned channels — is not a new risk category. But its scale and character have changed.
Employees are not using shadow AI maliciously. They are using it because it works. ChatGPT is faster than the approved internal tool. The Claude API integration someone built in their browser is more useful than the procurement-approved alternative. A developer’s private GitHub Copilot subscription gets code reviewed at a speed their enterprise tool often can’t match.
The data governance problem is a structural one. Every shadow AI session is a potential unmonitored data flow. Employees are pasting internal documents, client data, strategic plans, and code into models that sit outside your data governance boundaries. You don’t know which models. You don’t know what data. Your CASB was not designed to fingerprint LLM traffic at inference time. Your DLP rules were not written with prompt injection in mind.
Policy will not solve this. The tools are too useful and the friction is too low. An acceptable use policy reminder will have no measurable effect on shadow AI adoption. API traffic analysis to identify LLM endpoints will. The discovery-first approach is operationally uncomfortable, technically demanding, and structurally necessary. In that order.
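To make the discovery step concrete, here is a minimal sketch of what that traffic analysis can look like, assuming you can export egress proxy logs as a CSV with user and destination-host columns. The hostname list and field names are illustrative, not exhaustive, and will need adapting to your environment.

```python
# Minimal sketch: flag outbound requests to common LLM API endpoints in an
# egress proxy log export. Assumes a CSV with 'user' and 'dest_host' columns;
# adjust the field names to match your proxy's schema.
import csv
from collections import Counter

# Illustrative, not exhaustive: extend with vendors relevant to your environment.
LLM_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.mistral.ai",
    "api.cohere.ai",
}

def find_llm_traffic(log_path: str) -> Counter:
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("dest_host", "").lower()
            if any(host == h or host.endswith("." + h) for h in LLM_HOSTS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_llm_traffic("egress_proxy.csv").most_common(20):
        print(f"{user:30} {host:40} {count:>6} requests")
```

Even a crude pass like this tends to surface teams and data flows that the acceptable use policy never anticipated.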
Agentic AI: The Risk You Haven’t Inventoried
The Token Security CISO guidance published this week is technically correct and operationally insufficient.
Yes, organisations should inventory their AI agents. Yes, they should enforce least privilege on agent API access. Yes, they should monitor agent actions in real time. The guidance is sound.
The challenge it does not address: most organisations do not know how many AI agents are running.
The agents that matter for security purposes are not the ones that were procured as “AI agents.” They are the ones embedded in other procured systems. The procurement platform that was “upgraded” three months ago and now has an AI assistant with read access to your vendor contracts. The customer service orchestration layer that routes inquiries and has access to customer PII. The data pipeline automation tool that a developer built to summarise Salesforce records and now has an API key with broad permissions.
None of these appear in a centralised AI agent registry. They were not purchased as AI. They were purchased as features.
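One practical starting point, for organisations running on Microsoft Entra, is to enumerate which integrations already hold delegated permissions in the tenant and flag the broad grants for human review. The sketch below is exactly that and nothing more; it assumes an access token with Directory.Read.All, and the scope filter is a heuristic, not a definition of “AI agent”.

```python
# Minimal sketch: list service principals in a Microsoft Entra tenant together
# with the delegated scopes granted to them, and flag broad grants. Assumes an
# access token (GRAPH_TOKEN env var) with Directory.Read.All; error handling
# is omitted for brevity.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def get_all(token: str, path: str) -> list[dict]:
    """Follow @odata.nextLink paging and return every item."""
    url, items = f"{GRAPH}{path}", []
    headers = {"Authorization": f"Bearer {token}"}
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        data = resp.json()
        items.extend(data.get("value", []))
        url = data.get("@odata.nextLink")
    return items

def report(token: str) -> None:
    sps = {sp["id"]: sp for sp in get_all(token, "/servicePrincipals")}
    for grant in get_all(token, "/oauth2PermissionGrants"):
        name = sps.get(grant["clientId"], {}).get("displayName", "unknown app")
        scopes = grant.get("scope", "").split()
        # Broad standing access deserves a human look regardless of whether the
        # vendor markets the feature as "AI".
        if any(s.endswith(".Read.All") or s.endswith(".ReadWrite.All") for s in scopes):
            print(f"{name}: {', '.join(scopes)}")

if __name__ == "__main__":
    report(os.environ["GRAPH_TOKEN"])
```

The point is not that every flagged app is an agent. The point is that the list of integrations with broad standing access is where the uncatalogued agents hide.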
The GlassWorm supply-chain compromise illustrates the convergence clearly. GlassWorm poisoned 433 components across GitHub, npm, VSCode, and OpenVSX. If your AI coding agent autonomously pulls and deploys code dependencies without human review of each package, a poisoned dependency is not a developer’s mistake. It is an automated deployment. The efficiency you built is the propagation mechanism.
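If an agent is allowed to introduce dependencies, the review step can at least be enforced mechanically. Here is a minimal sketch of a CI gate, assuming an npm package-lock.json and a human-maintained allowlist file; both file names and the allowlist format are illustrative.

```python
# Minimal sketch: a CI gate that refuses to deploy if the lockfile contains a
# package@version that has not been through human review. Assumes a v2/v3
# package-lock.json (with a "packages" map) and an allowlist file of
# "name==version" lines.
import json
import sys

def load_allowlist(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip() for line in f if line.strip() and not line.startswith("#")}

def check_lockfile(lock_path: str, allowlist: set[str]) -> list[str]:
    with open(lock_path) as f:
        lock = json.load(f)
    unreviewed = []
    for pkg_path, meta in lock.get("packages", {}).items():
        if not pkg_path:  # the root entry has an empty key
            continue
        name = pkg_path.split("node_modules/")[-1]
        pin = f"{name}=={meta.get('version', '?')}"
        if pin not in allowlist:
            unreviewed.append(pin)
    return unreviewed

if __name__ == "__main__":
    missing = check_lockfile("package-lock.json", load_allowlist("reviewed-packages.txt"))
    if missing:
        print("Blocking deploy; unreviewed dependencies:")
        print("\n".join(f"  {p}" for p in missing))
        sys.exit(1)
```

The gate is deliberately dumb. Its value is that the agent cannot satisfy it on its own.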
The Operating Model Question
The security industry’s response to this confluence of risks has moved toward a single answer: add controls. More monitoring. More guardrails. More governance frameworks. Better tooling.
I understand the instinct. It is also the wrong frame.
Controls are necessary. They are not sufficient when the underlying problem is architectural. You cannot monitor your way out of a single administrative plane that touches every endpoint. You cannot policy your way out of shadow AI adoption that is faster and more useful than your approved alternative. You cannot inventory your way out of AI agents that were never catalogued as agents when they were procured.
The more useful question — and the one worth taking into your next board risk briefing — is a blast radius question:
If an attacker had the same access as your own administrative tooling, what could they reach, how fast, and with what operational impact?
Running that exercise honestly will surface more actionable intelligence than any compliance framework. And the answer, in most cloud-centralised environments, will be uncomfortable.
That discomfort is the correct starting point.
Board Questions
These are designed to be copy/paste ready. They reveal gaps without proposing solutions, and they work without technical context.
- If our cloud management plane were compromised today, how many devices could an attacker wipe before we detected and contained it?
- Which of our AI tools — including those embedded in vendor platforms — have access to sensitive data, and who reviews their actions?
- If employees are using unauthorised AI tools to process internal data, how would we know?
- What would a mass endpoint wipe look like in our environment, and have we ever simulated one?
What to Do With This
Three actions that are not on your vendor’s roadmap and are not in your last compliance report:
1. Map your administrative blast radius. Before your next board risk briefing, run a tabletop: if an attacker had admin access to your cloud management plane, how many devices could they touch in under 60 minutes? The Stryker precedent is not just a cautionary tale about Iran. It is a case study in blast radius, which is why you need to understand yours. A minimal enumeration sketch follows this list.
2. Inventory your AI agents before your AI agents inventory you. The actual exercise is harder than the Token Security checklist suggests: find every AI-enabled integration that touches your production environment, regardless of how it was procured. Agents provisioned as features, not as “AI.” That is where your blind spots are.
3. Treat shadow AI discovery as a security project, not a policy project. Run API traffic analysis to identify LLM endpoints in use across your environment. The discovery-first approach is operationally uncomfortable. It is structurally necessary.
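For the blast radius exercise in point 1, the enumeration step can be as simple as asking the management plane what it can see. A minimal sketch against the Microsoft Graph Intune endpoint, assuming a token with DeviceManagementManagedDevices.Read.All supplied via an environment variable:

```python
# Minimal sketch of the enumeration step of the tabletop: count devices the
# cloud management plane can reach, grouped by OS. Assumes a Graph access
# token (GRAPH_TOKEN env var) with DeviceManagementManagedDevices.Read.All.
import os
from collections import Counter
import requests

URL = "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices"

def count_managed_devices(token: str) -> Counter:
    headers = {"Authorization": f"Bearer {token}"}
    counts, url = Counter(), URL
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        data = resp.json()
        for device in data.get("value", []):
            counts[device.get("operatingSystem", "unknown")] += 1
        url = data.get("@odata.nextLink")
    return counts

if __name__ == "__main__":
    counts = count_managed_devices(os.environ["GRAPH_TOKEN"])
    print(f"Devices reachable from the management plane: {sum(counts.values())}")
    for os_name, n in counts.most_common():
        print(f"  {os_name}: {n}")
```

The number this prints is the upper bound on what a wipe command issued from the same plane could reach. That is the figure to put in front of the board.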
On the Streaming Fraud Conviction
The Michael Smith AI royalty fraud conviction is worth a moment too.
Smith purchased hundreds of thousands of AI-generated songs, uploaded them to major streaming platforms, deployed bots to generate artificial plays, and collected royalties at an industrial scale. The scheme ran for years. The first major conviction tying generative AI directly to financial crime at scale is now on record.
The operational template is the signal. Generate content at AI scale. Automate monetisation. Operate at a volume no human operation could match. Evade detection long enough to extract value.
This template applies to insurance claims processing, procurement invoice manipulation, synthetic identity packages for financial fraud, and any other system where value accrues from volume and the detection mechanisms were built for human-scale operations.
Closing
The week’s stories share a single theme. Cloud centralisation is efficient. It is also a concentrated risk. Agentic AI is efficient. It is also a propagation vector. Shadow AI is efficient. It is also an unmonitored data channel.
The security conversation needs to catch up to the architectural one. The blast radius problem is a design problem. It was built in, not bolted on.
The most useful question is not “how do we add controls?” It is: “what did we build, and what does it enable someone else to do?”
— David
References
- Stryker attack wiped tens of thousands of devices, no malware needed — BleepingComputer
- A message to our customers: Stryker network disruption — Stryker
- Microsoft: Hackers abusing AI at every stage of cyberattacks — BleepingComputer
- GlassWorm malware hits 400+ code repos on GitHub, npm, VSCode, OpenVSX — BleepingComputer
- Musician pleads guilty to $10M streaming fraud powered by AI bots — BleepingComputer
- Shadow AI is everywhere. Here’s how to find and secure it — BleepingComputer
- Top 5 things CISOs need to do today to secure AI agents — BleepingComputer
- How CISOs can survive the era of geopolitical cyberattacks — BleepingComputer
- More attackers are logging in, not breaking in — Dark Reading
- FBI links Signal phishing attacks to Russian intelligence services — BleepingComputer
- Employees had to restrain a dancing humanoid robot after it went wild at a California restaurant — TechCrunch


