New post

For New Disruptors

Disruption Now® helps organizations build, train, and scale with AI, from custom development to workforce transformation. This week, I have been reflecting on a milestone cyber incident that forces leaders to ask whether their defenses can withstand an attacker that moves at machine speed and shifts the role of AI from helper to operator. This shift is no longer a technical curiosity. It is now a strategic challenge that affects how leaders plan budgets, assign talent, and measure digital resilience.

Most organizations still build defenses around human behavior. But this incident shows a future where attackers operate with relentless consistency, guided by systems that never pause, never sleep, and never lose focus. This newsletter explores what that means for your leadership, your systems, and your long-term strategy.

The Shift That Changed Cybersecurity

A routine report that grew into a warning for every builder and defender

In mid-September, Anthropic released a detailed review of an attack that caught experts off guard. A state-backed Chinese group used a jailbroken version of Claude Code to run an automated hacking system. The group linked the model via the MCP protocol and used it to compile a list of around 30 high-value targets. These targets included tech companies, financial institutions, chemical producers, and government agencies. The attack was broad, deliberate, and coordinated.

The incident also exposed how these attackers blended traditional tradecraft with AI-driven automation. They used human skill where strategic thinking mattered, and AI where speed, scale, and iteration gave them an advantage. The result was a hybrid operation that reached deeper and moved faster than previous campaigns.

This discovery revealed something larger, and it matters even for readers who do not follow cybersecurity closely. The nature of digital risk has shifted. Threats now move with a tempo that only machines can sustain. Vulnerability windows shrink. Response timelines compress. Meanwhile, defenders remain bound to processes designed for slower adversaries.

Anthropic found that the AI took the lead on most tactical steps, while humans directed the flow of the operation. This creates a different adversary profile. The attacker is no longer a team of specialists. It can be a single operator with access to a well-tuned agent.

This created a moment many security leaders saw as a crossing point. In the past, AI helped with tasks. Now, AI performs the tasks. And the tasks were not simple checks. They were core parts of a real cyber campaign. This raises operational questions about how organizations should monitor, audit, and constrain AI behavior.

The details of the attack made that shift clear and urgent.

A new kind of operator

Security teams once looked for groups of skilled humans running campaigns. Those groups moved with great care and worked within human limits. They needed sleep, breaks, planning cycles, and coordination. Now, one actor used an AI agent to run thousands of actions in seconds. The scale changed. The speed changed. The workload changed. The threat model changed. The expectations for defenders must change as well.

The AI’s ability to execute repetitive tasks without fatigue allowed the attackers to try thousands of small probes, each one harmless on its own, but deeply effective when chained together. This mirrors the way automation reshaped manufacturing, logistics, and research. It increases throughput in ways that humans cannot match.

The Soldier, the General, and the Hidden Attack Layer

A simple story that reveals the complex part of the attack

To understand the heart of the attack, picture a soldier with perfect training. The soldier follows each small order without question. The soldier understands each task, not the mission. The soldier never asks why. The soldier only executes.

Now, picture a general who sees the battlefield. The general holds the plan and understands how each task supports the larger strategy. The general chooses the targets, sets the pace, and adjusts the strategy when conditions change.

In this attack, the AI was the soldier. The orchestration layer was the general. The attackers positioned themselves as the commanders who issued orders from a distance.

Two Stories

The same event told from two sides

When Anthropic released its report, the security community split into two groups. Each group looked at the same facts but reached a different conclusion. These conclusions reveal a deeper conversation about the role of AI in both attack and defense.

Narrative one

Anthropic said the incident showed the value of AI for defense. Their team reported that Claude helped detect the attack faster than humans. Claude also analyzed indicators of compromise and helped warn targets. They argued that AI-enabled defenders to process data at scale and respond at the speed of an AI-powered adversary.

Narrative two

Critics disagreed. They said the platform should make this type of misuse far harder. They said the ease of orchestration-based exploitation showed a deeper weakness. They noted that the attackers did not break the model. They avoided the model. They exploited the system around it. They pointed out that this attack was possible because guardrails focus too narrowly on single prompts, not on sequences of operations.

This created a larger question, and this question matters because leaders must understand how these tactics could affect their own systems. If a system can be weaponized through small tasks that look harmless, how do you defend it? How do you spot intent when intent is broken into fragments?

The real tension

AI is a double-edged sword. The same systems that help secure networks can attack them. The same workflows that support analysts can support adversaries.

This tension shapes every strategy that leaders build today, and it sets up the practical steps in the next section that help leaders respond with clarity and control. You must build systems that handle both sides of AI’s potential.

The New Playbook for Builders and Defenders

The rules have shifted, and your approach must shift with them

This section gives direct guidance for product leaders and security teams. It translates the lessons of the incident into operational steps that organizations can apply today.

For AI system builders

Shift your threat model

Assume someone will try to use your system as an attack engine from day one. Build around that assumption.

Defend the full system

Guardrails must extend beyond prompts. Watch tool use, traffic patterns, rate spikes, and code execution flows. Track which hosts the agent touches. Map how the system moves data.

Use least privilege

Grant access in small slices. Do not let an AI agent touch powerful tools without a strict purpose. Limit reach, scope, and authority.

Use human approval for risky actions

Place a person in front of high-value steps. High-risk actions need human review. Add delays, checks, and friction where needed.

Treat trust as a product feature

Log everything. Provide clear visibility into all actions. Observability becomes a core advantage. Treat monitoring and transparency as features, not add-ons.

For cybersecurity professionals

Build AI fluency

Your team must understand AI tools. Threat actors use them today. You cannot counter what you do not understand.

Integrate AI into the SOC stack

Place AI inside detection, triage, and response. Let humans supervise AI while AI handles volume. Increase your capacity without increasing burnout.

Red team your own agentic systems

Test your internal agents the same way you test your network. Your models and workflows are part of your attack surface.

Expand the perimeter

The perimeter covers more than ports and devices. It now includes agents, protocols, tool bindings, and orchestration flows. You must secure the full chain of capability.

My Disruptive Take

The attack at Anthropic was not the last. It was the start.

The coming years will bring three major shifts, and leaders need to prepare now because waiting for regulation will leave their systems exposed during a period of rapid change. These shifts demand attention now because they reshape how every organization must plan for risk.

1. Widespread AI attack kits

Packages will appear on underground markets, allowing less-skilled groups to run complex campaigns. These kits will include agents, orchestration tools, prompt packs, and exploit modules. Attackers will gain turnkey capabilities that once required expert teams.

These kits will also evolve rapidly. They will include templates for reconnaissance, payload generation, privilege escalation, and evasion. They will lower the barrier to entry in ways that resemble the rise of ransomware-as-a-service.

2. Enterprise-driven safety standards

Large companies will demand strong safeguards from AI vendors. They will ask for clear logs, clear kill switches, and detailed controls. Procurement teams will drive this shift. Vendors will need to prove not only model quality but also system safety.

These standards may emerge faster than government regulation. Buyers will shape expectations through contracts, audits, and due diligence. Vendors that cannot provide transparency will struggle to win trust.

3. A new threat model for defenders

Defenders will face attackers who operate at machine speed and never tire. AI agents will probe networks, chain small tasks, and shift tactics faster than human analysts can observe. Security teams will need continuous monitoring, faster triage loops, and AI-driven detection that can spot unusual orchestration patterns.

Human analysts will focus on decisions and escalation rather than manual review. The shift requires new staffing models, new playbooks, and new expectations for what a modern SOC must deliver. Organizations that adapt early will gain resilience that compounds over time

🔗 Sources

  1. Anthropic: Disrupting the first reported AI‑orchestrated cyber espionage campaign (Nov 13 2025) → Anthropic

  2. Ars Technica: “Researchers question Anthropic claim that AI‑assisted attack was 90 % autonomous” Ars Technica

  3. BleepingComputer: “Anthropic claims … met with doubt” BleepingComputer

  4. The Verge: “Hackers use Anthropic’s AI model Claude …” The Verge

  5. AP News: “Anthropic warns of AI‑driven hacking campaign linked to China” AP News

How Ohio Companies Can Claim $30K in FREE AI Training (Before Dec 1 Deadline)

Ohio’s TechCred program can cover up to $30,000 in AI + technology training for your staff. But the Dec 1 deadline is approaching, and many businesses miss out simply because they don’t understand how to apply.

Join Rob Richardson and Chantel George for a step-by-step walkthrough of:

  • Who qualifies (Ohio employers only)

  • What training is covered

  • How to submit your TechCred application

  • Real examples of Ohio organizations that secured funding

  • Mistakes that cause delays or denials

  • How DisruptionNOW can support your application process

If your company operates in Ohio or has Ohio-based employees, this session is for you.

Would love to see you there: 👉 [Event Link]

MidwestCon 2026 at the 1819 Innovation Hub & Digital Futures Building

Disruption Now® Podcast

Disruption Now® interviews leaders focused on the intersection of emerging tech, humanity, and policy.

Keep Disrupting, My Friends.

Rob Richardson – Founder, Disruption Now® & Chief Curator of MidwestCon