
AI Governance Readiness Assessment — a 15-minute diagnostic that maps AI systems, identifies applicable state laws, and flags the highest-priority compliance gaps before Colorado's June 30 enforcement date. → Take the Assessment Now
For New Disruptors
Disruption Now® is a tech-empowered platform that helps organizations level up in entrepreneurship, social impact, and creativity. Through training, product development, podcasts, events, and digital media storytelling, we make emerging technology human-centric and accessible to everyone. This week I've been reflecting on why the last month in AI felt so familiar — like I'd lived through this exact moment before, twenty-five years ago, in a dorm room with a dial-up connection.
I’ve Seen This Before
I'm aging myself here, but I remember Napster.
For those who weren't there, Napster was a file-sharing application that launched in 1999 and let anyone with an internet connection download music for free. You typed in a song name, and within minutes, sometimes seconds if your connection was decent, you had it. No driving to the store. No paying $18 for a CD when you only wanted two tracks. Just music, instantly, from your computer. I was in my dorm room the first time I used it, and it felt like actual magic. You have to understand what the world was like before: you heard a song on the radio, and you either bought the whole album or you waited and hoped they played it again. Napster erased all of that overnight.
The music industry's response was predictable: lawsuits, outrage, moral panic. They called it theft. They shut Napster down. And they were right about the law; it was piracy. But they were catastrophically wrong about the signal. Napster didn't create demand for digital music. It revealed demand that was already there, demand so massive that no legal framework could contain it. The industry spent years fighting the wave instead of riding it. And then Steve Jobs walked in with the iTunes Store, gave people a legal way to do what Napster proved they wanted to do, and Apple built a hundred-billion-dollar ecosystem on top of the answer.
I tell you that story because the last month in AI felt exactly the same. A developer named Peter Steinberger, an Austrian who'd founded a successful PDF company, sold his stake, and spent three years barely touching a computer, rediscovered his spark playing with Claude, built an AI assistant for himself, open-sourced it with a lobster mascot, and watched it explode. Within 24 hours, 9,000 GitHub stars. A week later, 60,000. By the end of January 2026, it had crossed 145,000 stars and 20,000 forks — one of the fastest-growing open-source projects in GitHub's history [1]. Andrej Karpathy praised it. Scientific American covered it [2]. Cloudflare's stock jumped over 20% in two days after investors realized that local AI agents need secure infrastructure, and that Cloudflare was already positioned to provide it [3]. A hobby project moved markets. Sound familiar?
The project was originally called Clawdbot. Anthropic sent a trademark notice that the name was too close to "Claude," and Steinberger renamed it Moltbot, then OpenClaw. None of that drama matters much. What matters is the signal beneath it, the same signal Napster sent in 1999: over a hundred thousand builders didn't flock to this project because they were confused. They flocked because it delivered something they'd been starving for. And the establishment wasn't ready.
What Siri Should Have Been
So what does OpenClaw actually do that has people this excited? It performs labor. Not suggestions, actions.
For over a decade, Apple, Google, and Amazon promised AI assistants that would transform our lives. Siri arrived in 2011. Google Assistant followed. Alexa colonized millions of kitchens. And in 2026, most of us are still frustrated. OpenClaw exposes how timid those efforts were. It reads your emails, triages your inbox, and drafts responses in your voice. You tell it to book a flight, and it opens a browser, searches, fills out forms, and confirms. One user asked it to make a restaurant reservation, and when OpenTable didn't have availability, the agent found AI voice software, downloaded it, called the restaurant directly, and secured the reservation over the phone with no human intervention [4]. Another developer configured it to run coding agents overnight. He'd describe features before bed and wake up to working implementations. Someone else built a complete web application while walking to get coffee, issuing instructions via WhatsApp and watching commits land in his GitHub repo in real time.
This is what I keep telling organizations in our AI training work: the gap between knowing AI exists and knowing how to integrate AI into your actual workflows is the critical gap of this era. OpenClaw didn't invent agentic AI. But it made the demand visible in the same way Napster made the demand for digital music visible. People don't want chatbots that suggest things. They want AI that does things. And the organizations that figure out how to deliver that safely will build the next Apple Music. The ones that fight it will be the next record labels.
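If you want to feel that suggestion-versus-action gap in code rather than prose, here's a deliberately tiny sketch. Everything in it (run_agent, fake_model, the tool table) is hypothetical and invented for illustration — this is not how OpenClaw is actually built. It just shows the one line that separates a chatbot returning advice from an agent whose output gets executed.

```python
# A tiny, hypothetical sketch of the suggestion/action gap. None of this is
# OpenClaw's actual code; run_agent, fake_model, and the tool table are all
# invented for illustration. Requires git on PATH if the repo tool fires.

import subprocess

def repo_status() -> str:
    """An 'action' tool: actually runs a command instead of describing one."""
    return subprocess.run(["git", "status", "--short"],
                          capture_output=True, text=True).stdout

TOOLS = {"repo_status": repo_status}

def fake_model(request: str) -> dict:
    """Stand-in for a real LLM call; decides between advice and a tool."""
    if "repo" in request:
        return {"type": "tool_call", "tool": "repo_status"}
    return {"type": "text", "text": f"Suggestion: try searching for '{request}'."}

def run_agent(request: str) -> str:
    decision = fake_model(request)
    if decision["type"] == "tool_call":
        # The agentic step: the model's output gets executed, not just displayed.
        return TOOLS[decision["tool"]]()
    return decision["text"]  # the chatbot step: advice the human must act on

print(run_agent("what changed in my repo?"))  # runs git, returns real output
```

Every assistant of the Siri era stopped at the second branch. OpenClaw's entire appeal is living in the first one.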
The Security Warning You Can’t Ignore
But here's where the Napster analogy breaks down in an important way and where builders need to pay close attention.
Within 72 hours of the trademark-forced rebrand, Steinberger made an operational mistake when changing the GitHub organization name. He released the old handles before securing the new ones. The gap was about ten seconds. Crypto scammers were watching and grabbed both accounts instantly. A fake $CLAWD token appeared on Solana, hit a $16 million market cap, and collapsed [5]. Meanwhile, security researchers found hundreds of exposed instances with plaintext credentials, API keys, private messages, and shell access with root privileges [6]. Palo Alto Networks warned that OpenClaw presents a "lethal trifecta" of risks: access to private data, exposure to untrusted content, and the ability to act externally while retaining memory across sessions [7]. Google's VP of Security Engineering called it "infostealer malware in disguise."
Here's the bind every builder needs to understand. A useful agentic AI requires broad permissions; it needs to read your files, access your credentials, execute commands, and interact with external services. Broad permissions create a massive attack surface. The thing that makes these tools valuable is also what makes them dangerous. This isn't a bug to be patched. It's a tension to be managed. When Napster disrupted the music industry's distribution model, no one's bank account was at risk. When agentic AI breaks the productivity model, your credentials, your client data, your entire digital life are in play. The upside is bigger. The downside is sharper. Both are real.
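What does "managing the tension" look like in practice? Here's one minimal sketch, under my own assumptions — the tool names and risk tiers are illustrative, not anyone's shipping design. The idea: every tool gets a blast-radius tier, high-risk actions need a human's yes, and anything critical is blocked outside a sandbox.

```python
# A minimal sketch of per-action permissioning. The tool names and risk
# tiers are illustrative assumptions, not OpenClaw's (or anyone's) design.

RISK = {"read_calendar": "low", "send_email": "high", "run_shell": "critical"}

def approve(tool: str, args: dict) -> bool:
    """Human-in-the-loop gate for anything that touches the outside world."""
    answer = input(f"Agent wants to call {tool} with {args}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def dispatch(tool: str, args: dict) -> str:
    tier = RISK.get(tool, "critical")  # unknown tools get the strictest tier
    if tier == "critical":
        raise PermissionError(f"{tool} is blocked outside a sandbox")
    if tier == "high" and not approve(tool, args):
        return "denied by user"
    return f"executing {tool}({args})"  # stand-in for the real tool call

print(dispatch("read_calendar", {}))        # low risk: runs without asking
print(dispatch("send_email", {"to": "x"}))  # high risk: asks you first
```

The specific tiers don't matter. What matters is that permission becomes a per-action decision instead of a switch you flip once at install time.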
How to Think About Agentic AI Right Now
So should you ignore this? Absolutely not; you'd be making the record-label mistake all over again. Should you give an untested agent root access to your business systems tomorrow? Also no. The right posture is cautious optimism.
Start smaller than OpenClaw. Agentic browsing tools, workflow automation through platforms you already trust, Claude's computer use: these are lower-risk entry points that teach you the same fundamental lessons about delegation, supervision, and tradeoffs. You don't need to install an open-source agent on a Mac Mini to understand what's coming. You need to build the judgment to evaluate when agentic tools are ready for your workflows and when they're not. That's the 201-level AI fluency we train organizations on at Disruption Now: not just prompting, but understanding system design, risk assessment, and when to keep a human in the loop. The testing itself is the opportunity. Every hour you spend experimenting with agentic tools now is an hour of judgment you're building before the stakes get higher.
The New Rules of Agentic AI Readiness
Test on sandboxed systems first. Never connect an agentic AI to production data, financial systems, or client communications until you've pressure-tested it in isolation (see the sketch after this list).
Start with agentic browsing, not root access. Lower-risk tools give you the learning without the liability. The goal right now is building judgment, not building infrastructure.
Build AI fluency before AI infrastructure. Your team needs to understand tradeoffs — security, privacy, accuracy — before deploying autonomous tools.
Assume anything connected could be compromised. Use throwaway accounts for testing. Rotate credentials. Plan for the worst case.
Watch the builders, not just the headlines. 145,000 developers starring OpenClaw aren't confused. They see what's coming. Your job is to understand it before it reshapes your industry.
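For the first rule (and the throwaway-credentials rule), here's what isolation can literally mean, sketched in Python around Docker. The image, the agent command, and the placeholder key are all assumptions for illustration; the flags that matter are the ones that cut the container off from the network and keep its filesystem read-only.

```python
# A minimal sketch of "test in isolation," assuming Docker is installed.
# The image, the agent command, and the throwaway key are placeholders,
# not a recommendation of any specific stack.

import subprocess

def run_sandboxed(agent_cmd: list[str]) -> None:
    subprocess.run([
        "docker", "run", "--rm",
        "--network", "none",                 # no outbound access until earned
        "--read-only",                       # the agent can't rewrite its world
        "-e", "API_KEY=THROWAWAY_KEY_HERE",  # burner credential, rotate after use
        "python:3.12-slim",                  # disposable container, not your laptop
        *agent_cmd,
    ], check=True)

run_sandboxed(["python", "-c", "print('agent under test')"])
```

Loosen one restriction at a time as the agent earns your trust. That sequencing, not any particular flag, is the discipline.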
Go Deeper: AI Governance Readiness Assessment
We built an assessment that helps enterprise teams understand their AI compliance exposure in 15 minutes: it maps their AI systems, identifies which state laws apply, and flags the highest-priority gaps.
My Disruptive Take
Napster revealed that people wanted digital music. The industry fought it, lost, and Apple built the future on the answer. OpenClaw is revealing that people want AI that acts — not AI that suggests. The industry will spend the next year arguing about safety, liability, and risk. They'll be right about most of it. And they'll still lose to whoever figures out how to deliver agentic AI that's both useful and responsible. That's the world we're exploring at MidwestCon Week 2026 under the theme "The Era of Abundant Intelligence." When intelligence becomes accessible to everyone, when a single developer can build what Apple and Google couldn't deliver in a decade, the rules change. Start testing now. Start small. But start.
Ready to Build Your AI Governance Framework?
For enterprise teams of 20+ looking to navigate the state AI regulatory landscape — let's talk. We help organizations inventory their AI systems, map their compliance obligations, and build governance frameworks that work across jurisdictions before enforcement deadlines arrive.
Sources
MidwestCon Week 2026 at the 1819 Innovation Hub
MidwestCon is where policy meets innovation, creators ignite change, and tech fuels social impact. This year's theme—"The Era of Abundant Intelligence"—explores how AI is reshaping what's possible when intelligence becomes accessible to everyone.

Disruption Now® Podcast
Disruption Now® interviews leaders focused on the intersection of emerging tech, humanity, and policy.
Keep Disrupting, My Friends.
Rob Richardson – Founder, Disruption Now® & Chief Curator of MidwestCon

