For New Disruptors

Disruption Now® is a tech-empowered platform that helps organizations elevate their work in entrepreneurship, social impact, and creativity. Through training, product development, podcasts, events, and digital media storytelling, we make emerging technology human-centric and accessible to everyone.

This week I've been reflecting on a paradox playing out in city halls, county courthouses, and state agencies across America: everyone wants AI, but almost no one is preparing their people to use it responsibly.

The Rush Without the Roadmap

If you work in or around state and local government, you've probably noticed the frenzy. According to the Brookings Institution's August 2025 tracking, 47 states have introduced AI-related legislation, and 34 are actively studying AI through task forces or standing committees. In 2024 alone, state lawmakers introduced nearly 700 AI-related bills across 45 states, with 113 enacted into law. The pace is accelerating—by mid-2025, more than 550 additional bills had been introduced.

The appetite is real. A 2024 NASCIO survey found that more than half of state CIOs reported employees using generative AI tools in their daily work. From chatbots handling citizen inquiries to AI systems processing invoices and analyzing public sentiment, the technology is already inside government. Phoenix built myPHX311 to answer common questions in English and Spanish. Mt. Lebanon, Pennsylvania, cut invoice processing from a week to one or two days using AI-enabled platforms. California is running sandbox experiments with AI for tax administration.

But here's the problem: the infrastructure for responsible use hasn't kept pace with the enthusiasm for adoption. A 2024 Salesforce survey found that only 28% of government workers considered themselves experts at using AI—the lowest of any sector measured. Six in ten public sector professionals cited the AI skills gap as their biggest obstacle to implementation. According to a January 2025 Pew Charitable Trusts analysis, while at least 30 states have issued guidance on state agency AI use, that guidance varies widely in depth and enforcement. Some states have comprehensive frameworks with impact assessments and inventories. Others have symbolic task forces that produce reports nobody reads.

What Responsible AI Actually Looks Like

Let's be clear about what "responsible AI adoption" means for government—because it's not the same as slowing down or avoiding the technology entirely. Responsible adoption means building the institutional muscle to deploy AI in ways that serve constituents, maintain public trust, and create accountability when things go wrong.

For state and local leaders, this comes down to three pillars. First, transparency: citizens deserve to know when AI is being used in decisions that affect their lives, whether that's processing unemployment claims, flagging fraud, or prioritizing 911 responses. A January 2025 analysis noted that only about 25% of states currently require vendors to disclose when AI is embedded in their solutions. That gap limits agencies' ability to understand and govern the AI tools they're already using.

Second, workforce fluency. This doesn't mean turning every caseworker into a data scientist; it means ensuring that the people who interact with AI systems understand what the technology can and cannot do, how to spot errors, and when to escalate to human judgment. The Center for Democracy and Technology's 2025 legislative analysis highlighted Kentucky's SB 4 as a model: it establishes comprehensive requirements for how state agencies use and oversee AI technologies, including training requirements.

Third, accountability structures: who's responsible when an AI system denies someone benefits they deserve, or flags innocent people for investigation? Colorado's 2024 AI Act, which takes effect in 2026, offers a template: it requires identifying AI's role in consequential decisions, mandates transparency for both developers and deployers, and affirms consumers' right to an explanation of that role. Several states—including Illinois, Georgia, and Maryland—have modeled legislation after Colorado's approach.

Why Governments Keep Getting It Wrong

If the blueprint exists, why aren't more governments following it? Three structural problems keep getting in the way.

The first is procurement. Government purchasing systems were designed for buying hardware and software licenses, not for acquiring rapidly evolving AI capabilities. Multi-year contracts and legacy systems weren't designed to accommodate technology that changes every quarter. As one panelist at a recent NASCIO event put it: if you've got a multi-year IT rollout, you can't always just add generative AI into that workflow. By the time a procurement cycle concludes, the technology you're buying may already be outdated—or worse, you may be locked into a vendor relationship that limits your flexibility.

The second problem is the skills gap, not just in IT departments but across constituent-facing functions. A June 2025 Federation of American Scientists report found that public agencies often lack the staff and skills needed to implement AI regulations, let alone use AI tools effectively. HUD launched a skills competency model in late 2024 specifically because traditional supervisor-employee skill assessments weren't capturing where AI knowledge gaps actually existed. The agency expected its workforce, which includes few technologists, to score low on AI readiness.

The third problem is political pressure to show "innovation" without measuring outcomes. It's easier to announce a chatbot pilot than to track whether that chatbot actually improved service delivery, reduced wait times, or created new problems. The Deloitte Center for Government Insights' May 2025 analysis emphasized that scaling AI requires embedding it into core processes with measurable outcomes—not just engaging a large number of users in isolated experiments.

The Framework for Getting It Right

If you're a state or local leader navigating this responsibly, here's a practical framework that balances innovation with accountability.

Start with an honest inventory. Before you deploy new AI tools, know what you already have. Georgia's proposed HB 147 would have required annual inventories of public agencies' AI tools. North Carolina's SB 747 called for a one-time inventory of all AI systems in use or under consideration. You can't govern what you can't see. Map your current AI footprint, including tools embedded in vendor solutions that your teams may not even recognize as AI.
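To make "inventory" concrete, here's a minimal sketch of what a single inventory record might capture. The field names and the example entry are illustrative assumptions on my part, not a template drawn from Georgia's or North Carolina's bills; the point is that even a vendor-embedded tool gets a named owner, a stated purpose, and a documented human review point.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIInventoryRecord:
    """One entry in an agency's AI system inventory (illustrative fields only)."""
    system_name: str                 # e.g., "Benefits eligibility screener"
    owning_agency: str               # which department is accountable for it
    vendor: str                      # "internal" if built in-house
    vendor_embedded: bool            # True if the AI arrived inside a larger product
    purpose: str                     # plain-language description of what it does
    decision_impact: str             # "informational", "advisory", or "consequential"
    data_sources: List[str] = field(default_factory=list)
    human_review_point: str = ""     # where a person can override the output
    last_reviewed: str = ""          # date of the most recent assessment

# A vendor-embedded tool that staff may not even think of as "AI"
record = AIInventoryRecord(
    system_name="Invoice-matching module",
    owning_agency="Finance",
    vendor="ExampleVendor (hypothetical)",
    vendor_embedded=True,
    purpose="Flags invoices that do not match purchase orders",
    decision_impact="advisory",
    data_sources=["ERP purchase orders", "Scanned invoices"],
    human_review_point="AP clerk approves every flagged invoice",
    last_reviewed="2025-06-30",
)
```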

Build training into everyday functions, not just IT. The Corporation for a Skilled Workforce's November 2025 analysis argued that embedding AI skill development into existing workforce frameworks—like WIOA programs and community college partnerships—is more sustainable than standalone tech training. Your permit clerks, social workers, and court administrators need to understand AI's capabilities and limitations, even if they never touch a line of code.

Require pre-deployment impact assessments for high-stakes uses. Before deploying AI in decisions that affect benefits, housing, criminal justice, or child welfare, conduct a structured assessment of potential harms, biases in training data, and mechanisms for human review. The March 2024 OMB memo on AI asked federal agencies to develop exactly these kinds of risk mitigation strategies. State and local governments should follow suit.
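One way to operationalize that is a simple gate: a high-stakes deployment doesn't move forward until every assessment item has a documented answer. The sketch below is a hypothetical illustration; the domain list and checklist items paraphrase common risk-assessment themes and are not language from the OMB memo or any statute.

```python
# Illustrative pre-deployment gate for high-stakes AI systems.
HIGH_STAKES_DOMAINS = {"benefits", "housing", "criminal_justice", "child_welfare"}

REQUIRED_ASSESSMENT_ITEMS = [
    "intended_use_documented",
    "training_data_bias_reviewed",
    "error_and_appeal_process_defined",
    "human_review_mechanism_defined",
    "monitoring_plan_in_place",
]

def ready_to_deploy(domain: str, assessment: dict) -> tuple[bool, list]:
    """Return (approved, missing_items) for a proposed AI deployment."""
    if domain not in HIGH_STAKES_DOMAINS:
        return True, []  # low-stakes uses follow the normal review process
    missing = [item for item in REQUIRED_ASSESSMENT_ITEMS if not assessment.get(item)]
    return len(missing) == 0, missing

# A benefits-related system with an incomplete assessment is blocked,
# and the unanswered items are surfaced for the review team.
approved, missing = ready_to_deploy(
    "benefits",
    {"intended_use_documented": True, "training_data_bias_reviewed": True},
)
print(approved, missing)
```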

Create public accountability mechanisms. Algorithmic registries—public databases of AI systems in use, their purposes, and their oversight structures—are emerging as a best practice. New York City passed a package of three bills in 2025 to regulate the use of AI by city agencies. Transparency doesn't slow down innovation; it builds the public trust that makes sustained innovation possible.

Finally, maintain human-in-the-loop requirements for consequential decisions. AI should handle repetitive questions and data lookups, freeing humans to focus on problem-solving and addressing complex issues. That's the principle Pavan Parikh, Hamilton County's Clerk of Courts, articulated on this week's Disruption Now Podcast. He's digitizing paper-heavy court workflows and using AI to reduce barriers to justice—but he's clear that the goal is augmentation, not replacement.
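Here's a minimal sketch of what that augmentation principle can look like in practice: routine lookups get answered automatically, while anything consequential or uncertain is routed to a person with the AI's draft attached as advisory input. The categories, confidence threshold, and field names are assumptions for illustration, not details from Hamilton County's actual systems.

```python
# Human-in-the-loop routing for a government service desk (illustrative only).
CONSEQUENTIAL_CATEGORIES = {"benefits_denial", "court_filing", "fraud_flag"}
CONFIDENCE_THRESHOLD = 0.85

def route_request(category: str, model_confidence: float, draft_answer: str) -> dict:
    """Let AI resolve routine lookups; escalate anything consequential or uncertain."""
    if category in CONSEQUENTIAL_CATEGORIES:
        return {"handled_by": "human", "reason": "consequential decision",
                "ai_draft": draft_answer}  # AI output is advisory only
    if model_confidence < CONFIDENCE_THRESHOLD:
        return {"handled_by": "human", "reason": "low confidence",
                "ai_draft": draft_answer}
    return {"handled_by": "ai", "reason": "routine inquiry", "answer": draft_answer}

print(route_request("office_hours", 0.97, "The clerk's office is open 8am-4pm."))
print(route_request("benefits_denial", 0.99, "Claim appears ineligible."))
```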

The New Rules of Government AI

  1. Inventory before you innovate. You can't govern AI you don't know you have. Map your current footprint, including vendor-embedded tools.

  2. Train the frontline, not just IT. Caseworkers, clerks, and administrators need AI fluency to spot errors and exercise judgment.

  3. Assess before you deploy. High-stakes AI systems affecting benefits, housing, or justice require structured impact assessments before launch.

  4. Default to transparency. Public registries and disclosure requirements build trust and enable accountability when things go wrong.

  5. Keep humans in the loop. AI should amplify human judgment in consequential decisions, not replace it.

My Disruptive Take

Here's what I want you to take away: responsible AI adoption isn't a brake on innovation—it's what makes innovation sustainable. The governments that build real workforce fluency, transparent accountability structures, and smart governance frameworks today will be the ones that actually capture AI's benefits five years from now. Those who rush in without this foundation will spend the next decade cleaning up costly failures and rebuilding public trust.

The skills gap isn't just an IT problem. It's a constituent services problem. It's an equity problem. It's a democracy problem. When government workers don't understand the AI systems making decisions about people's lives, citizens lose—and the promise of better public services evaporates into vendor contracts nobody can explain.

If you're leading a team in state or local government right now, the question isn't whether to adopt AI. It's whether you're building the human infrastructure to use it well. That means training, governance, and accountability—not as afterthoughts, but as prerequisites. The technology is already here. The question is whether your people are ready.

Ready to Close the AI Fluency Gap?

For government teams and enterprise organizations of 20+ looking to build real AI readiness across constituent-facing functions—let's talk. We help organizations develop practical training programs, governance frameworks, and responsible adoption strategies that work.

Disruption Now® Podcast

Disruption Now® interviews leaders at the intersection of emerging tech, humanity, and policy. Conversations focused on how builders and decision-makers can operate effectively to accelerate change, empowering humans rather than replacing them.

Keep Disrupting, My Friends.

Rob Richardson – Founder, Disruption Now® & Chief Curator of MidwestCon