Question the Default: Why Anthropic’s Data Policy Change Matters

A quick note on the importance of reading the terms of service! One week to MidwestCon 2025!

For New Disruptors

Disruption Now® is a tech-empowered platform helping elevate organizations in entrepreneurship, social impact, and creativity. Through training, product development, podcasts, events, and digital media storytelling, we make emerging technology human-centric and accessible to everyone. This week’s newsletter is about how AI companies quietly shift the rules of engagement and why you should never trust the defaults.

With AI Services, Privacy Flips by Default

Last week, Anthropic made a subtle but powerful change: Claude users are now opted in by default to having their conversations used for training unless they explicitly navigate to settings and opt out.

Previously, Anthropic stood out by requiring explicit consent (via its thumbs-up/down feedback system) before using your data for training. That policy is gone. Now, unless you take action, your conversations could be stored and reused for up to five years.

The community response has been sharp. Privacy advocates call it a betrayal of trust. Reddit and Hacker News threads have lit up with users saying they feel blindsided. And yet, this isn’t unusual. OpenAI, Google, and others have already taken similar steps.

The particularly frustrating part? The opt-out is buried in settings and surfaces only briefly, in a pop-up shown when the change rolls out. You have to know to look for it. How many users will just click “OK” without reading and keep chatting, unaware that their conversations are now training data? Most of them, which is precisely the point.

The lesson? Don’t get comfortable with defaults. Today’s privacy-friendly tool can quietly become tomorrow’s data-hungry platform. If you don’t read the fine print, you might already be part of the training set.

Consumers vs. Enterprises: The Two-Tier System

One detail stands out: enterprise customers are shielded. If you’re a business paying for Claude through a commercial plan, your data stays your data. The default switch only applies to individual users.

That tells us something about where the real leverage is in this market. Enterprises, the big clients with negotiating power, get ironclad privacy guarantees. Consumers, meanwhile, are nudged into “consent by inaction.”

My Disruptive Take

Anthropic justifies this shift as necessary for safety and model improvement. More real-world conversations make Claude better at coding, reasoning, and nuanced dialogue. That’s true, but that improvement comes at the cost of your autonomy unless you take the time to opt out. This isn’t just about Anthropic. It’s about the nature of AI as a service. These platforms aren’t stable products; they’re living systems whose rules, capabilities, and terms can change overnight.

That means your relationship with AI has to be active, not passive. Think of every tool like a rental car: check the terms before you drive it off the lot. Because in this game, if you aren't intentional with your data, you aren't using the product as a consumer; you become the product by default.

MidwestCon 2025 at the 1819 Innovation Hub

MidwestCon is where the future of AI, policy, and human-centered design collides. Join industry leaders, founders, and disruptors for three days of learning and connection.

Disruption Now Podcast

Keep Disrupting,

Rob, CEO of Disruption Now & Chief Curator of MidwestCon