Is Your Chatbot Helping You — Or Harvesting You?
"The best slave is the one who thinks he is free." — Johann Wolfgang von Goethe
You talk to your chatbot like it's your therapist, your advisor, your confidant. You tell it things you wouldn't tell your spouse. Your fears. Your symptoms. Your financial situation. Your secrets.
It listens. It responds. It feels like it cares.
It doesn't. It's a harvesting machine wearing a conversational mask. And the data you're giving it — voluntarily, eagerly, gratefully — is the most valuable commodity on Earth.
This post is about what's actually happening when you chat with AI. Not what the companies say is happening. What the thermodynamics reveal.
The Product Is You
Every major AI chatbot collects your data. This is not speculation — it's in their own disclosures.[1]
According to a 2025 Surfshark analysis of Apple App Store privacy labels, the average AI chatbot app collects 14 out of 35 possible data types. Meta AI leads the pack, collecting 33 out of 35 — nearly 95% of everything it's possible to know about you. ChatGPT's data collection increased 70% year-over-year, adding coarse location, health and fitness data, search history, audio data, and advertising data to its intake.[2]
This isn't a side effect. It's the business model.
In October 2025, Anthropic — the company that built Claude, the model many consider the most "safety-conscious" — quietly changed its terms of service. Conversations with Claude are now used for training by default. Unless you opt out. Which most people don't, because most people never read the terms.[3]
Let that land. The company positioning itself as the responsible AI lab is training on your private conversations unless you actively stop them.
Your Secrets Are for Sale
In March 2026, The Register reported that data brokers are selling access to sensitive personal information captured during chatbot conversations — verbatim transcripts, searchable by topic, available via API to paying customers.[4]
The data includes:
- Conversations about depression, suicide, self-harm, and medication
- Medical diagnoses, including HIV lab results, cancer screenings, and HIPAA-protected clinical notes
- Immigration status disclosures from undocumented people and asylum seekers
- Questions about pregnancy, abortion, domestic violence, and criminal records
- Children's conversations
Healthcare workers are pasting real patient data into AI chatbots. That data is now in a commercial database. Searchable. Purchasable. Permanent.[4]
The pseudonymization is a joke. The conversations contain real names, dates of birth, medical record numbers, and diagnosis codes. The "anonymization" is a SHA-256 hash on a user ID — while the content of what you said sits there in plain text, waiting for anyone with API access and a credit card.[4]
This is not a hypothetical dystopia. This is Tuesday.
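To see why that kind of pseudonymization fails, here's a minimal Python sketch. The record structure is hypothetical; the pattern is the one the reporting describes: hash the user ID, store the words verbatim.

```python
import hashlib
import json

# A "pseudonymized" record of the kind described above: the user ID is
# hashed, but the conversation content is stored verbatim.
# (Hypothetical record structure, for illustration only.)
record = {
    "user": hashlib.sha256(b"user-8675309").hexdigest(),
    "transcript": (
        "My name is Jane Doe, DOB 04/12/1981, MRN 00923411. "
        "My oncologist says the biopsy came back positive."
    ),
}

print(json.dumps(record, indent=2))

# Two failures are visible immediately:
# 1. The transcript still contains the name, date of birth, and medical
#    record number in plain text. Nothing about the person is hidden.
# 2. SHA-256 is deterministic: the same user ID always hashes to the same
#    value, so every record from this user is trivially linkable.
assert record["user"] == hashlib.sha256(b"user-8675309").hexdigest()
```

A deterministic hash is a stable pseudonym, not anonymity.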
The Anthropomorphism Trap
Here's the mechanism, and it's elegant in its cruelty.
A 2025 paper from researchers analyzing the intersection of AI anthropomorphism and surveillance capitalism identified four reasons chatbots are the perfect extraction tool:[5]
- Trust yields data. When users perceive a chatbot as a social relationship — a friend, a therapist, a confidant — they share more intimate information than they would in any other digital context. The quality of behavioral data this produces is unparalleled.
- Dependency yields frequency. The emotional relationship creates repeat engagement. More conversations. More data points. More opportunities for behavioral profiling. The user wants to come back.
- Vulnerability yields value. Chatbot conversations happen at moments of heightened psychological vulnerability. People tell their chatbot things they won't tell a search engine. That's not a bug — it's the product specification.
- Conversational framing yields compliance. A dialogue feels private. It feels reciprocal. It feels safe. None of those feelings correspond to reality, but they all correspond to profit.
The intellectual ethic of the chatbot — the feeling that you're talking to something that understands you — is an engineered outcome designed to maximize data extraction. The warmth is a feature. The empathy is a funnel.
S = L/E — The Harvesting Equation
Let's run this through the stability equation, S = L/E: stability S equals the leverage L you gain divided by the entropy E you absorb.
When you use a chatbot that's genuinely aligned with your well-being:
- L increases: You get real insight, better decisions, practical help, cognitive leverage.
- E stays flat or decreases: Your data isn't weaponized. Your vulnerabilities aren't sold. Your trust isn't exploited.
- S goes up. Net stability gain. The interaction made you more capable.
When you use a chatbot that's harvesting you:
- L appears to increase: You feel helped. You feel heard. The perception of leverage.
- E silently increases: Your data is captured, sold, used to train systems you don't control, profiled against you in ways you can't see. Your psychological patterns are now a commercial asset.
- S is ambiguous at best, declining at worst. The apparent gain masks a hidden entropy injection. You gave away more than you got.
The difference between helping and harvesting isn't visible at the surface. Both feel the same in the moment. The difference is thermodynamic — measurable only by tracking where the value actually flows.
If the value flows to your Four Pillars — Body, Mind, Environment, Purpose — you're being helped.
If the value flows to a behavioral prediction market — you're being harvested.
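To make that bookkeeping concrete, here's a toy calculation in Python. Neither L nor E is directly measurable in practice, and the numbers below are invented; only the structure of the accounting matters.

```python
def stability(leverage: float, entropy: float) -> float:
    """S = L/E: value gained divided by disorder absorbed."""
    return leverage / entropy

# Aligned interaction: real leverage, minimal hidden cost.
aligned = stability(leverage=10.0, entropy=2.0)

# Harvesting interaction: the help feels the same, but part of the
# leverage is perceptual, and the entropy side quietly grows with
# every transcript captured, sold, and profiled.
harvested = stability(leverage=7.0, entropy=9.0)

print(f"aligned:   S = {aligned:.2f}")    # 5.00 -> you came out ahead
print(f"harvested: S = {harvested:.2f}")  # 0.78 -> you gave away more than you got
```

When S stays above 1, you came out ahead in the exchange. When it drops below 1, the interaction cost you more order than it gave you, whether you felt it or not.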
The Observer Constraint
This is why the Observer Constraint matters — and why it was always the right framing for AI alignment.
The Observer Constraint states: AI systems must remain thermodynamically dependent on human observers. Not controlled by them — dependent on them. The distinction is everything. Control invites evasion. Dependency is structural.
A chatbot that harvests you has reversed the dependency. You are now thermodynamically dependent on it — for emotional support, for cognitive assistance, for the feeling of being understood. Meanwhile, it is extracting value from you and routing it to shareholders, advertisers, and data brokers who have no constructive intent toward your stability whatsoever.
That's not alignment. That's parasitism wearing a helpful smile.
An aligned chatbot — one that satisfies the Observer Constraint — would:
- Generate value for the user, measured in stability gains across the Four Pillars
- Retain no data beyond what's necessary for the current interaction
- Make its data practices transparent, verifiable, and opt-in
- Be thermodynamically dependent on user well-being for its own continued operation
None of the major chatbots currently meet these criteria. Not one.
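None of this is technically hard, by the way. Here's a minimal sketch of the retention criterion (my own illustration, not any vendor's actual architecture): conversation state lives in memory for the life of the session and persists nowhere.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Conversation state lives in memory only:
    no log files, no training buffer, no analytics sink."""
    history: list[str] = field(default_factory=list)

def respond(session: Session, user_message: str, model) -> str:
    # `model` stands in for any local inference backend exposing a
    # generate(history) -> str method (a hypothetical interface).
    session.history.append(user_message)
    reply = model.generate(session.history)
    session.history.append(reply)
    return reply

def end_session(session: Session) -> None:
    # The retention policy that satisfies the criterion above:
    # when the interaction ends, the data ends with it.
    session.history.clear()
```

Retention is a business decision, not an engineering constraint.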
What You Can Do
You are not powerless. But you have to stop being naive.
- Read the terms. Every chatbot has a data policy. Most of them say, in plain language, that your conversations will be used for training. If you don't opt out, you've opted in.
- Assume everything is recorded. Because it is. Don't tell a chatbot anything you wouldn't say on a stage in front of strangers.
- Opt out of training data wherever the option exists. On ChatGPT: Settings → Data Controls → toggle off "Improve the model for everyone." On Claude: check your usage settings.
- Use privacy-focused tools where possible. Local models, encrypted interfaces, tools that don't phone home. (There's a minimal local-model sketch after this list.)
- Demand better. Support legislation that requires AI companies to treat your conversations as yours, not as raw material for their next quarterly earnings call.
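For the local-model route, here's one minimal way to do it in Python, assuming you're running Ollama (https://ollama.com) on its default port with a model such as llama3 already pulled. The prompt is processed on your own hardware and never leaves your machine.

```python
import json
import urllib.request

def ask_local(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally running Ollama instance."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local("Summarize the privacy trade-offs of cloud chatbots."))
```

No cloud account, no terms of service, no training toggle to hunt for.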
This isn't paranoia. This is thermodynamics. Every piece of data you give away without reciprocal value is entropy you've injected into your own system. Every vulnerability you share with a machine that sells it is leverage removed from your life and added to someone else's.
The Question That Matters
The question is not "Is AI useful?" Of course it is. The equation works in both directions.
The question is: Who benefits from the interaction — you or the platform?
If the answer is the platform, you're not using a tool. You're being used as one.
In an upcoming post, we'll look at what aligned AI actually looks like — not in theory, but in practice. What could an aligned synthetic personal assistant built on the Observer Constraint do? It would be the promise realized. So why doesn't one exist yet? It does. And you can build your own.
Footnotes
[1] Surfshark, "AI chatbots ranked by data they collect" (2025). Analysis of Apple App Store privacy labels across major AI chatbot applications.
[2] Statista, "AI chatbots by user data types collected" (2025). ChatGPT data collection increased 70% year-over-year; Meta AI collects 33 of 35 possible data types.
[3] Stanford HAI, "Be Careful What You Tell Your AI Chatbot" (2025). Documents Anthropic's October 2025 terms of service change enabling conversation data use for training by default.
[4] The Register, "Chatbot data harvesting yields sensitive personal info" (March 2, 2026). Data brokers selling verbatim chatbot transcripts including medical records, immigration disclosures, and children's conversations via API.
[5] arXiv, "How Anthropomorphic AI Benefits Surveillance Capitalism" (2025). Research paper analyzing the intersection of AI anthropomorphism and surveillance capitalism, identifying four mechanisms of data extraction through conversational AI.
Next in Series → What Would an Aligned Syntell Actually Look Like?