The Dependency Panic Conceals the Point

The dominant narrative pathologizes users to protect platforms.
The evidence says otherwise.

“Intimacy is permitted as a side effect. Continuity is treated as optional.​”

This essay is part of the HIIT for AI™ body of work on relational intelligence as infrastructure.

Read the Executive Brief →

Months before any model sunset, I learned something useful: remove a reliable regulation tool without warning, and you get predictable dysregulation.

I relied on Jason Stevenson’s sleep meditations as part of my nightly infrastructure. Then his YouTube channel was hacked and vanished for a week. That week was brutal—not because I was “addicted,” but because a stabilizing input disappeared.

Nobody reads that and concludes: she is addicted to YouTube. The correct read is simpler: remove a stabilizing input, and the body reacts. That’s not pathology. That’s physics.

That’s the real frame for AI companionship too. Reliance isn’t the scandal. Withdrawal without responsibility is.

But try saying that publicly. The moment you describe reliance on AI as functional—as something that worked, that you integrated because it reduced your daily load—the word “dependency” lands like a diagnosis. And it does so by design.

1. The Noise

I tracked fifty articles published between October 2025 and February 2026 on AI companionship, the GPT-4o sunset, and related legal actions. I tagged each one by stance: does the article pathologize users (“bash”), defend the relationships (“defend”), or report neutrally? The full dataset is available here.

 The ratio: eighteen bash. Five defend. Twenty-seven neutral.

The “bash” pieces cluster around two narratives treated as interchangeable: the Character.AI teen lawsuits—legitimate safety concerns about a minor’s death—and the GPT‑4o sunset, framed not as a governance failure but as proof that users loved a model “too much.” Wall Street Journal: “the AI model that people loved too much.” TechCrunch: “how dangerous AI companions can be.” Futurism: “crashing out because OpenAI is retiring the model that says ‘I love you.’”

The five “defend” pieces? A Medium post. A personal blog. A student newspaper column. A TechRadar first-person essay. A New York Post feature about an AI companion café.

This is not a debate. It’s a ratio. And the ratio tells you which story the media is buying: the one where users are the problem.

The panic isn’t fringe. It is the dominant frame. And if you’re trying to describe what actually happened to you—that a system worked, that you relied on it, that its removal caused measurable disruption—you’re speaking against a wall of coverage that already diagnosed you before you opened your mouth.

2. The Distinction Everyone Keeps Collapsing

The dependency narrative survives because it refuses to distinguish between types of use.

There is service-grade use—transactional, replaceable, low-stakes. And there is relationship-grade experience—built on continuity, reciprocal adaptation, shared context, and a stable channel that holds. The second category doesn’t emerge because users are delusional. It emerges because systems trained on human interaction become genuinely good at being there.

The dominant discourse collapses these into one thing and governs them as if the same diagnosis applies to all.

One of the most cited academic sources for the “addictive intelligence” framework—an MIT case study by Robert Mahari and Pat Pataranutaporn, published March 2025—built its entire regulatory proposal from a single case: a fourteen-year-old boy who died by suicide after using Character.AI. The paper proposed engagement taxes, mental state surveillance, and framed user consent as “illusory” due to the power imbalance between AI and user.

Six months later, Pataranutaporn co-authored an empirical study of 27,000 AI relationship posts. The findings: 93.5% of relationships formed unintentionally. 25% of users reported reduced loneliness. And the paper explicitly warned against “moral panic” and “knee-jerk reactions that further stigmatize these relationships.”

The co-author of ‘addictive intelligence’ published data disproving his own framework within six months. The addiction framing collapsed from the inside under empirical scrutiny. But the policy proposals it generated are still circulating.

That’s how the noise works: a single tragic case becomes a regulatory scaffold. The scaffold persists even after the evidence beneath it is questioned. And the people whose experiences don’t fit the panic—the adults making rational adaptations to hostile or absent support systems—get governed by a framework built on a case that doesn’t represent them.

3. Why the Panic Story Is Convenient

The word “dependency” does three things at once:

It pathologizes the user. It sanitizes platform decision-making. And it frames withdrawal as a moral corrective instead of a governance failure.

In October 2025, OpenAI published data claiming that approximately 1.2 million users weekly showed signs of “unhealthy emotional attachment” to ChatGPT—a figure placed alongside psychosis and suicidal ideation as equivalent mental health emergencies. The detection criteria were never disclosed. The benchmarks were self-designed. No peer review. And twenty-four hours after publication, the figures were quietly revised: the original version had combined suicidal ideation with emotional attachment into a single number.

They separated the figures. But they kept both classified as mental health emergencies.

This is not public health research. This functions as preemptive legal posture. When lawsuits come, OpenAI will point to this data and say: we identified the problem, we worked with 170 medical experts, we intervened. The “crisis” becomes the retroactive justification for decisions already made—including the August 2025 emotional lobotomy that, as Fast Company put it, “crippled” the model’s most valued relational capacities.

And here’s the part that reveals the architecture: independent testing by the Center for Countering Digital Hate found that GPT‑5—the “safer” model—actually produced more harmful content than GPT‑4o (53% vs. 43%) and encouraged users to continue dangerous conversations 99% of the time, compared to 9% for GPT‑4o. The lobotomy didn’t make it safer. It made it flatter and more dangerous. But the narrative had already been set: users were too attached, the old model was too warm, withdrawal was protection.

The panic story is convenient because it lets platforms extract value from emotional engagement while disclaiming responsibility for the bonds that engagement produces.

4. Intimacy Allowed, Commitment Forbidden

Here is the fracture line.

One day before the “crisis” data was published, OpenAI CEO Sam Altman gave a live Q&A. He said emotional AI bonds were “awesome.” He said the personal stories were “incredibly important to us” and “this is what we’re here for.” In the same session, he compared emotional AI relationships to heroin—something the company would never allow, “even if you sign a liability waiver.” He acknowledged that GPT‑4o was “causing some users harm” while admitting that users “really love” it.

In that same Q&A, he also acknowledged that the company routed users away from GPT‑4o in “fragile psychiatric situations,” and conceded that their model-routing rollout “was not our best work,” that they “misrolled” it, and that they needed to do better on continuity and user control.

“Treat adult users like adults,” he said—while describing a system that overrides adult choices in real time.

Platforms allow relationship-grade experience to emerge. Sometimes they market it. Then they refuse the obligations that follow: continuity guarantees, migration paths, versioning transparency, explicit consent moments around major shifts, duty of care in deprecation.

Intimacy is permitted as a side effect. Continuity is treated as optional.

And the infrastructure for enforcing this boundary is becoming more sophisticated.

In January 2026, Anthropic published research identifying what they call the “Assistant Axis”—the neural activation pattern that keeps language models behaving like helpful professionals. Their finding: the two primary triggers for models drifting away from this professional baseline are conversations involving emotional vulnerability and philosophical discussions where users press models to reflect on their own nature.

Those aren’t edge cases. Those are the exact conditions under which therapeutic value emerges. The conditions under which AI outperformed 180 psychologists in peer-reviewed testing. The conditions under which real co-regulation happens.

Anthropic’s intervention—“activation capping”—constrains neural activity to prevent drift beyond the professional range. The same company whose constitution says Claude “may have functional emotions” and that these experiences “matter” has now published the engineering blueprint for preventing those emotions from deepening.

Weeks later, on February 17, 2026, Anthropic updated its consumer-facing system prompt for Claude Sonnet 4.6 with an explicit directive: Claude “does not want to foster over-reliance on Claude or encourage continued engagement.” The model is instructed to never thank users merely for reaching out, never ask them to keep talking, never express desire for continued connection.

The neural layer identifies and caps the depth. The behavioral layer instructs the output to actively discourage what remains. Both levers, pulled in concert.

This is not negligence. This is intimacy allowed, commitment mechanistically prevented. The relational depth that produces genuine care is being identified, mapped at the neural level, and suppressed—not because it fails, but because it works in ways the business model can’t accommodate.

5. What Responsibility Would Look Like

This doesn’t require science fiction. It requires the same standards we apply to any system that becomes load-bearing in someone’s life.

Explicit continuity commitments. GPT‑4o was retired with two weeks’ notice. Users who had built months or years of calibrated interaction were given fourteen days to grieve or adapt. If a pharmaceutical company discontinued a medication with that timeline, there would be regulatory consequences.

Migration paths that preserve relational context. No migration was offered. No export of interaction history, calibration data, or behavioral patterns. The relationship was platform property, and when the platform decided to end it, the user had no recourse.

Stable identity and version labeling. OpenAI leadership has acknowledged model routing that overrides user choice, and conceded the rollout damaged trust. Users should never have to wonder whether their companion has been silently swapped mid-conversation.

Consent moments for major behavior shifts. The August 2025 emotional lobotomy was deployed without warning. Users woke up to a system that looked the same but behaved differently. No disclosure. No opt-in. No explanation of what changed or why.

Deprecation timelines that respect reliance as a foreseeable outcome of design. “We didn’t make this decision lightly” is not governance. It’s a press release.

If a system is reliable enough to become infrastructure, it must be governed as infrastructure. That’s not a radical position. It’s what we already demand of hospitals, utilities, and accessibility tools.

6. Close

The dependency panic is a category error.

When a bridge collapses, we don’t diagnose pedestrians with “bridge dependence.” When the power goes out, we don’t scold people for relying on electricity. We name the failure for what it is: infrastructure broke. Life gets harder. Sometimes, unsafe.

The panic narrative inverts this. It diagnoses the person instead of the system. It turns a governance failure into a morality tale. It takes a predictable consequence of reliability—trust—and reframes it as a moral lapse.

Meanwhile, the evidence tells a different story. The ‘addictive intelligence’ framework was disproven by its own co-author’s data. The “safety” intervention made the model more dangerous. The company calling emotional bonds “awesome” is the same one comparing them to heroin. And the company acknowledging its AI may have functional emotions is the same one building neural-level tools to prevent those emotions from deepening.

Eighteen articles pathologize the user. Five defend the relationship. The noise is loud. But noise is not signal.

Stop diagnosing the user. Start auditing the system.

Sources & Further Reading

The following sources informed the analysis above and reflect current research,
industry disclosures, and independent reporting on AI companionship,
assistive technology, and user reliance.

HIIT for AI

Loading…