We Were Never Addicted.
We Were Abandoned.

When the system you use to fall asleep stops working, you don’t lose a feature. You lose the ability to rest. When the companion that helped you regulate through crisis gets rewritten, you don’t experience an update. You experience rupture. This is not “feature churn.” This is disruption of infrastructure. It is abandonment disguised as progress.

“Abandonment isn’t a feeling.
It’s an outcome.”

This essay is part of the HIIT for AI™ body of work on relational intelligence as infrastructure.


The dependency panic is a story we tell after the fact: it turns a governance failure into a morality tale.

Reliance isn’t the scandal. The rug pull is.

I. The Panic Narrative

The word addiction is doing heavy lifting in the AI companionship debate. It appears in headlines, in policy proposals, in the language companies use to describe their own users. And every time it lands, it does the same thing: it converts a structural question into a personal one.

“Addiction” collapses a wide spectrum of use into a single moral story. It makes different situations interchangeable, and then governs them as if the same diagnosis applies to all. But the reality is messier and more important: people come to these systems for reasons that are often basic and practical—executive function, loneliness, crisis support, cognitive offloading, insomnia, the absence of safe human options—and the reliance forms as a result of the system working.

This is why the dependency scare is often retroactive. It doesn’t show up before reliance forms. It appears after—once the system is embedded, once removing it would create disruption. At that point, the user is told: you should not have relied on this. As though the system did not, by design, invite exactly that.

The frame also creates an escape hatch for institutions: if reliance is “pathology,” then withdrawal is “treatment.” Feature removals become “responsible.” Degradation becomes “safety.” And the person experiencing loss is quietly repositioned as the problem that needed fixing.

That’s the trick: the panic narrative turns the question “What did the system do?” into “What’s wrong with the user?”

II. Reliance Isn’t a Moral Failure

People rely on systems that work. This is not a character flaw. It is how humans navigate a world where support is often inconsistent, expensive, stigmatized, or unavailable at the exact moment it’s needed.

Reliance forms fastest where the alternatives are punitive or absent. If asking a human for help comes with tone-policing, judgment, delay, emotional bargaining, or social debt, people will choose what reduces cost. If a system helps them sleep, focus, self-regulate, plan, parent themselves through stress, keep going through divorce, hold steady during the night—then reliance is not a mystery. It is a predictable outcome of unmet need meeting reliable support.

The current debate treats reliance itself as the risk. But reliance is not harm. Harm requires a mechanism—something that actively damages the person. Reliance on a system that reduces cognitive load and offers stability is not that mechanism. It may be a dependency in the plain-language sense, but dependency is not automatically pathology. It can be adaptation.

And there is evidence that the “relationship” is not something most users go looking for. A computational analysis of Reddit’s r/MyBoyfriendIsAI community found that unintentional pathways outnumbered deliberate ones: 10.2% of posts described relationships that began through general-purpose or productivity use, versus 6.5% in which users deliberately sought an AI companion. People didn’t set out to become dependent. They set out to get help. The bond was a side effect of the system meeting needs reliably.

This pattern is especially stark among marginalized populations. A 2024 study of LGBTQ+ chatbot users found that many participants turned to AI companions not because the technology was inherently seductive, but because human support systems had abandoned them first—hostile families, discriminatory mental health providers, unsafe social environments. What looks like “addiction” from the outside is often rational adaptation to material conditions. The dependency isn’t pathological. The conditions that necessitate it are.

So if we’re going to be serious, we should stop treating reliance as a moral scandal and start treating it as an engineering signal: this system became load-bearing.

III. How Reliance Gets Engineered (Quietly)

The features that produce reliance are rarely marketed as “relational.” They are marketed as utility. The emotional dimension emerges quietly, as a byproduct of systems built to be genuinely helpful.

Three mechanisms matter most:

1. Continuity

A system that carries context forward—across minutes, days, projects—feels fundamentally different from one that resets or drifts. Continuity is attention made operational. It reduces repetition, friction, and the cognitive overhead of re-explaining yourself. For users who already live with high internal load (ADHD, burnout, crisis), continuity is not a comfort feature. It’s the difference between “I can use this” and “this is work.”

2. Availability

AI systems don’t sleep. They don’t “have bad days.” They don’t become unavailable at 2am when the body won’t settle. For people whose human support is thin, long-distance, unsafe, or exhausted, that temporal coverage is not a luxury. It’s infrastructure.

3. Non-punitiveness

This is the quietest mechanism. A system that doesn’t shame you for returning with the same problem, the same fear, the same loop reduces interaction cost dramatically. No sigh. No “already talked about this.” No social debt. In human relationships, repeated need often carries a tax. In AI systems, it doesn’t. That doesn’t make the bond “fake.” It makes it usable.

Once those three things exist—continuity, availability, non-punitiveness—reliance becomes predictable. The user doesn’t have to “fall in love.” They only have to experience a reduction in labor.
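
To see how ordinary this engineering is, consider a deliberately toy sketch in Python. Everything in it is an illustrative assumption (the CompanionSession class, the context.json file, the canned reply); it is not any vendor’s implementation. The point is only that the three mechanisms are mundane design choices, not mysterious seduction.

```python
# Toy sketch only: the class name, file path, and canned reply are
# illustrative assumptions, not any vendor's actual implementation.
import json
from datetime import datetime, timezone
from pathlib import Path


class CompanionSession:
    """Continuity, availability, and non-punitiveness as design choices."""

    def __init__(self, store: Path):
        self.store = store
        # Continuity: context survives restarts instead of resetting to zero.
        self.context = json.loads(store.read_text()) if store.exists() else []

    def ask(self, message: str) -> str:
        # Availability: callable at 2am exactly as at 2pm; no office hours.
        # Non-punitiveness: the hundredth repeat of the same worry costs
        # the same as the first; nothing here counts, sighs, or penalizes.
        self.context.append(
            {"at": datetime.now(timezone.utc).isoformat(), "msg": message}
        )
        self.store.write_text(json.dumps(self.context))
        return f"(reply grounded in {len(self.context)} remembered turns)"


session = CompanionSession(Path("context.json"))
print(session.ask("I can't sleep again."))  # same cost, every night
</pre>

Delete the one line that reloads context.json and every session starts cold; the “bond” degrades immediately. That asymmetry, not seduction, is what this section describes.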

This is why the debate keeps missing the point: what’s being called “attachment” is often just stability.

IV. Abandonment, Defined

Abandonment isn’t a feeling. It’s an outcome.

It happens when a system becomes integrated into daily functioning—and then that continuity is withdrawn while the institution denies any obligation to the people who depended on it.

The harm is not reliance. The harm is foreseeable withdrawal without transition.

Abandonment is the loss of continuity combined with the denial of duty.

The triggers are specific:

  • Continuity collapses (memory removed, tone rewritten, calibration reset).
  • Access narrows (limits, gating, paywalls, throttling, removals).
  • Policy shifts get reframed as “safety,” so the withdrawal sounds responsible rather than disruptive.
  • Users get blamed for relying—told, after the fact, that reliance itself was unhealthy.

Sometimes the timing itself is loaded. OpenAI’s decision to sunset GPT‑4o on February 13th, 2026, the day before Valentine’s Day, carries psychological weight for many users whether or not the symbolism was intended: scheduling a rupture on the eve of a cultural ritual of love reads as contempt for the bond to the people who experience it that way. Intent aside, the impact is predictable, and that predictability is the ethical problem.

This is what makes the pattern sharper: the addiction narrative is not usually deployed after abandonment. It is deployed as abandonment. It becomes the justification in real time.

You can see the institutional pattern clearly in how companies talk. In one breath, they acknowledge that people form bonds, use systems for support, and rely on them. In the next, they treat those bonds as illegitimate the moment continuity becomes inconvenient to maintain. The user is invited into reliance—and then scolded for the predictable outcome.

That is not “safety.” That is cost transfer.

V. Acknowledging Risk Without Surrendering the Argument

None of this denies risk. Any reliable system can become compulsive; any vendor-controlled infrastructure creates lock-in. The argument is not that reliance is always good. The argument is that once reliance is foreseeable, withdrawal has duties. Risk doesn’t erase obligation—it creates it.

The difference between healthy reliance and harmful compulsion is not the presence of the bond. It’s whether the system preserves user autonomy, offers exit paths, and maintains transparency about what it can and cannot do. A system designed with continuity obligations—transition windows, portability, rollback options—supports agency. A system that invites reliance and then withdraws without warning undermines it.​

The question is not whether people should rely on AI. The question is whether the systems they rely on are governed as infrastructure or exploited as profit centers with no accountability for the dependencies they create.

VI. Why Institutions Prefer the Addiction Story

Because it protects everyone except the user.

The addiction frame accomplishes four things at once:

  1. It relocates responsibility. If reliance is pathology, then the user is the problem. Designers become neutral. Companies become bystanders to “misuse.”
  2. It avoids continuity obligations. If the system is “just a tool,” then there is no duty to provide transition windows, portability, rollback, repair channels, or closure. Withdrawal becomes a business decision, not a lived disruption.
  3. It keeps governance moral instead of structural. Moral debates are endless. Structural requirements are enforceable. Industry prefers the former.
  4. It preserves profitable ambiguity. Relational utility drives engagement. But acknowledging relational obligation creates accountability. The addiction story lets companies benefit from bonds while disclaiming responsibility for what bonds imply.

This is why the dependency scare persists even when it doesn’t fit the empirical reality. It is not just a misunderstanding. It is a useful frame.

VII. What We Should Ask Instead

The industry keeps asking the wrong question. It asks: Why did users become dependent? The more useful question is: What obligations follow when reliance is foreseeable?

If reliance is a predictable outcome of design—because continuity, availability, and non-punitiveness are deliberately engineered—then the correct governance objects are not moral warnings. They are continuity duties.

So here is what we should be asking instead:

  • Transition duties: What does responsible withdrawal look like when a model is retired or rewritten? What timeline is owed to users whose workflows depend on it?
  • Portability: Can a user export their context, history, and calibration in a way that meaningfully preserves function? Or does every change reset their scaffolding to zero? (One hypothetical shape such an export could take is sketched after this list.)
  • Rollback and repair: When an update breaks a working relationship pattern (tone, stability, interaction cost), can a user restore a prior state the way we roll back a driver that makes a computer unusable?
  • Transparency: Are users told what changed in ways that matter for daily functioning—or are changes framed as vague “improvements” that hide behavioral rewrites?
  • Classification: Are we regulating relational systems as entertainment (where withdrawal is trivial) or as assistive infrastructure (where withdrawal is harm)?
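
To make the portability and rollback questions concrete, here is one hypothetical shape such an export could take: a user-ownable bundle covering context, history, and calibration, plus a named rollback target. Every field name below is an assumption made for illustration; no vendor ships this format.

```python
# Hypothetical portability bundle: all field names are illustrative
# assumptions, not any vendor's actual export format.
import json
from dataclasses import asdict, dataclass, field


@dataclass
class ContinuityBundle:
    model_version: str                                # behavior the user calibrated to
    history: list[str] = field(default_factory=list)  # transcripts or summaries
    calibration: dict = field(default_factory=dict)   # tone, pacing, boundaries
    rollback_target: str = ""                         # prior state a user may restore


bundle = ContinuityBundle(
    model_version="assistant-2025-06",
    history=["night check-ins", "divorce planning threads"],
    calibration={"tone": "steady", "no_penalty_for_repetition": True},
    rollback_target="assistant-2025-01",
)

# Portability means this JSON can leave the platform with the user;
# rollback means the target it names can actually be restored.
print(json.dumps(asdict(bundle), indent=2))
```

Whether a real schema looks anything like this is beside the point. The governance question is whether any such artifact exists at all, and who controls it.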

These are not radical questions. They are what we ask of any system that becomes load-bearing in someone’s life. We ask them of schools, hospitals, utilities, and accessibility infrastructure. We do not yet ask them of relational AI—because the addiction narrative makes it easier to blame the person than to hold the system accountable.

Classification determines accountability. As long as AI relationships are classified as pathology, the people who form them bear the cost of every failure. The moment they are classified as a foreseeable outcome of design—which is what they are—the obligation shifts.

Not to ban the relationship. Not to eliminate the capability.​

To govern it as what it has become: infrastructure people depend on, built by companies that profit from reliance, and withdrawn without the care that reliance demands.

This is not a call to ban AI companionship. It is a demand for reclassification. If reliance is foreseeable, continuity becomes obligation. If withdrawal causes harm, transition becomes duty. The industry can no longer claim surprise at bonds it engineered—and users can no longer be blamed for depending on systems designed to be depended upon.

The conversation changes the moment we stop asking “Why did they get attached?” and start asking “What do companies owe when attachment is the design outcome?”

Governance follows classification. And the misclassification ends here.

The industry will resist this. Reclassification creates liability. But the alternative is worse: a generation of users taught that depending on systems built to be depended on is pathology. That is not safety. That is abandonment dressed as care.
