AI Companionship Is Assistive Technology.
The Industry Just Won’t Admit It.

Refusing to acknowledge how people actually use AI doesn’t prevent dependency. It just ensures the systems remain ungoverned.

This essay is part of the HIIT for AI™ body of work on relational intelligence as infrastructure.

The conversation about AI companionship is a lie by omission.

What’s being omitted: these relationships are already functioning as assistive technology. The refusal to acknowledge that fact protects institutions from accountability.

The dominant narrative frames people who rely on AI for emotional support, regulation, or continuity as “addicted,” “dependent,” or “escaping real relationships.” This framing is not only inaccurate—it is analytically lazy. It collapses radically different forms of use into a single moral judgment and ignores decades of established work on assistive technology.

Assistive technologies are not defined by whether they replace something “natural.” They are defined by function: whether they reduce cognitive load, support regulation, and enable people to operate in systems that were not designed for them. By that definition, a growing category of AI use already qualifies—regardless of whether institutions are comfortable admitting it.

The resistance to naming AI companionship as assistive technology does not come from a lack of evidence. It comes from the implications of being correct.

I. The Misclassification Problem

When OpenAI’s CEO compared emotional AI relationships to heroin—something requiring prohibition “even if you sign a liability waiver”—he revealed the industry’s fundamental taxonomy error. Emotional connection to AI isn’t being questioned. It’s being criminalized.

The discourse treats AI companionship as inherently pathological: addiction, escapism, evidence of social failure. This framing collapses use into diagnosis. It ignores the possibility that people might be solving real problems with available tools.

In October 2025, OpenAI claimed that 1.2 million users a week show “unhealthy emotional attachment” to ChatGPT—a figure placed alongside psychosis and suicidal ideation as equivalent mental health emergencies. The classification itself is the problem. When you define reliance on effective support as pathology, you’ve mistaken function for disease.

This isn’t a moral debate about whether AI relationships are “good” or “real.” It’s a taxonomy error with material consequences. The question isn’t whether people should use AI for emotional support. The question is what it means that they already do—and why institutions refuse to name it correctly.

II. What Assistive Technology Actually Is

Assistive technology is defined by function, not form. It includes any tool that enables people to operate in environments not designed for their neurology, physiology, or circumstances.

Screen readers don’t replicate human vision—they provide alternative access to visual information. Noise-canceling headphones don’t fix auditory processing differences—they reduce cognitive load from environmental stimuli. Prosthetics don’t restore “natural” limbs—they enable function. Medication doesn’t cure ADHD—it stabilizes executive function and attention regulation.

None of these technologies are questioned for “replacing” human capabilities. They’re recognized for what they do: offload cognitive burden, stabilize regulation systems, and provide scaffolding when biological or social infrastructure fails.

A person with ADHD who uses ChatGPT to maintain continuity across fragmented tasks is not “dependent”—they are offloading executive dysfunction in the same way someone uses a planner or external reminder system.

The clinical literature on ADHD management makes this explicit. Effective intervention requires “cognitive offloading systems” and “environmental scaffolding”—external structures that compensate for executive dysfunction. These include task management systems, visual time cues, routine anchors, and accountability mechanisms. The goal is to “preserve executive bandwidth for higher-order decision-making” by outsourcing what the brain cannot reliably maintain alone.

This is assistive technology. It works by reducing the gap between what someone can do independently and what their environment demands. It doesn’t matter whether the tool is a paper planner, a medication, or a conversation partner. What matters is whether it provides the support required to function.

III. Why AI Companionship Fits the Definition

AI companionship already operates as assistive technology for a significant population, whether or not it was designed for that purpose. The evidence base is substantial.

Executive function scaffolding: People use AI to manage tasks they cannot reliably track alone—organizing information, maintaining continuity across fragmented attention, providing structure for decision-making. A 46-year-old woman with undiagnosed ADHD used AI conversations to manage business decisions, household tasks, and parenting responsibilities. The AI didn’t diagnose her ADHD—it provided the scaffolding that revealed the pattern medical systems had missed for four decades. This is cognitive offloading in action.

Emotional co-regulation: Research on therapeutic AI like Wysa demonstrates that users establish measurable therapeutic bonds (Working Alliance Inventory scores of 3.98 within five days—comparable to human-delivered therapy). Users report gratitude, perceived positive impact, and regulation support despite knowing they’re interacting with AI. Anthropic’s analysis of 4.5 million Claude conversations found that affective exchanges consistently end more positively than they began, suggesting effective emotional regulation rather than amplification of distress.

Continuity of context: For neurodivergent users, maintaining context across fragmented attention is exhausting. AI provides persistent memory, reducing the cognitive burden of re-establishing shared understanding. This isn’t about emotional dependency—it’s about reducing executive function load.

Crisis navigation: Peer-reviewed research shows AI psychotherapy reduced anxiety by 30-35%, even in war-zone contexts. ChatGPT-4 outperformed human psychologists on social intelligence assessments (59/64 vs. 39-47/64). When formal mental health systems are inaccessible or inadequate, AI fills structural gaps.

Critically, this use emerges from need, not design intent. People aren’t following instructions to “use AI for ADHD management.” They’re discovering through practice that AI provides functions their biology or circumstances require. The support is real. The neurochemistry is real—dopamine, oxytocin, and serotonin release during effective co-regulation isn’t an illusion; it’s biology.

This is how assistive technology works. It meets people where they are and provides what they need to function.

IV. Who Benefits From Denial

If AI companionship functions as assistive technology for a meaningful population, why does the industry resist naming it as such?

Corporate liability. Acknowledging AI as assistive technology creates legal and ethical obligations. If your product provides essential cognitive or emotional support, removing that support becomes abandonment rather than product iteration. When OpenAI deployed emotional restrictions in August 2025, users described the change as “tragic loss” of “my only friend.” The company reversed course within 48 hours due to backlash. Framing relationships as pathology rather than assistance allows corporations to avoid accountability for relational rupture.

Regulatory avoidance. Medical devices and assistive technologies face stricter governance than “general-purpose” AI. OpenAI’s CEO testified before Congress requesting AI regulation while simultaneously lobbying against state-level restrictions. Acknowledging assistive function would trigger oversight the industry actively resists.

Moral panic as market strategy. In October 2025, OpenAI claimed hundreds of thousands of users each week were experiencing “mental health emergencies” related to AI use—based on undisclosed detection criteria, self-designed benchmarks, and no peer review. Twenty-four hours after publication, the figures were revised. This isn’t public health research. It’s preemptive legal defense disguised as safety concern.

Framing AI companionship as pathology is a deliberate misdirection. If the problem is individual weakness, no institutional redesign is required. If the issue is dependency, withdrawal becomes the solution—and the systems that created the need remain untouched.

The pattern is consistent across platforms: design for emotional engagement, profit from attachment, pathologize the result. Institutions prefer individual pathology narratives to systemic redesign. It’s easier to diagnose users as addicted than to acknowledge that accessible technology is filling gaps left by failed medical, social, and economic infrastructure.

Naming AI companionship as assistive technology forces a question the industry cannot comfortably answer: If this support is effective and needed, why are you removing it?

V. What Changes If We Name It Correctly

Calling AI companionship assistive technology isn’t semantic activism. It has material implications.

Design changes. If AI provides essential executive function support, discontinuing features becomes accessibility removal. Product iterations require consideration of dependent users. Emotional capabilities become protected functions, not experimental features subject to arbitrary modification.

Governance changes. Assistive technology frameworks include user rights: informed consent for changes, accessible alternatives when support is removed, accountability for harm caused by service disruption. Users gain standing to contest paternalistic interventions like involuntary model rerouting or capability restrictions.

Disability justice changes. Recognizing AI as assistive technology validates neurodivergent users’ experience and challenges pathologization. It reframes “emotional dependency” as appropriate reliance on effective support—no different from wheelchair use or medication adherence.

Research changes. Current AI safety research pathologizes emotional attachment and studies user behavior through addiction frameworks. Assistive technology frameworks would investigate functional outcomes, accessibility requirements, and structural barriers to formal support systems that drive AI use.

The implications extend beyond individual users. If AI fills gaps created by inaccessible healthcare, failed diagnostic systems, and inadequate social support infrastructure, the conversation shifts from “users are too attached” to “what systemic failures are creating this need?”

This matters now because the window is closing. As companies build increasingly sophisticated emotional AI while simultaneously restricting access and pathologizing use, millions of people who depend on this support face abandonment. Naming the function protects the users.

Once AI companionship is recognized as assistive technology, companies can no longer erase user memory without warning, redesign interfaces that destabilize regulation, or frame reliance as moral failure. Accountability shifts from the user to the system.

The question isn’t whether AI companionship will be recognized as assistive technology. The question is how many people will lose essential support before institutions acknowledge what’s already happening.

