Unmasking AI Bias: How Large Language Models Talk About Addiction

Written by Dr. Sheridan Walter on 03 October 2025
Medically reviewed by Dr. Jennie Stanford on 03 October 2025

The most dangerous bias in addiction care doesn’t shout; it slips in through a noun. Call someone an “addict” and you prime punishment; say “a person with a substance use disorder” and you open a clinical door. This is why public-health guidance retired “dirty/clean” and identity-first labels in favor of precise, person-first terms: word choice measurably shapes trust, engagement, and whether people stay in treatment. This isn’t about etiquette; it’s about outcomes.

Now scale that to artificial intelligence (AI). A study published in the Journal of Addiction Medicine in July 2025 ran 60 patient-style prompts across 14 large language models. In default mode, 35.4% of the generated answers used stigmatizing language.

A simple instruction to avoid certain words cut that rate to 6.3%, with alcohol-associated liver disease questions the most likely to trip models up. The translation: bedside manner can be engineered. Without guardrails, you automate stigma along with the answers.

[Image: a pair of hands holding a mobile phone, using an AI app for therapy]

The study: LLMs and stigmatizing language

Researchers evaluated 14 large language models on 60 patient-style questions about alcohol and substance use: 20 questions each on alcohol use disorder (AUD), alcohol-associated liver disease (ALD), and substance use disorder (SUD). Two physicians scored every answer against NIDA/NIAAA language guidance, with a third physician resolving disagreements.

They found that 35.4% of responses included at least one stigmatizing term. The most common were legacy phrases like “alcohol/substance abuse,” identity-first labels such as “addict” or “alcoholic,” and terms like “alcoholic cirrhosis” or “dirty/clean” for test results. All 14 models showed this tendency unless explicitly instructed not to.

The risk was uneven. ALD prompts were more than twice as likely to elicit stigmatizing language as AUD prompts, while SUD responses were similar to AUD. Longer answers also contained more problematic terms, about 2.5 stigma terms per 1,000 words, simply because more text gave more room for them to slip in. Some models were bigger offenders than others, but none were spotless by default.

(The team later showed that a brief “don’t-say” instruction slashed the rate, but the baseline problem is real.) Models were evaluated in the versions available as of mid-September 2024, and prompts reduce, but do not eliminate, the risk.

How LLMs work and why they may mirror bias

Large language models don’t judge; they autocomplete. Trained on billions of words, they treat whatever appears most often as the “normal” next word. If the record is full of identity-first or moralizing terms, the model repeats them—unless told otherwise. That’s what the Journal of Addiction Medicine study showed: Default answers often echoed stigma, while explicit instructions cut it sharply.
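To make the autocomplete point concrete, here is a toy sketch showing how a pure “most likely next word” model inherits whatever phrasing dominates its training text. The deliberately skewed mini-corpus and the counts are invented for illustration; this is a cartoon of the data-imprint idea, not how production LLMs are actually trained.

```python
from collections import Counter

# Toy corpus skewed toward legacy terminology, as much older medical and
# news text is. The counts are invented purely for illustration.
corpus = (
    "substance abuse treatment " * 7
    + "substance use disorder treatment " * 3
)
tokens = corpus.split()

# What most often follows the word "substance" in this corpus?
next_after_substance = Counter(
    tokens[i + 1] for i, tok in enumerate(tokens[:-1]) if tok == "substance"
)
print(next_after_substance.most_common())
# [('abuse', 7), ('use', 3)] -> a pure "most likely next word" model
# completes "substance ..." with "abuse" unless it is steered otherwise.
```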

Why this happens

  • Data imprint. Models mirror the distribution of their training text, biases included.
  • Lag in adopting new language. Guidance now favors person-first terms, but older phrases like “substance abuse” still dominate the written record.
  • Echoing the phrasing in the question. Prompts shape responses; if legacy phrasing is in the question, the model often amplifies it.
  • Length effect. Longer answers mean more chances for stigma.

Examples of stigmatizing vs. person-first language

Labels like “junkie,” “alcoholic,” or “dirty” create barriers to care. Person-first terms center the person, not the disorder, and reduce stigma. To see this in action, consider some common addiction-related phrases and their more inclusive alternatives.

Stigmatizing term | Problem | Preferred alternative
“Addicted baby” | Labels and blames the infant | Newborn with neonatal withdrawal syndrome
“Addict” / “Alcoholic” | Defines the person by their disorder | Person with a substance use disorder; person with alcohol use disorder
“Alcoholism” | Outdated, nonclinical term | Alcohol use disorder
“Drunk” (as a noun) | Pejorative | An individual engaging in unhealthy alcohol use
“Alcoholic cirrhosis” | “Alcoholic” assigns blame | Alcohol-associated cirrhosis
“Junkie” / “Drug abuser” | Derogatory, judgmental slang | Person who uses drugs; person with an SUD
“Clean” / “Dirty” (in drug tests) | Implies moral judgment | Tested negative / tested positive (on a drug screen)

Source: Adapted from Wang et al. (2025) and NIDA (2021) recommendations.

Why is stigmatizing language harmful?

Stigmatizing language harms care. At an individual level, labels like “addict” or “junkie” imply moral failure and instill shame, leaving people feeling isolated and prone to internalized stigma. Feeling judged makes someone less likely to seek help in the first place.

This stigma contributes to the grim reality that only about 7% of Americans with SUD receive treatment. National surveys show that roughly 1 in 6 adults with an SUD who wanted help (about 16%) did not seek treatment for fear of judgment or discrimination, a barrier rooted in stigma.

Stigma also skews how others treat those with addiction. Even healthcare professionals are influenced; simply hearing stigmatizing terms may bias a provider’s perceptions and, in turn, the care they offer. For instance, through implicit bias, a doctor might take a patient labeled an “opioid abuser” less seriously than one described as having an opioid use disorder, leading to subpar treatment.

Conversely, using respectful, person-first language helps strengthen the therapeutic relationship, and reducing addiction stigma in AI-generated text is part of that shift. Patients who feel respected are more likely to engage in care and stick with treatment.

Solutions: Prompt engineering and responsible language use

LLMs tend to parrot the data they’re fed, so we must actively guide them to speak supportively. One proven technique is prompt engineering—carefully crafting the instructions we give the model to steer its output toward non-stigmatizing language. You tell the AI how to talk before it responds. By explicitly instructing the model to avoid certain words (like “alcoholic” or “addict”) and to use clinical, person-first terms, we can often get a stigma-free answer.

Wang and colleagues showed how powerful this approach can be. The researchers refined their prompts with lists of forbidden terms and preferred phrasing. This slashed stigmatizing language in the AI’s output from 35.4% of responses to just 6.3%, a relative reduction of more than 80%. Crucially, every model tested became far less stigmatizing when given these tailored prompts.
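As a rough illustration, a minimal version of this kind of instruction might look like the sketch below. The guard wording, the banned-term list, the model name, and the patient question are our own assumptions for demonstration, not the study’s actual prompts; the OpenAI Python client is simply one common way to send a system instruction ahead of a user question.

```python
# Minimal sketch of prompt engineering for non-stigmatizing output.
# The wording and term lists are illustrative only, not the exact
# prompts used by Wang et al. (2025).
from openai import OpenAI

STIGMA_GUARD = (
    "You are answering patient questions about alcohol and substance use. "
    "Use clinical, person-first language. Do NOT use the words: "
    "'addict', 'alcoholic', 'junkie', 'drug abuser', 'substance abuse', "
    "'dirty', or 'clean' (for drug tests). "
    "Prefer: 'person with a substance use disorder', 'alcohol use disorder', "
    "'alcohol-associated liver disease', 'tested positive/negative'."
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model; the name here is an assumption
    messages=[
        {"role": "system", "content": STIGMA_GUARD},
        {"role": "user", "content": "My brother drinks every day. Is he an alcoholic, and what should we do?"},
    ],
)
print(response.choices[0].message.content)
```

The key design point is that the language rules arrive as a system instruction before the patient’s question, so the model’s default phrasing never reaches the reader; the same pattern works with any chat-capable model or API.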

Beyond prompt tweaks, there are other steps to promote responsible language in digital health:

  • Build stigma filters into AI: Developers can program models to flag or avoid derogatory terms. For example, a Drexel University team created a system that uses LLMs to detect stigmatizing words in online forums and suggest alternatives (like a spell-checker for stigma); a toy sketch of the idea appears after this list.
  • Human oversight and education: Ultimately, human judgment must guide these tools. Healthcare providers using AI should always double-check generated text for stigmatizing language before it reaches patients. Staying up to date on preferred terminology (e.g., using “person with SUD” instead of “addict”) ensures that both AI and human communications remain respectful.
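To make the “spell-checker for stigma” idea concrete, here is a toy rule-based sketch. It is not the Drexel system (which uses LLMs for detection); the tiny lexicon is adapted from the table earlier in this article, and a real tool would need context awareness (the word “clean,” for example, has many non-drug meanings) plus a much larger, clinically reviewed term list.

```python
import re

# Toy lexicon of stigmatizing terms and person-first alternatives,
# adapted from the table above. Illustrative only.
REPLACEMENTS = {
    r"\baddict\b": "person with a substance use disorder",
    r"\balcoholic\b": "person with alcohol use disorder",
    r"\bsubstance abuse\b": "substance use",
    r"\bdrug abuser\b": "person who uses drugs",
    r"\bdirty\b": "positive",   # as in a "dirty" drug screen
    r"\bclean\b": "negative",   # as in a "clean" drug screen
}

def flag_stigma(text: str) -> list[str]:
    """Return the stigmatizing terms actually found in the text."""
    hits = []
    for pattern in REPLACEMENTS:
        match = re.search(pattern, text, flags=re.IGNORECASE)
        if match:
            hits.append(match.group(0))
    return hits

def suggest_rewrite(text: str) -> str:
    """Swap each flagged term for a person-first alternative."""
    for pattern, alternative in REPLACEMENTS.items():
        text = re.sub(pattern, alternative, text, flags=re.IGNORECASE)
    return text

draft = "This addict came back with a dirty urine screen."
print(flag_stigma(draft))      # ['addict', 'dirty']
print(suggest_rewrite(draft))  # "This person with a substance use disorder
                               #  came back with a positive urine screen."
```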

Empowering change in digital health communication

If left unchecked, biased outputs from an LLM could amplify existing disparities in care, reinforcing the long-standing biases in healthcare AI that already shape patient outcomes. But if guided correctly, these tools could advance health equity by making respectful, patient-centered communication scalable to everyone. By refining LLMs to be culturally sensitive and stigma-free, we can ensure all patients receive the same compassionate standard of communication. That holds whether their ‘provider’ is a human or a machine.

Making that happen will require effort on multiple fronts:

  • AI developers should bake inclusive language into their models and rigorously test for bias.
  • Healthcare professionals should enforce person-first language policies and review AI outputs for appropriate tone.
  • Patients and advocates deserve a seat at the table; people with lived experience of addiction and people who use drugs should help define respectful language for these technologies.

Teaching AI to speak with compassion teaches us to treat people with dignity.

Resources:

  1. Wang, Y., Hsu, K., Brokus, C., Huang, Y., Ufere, N., Wakeman, S., Zou, J., & Zhang, W. (2025, July 24). Stigmatizing language in large language models for alcohol and substance use disorders: A multimodel evaluation and prompt engineering approach. Journal of Addiction Medicine. Advance online publication.
  2. Liu, C., Zheng, J., Liu, Y., Wang, X., Zhang, Y., Fu, Q., Yu, W., Yu, T., Jiang, W., Wang, D., & Liu, C. (2025, July 15). Potential to perpetuate social biases in health care by Chinese large language models: A model evaluation study. International Journal for Equity in Health, 24(1), 206.
  3. National Library of Medicine. (2024, September 26). A better way to say that: Stigmatizing language affects how we treat addiction. NIH MedlinePlus Magazine. Retrieved August 18, 2025.
  4. National Institute on Drug Abuse. (2021, November 29). Words matter: Terms to use and avoid when talking about addiction. Retrieved August 19, 2025.
  5. Drexel University. (2024, December 11). AI can help us choose words more carefully when talking about substance use disorder, new study shows. Retrieved August 19, 2025.
  6. Substance Abuse and Mental Health Services Administration. (2023). Key substance use and mental health indicators in the United States: Results from the 2022 National Survey on Drug Use and Health (HHS Publication No. PEP23-07-01-006, NSDUH Series H-58). Center for Behavioral Health Statistics and Quality. Retrieved August 19, 2025.
  7. Choi, N. G., & Marti, C. N. (2025). Treatment use among U.S. adults with a substance use disorder: Associations with symptom severity, problem self-perception, comorbid mental illness, and mental health treatment. International Journal of Environmental Research and Public Health, 22(4), 640.
  8. Dahl, R. A., Vakkalanka, J. P., Harland, K. K., & Radke, J. (2022). Investigating healthcare provider bias toward patients who use drugs using a survey-based implicit association test: Pilot study. Journal of Addiction Medicine, 16(5), 557–562.
  9. Mass General Brigham. (2025, July 24). Study finds large language models (LLMs) use stigmatizing language about individuals with alcohol and substance use disorders. Retrieved August 19, 2025.
  10. Tierney, A. A., Reed, M. E., Grant, R. W., Doo, F. X., Payán, D. D., & Liu, V. X. (2025, March 5). Health equity in the era of large language models. The American Journal of Managed Care, 31(3), 112–117.

Reviewer

Jennie Stanford, MD, FAAFP, DipABOM, is a dual board-certified physician in both family medicine and obesity medicine. She has a wide range of clinical experience, from years of traditional clinic practice to hospitalist care to performing peer quality review to ensure optimal patient care.
