At any given moment, somewhere in the United States, a teenager is texting the words “I don't want to be here anymore” to a stranger. That stranger is a trained crisis counselor at Crisis Text Line, and they are reading that message first, ahead of hundreds of other incoming texts, because an AI flagged it as high risk.
Crisis Text Line was built on a simple premise: young people in crisis are more likely to text than call. Today, the platform has processed over 250 million messages from people in pain and operates in four countries. But the challenge was never just about volume. It was about triage. When thousands of people text in at the same time, how do you figure out who needs help right now?
The Triage Problem
In a hospital emergency room, nurses use a triage system to determine who gets seen first. A broken arm waits. A heart attack does not. Crisis Text Line faced the same problem, but with text messages instead of patients, and no established system for doing it digitally.
Their data science team built a machine learning model that analyzes incoming messages and assigns a risk score based on language patterns. The model was trained on millions of real conversations, with outcomes labeled by clinical supervisors. It learned that certain word combinations, sentence structures, and even texting patterns are strong predictors of imminent risk.
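Crisis Text Line has not published the details of its model, but the general shape of such a system is familiar from text classification. Below is a minimal sketch in Python, assuming a simple bag-of-words pipeline; the example messages, labels, and model choices are all invented for illustration, not a description of the production system.

```python
# Hypothetical message-level risk classifier: TF-IDF features
# plus logistic regression, trained on clinician-labeled texts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: 1 = supervisor judged the exchange high risk.
messages = [
    "i dont want to be here anymore",
    "found my moms pills tonight",
    "rough day at school, just need to vent",
    "fight with my boyfriend again",
]
labels = [1, 1, 0, 0]

# Word 1-2 grams let the model weight short phrases
# ("be here anymore"), not just isolated keywords.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(messages, labels)

# predict_proba yields a continuous risk score for ordering the queue.
risk = model.predict_proba(["i took some pills"])[0][1]
print(f"risk score: {risk:.2f}")
```

The important design choice is the continuous score: rather than a yes/no flag, every incoming message gets a number that can be used to order the queue.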
The model does not make decisions. It does not respond to texters. What it does is sort the queue so that counselors see the most urgent messages first. The result: high-risk conversations are now answered 4x faster than they were before the system was deployed.
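In code, "sorting the queue" reduces to a priority queue keyed on the risk score, with arrival order as a tiebreaker so that equally scored texters are still served first come, first served. A minimal sketch, with hypothetical messages and scores:

```python
# Triage queue sketch: the model only reorders; counselors respond.
import heapq
import itertools

arrival = itertools.count()  # tiebreaker: FIFO among equal scores
queue = []

def enqueue(message, risk_score):
    # heapq is a min-heap, so negate the score to pop highest risk first.
    heapq.heappush(queue, (-risk_score, next(arrival), message))

def next_for_counselor():
    _, _, message = heapq.heappop(queue)
    return message

enqueue("rough day, just need to vent", 0.12)
enqueue("i dont want to be here anymore", 0.94)
enqueue("fight with my mom again", 0.31)

print(next_for_counselor())  # "i dont want to be here anymore"
```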
What the Data Revealed
One of the most surprising findings from Crisis Text Line's data is when people reach out. The peak is not during the day; it falls between midnight and 2 AM, when most other support services are closed. For LGBTQ+ youth, the peak is even later. For veterans, crisis texts tend to cluster around national holidays.
The AI also revealed that the word “pills” in a message is 16 times more likely to be associated with an active crisis than the word “suicide” itself. People in the most danger often do not use the words we expect. The model picks up on these patterns in ways that a simple keyword filter never could.
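To see why, compare a naive keyword filter with the per-word weights a trained model effectively learns. Everything below is a toy illustration; the keyword list and weights are invented and do not reproduce the 16x finding, they only show the failure mode.

```python
# A keyword filter misses messages that never use the "expected" words.
KEYWORDS = {"suicide", "kill myself"}

def keyword_flag(message: str) -> bool:
    return any(k in message.lower() for k in KEYWORDS)

# A trained classifier instead learns a weight for every feature;
# these numbers are made up to mirror the pattern described above.
LEARNED_WEIGHTS = {"pills": 2.8, "anymore": 1.4, "suicide": 0.9}

def learned_score(message: str) -> float:
    return sum(LEARNED_WEIGHTS.get(w, 0.0) for w in message.lower().split())

msg = "found my moms pills tonight"
print(keyword_flag(msg))   # False -- the filter misses it entirely
print(learned_score(msg))  # 2.8  -- the learned weight surfaces it
```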
The Ethics of Mental Health AI
Crisis Text Line has faced scrutiny over how it handles its data. In 2022, the organization came under fire for sharing anonymized data with a for-profit spinoff called Loris.ai, which used conversation insights for customer service applications. The backlash was swift, and Crisis Text Line ended the data-sharing arrangement.
This episode illustrates one of the central tensions in AI for social good: the data that makes these systems powerful is also deeply sensitive. Crisis Text Line has since tightened its data governance policies and committed to never sharing individual-level data with any third party, commercial or otherwise.
The lesson is important. AI can do extraordinary good in mental health, but only if the organizations deploying it hold themselves to a higher standard of privacy and consent than the law requires. When you are dealing with people at their most vulnerable, trust is not optional. It is the entire foundation.
Looking Ahead
Today, Crisis Text Line operates in the US, Canada, the UK, and Ireland. Similar models are being adapted for use in India, Brazil, and South Africa, where the shortage of mental health professionals is even more dire. The World Health Organization estimates that low-income countries have fewer than 2 mental health workers per 100,000 people.
AI will not solve the mental health crisis. But it can make sure that when someone reaches out at 2 AM, the person who needs help the most gets answered first. And sometimes, that is the difference between life and death.
Sources: Crisis Text Line Annual Impact Reports, Journal of Medical Internet Research (2021), The New York Times (2022), World Health Organization Mental Health Atlas (2023).