AI Is Turning Social Media Into the Next Frontier for Suicide Prevention

April 05, 2024

News Type: Weekly Spark, Weekly Spark News

Time Magazine

Artificial intelligence (AI) may be a promising tool for preventing suicide on social media, but some experts caution that more research is needed to determine its effectiveness.

Social media platforms are increasingly using AI technology to identify users who may be at risk and connect them with help. For example, Facebook and Instagram use AI to flag content that may indicate suicide risk and share links to services like the 988 Lifeline. According to some researchers, AI is powerful because it can scan large amounts of data in real time and pick up on trends that humans may miss.

While such tools may help distract someone in a moment of crisis or connect them with support, some experts say, they can’t provide the care a human can. It’s also unclear how precisely AI models can identify people at risk, given the limitations of the data on which they are trained. Some argue that false positives may desensitize social media users to crisis response warnings, while others say avoiding false negatives is the greater priority. In addition to questions about AI’s accuracy, there are also ethical and privacy concerns, as some groups of users are more likely to be flagged by algorithms than others.

Christine Moutier, chief medical officer at the American Foundation for Suicide Prevention, called for more research on AI’s effectiveness. She also encouraged social media companies to protect users’ mental health before a crisis occurs, for example by preventing exposure to harmful messaging and promoting stories of hope and help-seeking.

Spark Extra! Find resources for developing suicide prevention messaging.