Around 800,000 people die by suicide every year. That is one person every 40 seconds. Finding ways to prevent people from taking their own lives is crucial.
The most common approaches to finding people in the risk zone are questionnaires and conversations with doctors. Not everyone, however, is willing to answer the questions truthfully or even to seek medical help. The stigma around suicide and the negative perception of suicidal people make it extremely hard to get accurate results. That’s where AI is brought into play. Analyzing medical and social data can help identify suicidal people more accurately and save their lives. How well does AI detect such people? What can be done after that? And what are the disadvantages of using AI?

Identifying suicidal people with AI
Two groups of data can be used: medical and social.
AI can find correlations between patients’ medical records and suicidal inclinations. For example, people with mental or terminal illnesses and past suicide attempts are more likely to have suicidal ideations and act upon them. What is less obvious is that even illnesses such as diabetes or arthritis can play a role (the WebMD article mentions that US veterans with these diagnoses had a higher risk of suicide).
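To make the idea more concrete, here is a minimal sketch of how a statistical model might learn such a correlation from medical-record features. The feature names, toy data, and labels below are invented purely for illustration; real systems rely on far richer records and careful clinical validation.

```python
# Hypothetical sketch: learning a suicide-risk score from medical-record features.
# Feature names, data, and labels are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [mental_illness, terminal_illness, past_attempt, diabetes, arthritis]
X = [
    [1, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [1, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = later showed suicidal ideation (toy labels)

model = LogisticRegression().fit(X, y)

# Estimate risk for a new (hypothetical) patient with a past attempt and diabetes.
new_patient = [[0, 0, 1, 1, 0]]
print(model.predict_proba(new_patient)[0][1])  # probability of the "at risk" class
```

The point of the sketch is only that a model trained on past records can surface non-obvious risk factors (such as diabetes or arthritis) by weighing how they co-occur with known outcomes.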
When people don’t want to go to doctors and are scared of being judged by their friends and family, they turn to the Internet: they ask Google questions, take online tests, and read articles and forums. AI can analyze this online activity to help assess a person’s mental state. Facebook and Google already monitor users’ behavior to identify people at risk. If you search Google for something related to suicide, the first results you see point to sources of help.

Facebook went further. If AI finds something suspicious in your posts or comments, a special team will check the information and, in some cases, even contact the police.
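As a rough, hypothetical illustration of such a pipeline (not Facebook’s actual system), posts could be scored by a model and anything above a threshold queued for human review. The keyword scorer below is a deliberately simplified stand-in for a trained text classifier, and the phrases and threshold are assumptions made for the example.

```python
# Simplified, hypothetical flagging pipeline: score posts and queue the risky
# ones for human review. The keyword scorer stands in for a real ML classifier.
RISK_PHRASES = ["want to die", "end it all", "no reason to live"]

def risk_score(post: str) -> float:
    """Toy scorer: fraction of risk phrases found in the post."""
    post = post.lower()
    hits = sum(phrase in post for phrase in RISK_PHRASES)
    return hits / len(RISK_PHRASES)

def triage(posts, threshold=0.3):
    """Return posts that should be escalated to a human review team."""
    return [p for p in posts if risk_score(p) >= threshold]

posts = [
    "Great game last night!",
    "I just want to die, there is no reason to live",
]
for flagged in triage(posts):
    print("Escalate to review team:", flagged)
```

The key design point is that the model only flags content; the decision about what to do next (contacting the person, or in extreme cases the police) is left to people.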
Trying to save people
Once people in the risk zone have been identified, there seem to be two main approaches to helping them.
The sterner approach is to act quickly and firmly. A good example is a welfare check by the police. The effectiveness of this method, however, is questionable. According to The New York Times, an officer once visited a woman after Facebook informed the police of a potential suicide. The woman insisted that she was fine and would not hurt herself, but the officer took her to the hospital for a mental health assessment anyway.
In other cases, the outcome was much worse. Mason Marks, in his article “Artificial Intelligence-Based Suicide Prediction,” describes several cases in which police officers killed people during such welfare checks. For instance, in 2014, Jason Harrison’s mother asked the police to help her take Jason, who had schizophrenia and bipolar disorder, to the hospital. Harrison met the officers holding a small screwdriver, and they shot him.
The milder approach is to send supportive emails and show people that help is available (this is where pop-ups with suicide hotlines come in). The main idea is to show people that they are not alone and to interrupt their negative train of thought. In the WebMD article, veteran Dan Miller, who once came close to taking his own life, says: “If I happened to be online, searching maybe for a bridge to jump off of … and suddenly that pops up on the screen, it’s like it changes the channel.”
According to WebMD, a study of 4,730 veterans in the risk zone supports the effectiveness of this method. Half of the group received supportive emails referencing positive personal details the veterans had shared with the researchers (a love for a certain sport or activity, for example). After two years, more of the veterans who received the emails were still alive compared to the other half.
Disadvantages of AI
The use of AI is by no means an ideal option. False identifications can be very dangerous: people can be detained and forced to undergo medical assessment or even treatment. Falsely labeling people as having mental health issues can stigmatize them further and hurt them. In some cases, police welfare checks lead to confrontations and even deaths.
Moreover, we can’t be sure that social data related to mental state is safe. Medical data is stored and controlled by medical institutions that have no right to disclose it; Google, Facebook, and other companies that gather social data are not bound by such strict laws. Even though Facebook promises that data related to mental state won’t be disclosed, that is hard to believe, considering the many information-security scandals the company has been involved in.
Mason Marks believes that companies may decide to sell suicide-prediction data to third parties, which would lead to discrimination: people could be denied housing, employment, or life insurance.
Conclusion
AI can be both a useful tool for preventing suicide and a threat to our well-being and privacy. But if we manage to create laws that protect our data, find ways to verify that data, and help suicidal people safely, many lives can be saved.
Uliana Reneiskaya
Sources:
https://yjolt.org/sites/default/files/21_yale_j.l._tech._special_issue_98.pdf
https://www.webmd.com/mental-health/story/suicide-prevention-and-AI
https://www.theverge.com/2022/3/30/23001468/google-search-ai-mum-personal-crisis-information
https://www.ncbi.nlm.nih.gov/books/NBK531453/
https://www.atrainceu.com/content/3-screening-suicide-risk
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6165520/
https://www.nytimes.com/2018/12/31/technology/facebook-suicide-screening-algorithm.html
Comments
First of all, this is a very well-written blog.
I would like to begin by saying that I totally agree that standard methods of identifying people in the “risk zone,” such as questionnaires and conversations with doctors, are not fully reliable. More often than not, people lie about themselves and their current mental state, because we are not used to asking someone for help or being asked for it. Most people are good at hiding their true thoughts and emotions, so I agree that AI can help assess a person’s suicidal behavior. I believe a person’s activity on the internet and social media can be used to assess suicide risk. Even though it is still not the most reliable method, it is a form of hard evidence and more telling than what the person would say in questionnaires or meetings with doctors, because, as I said, people tend to lie in situations like these: to protect their self-worth, to avoid sensitive topics (which suicidal behavior certainly is), and so on.
It is also surprising that, as you mention from the WebMD article, even non-terminal illnesses such as diabetes or arthritis were linked to a higher suicide risk among US veterans.
While using AI to monitor suicide risk is a good idea, I’m glad that some disadvantages are mentioned as well, especially false identifications and forced medical assessment or treatment. These could waste manpower and resources that could be spent on people who actually need them: for instance, police performing a welfare check over social media posts that turned out to be a joke, or doctors and psychologists assessing a patient’s mental health only to find that they are in a good state both mentally and physically. It is quite hard to tell whether somebody is joking about an issue this serious on social media, which makes AI an unreliable and easily fooled way of recognizing suicidal behavior, since it acts on specific patterns regardless of whether the person is serious. As long as the algorithm picks up what it is looking for from its pre-existing set of rules indicating suicide risk, it will flag the person as being in the “risk zone,” but I suppose it is better to be safe than sorry.
I just hope the algorithm can learn and evolve on its own to flag people who don’t show explicit warning signs, by analyzing their emotions and behavior for any hint of self-harm or suicide risk. But I think we are still a long way from perfecting this, or any, algorithm that fully understands a person’s emotions and behavior.
Overall, I really liked this blog and how you explained the role of AI in recognizing suicidal behavior and people in the risk zone.
Unfortunately, the number of people dying by suicide tends to grow these days, which is why I believe professional data analysts should join their efforts to improve the reliability of suicide-detecting AI systems, combat this issue as soon as possible, and minimise the risk of harm to individuals. I fully agree that the sterner method of preventing suicide is dangerous, since it could considerably impair people’s mental health; I don’t think that forcing a person to stay alive will be fruitful. As far as I am concerned, the mild approach described in the post is the best way to deal with suicidal thoughts. In my opinion, a person who considers suicide is a person craving help and support, so supportive emails and posts that show society’s care are the best way to lower suicide rates safely and harmlessly.