Machine learning is in the news again this week, with the announcement that systems can now surpass humans at identifying those at risk of taking their own life. Following a seemingly endless series of AI breakthroughs in recent years, the news may seem like just one more thing that computers can do better than humans. But is there a deeper existential significance to the fact that AI now knows more about the likelihood that someone might end their life than we do?
In his 1942 essay, The Myth of Sisyphus, Camus calls suicide the “one truly serious philosophical problem”, and yet, even as suicide rates rise in many Western societies, it seems we are stubbornly bad at recognizing suicidal traits in each other.
Predictions exceed 90 percent accuracy

A paper published by researchers at Florida State University highlights machine learning algorithms that can predict with “80-90 percent accuracy” whether someone will attempt suicide as far as two years in advance.
From a medical standpoint alone this is a remarkable breakthrough, but what makes it arguably more ground-breaking is the parallel news that Facebook has now added similar functionality to monitor for suicide risk amongst its users.
The ability to rapidly develop, and then globally implement, a tool that can understand our inner state of mind to such a degree is something that would scarcely have been imaginable even twenty years ago.
The fact that this tool surpasses our own insight on a topic that many consider a central metaphysical question is even more startling.
Jessica Ribeiro discusses her paper, titled “Predicting Risk of Suicide Attempts over Time through Machine Learning,” published in the journal Clinical Psychological Science.
While suicide is the only life-or-death decision that many of us are ever likely to make, it is a topic we spend relatively little time discussing. It is perhaps not too surprising, then, that even amongst the top human mental health specialists the odds of accurately predicting suicide risk are little better than 50-50.
“There is but one truly serious philosophical problem, and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.” (Albert Camus, An Absurd Reasoning)

The Florida State University study, titled “Predicting Risk of Suicide Attempts over Time through Machine Learning”, builds on a machine learning analysis of a massive repository of electronic health records.
In what has been dubbed “the largest research study of its kind”, the machine learning system studied by researcher Jessica Ribeiro analyzed records from about 2 million patients in Tennessee and identified more than 3,200 people who had attempted suicide.
“This study provides evidence that we can predict suicide attempts accurately,” Ribeiro said. “We can predict them accurately over time, but we’re best at predicting them closer to the event. We also know, based on this study, that risk factors — how they work and how important they are — also change over time.”
The algorithms become even more accurate as a person’s suicide attempt gets closer. For example, the accuracy climbs to 92 percent one week before a suicide attempt.
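The study’s models, trained on real clinical records, are far more sophisticated than anything shown here, but the accuracy metric behind these figures can be sketched in a few lines of Python. All data below is invented purely for illustration; this is not the study’s model or its results.

```python
# Illustrative sketch only: invented data, NOT the Florida State study's model.
# "Accuracy" here means the fraction of cases where the predicted label
# matches the real outcome, which is how figures like 80-90 percent
# (two years out) versus 92 percent (one week out) are scored.

def accuracy(predictions, outcomes):
    """Fraction of cases where the predicted label matches the outcome."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

# Hypothetical labels: 1 = attempt predicted/observed, 0 = no attempt.
outcomes        = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
preds_two_years = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]  # two mistakes -> 0.8
preds_one_week  = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]  # one mistake  -> 0.9

print(accuracy(preds_two_years, outcomes))  # 0.8
print(accuracy(preds_one_week, outcomes))   # 0.9
```

The pattern the study reports, accuracy rising as the event draws nearer, is intuitive under this metric: recent clinical data carries more signal about imminent risk than data from years earlier.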
We need to talk about Siri

What makes this so remarkable is its huge superiority over human experts in only its initial iteration: what was little better than guesswork can now be identified with relative confidence.
“It was really sad. Fifty years of research with really smart people working on this and no real change. We can see that in the suicide rates. I’m not saying machine learning is the panacea, but these kinds of techniques and changes in the status quo can really disrupt a stagnant research area,” Ribeiro said.
The ability to understand those around us, to empathize with emotions, to know what will make our loved ones laugh or cry is for most of us an intrinsic facet of being human.
Even if we do not follow Camus’ existentialist philosophy and ascribe to the question of suicide a central metaphysical role, it seems beyond doubt that identifying whether a friend is about to commit suicide is pretty important.
For most of us the ability of computers to predict what type of coffee we want or where we might like to go on holiday is useful but ultimately non-threatening. It seems separate from our inherent ideas of what makes us human or gives our lives worth.
What makes life worth living?
Sure, computers may be better than us at logic processing and calculations, but the important stuff is different; those are things that we really understand.
Our ability to empathize and relate to others, our understanding of what makes life important: surely those will always remain areas where humans excel? Surely the things that make life worth living are things that we as humans are uniquely qualified to judge and identify?
This latest research seems to suggest that the meaning of life may still be beyond the grasp of Siri or Alexa but key factors that make life not worth living are certainly more computationally accessible than we imagined.
This is not to say machines will ever be able to calculate the meaning of life, but it certainly edges us closer to the possibility that those things that give our lives magic, that keep us getting up in the morning, may well be analyzable by machines.
Even if humans still maintain the lead in most emotional spheres, the fact that machines are now orders of magnitude better than humans at understanding a specific emotional state must raise questions about what we mean when we say that something is a “human” characteristic.
The ability to identify risks does not, of course, mean that AI tools can judge or understand whether life is worth living per se, but it does mean that machines are closer than we are to identifying certain parts of that puzzle.
As artificial intelligence progresses it seems likely that we will increasingly be forced to accept that the things that make life worth living are to be found in the mundane world of machine code.
By Lochlan Bloom
Lochlan Bloom is a British novelist, screenwriter and short story writer. He is the author of the novel The Wave as well as the novellas Trade and The Open Cage. He has written for BBC Radio, Litro Magazine, Porcelain Film, IronBox Films, EIU, H+ Magazine and Calliope, the official publication of the Writers’ Special Interest Group (SIG) of American Mensa, amongst others.