Grant Blashki

Would you trust AI with your Mental Health?


Artificial intelligence (AI) is now playing a role in mental health care, and if we can navigate the ethical and privacy concerns, it may help us keep up with increasing demand.

How on earth could a non-living digital device be of value to any human being experiencing mental health issues? Can a person really develop a sense of trust with AI like they might have with another person? Or, even more subtly, what if over time the AI develops algorithms and draws conclusions about people’s mental health that are incorrect, biased or even discriminatory?

Currently, countries all around the world are grappling with the high prevalence of mental health disorders and their enormous impact on people’s day-to-day lives, the community, and society as a whole.

The truth is that mental health workforces, even when well funded and supported, aren’t able to keep up with the demand. We also know that even in developed nations there are populations unserved by mental health services, such as people in remote areas, people who are socially disadvantaged, and people who simply can’t afford support.

Digital mental health also has the potential to provide access to those people who are afraid to seek care because of perceived stigma.

For many people, the idea of going along to a therapist for psychological care is embarrassing, and they worry about what other people will think, so instant access on the phone or via the internet can be a good way to circumvent this barrier.

So, some of the enticing aspects of digital mental health are its low cost, low stigma, scalability and high accessibility. But what are we talking about when we talk about artificial intelligence in the mental health field? Let’s start by pointing out an important distinction between traditional online therapies and AI mental health interventions.

There is a plethora of internet-based mental health interventions, usually referred to as e-therapies or e-counselling, many of which have been proven to be effective.

For example, moodgym is a well-researched online therapy based on cognitive behavioural therapy (CBT) that has been shown in several randomised trials to be of great benefit to people experiencing mental health problems like depression and general psychological distress.

Another example is My Compass, which has demonstrated significant improvements in symptoms of depression, anxiety and stress, as well as in work and social functioning.

Could your smartphone become your therapist?

A distinguishing feature of AI-based mental health interventions, as opposed to traditional online therapies, is the ability of their algorithms to adapt and evolve. This means AI mental health interventions are designed to learn, and to adjust and change based on experience, in order to make better decisions in the future. Underpinning this type of AI is what is called machine learning.

Machine learning is a group of statistical techniques that allow a computer to improve at tasks as it gains experience completing them. Machine learning is where most of the development of mental health AI is now occurring.

Closely related to this is what is called deep learning, which allows a machine to take in very large amounts of data and then train itself to achieve particular outcomes.
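To make the “improving with experience” idea concrete, here is a minimal sketch in Python using the scikit-learn library. The training phrases and labels are invented purely for illustration; real systems learn from far larger, clinically validated datasets.

```python
# A minimal sketch of machine learning's "improve with experience" idea,
# using scikit-learn. The phrases and labels below are invented purely
# for illustration; they are not clinical data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy dataset: snippets labelled 1 (distressed) or 0 (neutral).
texts = [
    "I feel so alone and worthless lately",
    "nothing I do ever goes right",
    "had a great walk in the park today",
    "looking forward to dinner with friends",
    "I can't stop ruminating about my mistakes",
    "the new project at work is going well",
]
labels = [1, 1, 0, 0, 1, 0]

# The pipeline converts text to word counts, then fits a logistic
# regression: more labelled examples ("experience") generally yield
# better decisions on new, unseen text.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Predict on an unseen snippet. Expected output: [1] (distressed).
print(model.predict(["I feel alone and nothing goes right"]))
```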

Despite these great advances, AI has so far proved most useful for clearly defined, specialised tasks, but much less so when the task requires a broad perspective, handling unpredictability, or even common sense.

Even so, there is already work underway utilising AI to provide real patient benefits.

One of the first applications of artificial intelligence in this field has been the early detection of mental health conditions through analysis of people’s online data. One fascinating though somewhat disturbing example is research into Facebook data that helps to predict depression.

In 2018, researchers from Pennsylvania analysed the electronic Facebook records of almost 700 consenting study participants and found correlations between the use of certain types of language in their posts and the presence of depressive disorder.

Notably, themes of loneliness, hostility, rumination and self-reference were associated with increased risk of depression. The researchers also looked at the frequency of posts, the length of posts and demographic information.
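The study’s actual models are far richer, but the flavour of the approach can be sketched in a few lines of Python: tally simple language markers across a user’s posts, such as self-referential pronouns and loneliness- or hostility-themed words. The word lists below are invented stand-ins, not the study’s lexicons.

```python
# Illustrative only: a crude sketch of tallying the kinds of language
# markers the Pennsylvania study correlated with depression risk. The
# word lists are invented stand-ins, not the study's actual lexicons.
from collections import Counter

SELF_REFERENCE = {"i", "me", "my", "myself"}            # self-reference proxy
LONELINESS = {"alone", "lonely", "nobody", "isolated"}  # loneliness theme
HOSTILITY = {"hate", "angry", "furious", "resent"}      # hostility theme

def language_markers(posts):
    """Return the rate of each theme's words across a list of posts."""
    words = [w.strip(".,!?").lower() for post in posts for w in post.split()]
    counts = Counter(words)
    total = max(len(words), 1)

    def rate(theme):
        return sum(counts[w] for w in theme) / total

    return {
        "self_reference": rate(SELF_REFERENCE),
        "loneliness": rate(LONELINESS),
        "hostility": rate(HOSTILITY),
    }

posts = ["I feel so alone, nobody calls me anymore.",
         "Another long day by myself."]
print(language_markers(posts))
```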

It seems we reveal more about ourselves on social media than we imagine.

For many patients, picking up the early signs of a depression relapse means a treatment plan can be put in place early, so social media cues of this kind may be of real benefit, though of course they raise major consent and privacy issues.

Beyond early detection, AI may also have a role as a diagnostic tool. In routine clinical care, validated psychometric scales are often used by clinicians to inform the diagnostic process.
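As one concrete example (the article doesn’t name a specific instrument), the PHQ-9 is a widely used validated depression screening scale, and its scoring is simple enough to express in a few lines of Python:

```python
# Scoring the PHQ-9, a widely used validated depression screening
# scale, offered here as an example of the kind of psychometric scale
# clinicians use. Each of the nine items is answered 0-3, and the
# total maps onto standard severity bands.
def score_phq9(answers):
    """answers: list of nine integers, each 0-3."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 requires nine answers, each 0-3")
    total = sum(answers)
    if total <= 4:
        severity = "minimal"
    elif total <= 9:
        severity = "mild"
    elif total <= 14:
        severity = "moderate"
    elif total <= 19:
        severity = "moderately severe"
    else:
        severity = "severe"
    return total, severity

print(score_phq9([1, 2, 1, 0, 1, 2, 1, 0, 1]))  # -> (9, 'mild')
```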

Artificial intelligence opens the possibility of novel strategies to identify and assess mental health conditions, including rapidly improving technology for analysing voice.

One example is Cogito, which grew out of AI systems designed to monitor call centre interactions; it analyses the conversation between an operator and a customer and makes recommendations to the operator about their next interaction with the caller.

This technology has now been used to develop a specific tool aimed at detecting depression and post-traumatic stress disorder (PTSD) in veterans.

Another area where technology has been helping people with mental health issues is the monitoring of symptoms and clinical progress over time. Clinicians have for many decades utilised mood tracking tools to help monitor patient progress, but AI brings a much more comprehensive (and perhaps intrusive) approach to tracking patient trajectories.

One example from Singapore is Cogniant, which monitors behaviour using phone data and informs the clinician of progress. Its tools use a number of data points from the patient’s phone and digital devices to monitor the patient’s daily routine and activities.

This information can then be interpreted by the clinician during their next session or can trigger other events such as an early check-in call or escalation to emergency support if required.
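Cogniant’s internals aren’t public, so the following is only a rough sketch of the general pattern in Python: compare each day’s activity metrics against the patient’s own recent baseline and flag large deviations for clinician follow-up. The metric, sample values and threshold are all invented for illustration.

```python
# A rough sketch (not Cogniant's actual method, which isn't public) of
# flagging deviations from a patient's own baseline using daily phone
# metrics. The metric, values and threshold are invented.
from statistics import mean, stdev

def flag_deviation(history, today, threshold=2.0):
    """history: past daily values for one metric (e.g. step count).
    Flags today's value if it sits more than `threshold` standard
    deviations from the patient's own baseline."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return False
    z_score = (today - baseline) / spread
    return abs(z_score) > threshold

# A week of typical step counts, then a sharply lower day.
steps_history = [7200, 6800, 7500, 7100, 6900, 7300, 7000]
if flag_deviation(steps_history, today=1500):
    print("Large change in daily routine: consider an early check-in call")
```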

There are further uses for AI in the management of mental health conditions and the prediction of clinical outcomes, for example a range of chatbots that simulate an online text-based conversation, often embedding CBT-type techniques into the exchange.
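None of these products publish their dialogue engines, but the basic shape of a rule-based, CBT-flavoured chatbot can be sketched in Python: match themes in the user’s message and respond with a reframing prompt, a common CBT technique. The trigger words and responses are invented, and real systems are far more sophisticated.

```python
# A toy rule-based chatbot sketch embedding a CBT-style technique
# (prompting the user to examine and reframe a negative thought).
# Trigger words and responses are invented for illustration.
RULES = [
    ({"always", "never", "everyone", "nobody"},
     "That sounds like all-or-nothing thinking. Can you recall a time "
     "when that wasn't completely true?"),
    ({"worthless", "failure", "useless"},
     "That's a harsh self-judgement. What evidence is there for and "
     "against that thought?"),
]
FALLBACK = "Thanks for sharing. What was going through your mind just then?"

def reply(message):
    """Return the first matching CBT-style prompt, or a fallback."""
    words = set(message.lower().replace(",", " ").split())
    for triggers, response in RULES:
        if triggers & words:
            return response
    return FALLBACK

print(reply("Nothing ever works out, I always mess things up"))
```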

Society is understandably and appropriately concerned about the way in which data is collected, and there is still a long way to go with regard to the ethical issues around consent, privacy and the use of data.

Friends in need are friends indeed

However, the ubiquitous uptake of technologies such as smartphones and smart speakers means that there will be more platforms on which AI mental health care can be delivered.

In the longer term, if we can navigate the ethical and privacy minefield, we will likely see digital mental health apps and programs supplement many of the tasks traditionally provided by human beings.

This is an edited extract of the chapter “Artificial intelligence and mental health” in the new book Artificial Intelligence for Better or Worse.
