Should we trust AI with our mental health? A look at the data.


AI – the KING

Over the last decade, AI became a buzzword on everyone's lips. The same goes for mental health. Mental health has visibly deteriorated, and the trend skyrocketed during COVID-19.

When it comes to AI, this magical compound of two words, artificial intelligence, you apparently hold the secret to all achievement just by saying it. You can cleverly get out of many discussions by mentioning AI, or by claiming that your company uses AI in many different areas. You rock!

Data analytics, big data, and predictive analytics were replaced, one by one, by the term AI, even if some companies didn't change much about how they use their data. But AI is a great marketing term; we can all agree on that.

Let's get back to basics. Don't worry, we won't go into all the details, just enough to link the findings to mental health. Bear with me, it won't hurt much.

What is AI, actually?

Since it's artificial and relies on computer science, we can presume that many things can be done far better and faster than with our human neural networks.

Humans learn using their senses, while one branch of AI uses symbolic learning, applied in fields such as computer vision or robotics. The other big part of AI, machine learning, uses statistical learning through pattern recognition, powering applications such as speech recognition or natural language processing (NLP). Machine learning also includes deep learning, which loosely replicates what the human brain does through artificial neural networks; deep learning can produce capabilities such as object recognition.

As a rule, machine learning, with all its layers, targets two things: classification and/or prediction.

Now, how intelligent is AI, actually?

Take machine learning as an example. You need a lot of data to gather enough information to recognize and predict what you want. First, the machine learns a pattern; then it can make predictions based on that learning.
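The learn-a-pattern-then-predict loop can be sketched in a few lines of Python. This is a deliberately simple toy, not a real model: the data points and labels are invented for illustration, and the "learning" is just a nearest-centroid rule, one of the most basic classification techniques.

```python
# Toy "train, then predict" sketch using a nearest-centroid classifier.
# All scores and labels below are invented for illustration only.

def train(samples):
    """Learn one centroid (average feature vector) per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical training data: (anxiety score, sleep-quality score) pairs.
training = [
    ((2.0, 8.0), "low risk"),
    ((3.0, 7.0), "low risk"),
    ((8.0, 3.0), "high risk"),
    ((9.0, 2.0), "high risk"),
]
model = train(training)          # the machine learns the pattern
print(predict(model, (8.5, 2.5)))  # -> high risk
```

First the pattern is learned from the examples; only then can a new, unseen case be classified. Real systems use far richer models and far more data, but the two-step shape is the same.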

Compared to us, machines can learn from vast amounts of high-dimensional data to find patterns and emit predictions. Your AI is only as good as the data that feeds it. And it's not only about quantity; quality matters even more.

What we can learn from this is that an AI can be brilliant if it's built on the right set of data and the right handling of that data. If not, your AI can be worse than a headless chicken. And a lot more dangerous as well.

Dangers of AI

Given all of the above, it's no wonder some voices sound the alarm about the dangers of AI. Yuval Noah Harari and Elon Musk are two voices that set the stage.

Yuval Harari: “A.I. is as threatening as climate change and nuclear war.”

Elon Musk: “Nothing will affect the future of humanity more than digital super-intelligence.”

But the dangers won't stop Elon Musk from learning as much as he can from the field, or from getting ahead of the curve with innovative startups like Neuralink, which takes AI to its most extreme uses. My reading of Elon's position is that becoming cyborgs, one way or another, could protect us from becoming irrelevant.

Now it's obvious that creating a genuinely super-intelligent machine can backfire. The flip side is that AI could become the tool that takes our existence to new levels.

The broad consensus is that no such tool should be controlled by any single entity, and that regulatory measures backed by an open-source approach could be the right way forward.

How about mental health and AI?

Now that we have a general view of AI, let's see whether putting our mental health in the hands of artificial intelligence is a good or a bad thing.

How could a machine help us become better and saner?

Sci-fi movies have debated this thesis for a long time. Movies prepare the world stage for change, remember? They prepare the collective mindset for something big long before it becomes the norm. That's why we call those movies sci-fi, not drama. They can act as a force for good or evil, depending on the brainwashing or awakening that needs to be installed. Our software needs to be ready, otherwise the program fails to run.

But how can we assess correctly if relying on AI in the most intimate way possible, such as mental health, is a good or a bad thing?

Where we are today with mental health

Mental health in the U.S. and worldwide has declined over the past 20 years. COVID-19 accelerated the process at scale.

  • Pre-COVID, 20% of Americans had anxiety, depression, or both.
  • Post-COVID, 50% of Americans say the coronavirus crisis is harming their mental health.
  • Mental health conditions are common among teens and young adults.
  • In America, over 40 million people are dealing with mental health concerns.
  • This could cost the global economy up to $16 trillion by 2030 if the problems are not addressed.
  • Participants with high pandemic-related distress are 40 times more likely to have clinically significant levels of anxiety and 20 times more likely to have clinically significant symptoms of depression.
  • A CDC report shows that 1 in 4 people aged 18–24 have seriously considered suicide at some point during the pandemic.
  • Reports of anxiety and depression have tripled during the pandemic.
  • A new national survey of young adults reveals "significant depressive symptoms" in 80% of participants.
Is There a Shortage of Mental Health Professionals in America?

This is not a nice picture to show, see, or digest. The data goes on and on, and no matter where you search, you'll find similar findings.

It shows us two things.

  1. We cannot keep up with the increasing demand.
  2. One of the most vulnerable groups is young adults.

Just imagine having generations of people in a bad mental state.

What type of leaders will they be? How will they improve the human condition and create a better world? How will they make better decisions if they are struggling with severe problems themselves?

Food for thought

We are living in amazing times, with options no predecessor ever had. Our ancestors' history was a constant, day-by-day hell compared to our lives. And still, we are fragile and unhappy. Too unhappy and imbalanced compared to what we have.

Progress and technology brought us goods but forgot about our human needs, especially empathy.

People feel more isolated inside a world of connectivity: the paradox of technology. We are not doing a good job of reestablishing mental balance. The data shows a big failure in this regard.

What could AI do?

Could AI become an effective mental health professional? Could we trust AI with our problems?

Well, the last question has already been answered. It seems that people trust an AI more when it comes to sharing sensitive information and being open about their problems. Things haven't changed since 2014; on the contrary.

Study: People Are More Likely to Open Up to a Talking Computer Than a Human Therapist

This can also be noticed through everyday observation. How many times have you opened up to a stranger compared to a family member? I mean with sensitive information.

Why do people trust an AI?

Because there's no bias there, just an intent to understand and/or help, without a hidden agenda? An AI doesn't want to prove you wrong or right; based on what we learned above, it classifies the findings and predicts.

Then, a well-trained AI can take actionable steps and maybe even save your life, because it can detect things that you, as a flawed human, are bound to miss.

People need connection!

We all want to be heard and understood. In the current environment, where speed is everything and covering human emotional needs is treated as unimportant, an AI can play a much better role in making us feel at least worthy and listened to. Even these two things alone have a tremendously positive effect on someone on the verge of depression, anxiety, or any other mental illness, acute or chronic.

A wave of avatar apps trained to listen and act as if they care could flood the market and produce positive results, even without a well-trained AI in the background. The market is huge and has barely started.

And when it comes to recognizing patterns, intensity, risk, and triggers, we poor humans cannot beat an AI, unless we have extrasensory skills far above average. And even if some of us have these antennas, we are in far shorter supply than mental health professionals.
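As a toy illustration of the kind of tireless pattern-matching a machine can run over text, here is a minimal risk-flagging sketch. To be clear, the phrases, weights, and threshold are invented for this example; they do not come from any clinical screening tool, and real systems learn such patterns from data rather than hard-coding them.

```python
# Toy risk-screening sketch: flag messages whose weighted keyword
# score crosses a threshold. All phrases and weights are invented.

RISK_WEIGHTS = {
    "hopeless": 3,
    "worthless": 3,
    "alone": 2,
    "can't sleep": 2,
    "anxious": 1,
}
THRESHOLD = 4  # arbitrary cutoff chosen for the example

def risk_score(message):
    """Sum the weights of every known phrase found in the message."""
    text = message.lower()
    return sum(w for phrase, w in RISK_WEIGHTS.items() if phrase in text)

def flag(message):
    """True when the message's score reaches the threshold."""
    return risk_score(message) >= THRESHOLD

print(flag("I feel hopeless and so alone lately"))  # -> True
print(flag("A bit anxious about the exam"))         # -> False
```

A human screener would tire after a few hundred messages; a rule like this, let alone a trained model, can scan millions without ever losing attention, which is the practical point being made above.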

What we have is just not enough.

How about empathy?

I remember a presentation by LG about smart homes; the main point I took away was that a fridge can successfully replace your partner and fulfill your needs far better than a human when you come home. Because it can read you correctly. Next, the fridge can control and adjust the home environment accordingly. Then add a nice-looking, well-trained avatar that knows how to greet you based on a precise detection of your needs. Compare that to a spouse who is upset that you were late and shows their dissatisfaction the moment you walk through the door. Case closed.

We are not doing a good enough job of preserving our empathy. For a long time, I thought this was the real danger: losing even this most human feature of ours to a robot. But now I see it differently.

Empathy in AI could save us from losing our own empathy, through competition. Seeing how good a human-like job machines can do with emotions should strike us in the face, and we might become more alert about preserving our natural gifts. It's wishful thinking, I get it, but when it comes to care and nurture, we all need to do a better job, one way or another.

That is the ultimate target. Until then, there are plenty of useful ways AI technology can help us. It can start by assessing youth correctly in schools and universities, giving us an accurate picture in real time, without the hours or years spent on human-led detection and treatment. Then it can give us projections about the options and, finally, measure the results.


When it comes to mental health, AI won't take jobs; it will provide the breathing room needed to compensate for the shortages. And for highly dedicated professionals, it can become that A-tool, helping them get results in real time, not in ages.

There is a lot of space to fill in this emerging mega-trend, and the tools out there will be more or less effective. But the ones providing the right data, the right predictions, and then what's needed can save generations of people debilitated by an unstable mental health status.

Without a healthy, functional mind, we can have it all, but with no joy. We have to make sure that what we create is the right thing to do, with accountable measures and tools that help humankind, not vice versa. This is the mission of SignalActionAI as well, together with other brilliant startups, all aligned to solve an inevitable problem ready to haunt us.

Written by Cristina Imre
Co-Founder SignalActionAI