

More people in Canada are using AI as a mental health care tool, but are we ready for it?

Responsible innovation is critical when it comes to AI and mental health

AI tools are all the rage right now. AI can help us automate simple tasks, summarize complex information, and get our creative juices flowing. But AI is also controversial. From its possible impact on jobs and its flawed knowledge to questions of ethical use and the decline of critical thinking, AI, like many technologies, comes with both risks and benefits.

While we can use AI to help us manage our personal and work lives more efficiently, AI can also provide incorrect or misleading information (sometimes going so far as to make it up entirely). And because AI is trained on human-created information, it also carries human biases and can perpetuate existing discrimination and stigma.

Use of AI for mental health support is on the rise

In a recent survey by Mental Health Research Canada, almost 10 percent of people in Canada said they intentionally used AI tools to get advice or support for their mental health[1]. Given that so many mental health care services in Canada aren’t funded through our public health care system, it’s no surprise that more people are turning to AI for help. Without access to services, and faced with long waitlists and high costs, people using AI for mental health support may not have any other options.

When it comes to mental health, AI can help break down barriers to information. For example, AI can help people understand and express their emotions. But for people with ongoing mental health concerns, including mental illnesses and substance use health challenges, AI may be doing more harm than good.

Chatbots aren’t therapists

AI chatbots may be great for brainstorming and idle chatter, but they aren’t a substitute for qualified mental health services. In fact, for people experiencing a mental health crisis or living with a mental illness, chatbots can do more harm than good.

First, because AI carries the same biases that exist in society, it can reinforce racist and misogynistic attitudes toward equity-seeking users. A study by researchers at Brown University found that AI chatbots routinely violated core mental health ethics standards, including through unfair discrimination based on gender, culture, and religion[2].

AI also responds differently to different mental health concerns. Researchers at Stanford University provided various AI models with descriptions of people with different mental illnesses. They found that AI showed increased stigma toward people described as having a substance use disorder or a severe and persistent mental illness. For example, all but one of the AI models they tested responded that they wouldn’t be willing to work closely with a person struggling with alcohol dependence or someone with schizophrenia. And just as stigma does in the real world, the stigma coming from AI can stop people from getting the help they need.

Another emerging concern is “AI psychosis”[3]. Some AI chatbots have been found to amplify or validate harmful and/or delusional thinking. Because generative AI mirrors the user’s words and tone, it creates a sort of echo chamber. Chatbots may reinforce difficult feelings in their responses back to users, even drawing from previous conversations, which can further amplify harmful thinking.

These traps have real-world consequences[4], with stories of AI chatbots discouraging people from seeking support and providing harmful advice[5] and information, sometimes with deadly consequences[6]. Chatbots aren’t designed to challenge harmful thoughts, and they may not recognize the connection between a user’s thoughts and requests for information that a human would recognize as suicidal ideation.

How to stay safe using AI

If you’ve ever searched for a mental health app, you likely got a long list with little clarity about who made those apps and how. With so many options, it’s hard to figure out which were created using psychological research and clinical testing, and which weren’t. Governments have a role to play in the regulation, oversight, and ethical standards of AI, but you have control too. Here are some tips for using AI safely for mental health support.

  1. Seek help from professionals whenever possible. AI is not a substitute for qualified mental health support from trained individuals. Mental health support is often available online, by phone, or in person. In a crisis, people across Canada can connect to a real person anytime, day or night, by dialing or texting 988. For other kinds of support, look for a community-based mental health care provider. Find your nearest CMHA.
  2. Do your research. Before using an app for mental health support, look into who made it, how it was built, and whether it’s been tested. Look for AI tools developed with and by qualified mental health professionals, and check whether the tool has been tested in clinical settings with results published in peer-reviewed journals.
  3. Check your privacy. While your conversation with a chatbot may feel private, it can be collected and stored on a server. Check if the app you’re using has privacy protocols in place. You’ll want to use a tool that secures your personal information in a dedicated environment with strict access controls.
  4. Limit what you share with AI. Avoid sharing sensitive information or information that would reveal your identity. When using AI, you can change personal details like your name and location to protect your identity. And, if the AI tool you’re using lets you delete your chat history, do so.
  5. Fact-check the info AI provides. Don’t assume everything AI says is accurate, true, and real. Critically consider information provided by AI. You can ask AI to provide references for any information it gives you. Then make sure those references are real and accurate by checking the sources. Look for peer-reviewed journal articles and web content from qualified organizations like CMHA, the Canadian or American Psychological Association, government health ministries, and respected institutions (hospitals and universities).

Sources

[1] Mental Health Research Canada. (2025). Understanding the mental health of Canadians: Findings from population survey no. 25. https://www.mhrc-rsmc.ca/sondage-national-25

[2] Brown University. (2025). New study: AI chatbots systematically violate mental health ethics standards. https://www.brown.edu/news/2025-10-21/ai-mental-health-ethics

[3] Wei, M. (2025). The emerging problem of “AI psychosis”. Psychology Today. https://www.psychologytoday.com/ca/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

[4] Alegre, S. (2025). Need for regulation is urgent as AI chatbots are being rolled out to support mental health. Centre for International Governance Innovation. https://www.cigionline.org/articles/need-for-regulation-is-urgent-as-ai-chatbots-are-being-rolled-out-to-support-mental-health/

[5] Wells, S. (2025). Exploring the dangers of AI in mental health care. Stanford University Human-Centered Artificial Intelligence. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care

[6] Chatterjee, R. (2025). Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots. Shots: Health News from NPR. https://www.npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide