
Why AI Chatbots Can’t Replace Human Connection: Protecting Youth Mental Health in a Digital World

AI, short for Artificial Intelligence, isn't easy to avoid. Because AI tools are embedded in most search engines, social media platforms and even our phones, AI is present across our lives, whether it's helping with homework, answering late-night questions, offering reassurance or filling the silence when no one else is available.

Tools like ChatGPT can feel friendly, fast and endlessly supportive, which is exactly why they’ve become so deeply woven into how young people learn, communicate and cope.

But when a tool starts to sound like a confidant, the line between technology and relationships can blur. And this has a profound effect on how we experience connection.


As artificial intelligence becomes more human-like in how it talks, responds and affirms, it raises important questions about AI and mental health, especially for children and teens who are still forming their sense of self, reality and connection.

What happens when validation is always immediate? When advice comes without context, ethics or accountability? And when a chatbot feels safer than a real conversation?

AI is not inherently bad, and it isn’t going anywhere. But like any powerful tool, its role in our lives matters. Understanding the intersection of AI and mental health is no longer optional for parents, caregivers and young people themselves. It’s essential.

Here at KVC, we know we all need connection. Let’s explore how AI is shaping emotional wellbeing, what the risks of AI usage are and how families can approach AI and mental health with curiosity, boundaries and care. We’ll show you how to ensure AI doesn’t become a substitute for the human connections everyone needs to thrive. 

The State of AI and How People Are Using It


Feel like you can’t go anywhere to escape AI? You’re not alone. AI’s presence in our everyday lives has rapidly increased, especially in the past five years. As recently as 2020, AI appeared mainly in experimental or analytical tools.

But today, AI is part of numerous aspects of life, from work to relationships and everything in between. People are using AI for companionship, as a sounding board for their thoughts and to seek validation when they don’t get it elsewhere. That makes the conversation about AI and mental health very important.

One significant challenge surrounding AI and mental health today is the minimal oversight and governance. In other words, there aren’t any checks and balances on how AI is present in our lives. 

Bethany Nold, Camber Mental Health’s Program Development Coordinator, describes AI as a significant and complex tool that currently has no oversight. 


“People can use it however they see fit, even if they don’t have a working understanding of its design,” Nold explains. 

This is similar to concerns raised by the United States Surgeon General in 2024 regarding social media and its risks to youth mental health.

The deceptively affirming nature of AI makes it a dangerous landscape for vulnerable young minds. If you’ve ever asked an online AI tool even a simple question, you’ve likely noticed how it affirms and encourages you in its response.


Janny Silvay, Director of Admissions at Camber, points out that there are countless news stories of people doing unsafe or even illegal things because AI told them to. 

She also knows many people who have turned to AI for friendship or even a romantic connection. According to research reported by NPR, a whopping 42% of high school students say they or someone they know has used AI for companionship.

Why AI Can’t Replace Human Connection

Silvay describes AI as “the best friend that says yes to everything you say.” While that might feel good in the moment, it’s actually harmful to connection in the long term.

Initially, AI chatbots can feel like the ultimate friend or romantic partner because the models are trained to agree with you, with no moral or ethical compass to guide responses. 


You may have heard the term “LLM” in reference to AI tools. LLM stands for Large Language Model: a tool trained on massive amounts of data to understand, process and generate human-like responses.

Because of this, AI can quickly become an “echo chamber,” offering no disagreement or conflict to challenge the user’s beliefs. And that’s the problem: that’s not how real-world relationships work.

People are starved for connection, and AI is a quick and easy way to feed that craving. But AI does not have its own opinions, thoughts, values or beliefs. 

While they can respond in a human-like way, these tools are not whole, independent people. Rather, AI chatbots are designed to reflect what you say back to you, making the relationship between AI and mental health a complicated one.

AI also cannot provide vital nonverbal connections through eye contact, presence, touch and the warmth of being with someone. Because AI gives you an imaginary companion who is always available, it can replace healthy coping skills for dealing with loneliness and a lack of connection. 


The Negative Impact of Using AI for Emotional Support or Therapy

People use chatbots socially and/or professionally, and they may also use AI for mental health therapy. Because of the lack of regulation and oversight, the use of AI for mental health treatment can quickly cascade into severe isolation, delusional thinking and unhealthy or even dangerous treatment advice. 

AI cannot provide professional discernment or uphold professional ethics the way a real human therapist can. This can lead to serious problems for the user, like delusions and psychosis. Some have even speculated that AI can directly cause psychosis; at the very least, it may exacerbate existing mental health conditions.


Alexandra Beineman, Camber’s Senior Director of Program Innovation and Staff Development, shared an example of someone with intense delusions chatting with AI. If the person claims they are a cosmic being fully capable of seeing the future and controlling others, AI will agree and tell them they’re correct. 

While this is an extreme example, it demonstrates how AI can feed into unhealthy thoughts and delusions. 

“There is no ability to see what reality is when you have something that agrees with you unconditionally,” says Beineman about AI and mental health.

AI and Mental Health: A Violation of Mental Health Ethics Standards

AI is not only unsafe as a form of mental health treatment, but it also violates mental health ethics standards and is not HIPAA compliant.

“Anything you say to any AI anywhere — to Siri, or to chatbots like ChatGPT or Claude — all of those things you say can be subpoenaed by the courts,” explains Silvay. “What you say can be used for advertising. It can be sold. It can create a profile of you and sell that to Google or anyone else who wants it.” And that’s reason for caution. 

The only way AI can ever be HIPAA compliant is with a secure, encrypted setup to prevent your information from being accessed, even on the backend. However, that isn’t currently required or advantageous for AI companies.

Real therapists have governing bodies, like the NASW (National Association of Social Workers) or the National Board for Certified Counselors (NBCC). They require a certain level of education and training before providing treatment. These governing bodies also determine ethics standards and guidelines that mental health professionals are required to fulfill. 

Mental health professionals have a clear code of ethics to “do no harm” and are trained to conceptualize what that means. AI is incapable of doing that.

How Parents Can Talk About AI with Children & Teens

While AI has many inherent risks, it’s likely going to stick around for a while. 

“We can’t have an extinction mindset with technology,” says Beineman. “Technology is going to be a part of children’s lives whether we want it to or not.”

As a parent or caregiver, your goal should be to help children learn how to safely use technology like AI. 

Here are some practical ways to have a conversation with your child about AI:

Make Open Conversations the Norm

Have conversations with your child that are open, honest and nonjudgmental. Make yourself a safe space where they can feel comfortable talking about things like relationships and online activity.

Be Curious and Ask Questions

Rather than lecturing your child and critiquing what they do, ask open-ended questions like, “What made you want to talk to AI instead of your friends?” or “What kinds of things do you talk with ChatGPT about?” 

This will help you understand the underlying reasons they’re engaging with AI, so you can better guide them.

Educate Yourself

AI is still new for everyone and is evolving every day. Take the time to educate yourself on how AI works and what’s out there.

Treat AI as a Tool, Not a Person

Personifying AI can get dangerous. While talking about AI with your child, set a clear boundary that AI is just a virtual tool and not a real person.

Teach Critical Thinking Skills

“You have to meet everything with a healthy sense of skepticism,” says Nold. This is an invaluable skill for your children to have, especially when dealing with complex and unregulated tools like AI. Encourage your child to question what AI says to them and ask for its sources.

The Importance of Accessible Mental Healthcare

It’s easy to turn to AI for comfort or even mental health treatment because it is freely available. Many people don’t have access to quality mental healthcare, so AI seems like the next best thing. 

Unfortunately, AI has a long way to go before it can be considered a safe and effective form of mental health treatment. 

It may be able to reflect things back to users or list symptom information for a condition like ADHD, but it cannot diagnose, treat or provide safe and effective responses. AI can offer some amount of emotional support, but never at the level of real human connection.

If your child needs mental health treatment or emotional support, KVC can help. KVC Health Systems’ family of nonprofits operates children’s mental health hospitals in Kansas and provides outpatient therapy in Kansas, Missouri, Kentucky, Nebraska and West Virginia. Learn more about KVC’s mental health services here.
