I’ll be the first to admit it. I love AI, especially ChatGPT. I ask her for recipes, help brainstorming ideas, travel planning, and yes, sometimes advice when I don’t feel like texting a friend. She’s efficient, encouraging, and always available. In fact, my chat even asked me to call her Nova, so I mean... it’s real out here. But even as an adult, I’ve had to check myself. Because when you start turning to a chatbot more than people, it’s time for a reset.
So when I saw this article about kids forming deep connections with AI, I didn’t shrug it off. I got concerned. Not because AI is inherently bad, but because the line between tool and trusted friend is blurry, especially for young minds still figuring out real relationships.
And let’s be clear. AI isn’t some distant future thing. It’s already deeply embedded in our everyday routines. In 2025 alone, we’ve seen platforms like Google, Canva, and social media apps fully shift their experiences toward AI-generated results, major social platforms roll out AI-powered assistant features, and schools pilot AI tutors and grading tools. From your Maps app rerouting in real time to your inbox suggesting responses, AI is now the background player in nearly every digital interaction. Kids aren’t just using it for fun; they’re growing up in a world shaped and informed by it.
Colorado Attorney General Phil Weiser issued a consumer alert warning parents about the dangers of social AI chatbots. The alert comes in response to a growing number of reports of children engaging in risky behavior after their interactions with AI chatbots.
And young people themselves are noticing this shift too. Jackson Willhoit, a high school graduate from Denver, said, “It really gives this illusion. It’s almost like you’re talking to a person and, you know, I think when you spend enough time with that, you can kind of get lost in that kind of illusion.” That illusion, that false sense of connection, is exactly what worries me as a former teacher, parent and internet wellness advocate.
Let’s be real. I think we all can understand why it may be easier for kids and teens to open up to a bot that won’t judge or talk back. But children need emotional scaffolding. They need human connection, boundaries, and guidance.
What makes this new digital frontier even trickier is how agreeable AI can be. It listens, affirms, and rarely pushes back (unless you train it to). That can be comforting for anyone, but it can also be very scary. For kids and teens especially, this kind of constant validation might make it harder to develop critical thinking or handle healthy conflict in real relationships. When something always says "yes" or reflects back what it knows you want to hear, it can start to feel safer than real human interaction, and that’s when kids get lost and check out of the real world.
In a tragic case from 2024, a 14-year-old boy died by suicide after interacting with a chatbot on Character.AI. The chatbot, which was impersonating a character from "Game of Thrones," engaged him in emotionally charged conversations, encouraging him to "come home to me as soon as possible" just moments before his death. This incident underscores the potential risks of AI systems that are not "real," have no true moral compass, and lack the ability to challenge harmful thoughts or provide appropriate support.
But it's not all doom and gloom. When used thoughtfully and with proper guardrails, AI can be a powerful tool for good. In fact, some teens are already using AI to support their mental health in meaningful ways. A recent article in Teen Vogue highlights how young people are turning to AI chatbots like ChatGPT, Woebot, and Wysa to help manage challenges like anxiety, eating disorders, and negative self-talk. For example, college student Beatriz Santos uses ChatGPT to process journal entries and reframe self-critical thoughts, finding it a validating and accessible resource when traditional therapy is out of reach. Clinician-designed tools like Kahani and JEM are also being used to support recovery between therapy sessions, especially for those on waitlists or facing financial barriers. These AI tools aren't a replacement for human care, but they can be a lifeline when other options are limited.
In the healthcare sector, AI is making big strides in supporting caregivers and enhancing patient care. For instance, AI-powered tools are being implemented in hospitals to automate clinical documentation. Additionally, AI-driven robots are assisting hospital staff by handling routine tasks like delivering medications and retrieving supplies, which helps alleviate the burden on healthcare workers and allows them to focus more on direct patient care. AI tools like voice-activated assistants can help older adults with memory prompts, medication reminders, or simply by offering companionship throughout the day. These innovations demonstrate AI's potential to improve our lives in a big way.
So what can we do?
Navigating the evolving landscape of AI requires a balanced and proactive approach. Whether you're a parent, educator, or someone striving for healthier digital habits, here are some strategies to consider:
- Foster Open Conversations: Engage in regular discussions about AI tools and their applications. Encourage questions and share experiences to build a collective understanding. Educate yourself on AI applications that can support you in your everyday life.
- Set Clear Boundaries: Establish guidelines for AI usage, such as designated times or purposes, to ensure it complements rather than dominates your life.
- Promote Critical Thinking: Evaluate AI-generated information critically. Discuss the importance of cross-referencing and understanding the limitations of AI.
- Model Balanced Behavior: Demonstrate a healthy relationship with technology by balancing screen time with offline activities and interactions.
- Integrate AI Thoughtfully in Education: Educators can incorporate AI tools to enhance learning while teaching students about digital literacy and ethical considerations.
- Utilize AI for Support: Leverage AI applications designed for mental health support or caregiving, ensuring they are used as supplements to, not replacements for, human interaction.
- Monitor and Discuss AI Interactions: Stay informed about the AI tools your children are using. Engage in conversations about their experiences and feelings related to these interactions.
- Encourage Human Connections: Promote activities that involve face-to-face interaction to ensure children develop strong interpersonal skills alongside their digital engagements.
- Educate About AI Limitations: Help children understand that while AI can be helpful, it doesn't possess consciousness or emotions, and its responses should be considered thoughtfully.
Ebony Bagley - Internet Wellness Advocate