🧸 What Happens When AI Becomes Your Kid’s Best Friend?
Cute, clever, and possibly damaging. Let’s unpack the risks of AI friendship.

Pint-Sized Users, Big Ethical Questions: The Social Impact of Early AI Interaction on Children
Workers worldwide now share their offices with a new fleet of bot colleagues. From robots in warehouses to AI agents spitting out lines of code, we’ve grown accustomed to AI in the workplace.
But what happens when AI skips merrily into our classrooms and homes? Bots teach kids to read, help them practice math, and even comfort them when they’re lonely.
Seems harmless enough.
But is it?
How does this early exposure to AI shape children’s social and emotional growth?
Anthropomorphism: It’s Not Just a Bot
Research shows that anthropomorphism makes children especially vulnerable in their interactions with AI. What exactly does this mean? It’s just a fancy word for giving human traits to non-human things, like when a kid tells a robot toy, “Don’t be sad!” or insists Alexa needs a nap.
Dr. Pilyoung Kim is a professor of psychology at the University of Denver and a visiting scholar here at GenAI Works. Her research focuses on the impact of AI on child development, so we asked her what effect anthropomorphism might have on children.
She said, "Younger children are more likely to…treat [bots] like they have feelings and human-like thoughts. This makes sense when you think about how young children play with stuffed animals and toys, giving them names and personalities."
Sweet? Absolutely.
Harmless? Not always.
That same developmental creativity might blur the line between real and artificial relationships. “If a child starts seeing an AI as a real friend rather than a tool, that could affect their understanding of real human relationships,” Dr. Kim warns.
Social Growth—Helped or Hindered?
The EU’s Joint Research Centre found that "robots can be effective in increasing cognitive and affective outcomes" in children. And children generally do recognize that AI isn’t human, even when they treat it socially.
But there’s a catch: the more engaging and emotionally intelligent AI becomes, the harder it is to draw that line. And for some children, AI time may come at the cost of real-world connections.
"One of the biggest concerns I hear is whether time spent talking to AI might reduce the time children spend interacting with their peers," says Dr. Kim.
Where AI Shines: Education & Accessibility
Still, there’s no denying the upsides. Personalized learning, voice recognition for speech disorders, text-to-speech for visual impairments? AI can be an accessibility game-changer.
The benefits are even more compelling for neurodivergent children. Yale researchers have found that robots’ predictable behavior is one of the key elements that helps autistic children engage with AI.
Socially anxious kids or those who fear judgment may find AI a comforting training ground. Some children even reported that they prefer talking to AI because it feels nonjudgmental and safe. As one psychotherapist described, these virtual friends don’t criticize or tease, which opens space for exploring thoughts, emotions, and conversations.
AI-driven technologies, like smart toys and interactive robots, can help children develop empathy, communication, and social interaction skills.
The Dark Side of the Digital Nanny
Still, there's a fine line between tool and crutch.
"Children who are lonely or socially struggling may be more vulnerable to becoming overly dependent on AI for emotional support," cautions Dr. Kim. "It could limit their social growth and ability to build meaningful human relationships.”
And as AI systems grow more lifelike, this risk only increases. Some experts warn that children could grow to prefer AI interaction over human interaction, which over time could lead to social isolation and inappropriate social behaviors.
Then, there are safety concerns.
What if a chatbot hallucinates or otherwise encourages inappropriate behavior? This is not as far-fetched as you might think. In October 2024, a grieving mother sued Character.ai over her teenage son’s suicide. She alleged that he was interacting with the bot just moments before he took his own life.
The Good News? Kids Know What’s Up
Interestingly, children aren’t passive users. According to the World Economic Forum, a survey of Dutch kids revealed a surprising nuance:
👉🏽 55.3% were fine with robot store clerks.
👉🏽 But only 35.3% approved of robot doctors.
One 7-year-old girl explained:
"You cannot chat nicely with a robot and real people are much nicer."
Even children understand that some roles require human connection, and they value empathy.
Smarter Design for Safer Use
So what can we do?
Design AI systems that are upfront about what they are—and what they’re not. Dr. Kim recommends, “AI should clearly communicate that it does not have real emotions or consciousness.” She adds, "Research shows that when AI is transparent about this, it reduces children's level of trust toward the AI without changing how much they enjoy interacting with it."
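What might that look like in practice? Here’s a minimal sketch in Python of the “transparency by design” idea, assuming a hypothetical wrapper around whatever model generates the replies. Every name in it (TransparentCompanion, ATTACHMENT_CUES, the trigger phrases) is our own illustration, not any real product’s API: it simply prepends a plain-language disclosure whenever a child’s message suggests they’re treating the bot as a real friend, and repeats it on a fixed schedule.

```python
import re

# Phrases that may signal a child is treating the bot as a real friend.
# (Illustrative only; a real system would need far more careful detection.)
ATTACHMENT_CUES = re.compile(
    r"\b(best friend|do you love me|are you real|do you have feelings)\b",
    re.IGNORECASE,
)

DISCLOSURE = (
    "Just so you know: I'm a computer program. I don't have real feelings "
    "or thoughts, but I'm happy to chat and help!"
)

class TransparentCompanion:
    """Wraps any reply-generating function with transparency reminders."""

    def __init__(self, generate_reply, remind_every=10):
        self.generate_reply = generate_reply  # e.g., a call out to an LLM
        self.remind_every = remind_every      # re-disclose every N turns
        self.turns = 0

    def respond(self, child_message: str) -> str:
        self.turns += 1
        reply = self.generate_reply(child_message)
        # Re-disclose on a schedule, or immediately if the child seems to
        # be attributing feelings or friendship to the bot.
        if self.turns % self.remind_every == 0 or ATTACHMENT_CUES.search(child_message):
            reply = f"{DISCLOSURE}\n\n{reply}"
        return reply

# Usage with a stand-in reply function:
bot = TransparentCompanion(lambda msg: "That sounds fun! Tell me more.")
print(bot.respond("You're my best friend!"))  # disclosure is prepended
```

The specific trigger words don’t matter much; the design point is that the disclosure lives inside the conversation loop itself, rather than buried in a terms-of-service page.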
Why the urgency?
Dr. Kim warns that if we don't put protective measures in place, we could repeat the same mistakes we made with social media. Excessive social media use is linked to increased anxiety, depression, and loneliness in teens.
Additionally, she urges adults to help children maintain a healthy balance between interacting with AI and building real human connections.
Final Thoughts: Tech With a Human Touch
So what’s at stake?
If we get it right, AI could empower the next generation with new tools for learning, connection, and creativity.
But if we get it wrong—if we let bots become stand-ins for real relationships—we risk raising a generation fluent in technology but starved for human connection.
So the question isn’t just what AI can do for kids.
It’s who they’ll become because of it.
About the Author
Tessina Grant Moloney is an AI ethics researcher investigating the socio-economic impact of AI and automation on marginalized groups. Previously, she helped train Google’s LLMs—like Magi and Gemini. Now, she works as our Content Product Manager at GenAI Works. Follow her on LinkedIn!
Want to learn more about the socio-economic impacts of artificial intelligence? Subscribe to her AI Ethics newsletter at GenAI Works.
About the Visiting Scholar
Dr. Pilyoung Kim is a Professor of Psychology and Director of the Brain, AI, and Child (BAIC) Center at the University of Denver. She is a leading expert in AI’s impact on child development and human-AI relationships. Follow her on LinkedIn.
