This story originally appeared in Kids Today, Vox’s newsletter about kids, for everyone. Sign up here for future editions.
Bans on kids and teens using social media have swept the country and the world in the past few years, with lawmakers from Australia to Massachusetts enacting or considering legislation to keep young people off platforms like TikTok.
Now the Canadian province of Manitoba is planning to go one step further: banning kids from using AI chatbots.
Manitoba Premier Wab Kinew announced the proposed ban at an April fundraiser, arguing that tech platforms are “doing these very awful things to kids all in the name of a few likes, all in the name of more engagement, and all in the name of money.”
Kinew didn’t say which social media and AI platforms the ban might include, or when the legislation might be introduced, although Manitoba’s education minister has said enforcement might begin in schools.
So far, social media bans don’t have a ton of evidence behind them. Australian teens seem to be getting around their country’s ban, possibly by wearing masks to foil age-verification systems. Some experts have also questioned the wisdom of locking kids out of social media, which can have benefits as well as risks.
But AI regulation is a new frontier. While social media platforms have been with us in some form for decades, AI tools have only been available to ordinary kids and teens for a couple of years — and they’re evolving and becoming more ubiquitous all the time. Some parents say AI chatbots have encouraged children to harm themselves or others, and experts fear that early use of AI in the classroom could keep young people from learning vital critical-thinking skills.
From my reporting on social media, I’m suspicious of age-related bans. But I’ve also been watching with anxiety as AI creeps into my kid’s life, not to mention my own. So I asked experts, educators, and young people themselves what kind of guardrails could help keep kids and their education safe from the most pernicious effects of artificial intelligence.
I did not (spoiler) come away with a clear legislative proposal that would solve all of our problems around this technology. What I did find, however, were a few guidelines that radically changed how I think about AI in my life, and that I think can help us guide kids through theirs.
The impact of AI on kids
As any high school teacher can tell you, AI use is extremely common among young people. In a Pew survey conducted at the end of last year, 64 percent of teens said they used chatbots, with about three in 10 reporting daily use. The most common use is searching for information, followed by help with schoolwork.
Quinn Bloomfield, 18, likes to use Google’s NotebookLM to help with chemistry, the first-year university student told me. The tool is “extremely helpful for quizzing me on things, and helping explain things when my professors aren’t great at it,” said Bloomfield, who’s also a member of Manitoba’s Youth Ambassador Advisory Squad.
AI tools are also increasingly making their way into classrooms, where they’re used by younger and younger students. Kindergartners in some districts use an AI-powered reading bot called Amira, Jessica Winter reports in The New Yorker. Winter’s sixth-grade daughter recently received a Google Chromebook at her Massachusetts middle school, pre-installed with Google’s AI tool Gemini, which quickly offered to “help” her with her writing and presentations.
As useful as some young people find the tools, experts fear they’re having unintended consequences. When AI tools are used to make learning “more straightforward and efficient” — by helping kids write a paragraph or outline an essay, for example — they are “quite likely undermining kids’ opportunities to grapple with the very difficulties that are the source of real, developmentally oriented learning,” said Mary Helen Immordino-Yang, a professor of education, psychology, and neuroscience at the University of Southern California.
Tools like Gemini that volunteer to do some of the hard work for kids can keep them from learning crucial skills like argument-building and coming up with ideas, Immordino-Yang said. The most optimistic (or cynical, depending on your view) AI boosters argue that human skills like these will matter less in a world where AI can do most tasks for us. But “we’re always going to need to be able to formulate complex thoughts and arguments about the things that we hold dear,” Immordino-Yang said. “It’s never going to be the case that we don’t have to know how to think.”
Beyond academics, some also worry about the social implications of AI chatbots. “We are finding that for every minute that a kid is talking with a chatbot, that’s one minute less they’re spending with their friends,” said Mitch Prinstein, a professor of psychology and neuroscience at UNC Chapel Hill who studies kids’ interactions with technology. That’s concerning because young people need interactions with their peers to develop social skills, and chatbots aren’t a good substitute.
“It’s not giving you the appropriate kind of coaching and feedback,” Prinstein said. “It’s just agreeing with you, even if you offer really poor ideas.”
Also concerning is that in Prinstein’s research, “a remarkable number of kids are saying that they prefer talking to a chatbot than a human peer.” Many kids also worry that they’re using chatbots too much, Prinstein said. “They’re scared that they might be becoming a little bit too reliant on them.”
Guiding kids through an AI world
In the context of findings like these, it’s no surprise that jurisdictions like Manitoba are considering an AI ban for youth. But legislation that tries to ban social media users below a certain age has faced criticism, both because kids will find a way to get around any ban, and because such laws fail to target the basic structures of tech platforms that can make them harmful to people.
Some experts have similar concerns about an AI ban. “If the focus is only on a ban, what happens when they reach the age where they’re allowed to go on, especially after you’ve made it forbidden fruit?” Prinstein asked.
Young people themselves are also worried about Manitoba’s proposal. Banning AI risks taking away “the opportunity for kids to have way more personalized learning experiences,” Bloomfield told me.
Any AI ban would also be handed down in a context in which young people feel increasingly pressured to use AI, and in which adults are constantly told they must use the technology or face unemployment and irrelevance. For teens anxious about an AI-driven job market, the push to circumvent any blanket AI legislation would surely be intense.
However, a growing body of research suggests that the current free-for-all may not be the best idea either. It’s especially odd to see schools around the United States embrace AI so enthusiastically, even as they ban phones and treat social media like poison.
To make sense of some of these complexities, I talked to Beck Tench, a principal investigator at Harvard’s Center for Digital Thriving who thinks about AI use in terms of digital agency, which she defines as people “having meaningful choice and intention and control over how technology fits into your life.”
The idea of approaching AI use as a question of agency immediately resonated with me. As an adult, I often encounter AI in ways that deprive me of agency — pop-ups that offer to write my emails for me, or statements from tech CEOs that their models are about to take my job. When I am given a choice in how I use the tools (for example, in a recent Vox seminar about ethical ways to use AI for research), they become a lot more appealing.
For kids, supporting AI agency in the classroom might look like an ongoing series of conversations between teachers and students about what’s appropriate at any given time, Tench told me. “Maybe at the beginning of the year, you can’t use it for spelling and grammar, but once you’ve got that down, you can, and you need to make sure you’re not using it for outlining.”
“One of the things that we’re hearing from young people is that they want adults to help them with this, and they want advice and guidance,” Tench said. “That advice and guidance needs to come in conversation with them.”
Agency around AI is going to look different for young children than it does for adults. But figuring out how all of us can have more control over the presence of AI in our lives feels like a better goal to me than simply banning kids from a technology that causes a lot of problems for grown-ups, too.
As Tench put it, “we’re focusing on young people because they’re, frankly, easier to set rules for than the actual tech companies, who have far more power in the world.”
Bloomfield, for his part, wants young people to be involved in formulating any legislation that might restrict their access to technology. Kids “deserve a say in what happens in their own lives,” he said. “They deserve not to be left out of the world that’s evolving around them.”
What I’m reading
A new study of school cellphone bans found that the bans did work to reduce cellphone use. However, they did not improve test scores, and at least initially, suspensions actually went up at schools with bans.
A lot of kids are probably going to miss out on “Trump accounts” because the signup process creates too many barriers for families.
I liked what these New York Times reporters had to say about how they talk with their kids about the news.
My little kid has been enjoying Not Quite Narwhal, a sweet story about a little narwhal (or is he?) finding his place(s) in the world.