With deepfake sextortion and chatbots encouraging youth to commit crimes, AI is quickly becoming even more dangerous for Aussie kids.
AUTHOR: Jake Moore
Artificial intelligence (AI) is everywhere – our phones, games and even the tools kids use for homework.
It’s exciting, creative, and powerful. But just like any powerful tool, AI can also cause harm when misused.
The risks aren’t always obvious, especially to children who often engage with AI-driven platforms without understanding how they work – or the dangers they might bring. Sometimes they don’t even realise that AI is working in the background.
As someone who’s spent years in cybersecurity, I’ve seen how AI has made cyber-bullying more devastating, scams more believable and disinformation easier to spread.
Kids are especially vulnerable, but there are ways we can help them navigate these challenges safely.
The risks: What kids need to know
AI isn’t inherently bad, but it’s being used in ways that hurt people – especially young users. One of the first lessons kids need to learn is that the internet never forgets. Every post, photo, and comment leaves a digital footprint. AI thrives on data, and the more kids share, the more vulnerable they become to risks like identity theft or data breaches.
Take phishing scams, for example. AI is now used to craft incredibly convincing fake emails and messages, designed to trick recipients into clicking on malicious links or sharing sensitive information. Kids, who might not think twice about clicking a link or responding to a message, are especially at risk.
One simple tip can make all the difference: Pause, question, and verify before taking action. If something feels off – a message from a “friend” asking for money or a link promising prizes – it probably is.
Nathan Kerr, CTO and Executive Director of One Click Group, which specialises in ID validation tech, points out that AI chatbots, another rapidly evolving tool, present their own risks. These systems are often designed to simulate empathy and trust, making them particularly dangerous for vulnerable users.
“AI chatbots are essentially a series of conditional ‘if’ statements, trained on datasets created by humans,” he said.
“They’re only as good as the developers behind them, and that’s the catch – they reflect both the strengths and flaws of their creators.”
For kids, this means a chatbot could unintentionally provide harmful advice, manipulate emotions, or even encourage risky behaviours. Educating children to treat chatbots critically – like any other online interaction – is key.
“The problem with AI is that it doesn’t have intuition or common sense – it just churns out answers based on patterns. That’s why kids should never blindly trust a chatbot’s advice or share private information, no matter how ‘human’ it seems,” Kerr added.
Basically, kids must learn that not everything online is real and that even things they create or share might not stay private. We need to ensure kids understand how to manage their digital footprint: reviewing privacy settings, limiting the oversharing of personal details such as locations and names, and thinking critically before clicking and posting.
Then there’s the issue of deepfakes, where AI creates highly realistic but fake images and videos. These can be fun when used creatively, but they’re increasingly being used for harm – and sadly, kids are engaging in this destructive and illicit activity.
The dark side of AI and social media
Cyber-bullying has existed for as long as the internet itself, but AI has turned it into something far more destructive. Deepfake technology allows anyone with basic tech skills to create fake videos, images, or audio that can devastate a child’s reputation or mental health. These aren’t just harmless pranks – they can destroy friendships, spread misinformation, and cause immense distress.
What’s even more alarming is that teens themselves are using these tools to generate non-consensual explicit images of their peers, as was highlighted earlier this year at Bacchus Marsh Grammar, near Melbourne.
Whether it’s to intimidate, humiliate, or simply as a misguided “joke,” the consequences are serious and often illegal. This behaviour highlights a troubling disconnect between online actions and real-world fallout. Kids need to realise that, beyond the fun, AI isn’t a toy; what they create or share online has serious implications.
For the victim, the damage can feel permanent. That’s why we need to teach children how to take action when they’re targeted, and to know they aren’t alone. Simple steps like reporting inappropriate content, blocking offenders, and confiding in a trusted adult can make a big difference. The tools that social media platforms offer to report, block and mute harmful users and content aren’t just for adults – kids should feel empowered to use them.
And, most importantly, they must know it’s okay to ask for help if they feel targeted or unsafe online – reassure them that seeking adult advice and support is a sign of strength, not weakness.
Kerr even suggests flipping the script.
“If kids encounter sextortion or harmful behaviour, they should feel empowered to say, ‘This is a deepfake’ and refuse to engage with the bad actor,” said Kerr.
“The goal is to eliminate the bully’s leverage while reporting the incident to a trusted authority. The truth is, bad actors rely on fear and panic to control their victims. Once you remove that, they’ve got nothing.”
This approach, he suggests, not only reduces the immediate harm but also reinforces that kids can take control of these situations.
Teaching kids to navigate AI safely
AI is here to stay, and the risks will only grow. The best thing we can do for kids is to arm them with the tools and knowledge they need to use AI responsibly. Here are a few things adults can do to help:
● Set boundaries for sharing and privacy
Encourage kids to think before they post – “Do I really need to share my location in this photo?” Even seemingly innocent posts – like taking part in viral challenges – can give away personal details.
● Empower them with tools
Small steps, like turning off location tracking or choosing private account settings, can go a long way. Do it together and show them the power of these settings. Use real-life examples to show how these skills apply in their daily lives.
● Practise “cyber pauses”
Encourage kids to take a moment before clicking on a link, responding to a message, or posting online. Teach them to hover over links to check where they really lead and to question unexpected messages, even from friends. Developing these habits early can protect them and promote a healthier digital environment in the long run.
● Stay involved and talk openly about risks
This is not about hovering over their every activity, but about creating an environment where kids feel comfortable discussing what they see online. It builds trust and keeps communication open. Plus, the more they share, the easier it is to address problems before they escalate.
We also need to foster critical thinking in our kids. In a world where AI can create content that looks and feels real, kids need to ask questions: Who made this? Why? Can I trust it?
As Kerr emphasises, “Scepticism is your best defence in a world where AI can manipulate reality. Teach kids to question everything they see online – it’s not about paranoia, but about being smart.”
These skills aren’t just useful online – they’re vital in today’s information-saturated world.
Overarching responsibility and safety
While we can empower kids to take some responsibility, as adults it’s our job to create a safer environment – and that means holding tech companies accountable for better moderation, age-appropriate safeguards, and clear reporting systems.
ESET’s Safer Kids Online program offers practical advice to help families with the everyday side of safety, including having ongoing, open conversations – not one-off lectures, but continuous dialogues that adapt as technology evolves.
AI holds incredible promise, but its risks are real. While we can’t protect kids from every danger, we can prepare them to handle what they encounter. By teaching kids how to navigate these challenges, we can empower them to enjoy the benefits of technology without falling victim to its darker side.
Awareness, critical thinking and empathy are key to helping kids understand that AI is not a weapon or a threat, but a tool for creativity, learning and connection. It’s all about balance: helping kids explore the digital world safely while staying grounded in the real one.
Jake Moore is a cybersecurity expert and the global security advisor of ESET