From the Principal
Last Friday evening, I spent a very enjoyable couple of hours teaching myself how to use Copilot to generate and change images. I had a great time using Artificial Intelligence (AI) to change the setting of a family photo from the backyard to the beach, combine three individual photos into a group picnic, and even turn everyone in the family photo into clowns. It was a lot of fun (although I am not sure that my adult children were as enamoured with the constant stream of ‘look at this one’ messages as I proudly forwarded my latest achievement!). The images were largely believable, despite the occasional strangely angled limb.
However, as much as I appreciated how easy it was to learn, it was immediately apparent how readily this technology could be misused, even by individuals with only fundamental skills comparable to my own.
Unfortunately, according to Education Matters “Deepfakes – AI-manipulated images, audio, or video – are emerging in classrooms, playgrounds, and social media networks frequented by young people. From altered nudes targeting students to fabricated videos designed to bully or humiliate, the issue is no longer hypothetical.
“…New data reveals reports to eSafety’s image-based abuse scheme about digitally altered intimate images, including deepfakes, from people under the age of 18 have more than doubled in the past 18 months, compared to the total number of reports received in the seven years prior. Four out of five of these reports involved the targeting of females.”
As part of our pastoral care and eLearning programs, teachers, Luminaries, and our Head of eLearning, Marianna Carlino, educate our students on the risks associated with AI-generated images, with the content and discussions targeted according to the age of the group that they are working with.
As with all technology use by children, it is essential that parents regularly monitor the sites they are visiting, the apps they are using and who they are contacting. Unfortunately, it is not as simple as enforcing the minimum-age restrictions that apply to social media sites, as AI apps are freely available.
The eSafety website offers the following advice to parents and carers:
(Noting that children are unlikely to use the term ‘deepfakes,’ instead referring to specific apps or simply describing it as something you can “do with AI.”)
- Start early and stay open. Talk regularly about the harms of deepfakes and that creating them may be a crime. Keep your tone supportive and not judgemental. If something ever happens, your child will be more likely to come to you.
- Use supportive language. If your child is affected – as a target, bystander, or creator – your first words matter. Stay calm. Try language such as ‘I’m glad you told me’ and ‘Let’s figure out what to do together.’
If your child is a target:
- Help them collect evidence – screenshots, links, usernames (without saving or sharing explicit content).
- Do not view, collect, print, share or store explicit material. Make a written description and note where it is located.
- Support them to report the incident – to the platform, the school, local police or eSafety.
- Check on their wellbeing and ask if they’d like professional support.
- Reassure them: they are not alone and help is available.
If your child receives a deepfake:
- Praise them for not sharing it.
- Talk about empathy and digital responsibility.
- Reinforce that speaking up was the right thing to do.
If your child created or shared a deepfake:
- Stay calm and listen.
- Explain the serious emotional and legal consequences.
- Encourage accountability – deleting the content, apologising, or reporting it so platforms know to remove any copies.
- Set clear expectations for future behaviour – and follow through consistently.
Should your child be targeted through deepfake images, please follow the steps outlined above and contact her class teacher, Luminary, or Head of Year.
– Lisa Moloney
Principal