Better education needed to help protect students & teachers from deepfake-driven abuse

Claire Halliday

When reports emerged earlier this year that sexually explicit, non-consensual artificial intelligence (AI) deepfake imagery of international pop superstar Taylor Swift was circulating on social media, the issue of AI-driven pornography hit global headlines.

But cybercrime experts in Australia say advances in generative AI and deepfake technology also put ‘everyday’ people at risk – and at Victoria’s La Trobe University, Senior Lecturer in Pedagogy and Education Futures Dr Alexia Maddox told EducationDaily there needs to be more focus on educating young people about the risks.

Sexualised deepfake imagery is a threat to safety and well-being

Data from social media analytics firm Graphika revealed a 2,000 per cent increase in links to websites using AI to create non-consensual intimate images. The same data showed that, in September 2023 alone, those websites received more than 24 million unique visitors.

“Deepfake porn, or non-consensual deepfakes, is becoming more widespread and is occurring within Australian schools as a form of harassment among peers and towards teachers,” Dr Maddox says.

When it comes to deepfake porn, Dr Maddox told EducationDaily that, on top of what we already know about the impacts of young people’s exposure to porn, “I think the key issue here is that generic pornographic images can be doctored to support personalised attacks by imposing the image of a known person onto another’s body in a way that looks real”.

Digital environments breed cyberbullying concerns

The impact of this frightening new ability to use tech tools to create deepfake sexualised imagery is, she says, very real.

“Our big concerns for kids in digital environments are the ways cyberbullying, exposure to and engagement with online hate, sexual predation and image-based abuse and toxic interactions can affect their well-being and hamper their ability to engage in the world based upon complex realities rather than hallucinatory constructions of GenAI tools, oversimplifications in public discourse or misinformation,” Dr Maddox told EducationDaily.

“Building their sense of self-confidence, security and socialisation in healthy ways and helping them to moderate compensatory behaviours such as distancing, distraction and escapism are key.”

AI-generated child abuse is a serious crime

For the Australian Federal Police’s victim identification unit, dealing with the growing crime of AI-generated child abuse material is a serious concern.

“It’s sometimes difficult to discern fact from fiction and therefore we can potentially waste resources looking at images that don’t actually contain real child victims,” AFP Commander Helen Schneider says.

“It means there are victims out there that remain in harmful situations for longer.”

Whether child sexual abuse is depicted in stories, cartoons or animation, it is all illegal under the Commonwealth’s Criminal Code.

“AI-generated child abuse material is, under our law, defined as child abuse material,” says Commander Schneider.

“If you are transmitting it over any sort of carriage service, it is an offence, regardless of who you are or what age you are.”

At the University of Adelaide’s Cybercrime Laboratory, Associate Professor Russell Brewer conducted a four-year longitudinal study of 2,000 students in South Australian high schools.

The research explored “how adolescents use digital technology” and how some are “drawn into cyber risk-taking”.

“We know it’s a smaller subset of young people that engage in [cyberbullying] types of activities — we had about 10 per cent [in the research],” he says.

Although Dr Brewer says the number of students using “nudifying apps” was probably small, the wide-reaching power of social media means the consequences can be devastating.

“It’s easier to see this type of material, it’s easier to share,” he says.

Challenges for law enforcement

Australia’s eSafety Commissioner, Julie Inman Grant, calls what authorities have been dealing with to date “the tip of the iceberg”.

It was August 2023 when the agency she heads received its first reports of students using generative AI to create sexually explicit content to bully other students.

And with an estimated 90 per cent of all deepfake content classified as explicit, Ms Inman Grant says there are troubling questions around how much of that content is consensual, and how much features the sexual abuse of children.

The eSafety Commissioner says anyone concerned about the non-consensual sharing of intimate images should report the image to eSafety.

Under current laws, it’s a criminal offence in almost every Australian state to create and distribute intimate or invasive images of adults without consent.

RMIT criminologist Dr Anastasia Powell says those laws could also be applied to images that have been altered or created using digital technology, including AI.

Deepfakes affect everyone – but the majority of victims are female

Dr Powell believes the abuse of AI builds on broader problems in society, including gender inequality.

“What these tools are doing is two things: they’re reflecting back to us the sexism, the disrespect, the harassment, the inequality that already exists out there in our communities,” she says.

“But they’re also amplifying it.”

Education helps children navigate the impact of technology

Dr Maddox says it’s a potent reminder that, although technology is a critical part of any child’s educational life, it can also have negative consequences that disrupt learning and well-being.

“Peer and family influence shape how a child perceives the world and the views they get exposed to,” Dr Maddox told EducationDaily.

“The digital environment facilitates social connection and information seeking, which are a part of these processes. When we insert technologies as a tool for information seeking and opinion formation, such as social media and image-based AI technologies, we then must deal with what these platforms and tools make possible.”

Social media, she says, only exacerbates these issues “but is not necessarily the only cause of them”.

“We can change the guardrails we put in place for what children get exposed to, such as through age-appropriate exposure, automated content moderation, child sexual abuse material initiatives and incorporating safety by design principles into technology development practices, but it is also the child we must work with, rather than just the technology.”

Claire Halliday has worked as a full-time writer – across book publishing, copywriting, podcasting and feature journalism – for more than 25 years. She lives in Melbourne with her children, two border collies and a grumpy Burmese cat. Contact: claire.halliday[at]brandx.live