Aussies worry about harmful deepfakes but share anyway
Jennifer Dudley-Nicholson

Deepfake political content is rising in Australia and only one in 10 voting adults is very confident they can identify the fakes, a study finds.
Despite widespread concern about artificial intelligence technology, almost half of Australians ignore content they believe to be fraudulent and few check material before sharing it online.
Adobe released the findings in its Authenticity in the Age of AI study on Wednesday, revealing strong support for government regulations to label AI-generated political content.
The research comes months after a Senate inquiry recommended mandatory restrictions on high-risk AI use, and after some senators called for changes to political advertising rules.

More than 1000 Australian voters were surveyed for the Adobe study, which found 77 per cent of participants noticed a rise in political deepfakes in the past three months, and 69 per cent were concerned about their impact on the federal election.
Most AI-generated content was being spread on social media platforms, the report found, and 12 per cent of participants said they were very confident they could spot deepfakes, while 40 per cent said they were somewhat confident.
Almost one in 10 people surveyed said they shared suspected fake content without checking its authenticity, and Adobe Asia Pacific government relations and public policy director Jennifer Mulveny said many others just ignored it.
“Most people just don’t take the time,” she told AAP.
“Forty-five per cent of people ignore what they see, others might dig a little deeper to maybe find a reliable source of where something came from, but some people feel confident they can tell real from fake and they’re just moving on.”
Most Australians supported stricter regulations for the use of AI, according to the survey, and 82 per cent said the government was not doing enough to protect citizens from harmful AI-generated political material.

Introducing mandatory labels for AI content and requiring social media platforms to display them could help voters make decisions about what they see, Ms Mulveny said, though any changes would need to be accompanied by digital media literacy training.
“If people are not trusting what they see and they are concerned their views can be changed by things that aren’t necessarily real, that’s concerning, that’s a trust problem,” she said.
“It’s a problem that’s not going to get any easier because the technology is only getting more and more convincing.”
The Adopting Artificial Intelligence Senate inquiry recommended changes in November, including a dedicated AI law to govern high-risk use of the technology.
Independent senator David Pocock also called on the government to restrict AI use in political material, but reforms were not introduced before the federal election.
AAP