Voters want a ban on AI election ads to stop deepfakes

Jennifer Dudley-Nicholson

A major study shows many people are struggling to identify misinformation in election campaigns.

Four in five voters want artificial intelligence technology banned from being used to create election ads, research has found, and most are concerned misinformation and deepfake images will affect elections.

Tech giant Adobe released the findings on Friday from a study that also showed most respondents were struggling to identify misinformation and almost one in three had reduced their time on social media to avoid being deceived. 

The results follow several examples of AI misuse in political campaigns and after Australia’s Electoral Commissioner warned the body did not have the laws or tools to address AI misinformation at the next federal election. 

Adobe’s Future of Trust Study surveyed 1005 adults across Australia and New Zealand and found 80 per cent thought election candidates should be banned from using AI to create promotional material and 78 per cent considered deepfakes a risk to democracy. 

Most participants considered it important to know whether content had been generated using AI tools (81 per cent) and 32 per cent said they had stopped using a social media platform, or used it less often, due to the risk of misinformation.   

Adobe Asia Pacific government relations director Jennifer Mulveny said the results showed people wanted greater guidance, tools and restrictions to help them judge the information shared online. 

A majority of people surveyed want to know whether content has been generated using AI. (Bianca De Marchi/AAP PHOTOS)

“(The study) underscores the importance of building media literacy among consumers, where they are not only alert to harmful deepfakes but have the tools to discern fact from fiction,” she said. 

“As the Australian federal election looms, adopting protective technologies like content credentials will be crucial to help restore trust in the digital content we are consuming.”

Widespread concerns about AI tools being used to mislead voters emerged after several examples of AI-generated misinformation and disinformation were identified in election campaigns in the US, Pakistan, Indonesia, South Korea and Europe. 

AI-created election material has so far included deepfake videos, recorded phone messages that mimic candidates' voices and urge people not to vote, and chatbots giving voters incorrect information.

Australian Electoral Commissioner Tom Rogers raised concerns about the issue at a Senate inquiry into Adopting Artificial Intelligence in May, but told the meeting a “blanket ban” on the use of AI tools in election material would be difficult to implement. 

Instead, Mr Rogers said Australia should consider a national digital literacy campaign to warn voters of the risks, and regulations to clearly label content that had been created using AI tools.

"Digital watermarking could be a very important tool to assist us," he said.

“AI is improving the quality of disinformation to make it more undetectable and it is also then going to spread far more quickly, through multiple channels.”

Australia’s next federal election is expected to be held by May 2025, though the Queensland government is due to hold an election by October this year.