Why Australian AI regulation could be a legal patchwork

Jennifer Dudley-Nicholson

All for one or one for all? Disagreement continues over the form of Australia’s AI regulations.

One law or many? It’s the big question facing Australia when it comes to regulating artificial intelligence. 

Business and tech groups argue AI’s most significant risks, such as deepfakes, copyright theft and discrimination, are already covered by existing Australian laws that need to be bolstered and used more often.

But some unions and academics contend one comprehensive, overarching law is needed to ensure workers and consumers are protected from these and other emerging issues.

The battle spread to the government’s productivity roundtable this week, where groups found agreement on some points but continued to argue over the final form of AI regulation.

Economic Reform Roundtable
The addition of AI to the national roundtable agenda wasn’t surprising, given its impact. (Mick Tsikas/AAP PHOTOS)

While Treasurer Jim Chalmers promised to make AI “a national priority” after the meeting, his pledge to oversee a gap analysis of existing laws further divided the two parties, with one expert labelling the move dithering. 

After the formation and dissolution of an AI expert group, one Senate inquiry and two public consultations, UNSW AI Institute chief scientist Toby Walsh says Australia should already have a plan to regulate the biggest change to technology in decades.

The addition of artificial intelligence to the national roundtable’s agenda was not surprising, given the impact of generative AI since 2022 and the Productivity Commission’s focus on the software.

While some groups at the event found common ground on licensing copyright material to train AI models, consensus on regulation remained elusive. 

Following discussions, Dr Chalmers said the government would analyse both regulation strategies to determine the best path forward. 

“We’re going to do the work on that, a gap analysis of that, to see whether we can meet our objectives with existing legislation or whether it requires one overarching bill,” he told reporters.

University of NSW Professor Toby Walsh (file)
Toby Walsh: there should already be a plan to regulate the biggest change to technology in decades. (Julian Smith/AAP PHOTOS)

The European Union passed a dedicated law governing AI last year and while some experts point to it as an example Australia should follow, Tech Council chief executive Damian Kassabgi says the nation should adopt an “opportunity-first” approach. 

Risks associated with AI such as privacy intrusion, copyright breaches and defamation are already illegal, he says.

“We need an AI strategy and an AI plan, we don’t need an AI act,” he tells AAP.

“The far more productive way of looking at this is how do we train our different regulators, different departments, in relation to how the current laws apply.”

It is an approach that wins some support from Monash University Australian Laureate Fellow Geoff Webb, who says laws around privacy, copyright and data access could be strengthened to deal with risks. 

“If there’s something an AI system shouldn’t do, then no system should do it and legislating specifically for AI systems just invites lawyers to make a lot of money out of arguing something’s not an AI system,” Prof Webb says. 

“AI enables new things to be done and we should legislate about the new things but we should legislate about any system doing them.”

Treasurer Jim Chalmers
Jim Chalmers says both regulation strategies will be analysed to determine the best path forward. (Mick Tsikas/AAP PHOTOS)

Australian regulators should make their stance on AI use clear, he says, as a lack of certainty among investors will create hesitation to innovate and create sovereign AI models. 

Creating clear but balanced regulation could also increase Australians’ trust in the technology, according to a study of more than 2000 people conducted for the Minderoo Foundation and released this week.

More than two in three people (68 per cent) said they would be more likely to trust AI with clear laws in place to govern it and most Australians (59 per cent) wanted AI to be subject to rules even if they limited its benefits. 

The survey shows Australians want greater protections, Minderoo Foundation chief executive John Hartman says, and will demand stricter rules if their needs are not met.

 “They see the productivity potential of AI but they don’t want it to come at the cost of safety and privacy,” he says. 

“They’re looking to government to step up with clear and balanced rules that allow innovation while protecting people.”

John Hartman
John Hartman believes Australians want greater AI protections and will demand stricter rules. (Matt Jelonek/AAP PHOTOS)

The results underline the need for action on AI and, Prof Walsh says, the government should heed the call and ensure its gap analysis does not prevent action.

“They’ve been talking about regulating AI for a couple of years now and they’re neither taking care of the potential new harms nor investing in potential new opportunities,” he says.

“I’ve spoken to quite a few people in the industry who have said the uncertainty this government has sown by dithering on what they’re going to do in this space is holding them back from investing, holding them back so they don’t find they’ve crossed a line.”

Whether the government chooses to apply existing laws “more vigorously to the digital space” or introduces a separate law to protect workers, copyright holders and consumers, Prof Walsh says, it needs to avoid further delays.

“The EU has got great protections for its citizens in terms of the potential harms big tech is starting to introduce,” he says. 

“We should have those same protections ourselves. If it’s good enough for Europe, it should be good enough for us.”

AAP