Australia risks missing AI boost if it gets rules wrong

Jennifer Dudley-Nicholson

Australia must balance the risks and rewards of AI, Google executive James Manyika says.

Australia could miss out on vital health, entertainment and productivity breakthroughs if it introduces rigid artificial intelligence rules rather than taking a balanced approach like some of its neighbours. 

Google-Alphabet Senior Vice President James Manyika issued the warning while attending an AI summit in Sydney on Tuesday to discuss the technology’s potential uses in Australia.

The executive also warned the risks of AI misuse were real and its governance would require careful consideration in each industry. 

James Manyika says allowing business to experiment with AI will be crucial to boosting productivity. (Dan Himbrechts/AAP PHOTOS)

His warnings come a day after the Business Council of Australia called for greater support, rules and innovation in AI to boost the nation’s productivity, and as the federal government considers regulations following an inquiry. 

Rules about the use of AI technology vary across the world, from the more restrictive approach of the European Union to the hands-off style adopted in the US. 

A “two-sided” balance between the risks and rewards of AI – similar to rules adopted by Singapore – should be Australia’s goal, Mr Manyika said. 

“On the one hand, you want to address the risks and complexities that come with this technology, which are quite real, but on the other side, you want regulation that enables all the beneficial applications and possibilities,” he told AAP.

“Both are important and I’m hoping Australia takes that approach.”

Allowing businesses to experiment with AI technology would be crucial to boosting productivity, he said, as well as financially supporting research opportunities, as breakthroughs in health and social issues would not “happen automatically”. 

Restrictions should target individual high-risk uses and focus on the industries where the risks are greatest, the former United Nations AI policy co-chair said.

“A sector-based approach is important because often underlying technology applied in one sector may be perfectly fine – but applied in another sector, like financial services or health care, it is absolutely not,” he said. 

The federal government issued voluntary artificial intelligence rules in September. (Jennifer Dudley Nicholson/AAP PHOTOS)

Google announced several AI advancements at its annual developers conference in the United States in May, including plans to build a universal AI assistant and make changes to web searches. 

But the internet giant’s video-generating AI tool Veo 3 arguably grabbed the most attention as it created audio that appeared to come from the mouths of AI-crafted characters.

The development, like others in AI video creation, had the potential to make traditional filmmakers nervous, Mr Manyika said. 

But it could also play an important role as a tool in designing productions rather than replacing them. 

“Many start with fear and concern but often, when they have actually played with the tools and also been part of … collaborations we’ve done … that’s generated a lot of excitement,” he said. 

“Scrappier filmmakers have been thrilled because (they) can do previews with a hundred possibilities and then decide which one (they’re) actually going to shoot.”

The federal government issued voluntary AI rules in September but has not created mandatory guardrails for high-risk AI use. 

A Senate inquiry made 13 recommendations in November, including introducing a dedicated AI law and ensuring general AI services such as Google Gemini and OpenAI’s ChatGPT fall within its scope. 

AAP