High-risk AI use could be banned by new task force

Jennifer Dudley-Nicholson


Twelve experts have been appointed to identify the riskiest uses of artificial intelligence in Australia, such as social scoring and biometric identification, and consider options for mandatory restrictions on the technology.

Industry and Science Minister Ed Husic announced appointments for the Artificial Intelligence Expert Group on Wednesday, almost one month after the release of the government’s Safe and Responsible AI interim report.

But the group, which will be asked to develop restrictions “quickly,” will be temporary, with appointees in the roles until the end of June while the government considers long-term arrangements.

Experts who will serve in the task force include CSIRO chief scientist Bronwyn Fox, senior counsel Angus Lang, Aurelie Jacquet who chairs Australia’s national AI standards committee, Indigenous intellectual property expert Dr Terri Janke, and UNSW Professor Toby Walsh. 

The task force will be asked to identify and define high-risk uses of artificial intelligence technology in Australia, and consider restrictions for its deployment. 

The group will also investigate a framework for labelling when AI has been used, including watermarking images, and investigate ways to achieve greater transparency about AI models and the data sources they use.

Mr Husic said mandatory and voluntary rules should ensure AI technology could be deployed in Australia but that it was used appropriately and without harmful consequences. 

“We want to get the balance right and also allow low-risk AI to flourish unimpeded,” he said.

“In the EU, for example, some of the things that have been established to have an unacceptable amount of risk include social scoring of people based on socio-economic status, there’s been biometric identification and categorisation of people that’s also been identified in a risk category, along with cognitive behavioural manipulation like voice-activated toys that encourage dangerous behaviour in children.”

The University of Adelaide’s Australian Institute for Machine Learning director Professor Simon Lucey, who will also serve on the task force, said Europe had taken “a very legislative approach” to AI but Australia needed to forge its own path.

(Image: A ChatGPT logo on a monitor. The boom in generative AI since the release of ChatGPT has been a challenge to lawmakers worldwide. AP PHOTO)

“What’s really important is that we want to make sure that our population is protected as much as possible, so the right laws have to be in place, but we also have to invest in our AI capability so that the economic benefits come through,” he said. 

“This century is going to be defined by AI.”

Prof Walsh said considering guidelines for the use of AI would be vital as the world had never seen a technology “so quickly reach into our lives”.

The government’s interim report recommended mandatory rules around the use of AI in settings “where harms could be difficult to reverse” and could include compulsory testing, public transparency, and a certification process. 

The use of generative AI has boomed since the release of ChatGPT in late 2022, but has also presented challenges to lawmakers around the world. 

Research from the Tech Council of Australia called it “one of the most transformative technologies of our time” and predicted its use could add up to $115 billion a year to the economy by 2030.