Commissioner warns of ‘tech wreck’ from unchecked abuse

Marion Rae

Julie Inman Grant says companies failed to erect guard rails when AI accelerated a year ago.

Australians may be unwittingly investing in big tech companies and artificial intelligence tools that are generating images of sexual abuse, violence and gore.

The nation’s eSafety commissioner Julie Inman Grant on Wednesday urged responsible investors to “stop the next tech wreck” from happening and demand safety by design in all systems and products.

She said companies failed to erect guard rails when the technology accelerated a year ago as users began to weaponise generative AI.

“There is a crisis with family, domestic and sexual violence and so much of this is fomented by violent, extremist content – violent porn that shows asphyxiation and choking,” she said.

“There’s a gentleman sitting in a jail in Queensland right now for creating deepfake image-based abuse of prominent Australian women and putting them online.”

Companies and investors are being urged to protect themselves and the community over the use of AI. (Bianca De Marchi/AAP PHOTOS)

People are using apps to generate abusive images of children, which she said worries law enforcement agencies because resources could be misdirected when they should be spent searching for real children at risk.

“These are harms that we’re dealing with through our investigations branch every day,” she said.

As regulators race to catch up with the rapid spread of the technology, companies and their investors could face reputational damage, fines and financial losses if they fail to protect themselves and the community.

Banks are developing ways to stop so-called micro-aggression, where abusive messages are written into transaction statements, and plan to deter perpetrators by threatening to cut their credit cards, loans and mortgages.

“We’re also about to table in parliament some mandatory standards that will deal with open-source AI models and building safety up and down the technology stack,” Ms Inman Grant said.

But there are still questions to be answered about who is responsible for the people who develop the code, or those who are monetising the technology, she said.

“We can retrofit laws and regulations at the back end, but we need all of you helping us create a better safer world,” she told the Responsible Investment Association Australasia (RIAA) annual conference in Sydney.

The investor group, whose more than 500 members represent US$29 trillion in assets, has developed advice, endorsed by the commissioner, to avoid supporting startups and companies at risk of breaching human rights, safety and privacy.

Responsible Investment Association Australasia’s Estelle Parker released a toolkit for investors. (HANDOUT/RIAA)

Releasing the investor toolkit, RIAA co-CEO Estelle Parker said inadequately designed, inappropriately used or maliciously deployed AI threatened individuals and their human rights.

She said the advice was for those who may not have much experience with the financial and human rights risks associated with AI. 

The technology also poses risks of bias and exacerbates discrimination against individuals, particularly those from historically marginalised communities.

Rather than a hands-off investment style, Ms Parker said a method known as “stewardship” was the most responsible way to reduce risk.

This involves communicating concerns and priorities directly to company bosses to get better business practices – or dumping the unsafe investment.

Lifeline 13 11 14

Fullstop Australia 1800 385 578

1800 RESPECT (1800 737 732)

Kids Helpline 1800 55 1800 (for people aged 5 to 25)

AAP