Google reveals plans to add AI to Aussie search results
Jennifer Dudley-Nicholson
The next question you ask Google could return a detailed answer penned by artificial intelligence.
The tech giant revealed plans to expand its Google Search AI Overviews product to Australia on Tuesday, testing whether the summaries it creates will encourage users to click on more links.
But artificial intelligence experts say the product suffered embarrassing errors in its early testing and users should scrutinise its results carefully.
Google’s AI Overviews, which have been tested in the US since May, use the company’s generative AI tool Gemini to summarise search results for users.
The AI tool is commonly used to respond to open-ended or complex search queries, where it can summarise steps, research or suggestions from other websites, Google product search senior director Hema Budaraju told AAP.
This could include tips on how to clean a couch, for example, or steps to solve a maths problem.
AI Overviews, which appear at the top of search results, recently became available to Google users in another six countries including the United Kingdom, Japan and India, she said, but were still being trialled.
“This is not the beginning of a full launch. What we are doing is testing this and (we will) evaluate results and determine the right time to launch,” she said.
“A very small percentage of users will see the AI overviews by default in their search results on a subset of queries where we believe these overviews make the results even more helpful.”
Ms Budaraju said restrictions had been added to the AI feature to stop inappropriate results or misuse.
“AI Overviews are designed to prevent harmful, hateful or explicit content from appearing,” she said.
“We have a very rigorous evaluation and adversarial testing that we do ourselves to ensure that we can meet the high bar of quality and safety.”
However, the tool famously suffered errors in early testing when AI-generated summaries told users to add glue to pizza sauce to make the cheese stick and to eat one small rock a day.
The dangerous advice was based on an old Reddit comment and a satirical news article, respectively.
University of the Sunshine Coast computer science lecturer Dr Erica Mealy said the mistakes highlighted the challenge of using a large language model, which is designed to correct and suggest words, to explain issues.
AI-written summaries could prove useful, she said, but users should question any advice and scrutinise results for signs of bias.
“Being later in its testing will probably be good for us,” Dr Mealy said.
“The challenge will be how it works with cultural issues that are uniquely Australian, whether that’s the way we use language or how it responds to First Nations’ questions.”
AAP