Picture perfect AI phone tools to blur lines of reality
Jennifer Dudley-Nicholson
Imagine a daredevil skateboarder soaring through the air with one hand on his board, perilously close to the camera as it captures him in sharp focus on a glorious sunny day.
You might not believe such an image could be taken with a smartphone, and in some ways you would be right.
This stunning photo was featured at the launch of Samsung’s Galaxy S24 smartphones this week and started out as an average snap.
Generative artificial intelligence tools were then used to raise the skateboarder from the ground into the air and make him larger in the frame.
The AI tools were also employed to delete a distracting power pole, artificially create more sky around him and turn it a vibrant shade of blue.
The result was saved with a discreet watermark in its left corner and a disclaimer in its metadata to indicate AI’s role in its creation.
But experts say rules around AI disclosures remain unclear, with a lack of standards for labelling artificial content making it hard to tell whether something is real, or just really well crafted.
The issue could persist for the next five to 10 years, they warn, as consumers wait for AI protections to be built into hardware and disclosures made mandatory.
The issue of AI labels came to the fore this week with Samsung’s launch in San Jose, California, which featured a strong focus on AI phone features for everything from editing photos to summarising notes, and from translating languages to changing the tone of messages.
Samsung Electronics mobile president TM Roh told the audience smartphones would lead a new generation of mobile devices powered by AI.
“Artificial intelligence will bring about great change in the mobile industry and in the way we live,” Mr Roh said.
“We call this the eureka moment for a generation.”
Samsung Electronics Australia mobile experience director Eric Chou says the company proactively made the choice to label some content changed or created by AI, such as photos and text summaries, even though he says users may try to remove them.
“The safeguards were a pretty clear choice for us,” he said.
“Anything that is edited with AI will have a watermark and while a watermark, once it’s been shared, can be removed, the information around the AI edit is still embedded in the metadata.”
Samsung is not the first company to offer smartphones with AI features: Google launched AI-powered image tools in its Pixel 8 Pro smartphone in late 2023 that can alter a subject’s expression, delete objects, and analyse and remove sound in videos.
Foad Fadaghi, the managing director of Australian technology analyst Telsyte, says consumers can expect to see more AI-powered phones this year as manufacturers try to keep up with tech advances and stave off competition from dedicated AI gadgets, like those demonstrated at the recent Consumer Electronics Show.
“For Samsung, Apple and others, AI is a defensive play right now and there are a lot of new guys who are seeing it as an offensive play to disrupt that market,” he told AAP.
“It’s in the best interests of the smartphone manufacturers to embrace AI fully.”
The arrival of more AI-powered phones will bring big changes to the ways people use them, he says, including social media users who will need to change how they scrutinise images.
While “particularly younger users” are familiar with image-changing filters from Snapchat and TikTok, he says, these modest enhancements will not compare to the scale of image manipulation offered by AI photo tools.
“The expectations of what photography is are going to change in the next 12 to 24 months given that generative AI will be a major feature,” Mr Fadaghi said.
“We might need to assume every photo has been modified in future … or we might need a watermark to say something actually hasn’t been modified.”
Questions around AI regulation in Australia were also raised by the federal government this week when it released its Safe and Responsible AI interim report, following more than 500 submissions.
Immediate actions proposed by the government include a voluntary AI Safety Standard, voluntary labelling and watermarking of AI-generated content, and setting up an expert advisory group to create mandatory rules.
Toby Walsh, chief scientist of the AI Institute at the University of NSW, says he would have liked to see more rules made mandatory, warning that giving some firms the choice of whether to label AI content could be akin to “letting the tech companies mark their own homework”.
Some companies were starting to work on an industry standard for labelling AI content though, he says, such as Nikon and Adobe with Content Credentials, and reforms proposed in Europe could bring digital certificates in the next five to 10 years that would be difficult to evade.
“In the long term, digital watermarks will be built into the hardware of our devices, which is going to make it more difficult for people to get around them,” Professor Walsh said.
“Every smartphone capturing an image, every browser displaying an image is going to have built into the hardware watermarking technology which is going to make it hard for people to get around it.”
Tech Council of Australia chief executive Kate Pounder says the government’s approach to AI regulation should give users more confidence for the future development of the technology.
But she predicts AI will continue to dominate headlines as both companies and people figure out how they can best use it.
“It’s probably one of the most important technological innovations that we’re going to see in this decade and the biggest candidate in technology to lift productivity in a way that we haven’t seen since the ’90s or the early 2000s,” she said.
“That is hugely profound.”
The reporter travelled to San Jose as a guest of Samsung.
AAP