YouTube, Meta, TikTok reveal misinformation tidal wave
Jennifer Dudley-Nicholson

Thousands of misleading videos, scam ads and fake profiles made in Australia have been wiped from online platforms over the past year to address a growing wave of misinformation.
More than 25,000 videos deemed to feature “harmful” fake claims were removed from TikTok and YouTube, reports showed, while unverified and misleading election ads ranked among the most commonly removed content by Meta and Google.
Eight technology companies outlined their actions in transparency reports published on Thursday in accordance with the voluntary Australian Code of Practice on Disinformation and Misinformation.
Several tech firms declined to detail their efforts to tackle fraudulent content in Australia, including social media platforms X and Snapchat.
The statistics follow heightened concern about online misinformation since the emergence of generative artificial intelligence tools, and warnings that they may be used to create convincing deepfakes and political ads.

US firms including Google, Meta, Twitch, Apple and Microsoft released transparency reports under the industry code, addressing issues such as the identification of misleading claims, safeguards for users, and content removal.
TikTok revealed it removed more than 8.4 million videos from its Australian platform during 2024, including more than 148,000 videos deemed to be inauthentic.
Almost 21,000 of those videos breached the company’s “harmful misinformation policies” during the year, the report said, and on average 80 per cent were removed before users could view them.
Google removed more than 5100 YouTube videos from Australia identified as misleading, its report said, out of more than 748,000 misleading videos removed worldwide.
Election advertising also raised red flags for tech platforms in Australia, with Google rejecting more than 42,000 political ads from unverified advertisers and Meta removing more than 95,000 ads for failing to comply with its social issues, elections and politics policies.
Meta purged more than 14,000 ads in Australia for violating misinformation rules, took down 350 posts on Facebook and Instagram for misinformation, and showed warnings on 6.9 million posts based on articles from fact-checking partners.

In January, the tech giant announced plans to end fact-checking in the US and its report said it would “continue to evaluate the applicability of these practices” in Australia.
Striking a balance between allowing content to be shared online and ensuring it would not harm others was a “difficult job,” Digital Industry Group code reviewer Shaun Davies said, and the reports showed some companies were using AI tools to flag potential violations.
“I was struck in this year’s reports by examples of how generative AI is being leveraged for both the creation and detection of (misinformation) and disinformation,” he said.
“I’m also heartened that multiple initiatives that make the provenance of AI-generated content more visible to users are starting to bear fruit.”
Microsoft revealed in its report that it had removed more than 1200 users from LinkedIn for sharing misinformation, while Apple said it had identified 2700 valid complaints against 1300 news articles.
AAP