Google has disclosed that it received more than 250 complaints globally about its artificial intelligence (AI) software being used to generate deepfake terrorism content.
The disclosure was made in a report submitted to Australia’s eSafety Commission, which monitors online safety and holds tech firms accountable for harm minimisation.
The report, covering April 2023 to February 2024, also revealed that Google received 86 user reports alleging its AI program, Gemini, was being misused to create child exploitation material.
Google confirmed using "hash-matching," a system that detects and removes child abuse imagery by comparing newly uploaded images against a database of known harmful content.
However, no similar system has been deployed to identify and remove terrorist-related deepfake material, highlighting inconsistencies in the company's AI safeguards.
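For illustration only, the sketch below shows the general idea behind hash-matching: compute a fingerprint of a newly uploaded image and look it up in a database of hashes of previously identified abusive material. The function names, example hash value, and use of a plain cryptographic hash are assumptions made for clarity; real deployments typically rely on perceptual hashes that survive resizing and re-encoding, and this is not a description of Google's actual system.

```python
# Minimal sketch of hash-matching, under the assumptions stated above.
import hashlib

# Hypothetical database of hashes of previously identified harmful images.
KNOWN_HARMFUL_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def image_hash(image_bytes: bytes) -> str:
    """Return a hex digest that fingerprints the image content."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_harmful(image_bytes: bytes) -> bool:
    """Check a newly uploaded image against the known-hash database."""
    return image_hash(image_bytes) in KNOWN_HARMFUL_HASHES

# Example: screen an upload before it is stored or shared.
if is_known_harmful(b"...uploaded image bytes..."):
    print("Match found: flag for review and removal.")
```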
Since the rise of OpenAI’s ChatGPT in late 2022, governments worldwide have been racing to put regulations in place that prevent AI misuse.
The European Union has introduced the AI Act, which seeks to regulate high-risk AI applications, while the United States has proposed initiatives like the Blueprint for an AI Bill of Rights to ensure ethical AI development.
Australia’s eSafety Commissioner, Julie Inman Grant, described Google’s report as an important step in understanding the risks associated with AI-generated content. “This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated,” she stated.
Challenges in AI Safeguarding
Despite Google’s efforts to tackle AI-generated child abuse material, the absence of equivalent protections against deepfake terrorist content underscores the technical and ethical challenges in AI governance.
Deepfake technology, which enables the creation of highly realistic but fabricated content, can be used to spread misinformation, manipulate public opinion, and facilitate fraud.
The eSafety Commission has previously imposed fines on tech platforms for failing to comply with reporting requirements. Social media platforms X (formerly Twitter) and Telegram have been penalised for not adequately addressing harmful content.
X was fined A$610,500 ($382,000) and lost its initial appeal but plans to challenge the ruling again, while Telegram is also disputing its penalty.