Microsoft and OpenAI are joining forces to combat the growing threat of artificial intelligence (AI) being used to manipulate voters and undermine democratic processes.
With a record number of elections scheduled in 2024 across 50 countries, concerns are mounting about the potential for AI-generated deepfakes to sway public opinion, particularly among vulnerable communities.
The rise of powerful generative AI tools, like the popular chatbot ChatGPT, has created a breeding ground for sophisticated deepfakes capable of spreading disinformation.
Because these tools are easily accessible, anyone can create fake videos, photos, and audio of political figures, amplifying the threat.
Just this week, India’s Election Commission issued a warning to political parties, urging them to refrain from using deepfakes and similar tactics in their online campaigns.
In response to these concerns, major tech companies, including Microsoft and OpenAI, have pledged to collaborate and develop a common framework to address AI-powered manipulation in elections.
Some companies are implementing individual safeguards within their software. Google, for example, has restricted its AI chatbot, Bard, from answering questions related to elections, while Meta, Facebook’s parent company, is taking similar steps with its chatbot.
OpenAI has also taken measures of its own, launching a new deepfake detection tool designed specifically for researchers studying disinformation.
This tool aids in identifying fake content generated by OpenAI’s own image creation software, DALL-E.
In addition, OpenAI has joined the committee of the Coalition for Content Provenance and Authenticity (C2PA), an industry body whose members already include Adobe, Microsoft, Google, and Intel.
The newly announced “Societal Resilience Fund” is another step in this collective effort towards responsible AI development. This $2 million initiative, detailed in a Microsoft and OpenAI blog post published today, aims to “further AI education and literacy among voters and vulnerable communities.”
Grants will be awarded to select organizations, including Older Adults Technology Services (OATS), the Coalition for Content Provenance and Authenticity (C2PA), the International Institute for Democracy and Electoral Assistance (International IDEA), and the Partnership on AI (PAI).
Microsoft emphasizes that these grants aim to facilitate a broader understanding of AI and its capabilities across society. OATS, for instance, plans to utilize its grant to develop training programs for Americans over 50, focusing on the “foundational aspects of AI.”
“The launch of the Societal Resilience Fund is just the beginning of Microsoft and OpenAI’s commitment to address the challenges and needs in AI literacy and education,” said Teresa Hutson, Microsoft’s Corporate VP for Technology and Corporate Responsibility, in the blog post.
“We are dedicated to this work and will continue collaborating with organizations and initiatives that share our goals and values.”
Microsoft and OpenAI’s initiative aims to mitigate the potential dangers of AI in the political sphere. By enhancing public education and fostering collaboration within the tech sector, the two companies hope to safeguard democratic processes from manipulation by AI-powered disinformation campaigns.