Twenty top technology companies and social media platforms on February 16 signed an accord at the Munich Security Conference to work together to curb harmful artificial intelligence-generated content that could interfere in elections worldwide in 2024. More than 4 billion people in over 40 countries, including India and the US, will vote this year to elect new governments.

“The accord is one important step to safeguard online communities against harmful AI content, and builds on the individual companies’ ongoing work,” the signatories said.

Experts globally have warned that AI-generated misinformation could subvert electoral processes. In India, poll-bound states last year saw deepfake videos, for instance, of sitting ministers appealing to voters to vote for the opposition. Deepfakes stoking sensitive issues were also strategically deployed in the run-up to Bangladesh’s parliamentary elections earlier this year.

Which companies are involved in the accord, and what have they pledged to do? ET explains:

Who are the signatories?
The ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’ was signed by global tech giants such as Google, Amazon, Microsoft and Meta. Top AI companies including ChatGPT-maker OpenAI, Anthropic, Inflection AI, Stability AI and ElevenLabs also signed the pact, along with social media majors X (formerly Twitter), Chinese short video company TikTok, Snap and LinkedIn.
Adobe, IBM, Arm, McAfee, Nota, Trend Micro and Truepic are also signatories.

What have they committed to?
AI-generated audio, video and images that “deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where and how they can vote” will be addressed, the signatories said in a statement.

The signatories agreed to collaborate on developing and implementing technology to mitigate risks related to deceptive AI election content, and on assessing AI models to understand the risks they may pose in elections. They would work to detect the distribution of such content and appropriately address it if found on their platforms.

The companies would further foster cross-industry resilience to deceptive AI election content, provide transparency to the public regarding how they address it, and drive public awareness of the issue.

What are the stakeholders saying?
“We can’t let digital abuse threaten AI’s generational opportunity to improve our economies, create new jobs, and drive progress in health and science,” said Kent Walker, president of global affairs at Google. “Today’s accord reflects an industry-side commitment against AI-generated election misinformation that erodes trust.”

Nick Clegg, Meta’s president of global affairs, said: “With so many major elections taking place this year, it’s vital we do what we can to prevent people being deceived by AI-generated content…This work is bigger than any one company and will require a huge effort across industry, government and civil society.”

“This is a pivotal election year … security and trust are essential to the success of elections and campaigns around the world,” said David Zapolsky, senior vice president of global public policy and general counsel at Amazon.
“We believe this accord is an important part of our collective work to advance safeguards against deceptive activity and protect the integrity of elections.”

Linda Yaccarino, CEO of X, said: “In democratic processes around the world, every citizen and company has a responsibility to safeguard free and fair elections, that’s why we must understand the risks AI content could have on the process. X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximising transparency.”