Artificial intelligence companies have been at the vanguard of developing the transformative technology. Now they are also racing to set limits on how AI is used in a year stacked with major elections around the world.

Last month, OpenAI, the maker of the ChatGPT chatbot, said it was working to prevent abuse of its tools in elections, partly by forbidding their use to create chatbots that pretend to be real people or institutions. In recent weeks, Google also said it would limit its AI chatbot, Bard, from responding to certain election-related prompts “out of an abundance of caution.” And Meta, which owns Facebook and Instagram, promised to better label AI-generated content on its platforms so voters could more easily discern what material was real and what was fake.

On Friday, 20 tech companies – including Adobe, Amazon, Anthropic, Google, Meta, Microsoft, OpenAI, TikTok and X (formerly known as Twitter) – signed a voluntary pledge to help prevent deceptive AI content from disrupting voting in 2024. The accord, announced at the Munich Security Conference, included the companies’ commitments to collaborate on AI detection tools and other actions, but it did not call for a ban on election-related AI content.

Anthropic also said separately Friday that it would prohibit its technology from being applied to political campaigning or lobbying. In a blog post, the company, which makes a chatbot called Claude, said it would warn or suspend any users who violated its rules. It added that it was using tools trained to automatically detect and block misinformation and influence operations.

“The history of AI deployment has also been one full of surprises and unexpected effects,” the company said. “We expect that 2024 will see surprising uses of AI systems — uses that were not anticipated by their own developers.”

The efforts are part of a push by AI companies to get a grip on a technology they popularized as billions of people head to the polls. At least 83 elections around the world, the largest concentration for at least the next 24 years, are anticipated this year, according to Anchor Change, a consulting firm. In recent weeks, people in Taiwan, Pakistan and Indonesia have voted, with India, the world’s biggest democracy, scheduled to hold its general elections in the spring.

How effective the restrictions on AI tools will be is unclear, especially as tech companies press ahead with increasingly sophisticated technology. On Thursday, OpenAI unveiled Sora, a technology that can instantly generate realistic videos. Such tools could be used to produce text, sounds and images in political campaigns, blurring fact and fiction and raising questions about whether voters can tell what content is real.

AI-generated content has already popped up in U.S. political campaigning, prompting regulatory and legal pushback. Some state legislators are drafting bills to regulate AI-generated political content.
Last month, New Hampshire residents received robocall messages dissuading them from voting in the state primary in a voice that was most likely artificially generated to sound like President Joe Biden. The Federal Communications Commission last week outlawed such calls. “Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters,” Jessica Rosenworcel, the FCC’s chair, said at the time.

AI tools have also created misleading or deceptive portrayals of politicians and political topics in Argentina, Australia, Britain and Canada. Last week, former Prime Minister Imran Khan, whose party won the most seats in Pakistan’s election, used an AI voice to declare victory while in prison.

In one of the most consequential election cycles in memory, the misinformation and deceptions that AI can create could be devastating for democracy, experts said. “We are behind the eight ball here,” said Oren Etzioni, a professor at the University of Washington who specializes in AI and a founder of True Media, a nonprofit working to identify disinformation online in political campaigns. “We need tools to respond to this in real time.”

Anthropic said in its announcement Friday that it was planning tests to identify how its Claude chatbot could produce biased or misleading content related to political candidates, political issues and election administration. These “red team” tests, which are often used to break through a technology’s safeguards to better identify its vulnerabilities, will also explore how the AI responds to harmful queries, such as prompts asking for voter-suppression tactics.

In the coming weeks, Anthropic is also rolling out a trial that aims to redirect U.S. users who have voting-related queries to authoritative sources of information such as TurboVote from Democracy Works, a nonpartisan nonprofit group. The company said its AI model was not trained frequently enough to reliably provide real-time facts about specific elections.

Similarly, OpenAI said last month that it planned to point people to voting information through ChatGPT, as well as label AI-generated images. “Like any new technology, these tools come with benefits and challenges,” OpenAI said in a blog post. “They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.” (The New York Times sued OpenAI and its partner, Microsoft, in December, claiming copyright infringement of news content related to AI systems.)

Synthesia, a startup with an AI video generator that has been linked to disinformation campaigns, also prohibits the use of its technology for “newslike content,” including false, polarizing, divisive or misleading material. The company has improved the systems it uses to detect misuse of its technology, said Alexandru Voica, Synthesia’s head of corporate affairs and policy.

Stability AI, a startup with an image-generator tool, said it prohibited the use of its technology for illegal or unethical purposes, worked to block the generation of unsafe images and applied an imperceptible watermark to all images.

The biggest tech companies have also weighed in beyond the joint pledge in Munich on Friday. Last week, Meta also said it was collaborating with other firms on technological standards to help recognize when content was generated with AI.
Before the European Union’s parliamentary elections in June, TikTok said in a blog post Wednesday that it would ban potentially misleading manipulated content and require users to label realistic AI creations. Google said in December that it, too, would require video creators on YouTube and all election advertisers to disclose digitally altered or generated content. The company said it was preparing for the 2024 elections by restricting its AI tools, like Bard, from returning responses for certain election-related queries.

“Like any emerging technology, AI presents new opportunities as well as challenges,” Google said. AI can help fight abuse, the company added, “but we are also preparing for how it can change the misinformation landscape.”