Detecting deepfakes should not be the sole responsibility of platforms: Sam Gregory

In 2024, elections will take place in more than 40 countries that are home to over 40% of the world's population. A study from the World Economic Forum says AI-driven misinformation could interfere with electoral processes in various countries, including India. Sam Gregory, executive director of Witness, a global human rights organisation that uses video and technology to expose human rights abuses and has worked on the threats of AI and deepfakes, spoke to ET about AI-related challenges and strategies in elections. Edited excerpts:

Can you quantify the risk of deepfakes in elections?

We are entering a challenging moment in terms of deepfakes. This year, we have made technical progress that makes it easier and cheaper to make them. We are going to have an election year where synthetic media tools will be used for positive purposes such as voter outreach, and for negative ones. The early signs are challenging if we look at the elections that have taken place in Pakistan and Bangladesh, as well as the forthcoming ones in the US, India, the EU, the UK and South Africa.

What do these early signs indicate?

We see a sort of pervasive wave creeping into political and unofficial campaigning and into society. In Slovakia, there was a fake audio call simulating the voice of a candidate in the last days of an election. It is trivially easy to make fake audio. Also, there is an equity gap in access to detection tools. This means journalists, fact checkers and election officials don't have access to tools that can detect it.

How can we bridge the equity gap in detection tools?

It's critical to build the capacity of journalists, civil society and election bodies to do basic detection of deceptive AI, using emerging tools and existing skillsets such as the ability to track down the original of an AI-manipulated video or image (like a reverse image search).

Has there been any study on AI inhibiting or influencing voters?

We don't have strong empirical studies on the impact of manipulated media, nor on the impact of synthetic media on our broader understanding of trust and truth. The indications are that these tools have an impact, particularly when they are used in smart ways by political actors and in specific instances, like just before an election. We need to be careful, because one of the risks is people claiming that something has been made with AI in order to dismiss real content.

Twenty tech companies have promised to work on developing AI detection tools. Is it sufficient?

The agreement sets a floor rather than a ceiling. It raises the bar on what they are voluntarily committing to across the sector, which is good. They say they are going to try to standardise and make available ways to understand how AI is used in making synthetic media and deepfakes. A consumer or a regulator will be able to see much more easily how AI is used in a piece of content. These systems are not yet fully evolved, and they are not going to be deployed across the ecosystem this election year. But at least these companies are going to make a commitment to detection.
There are a few critical intervention points for companies, but we also need regulation to reinforce these. When I look at a video, how do I know that AI was used? Either because the content is labelled, visibly disclosed, or carries information within the media that can show me what happened. We also need a regulatory environment. We need to know what is banned in this space, what is permitted and how this is reinforced across the ecosystem. These systems don't work if [social media] platforms alone are applying these 'provenance' signals of how AI was used, without the participation of the people who develop or deploy AI models in embedding those signals. Then we don't have effective detection or transparency, or the ability to hold bad actors to account. The responsibility of the government is to ensure that we have accountability across the AI pipeline.

Has any government or regulatory body made meaningful strides to combat deepfakes?

There are two examples. One is a negative example: China has passed a range of regulations targeting those who make deepfakes and demanding that identity be linked to media creation. That is a dangerous principle in terms of freedom of speech. They are also targeting satirical speech, which is a form of political expression. The other example is a more democratic one, the EU AI Act, which looks particularly at deepfakes and says we need an obligation on the deployers of these systems to label and disclose that AI has been used, and to do that within the bounds of freedom of expression, satirical speech and artistic speech. That will take a significant amount of time to be implemented, but at least it tries to lay out a framework for how we can have this disclosure. How do we do that while respecting human rights?

India will soon have elections. What can the government do to prevent the dissemination of deepfakes?

In India, there was a rush towards deepfakes-related legislation last year. We need to be careful, because when we craft legislation, we want to make sure we are not crippling the ability of people to communicate, and that we are not targeting people who may use these tools for dissident purposes or for legitimate satirical or political purposes. There are laws in most countries around creating or sharing non-consensual sexual images, and in many countries there are 'fake news' laws on misinformation. It should be an easy step to update existing laws to cover synthetic sexual imagery created without consent. A number of countries, including the UK and Australia, are doing it. There is also an evolving standard for disclosing how AI is used. The Coalition for Content Provenance and Authenticity, or C2PA [a project to certify the source and history of content to address misinformation], is the emerging standard for provenance: how AI is used in editing and distribution, and how it is combined with human-made content. It's important for governments to understand how they might use standards like that while respecting human rights and privacy. Sometimes public education on deepfakes focuses on spotting glitches, like six fingers on a hand. That does not work; these are just temporary flaws.

Have detection tools caught up with the technology we have now?

Detection is never going to be 100% effective. It may be 85-90% effective in the best case. Detection tools are flawed because they often work well on only one technique. Once detection tools are available, malicious actors learn to test their fakes against them, and the tools lose their effectiveness.
Detection is inherently adversarial, a contest between the detector and attempts to fool it. Detectors can be fooled by counter-forensics. That's why it is important to have skills around detection tools and to use a range of them. Preventive measures such as watermarking are applied at the stage of creating or sharing AI-generated content, either as visible watermarks or as metadata. However, these can be removed with some effort. These strategies should be viewed as ways to mitigate harm, recognising that bad actors may circumvent them. This necessitates a balance between encouraging the use of protective measures and the imperfect but necessary process of detection.

It also means we shouldn't ask platforms alone to do that. Some legislative proposals say platforms should detect all AI. That is a terrible idea. These platforms don't do a great job of content moderation at a global scale, and they won't be able to detect AI reliably at scale. It will lead to all kinds of false positives and negatives that will undermine our confidence in communication more broadly. It is a responsibility and a power we shouldn't put solely in the hands of platforms.
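As a rough illustration of the "basic detection" skillset Gregory mentions, tracking down the original of a manipulated image works much like a reverse image search: compare the suspect file against an archive of verified footage using perceptual hashes, which tolerate resizing and re-encoding better than exact file hashes. The sketch below is a minimal illustration, assuming the open-source Pillow and imagehash Python packages; the file paths and archive are hypothetical placeholders, not part of any tool Gregory names.

```python
# Minimal sketch: find the closest verified original to a suspect image
# using perceptual hashing (assumes `pip install pillow imagehash`).
from pathlib import Path

import imagehash
from PIL import Image


def nearest_original(suspect_path: str, archive_dir: str, max_distance: int = 10):
    """Return the archived image whose perceptual hash is closest to the suspect."""
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    best_match, best_distance = None, max_distance + 1
    # Only JPEGs are scanned here for simplicity; a real archive would cover more formats.
    for candidate in Path(archive_dir).glob("*.jpg"):
        distance = suspect_hash - imagehash.phash(Image.open(candidate))  # Hamming distance
        if distance < best_distance:
            best_match, best_distance = candidate, distance
    return best_match, best_distance


match, distance = nearest_original("suspect.jpg", "verified_archive/")
if match is not None:
    print(f"Closest verified original: {match} (Hamming distance {distance})")
else:
    print("No close match; the image may be novel or heavily manipulated.")
```

A close match lets a fact checker compare the suspect frame against the verified original side by side; no match proves nothing on its own, which is why Gregory stresses using a range of tools and skills.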
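The C2PA provenance signals discussed in the interview travel as metadata embedded in the file (Content Credentials carried in JUMBF boxes). One non-authoritative way to check whether a file carries any such metadata is to dump its tags with a general-purpose tool such as exiftool, as sketched below; the keyword list is an assumption about how such tags tend to be named, and, as Gregory notes, absence of metadata proves nothing because these signals are easily stripped.

```python
# Minimal sketch: surface any provenance-related metadata exiftool can read
# from a file (assumes the exiftool command-line tool is installed).
import json
import subprocess


def provenance_hints(path: str) -> dict:
    """Return metadata tags whose names suggest embedded provenance information."""
    raw = subprocess.run(
        ["exiftool", "-json", path], capture_output=True, text=True, check=True
    )
    tags = json.loads(raw.stdout)[0]
    # Keyword list is illustrative; real C2PA manifests need a dedicated verifier
    # to check signatures, not just a metadata dump.
    keywords = ("JUMBF", "JUMD", "C2PA", "ContentCredentials", "DigitalSourceType")
    return {k: v for k, v in tags.items() if any(key in k for key in keywords)}


print(provenance_hints("video_frame.jpg") or "No provenance metadata found.")
```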
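Gregory's caution about asking platforms to detect all AI at scale follows from simple base-rate arithmetic: even a detector in the 85-90% range produces an enormous number of false flags when only a small fraction of uploads are actually synthetic. The figures below are illustrative assumptions, not platform data.

```python
# Back-of-the-envelope illustration of false positives at platform scale.
daily_uploads = 100_000_000      # assumed uploads screened per day
synthetic_share = 0.001          # assume 0.1% are actually AI-manipulated
true_positive_rate = 0.90        # detector catches 90% of real fakes
false_positive_rate = 0.05       # and wrongly flags 5% of authentic posts

fakes = daily_uploads * synthetic_share
authentic = daily_uploads - fakes

caught = fakes * true_positive_rate
false_alarms = authentic * false_positive_rate
precision = caught / (caught + false_alarms)

print(f"Fakes caught per day:    {caught:,.0f}")        # 90,000
print(f"Authentic posts flagged: {false_alarms:,.0f}")  # ~5,000,000
print(f"Share of flags that are actually fake: {precision:.1%}")  # ~1.8%
```

Under these assumed rates, fewer than one in fifty flagged posts would actually be synthetic, which is the kind of outcome Gregory warns would erode confidence in communication more broadly.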

