Q&A: What can be done about the proliferation of deepfakes?

Author: David Danelski
February 22, 2024

Days before the New Hampshire presidential primary, voters were targeted with robocalls featuring a deepfake message that mimicked the voice of President Biden and urged them to sit out the primary. Just a week later, the social media platform X, formerly Twitter, temporarily suspended all searches related to Taylor Swift because of the circulation of deepfake pornographic images depicting her likeness. More recently, Swift was falsely depicted in deepfake videos on X as denying the 2020 presidential election results and endorsing former President Donald Trump. (Fact check: Swift voiced support for Biden in 2020 but has yet to announce an endorsement in the 2024 race.)

The Federal Communications Commission has since outlawed robocalls that contain voices generated by artificial intelligence, and Congress is considering a bill that would prohibit the non-consensual disclosure of digitally altered intimate images. But with the proliferation of publicly available AI-assisted image generation tools, many fear we are seeing just the beginning of a wave of malicious deepfake content in public discourse, while lawmakers are slow to catch up.

Amit Roy-Chowdhury

In response to this problem, we sought insights from UC Riverside experts Amit Roy-Chowdhury, a professor of electrical and computer engineering leading UCR’s Video Computing Group; Emiliano De Cristofaro, a professor of computer science and engineering whose work includes understanding and countering socio-technical issues on the web; and Kevin M. Esterling, a professor of public policy and political science and the director of UCR’s Laboratory for Technology, Communication and Democracy.

Question: With the proliferation of AI software capable of generating images and sound, do you anticipate a worsening trend in malicious deepfake usage in the upcoming months and years?

Esterling: Yes.

Roy-Chowdhury: Yes, in fact, I think it is one of the biggest risks from AI.

Emiliano De Cristofaro

Q: Can a technical solution address this problem? For instance, could social media platforms like X, Facebook, and YouTube deploy software to identify and prevent the posting of deepfakes?

De Cristofaro: In theory, yes – there already are tools that propose to identify deepfakes. Typically, they work by trying to spot small “mistakes” and inconsistencies in the deepfakes that might not be noticeable to humans (e.g., ears that look a bit strange, a missing finger, etc.). In practice, however, it is unclear how accurate these tools, which are typically tested in controlled environments, are against actors who 1) are aware of how detection algorithms work and 2) actively try to evade them by investing significant time and resources. More worryingly, inconsistencies are inherently harder to spot in audio than in video.

Kevin Esterling

In any case, these technical solutions would require non-negligible effort and substantial resources from social media platforms, as well as maintaining large amounts of “ground truth” content for each potential victim. While that might be practical for very high-profile figures like U.S. presidential candidates, it is less so at scale, and it is unlikely to happen anyway without strict regulations forcing the platforms’ hand.

Roy-Chowdhury: This is much harder than it sounds. First, developing tools to detect fake content may not be easy. Second, this is a cat-and-mouse game: if a detection tool is developed, methods to evade it will inevitably follow. No system is perfectly secure, and none may ever be. Much also depends on what the adversary knows about the system (in technical terms, this is the distinction between white-box and black-box attacks).

Q: How can social media platforms enhance mechanisms to forestall the dissemination of malicious deepfakes intended to mislead, harm, or bully individuals?

De Cristofaro: The platforms can and should do more to proactively identify harmful and misleading content by deploying a mix of automated tools and human moderators. For instance, many of these nefarious activities entail coordinated actions and identifiable behaviors by groups of actors targeting victims; these patterns can be spotted by AI. Also, bullying and harmful behaviors are typically reported by users relatively quickly, but early human intervention is key to forestalling dissemination. Human moderation is costly and often exploits overworked moderators in developing nations who are paid very low wages – arguably, platforms need to spend more.

Roy-Chowdhury: I believe we need a human-in-the-loop system, at least in the short term. We can develop methods to detect potentially problematic content, but they will produce false alarms (and missed detections). Filtering out the false alarms may require human analysis of the content. Missed detections can be minimized by setting a low detection threshold, but that raises more false alarms and calls for more human intervention. Also, humans can provide context and trace the sources of information, which is still challenging to automate.
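To make that trade-off concrete, here is a minimal sketch (not part of the interview; the detector scores and distributions below are invented purely for illustration) of how moving a detection threshold trades missed detections for false alarms:

    # Toy illustration of the threshold trade-off described above.
    # Scores are hypothetical: higher means "more likely a deepfake".
    import random

    random.seed(0)

    # Simulated detector scores for genuine and fake items (invented distributions).
    real_scores = [random.gauss(0.3, 0.15) for _ in range(1000)]
    fake_scores = [random.gauss(0.7, 0.15) for _ in range(1000)]

    def error_rates(threshold):
        """Count false alarms (real content flagged as fake) and missed
        detections (fake content passed as real) at a given threshold."""
        false_alarms = sum(score >= threshold for score in real_scores)
        missed = sum(score < threshold for score in fake_scores)
        return false_alarms, missed

    for threshold in (0.4, 0.5, 0.6):
        fa, miss = error_rates(threshold)
        print(f"threshold={threshold:.1f}: {fa} false alarms, {miss} missed detections")

Lowering the threshold catches more fakes but flags more legitimate content, which is why human reviewers remain necessary to sort through the false alarms.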

Q: What can government officials do to compel the social media platforms to better police their sites to prevent the posting of malicious and damaging content?

Esterling: Currently, the U.S. federal government does not have much capacity to regulate social media content. Social media platforms are protected under Section 230 of the Communications Decency Act from legal responsibility for the spread of misinformation. Each platform has content moderation guidelines and policies, such as X’s Civic Integrity Policy and Facebook’s Community Standards, but these policies are only guidelines, and platforms are not required to enforce them. Indeed, Texas and Florida have recently passed laws that affirmatively prevent platforms from blocking content based on the viewpoint expressed. It is not clear that the federal government could force social media companies to moderate their content, given that some courts have held that the companies have a First Amendment right to decide what content is posted and displayed on their platforms.

Q: What steps can people take to prevent themselves from being misled or influenced by fake content on social media or other public platforms?

De Cristofaro: My advice is to try to independently verify content on social media. Is the source reliable? Is this the only source reporting this content? What does a Google search return about this? Don’t take the content at face value. In the case of deepfakes, try to look for inconsistencies, things that might look a bit odd.

Roy-Chowdhury: People should try to understand the source of the information and corroborate it across multiple sources. A simple Google search can often provide clues as to whether the information is reliable. If something seems too good to be true, think about it carefully. In many ways, this is no different from dealing with rumors and gossip, except that the volume of material can be overwhelming.
