A deepfake video of Australian Prime Minister Anthony Albanese shown on a smartphone
Australian Associated Press/Alamy
A universal deepfake detector has demonstrated record accuracy in identifying a wide range of videos that have been altered or entirely generated by AI. The technology could help flag non-consensual sexual content, deepfake scams or misleading political videos created with unregulated AI.
The rise of cheap, accessible AI tools for creating deepfakes has led to a flood of synthetic videos spreading online. Many involve non-consensual depictions of women, including celebrities and students. Deepfakes are also being used to sway political elections and to run financial scams targeting ordinary consumers and corporate executives.
However, most AI models designed to spot synthetic videos focus primarily on faces, which means they excel at detecting one specific type of deepfake: videos in which a person’s face has been swapped into existing footage. “It isn’t just a single video with a manipulated face any more; we need a model capable of detecting background alterations or entirely synthetic videos,” says Rohit Kundu at the University of California, Riverside. “Our approach tackles that problem, because the entire video could be synthetically generated.”
Kundu and his colleagues developed a universal detector that uses AI to analyze both faces and background elements within a video, picking up subtle spatial and temporal inconsistencies in deepfake content. As a result, it can spot irregular lighting on people inserted into face-swapped videos, as well as discrepancies in the background details of fully AI-generated videos. The detector can even recognize AI manipulation in synthetic videos with no human faces, and it flags realistic scenes from video games such as Grand Theft Auto V, even though they are not AI-generated.
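To make the idea of jointly analyzing faces and backgrounds more concrete, here is a minimal sketch of a two-branch frame classifier in PyTorch. It is purely illustrative: the backbones, feature sizes, fusion step and the TwoBranchDeepfakeDetector name are assumptions made for this sketch, not the architecture described by Kundu’s team, and it looks at single frames only, ignoring the temporal cues the actual detector also exploits.

```python
# Illustrative sketch only: a two-branch classifier that combines features from a
# cropped face region and the full frame before deciding real vs. fake. The
# specific backbones and layer sizes are assumptions, not the published model.
import torch
import torch.nn as nn
from torchvision import models


class TwoBranchDeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # Branch 1: features from the cropped face region
        self.face_branch = models.resnet18(weights=None)
        self.face_branch.fc = nn.Identity()
        # Branch 2: features from the full frame (background, lighting, scene)
        self.frame_branch = models.resnet18(weights=None)
        self.frame_branch.fc = nn.Identity()
        # Fuse both 512-dimensional feature vectors and predict real (0) vs fake (1)
        self.classifier = nn.Sequential(
            nn.Linear(512 + 512, 256),
            nn.ReLU(),
            nn.Linear(256, 2),
        )

    def forward(self, face_crop, full_frame):
        face_feat = self.face_branch(face_crop)     # shape (batch, 512)
        frame_feat = self.frame_branch(full_frame)  # shape (batch, 512)
        fused = torch.cat([face_feat, frame_feat], dim=1)
        return self.classifier(fused)


# Usage example with random 224x224 RGB tensors standing in for video frames
detector = TwoBranchDeepfakeDetector()
face = torch.randn(4, 3, 224, 224)
frame = torch.randn(4, 3, 224, 224)
logits = detector(face, frame)
print(logits.shape)  # torch.Size([4, 2])
```

The point of the two branches is simply that evidence of manipulation can live outside the face, in lighting and background detail, which is what lets a detector of this kind handle fully synthetic videos as well as face swaps.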
“Most traditional methods focus on AI-generated facial videos, such as face swaps and lip-synced content,” says Siwei Lyu at the University at Buffalo in New York. “This new method is broader in its applications.”
The universal detector achieved an accuracy of 95% to 99% when identifying four sets of test videos featuring manipulated faces, surpassing all previously published methods for detecting this type of deepfake. In tests on fully synthetic videos, it was also more accurate than any other detector evaluated to date. The researchers presented their findings at the 2025 IEEE Conference on Computer Vision and Pattern Recognition in Nashville, Tennessee, on June 15.
Several researchers at Google also contributed to the new detector. Google did not respond to inquiries about whether the detection method could help identify deepfakes on platforms such as YouTube, but the company is among those backing watermarking tools that label AI-generated content.
The universal detector still has room for improvement. For instance, it would be useful if it could detect deepfakes used during live video conference calls, a tactic some scammers are now employing.
“How can you tell if the individual on the other end is genuine or a deepfake-generated video, even with network factors like bandwidth affecting the transmission?” asks Amit Roy-Chowdhury at the University of California, Riverside. “This is a different area we’re exploring in our lab.”
Source: www.newscientist.com