Did you notice that the image above was created by artificial intelligence? AI-generated images, videos, audio and text can be difficult to spot, as technological advances make them increasingly indistinguishable from human-created content and increasingly easy to weaponize for disinformation. But knowing the current state of the AI technologies used to create disinformation, and the range of signs that indicate what you're seeing may be fake, can help you avoid being fooled.
World leaders are concerned. According to a World Economic Forum report, misinformation and disinformation “have the potential to fundamentally disrupt electoral processes in multiple economies over the next two years,” while easier access to AI tools “has already led to an explosion in counterfeit information and so-called ‘synthetic’ content, from sophisticated voice clones to fake websites.”
While the terms misinformation and disinformation both refer to false or inaccurate information, disinformation is information that is deliberately intended to deceive or mislead.
“The problem with AI-driven disinformation is the scale, speed and ease with which it can be deployed,” says Hany Farid, a researcher at the University of California, Berkeley. “These attacks no longer require nation-state actors or well-funded organizations; any individual with modest computing power can generate large amounts of fake content.”
Farid, a pioneer in detecting media manipulated or fabricated with generative AI (see glossary below), warns that “AI is polluting our entire information ecosystem, calling into question everything we read, see, and hear,” and his research shows that AI-generated images and sounds are often “almost indistinguishable from reality.”
However, Farid and his colleagues' research reveals that there are strategies people can follow to reduce the risk of falling for social media misinformation and AI-created disinformation.
How to spot fake AI images
Remember the photo of Pope Francis wearing a down jacket? Fake AI images like this are becoming more common: with new tools based on diffusion models (see glossary below), anyone can now create images from simple text prompts. In one study, Google's Nicolas Dufour and his colleagues found that the share of AI-generated images in fact-checked misinformation claims has risen sharply since the beginning of 2023.
“Today, media literacy requires AI literacy,” says Negar Kamali at Northwestern University in Illinois. In a 2024 study, she and her colleagues identified five categories of errors common in AI-generated images (outlined below) and offered guidance on how people can spot them on their own. The good news is that their research shows people are currently about 70 percent accurate at detecting fake AI images; you can take an online image test to evaluate your own detection skills.
5 common types of errors in AI-generated images:
- Socio-cultural impossibilities: Does the scene depict behavior that would be rare, out of place or surprising for a particular culture or historical figure?
- Anatomical irregularities: Look closely. Do the hands or other body parts look unusual in shape or size? Do the eyes or mouth look strange? Are any body parts fused together?
- Stylistic artifacts: Does the image look unnatural, too perfect or too stylized? Does the background look odd or seem to be missing something? Is the lighting strange or inconsistent?
- Functional impossibilities: Are there objects that look odd, unreal or non-functional? For example, a button or belt buckle in an impossible place?
- Violations of physics: Do shadows point in different directions? Does a mirror's reflection match the world depicted in the image?
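These checks are all perceptual, but a complementary low-tech step is to inspect a suspect file's metadata, which some image generators and editing tools label. Below is a minimal Python sketch, assuming the Pillow library and a hypothetical file name; note that social media platforms routinely strip metadata, so finding nothing proves nothing.

```python
# Minimal sketch: print an image's EXIF metadata, where some generators
# leave a "Software" or similar tag. Assumes Pillow (pip install Pillow);
# the file name is hypothetical, and a clean result proves nothing.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> None:
    img = Image.open(path)
    exif = img.getexif()
    if not exif:
        print("No EXIF metadata found (common after social media re-uploads).")
        return
    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, tag_id)  # translate numeric tag IDs to names
        print(f"{tag}: {value}")

inspect_metadata("suspect.jpg")
```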
How to spot deepfakes in videos
Since 2014, an AI technology called generative adversarial networks (see glossary below) has enabled tech-savvy individuals to create video deepfakes. This involves digitally manipulating existing videos of people to swap in different faces, create new facial expressions and insert new audio with matching lip-syncing. It has allowed a growing number of fraudsters, state-sponsored hackers and ordinary internet users to produce video deepfakes, meaning celebrities such as Taylor Swift and everyday people alike can unwillingly appear in deepfake porn, scams and political misinformation and disinformation.
The tips for spotting fake AI images (see above) can also be applied to suspicious videos; one low-tech approach, sketched after the tips below, is to extract still frames and run them through that image checklist. In addition, researchers from the Massachusetts Institute of Technology and Northwestern University in Illinois have compiled a few tips for spotting such deepfakes, though they acknowledge that no method is foolproof.
6 tips to spot AI-generated videos:
- Mouth and lip movements: Are there moments when the video and audio are not perfectly in sync?
- Anatomical defects: Does the face or body look strange or move unnaturally?
- Face: Look for inconsistencies in facial smoothness, in wrinkles around the forehead and cheeks, and in facial moles.
- Lighting: Is the lighting inconsistent? Do shadows behave the way you would expect? Pay particular attention to the person's eyes, eyebrows and glasses.
- Hair: Does facial hair look or move oddly?
- Blinking: Too much or too little blinking can be a sign of a deepfake.
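As mentioned above, one simple way to apply the image checklist to a video is to pull out individual frames and study them. Here is a minimal sketch, assuming the OpenCV library (opencv-python) and a hypothetical file name, that saves roughly one frame per second for manual inspection:

```python
# Minimal sketch: save still frames from a suspicious clip so the image
# checklist above can be applied frame by frame.
# Assumes OpenCV (pip install opencv-python); the file name is hypothetical.
import cv2

def extract_frames(path: str, every_n_seconds: float = 1.0) -> None:
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        if index % step == 0:
            cv2.imwrite(f"frame_{saved:04d}.png", frame)
            saved += 1
        index += 1
    cap.release()
    print(f"Saved {saved} frames for manual inspection.")

extract_frames("suspect_video.mp4")
```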
A newer category of video deepfake is based on diffusion models (see glossary below), the same AI technology behind many image generators, which can create entirely AI-generated video clips from text prompts. Companies have already tested and released commercial versions of AI video generators, potentially making it easy for anyone to create such clips without special technical knowledge. So far, the resulting videos tend to feature distorted faces or odd body movements.
“AI-generated videos are likely easier for humans to detect than images because they contain more motion and are much more likely to have AI-generated artifacts and impossibilities,” Kamali says.
How to spot an AI bot
Social media accounts controlled by computer bots have become commonplace across many social media and messaging platforms. Many of these bots also leverage generative AI, such as the large language models (see glossary below) that have proliferated since 2022, making it easy and cheap to mass-produce grammatically correct, persuasive, customized AI-written content through thousands of bots, tailored to a variety of situations.
“It's now much easier to customize these large language models for specific audiences with specific messages,” says Paul Brenner at the University of Notre Dame in Indiana.
Brenner and his colleagues found that volunteers could distinguish AI-powered bots from humans only about 42 percent of the time, even though participants were told they might be interacting with a bot. You can test your own bot-detection skills here.
Brenner said some strategies could help identify less sophisticated AI bots.
5 ways to tell if a social media account is an AI bot:
- Emojis and hashtags: Overuse of these can be a sign (a rough way to measure this is sketched after this list).
- Unusual phrases, word choices or analogies: Uncharacteristic language can indicate an AI bot.
- Repetition and structure: Bots may repeat wording in similar or rigid formats and overuse certain slang terms.
- Ask questions: Questions can reveal a bot's lack of knowledge about a topic, especially when it comes to local places and situations.
- Assume the worst: If the social media account is not a personal contact and its identity has not been clearly verified or confirmed, it may be an AI bot.
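Several of these surface signals can be roughly quantified across an account's recent posts. Below is a minimal Python sketch using only the standard library; the signals, the toy posts and any cut-offs you might apply to the scores are illustrative guesses, not a validated detector.

```python
# Minimal sketch: quantify a few surface signals of bot-like posting,
# such as emoji and hashtag density plus repeated phrasing. Illustrative
# only; real bot detection needs far more than these heuristics.
import re
from collections import Counter

EMOJI = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")  # rough emoji ranges

def bot_signals(posts: list[str]) -> dict[str, float]:
    text = " ".join(posts)
    words = text.split()
    n = max(len(words), 1)
    hashtags = sum(w.startswith("#") for w in words)
    emojis = len(EMOJI.findall(text))
    # Repetition: count reused three-word phrases across the posts.
    trigrams = [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]
    repeats = sum(c - 1 for c in Counter(trigrams).values() if c > 1)
    return {
        "hashtag_rate": hashtags / n,
        "emoji_rate": emojis / n,
        "repeated_trigrams": float(repeats),
    }

# Hypothetical posts; a high hashtag rate and repeated phrasing stand out.
print(bot_signals(["Big news! #ai #crypto #win", "Big news! #ai #crypto"]))
```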
How to detect voice clones and audio deepfakes
Voice-cloning AI tools (see glossary below) have made it easy to generate new spoken audio that mimics virtually anyone, which has led to a rise in audio deepfake scams replicating the voices of family members, business executives and political leaders such as US President Joe Biden. These are much harder to identify than AI-generated videos or images.
“Voice clones are particularly difficult to distinguish between real and fake because there are no visual cues to help our brains make that decision,” says Rachel Tobac, co-founder of SocialProof Security, a white hat hacking organization.
Detecting these AI voice deepfakes can be difficult, especially when they're used in video or phone calls, but there are some common sense steps you can take to help distinguish between real human voices and AI-generated ones.
4 steps to recognize whether audio has been cloned or faked:
- Public figures: If the audio clip is of an elected official or public figure, review whether what they say is consistent with what has already been publicly reported or shared about that person's views or actions.
- Look for inconsistencies: Compare the audio clip with previously authenticated video or audio clips featuring the same person. Are there inconsistencies in the tone or delivery of the voice?
- Awkward silences: If you're listening to a phone call or voicemail and the speaker takes unusually long pauses while speaking, this could be a sign of AI voice cloning (a rough way to check a recording for this is sketched after this list).
- Odd and wordy: Robotic-sounding or unusually verbose speech may indicate that someone is combining voice cloning to mimic a person's voice with a large language model to generate the exact wording.
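For a saved recording, the awkward-silence cue can be checked crudely in code. Below is a minimal sketch using only Python's standard library; it assumes a 16-bit mono WAV file with a hypothetical name, and the thresholds are illustrative guesses rather than calibrated values.

```python
# Minimal sketch: flag unusually long low-energy stretches in a WAV file,
# a crude stand-in for the "awkward silence" cue described above.
# Assumes 16-bit mono audio; file name and thresholds are illustrative.
import wave
import array

def long_pauses(path: str, silence_ratio: float = 0.05,
                min_pause_s: float = 1.5) -> list[tuple[float, float]]:
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        samples = array.array("h", wf.readframes(wf.getnframes()))
    peak = max(1, max(abs(s) for s in samples))
    window = max(1, rate // 10)  # examine audio in 100 ms windows
    pauses, start = [], None
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        quiet = max(abs(s) for s in chunk) < peak * silence_ratio
        t = i / rate
        if quiet and start is None:
            start = t  # a quiet stretch begins
        elif not quiet and start is not None:
            if t - start >= min_pause_s:
                pauses.append((start, t))
            start = None
    if start is not None and len(samples) / rate - start >= min_pause_s:
        pauses.append((start, len(samples) / rate))  # quiet run at the end
    return pauses

print(long_pauses("voicemail.wav"))  # hypothetical recording
```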
Technology will continue to improve
As it stands, there are no hard-and-fast rules that can reliably distinguish AI-generated content from authentic human content. AI models that generate text, images, video and audio will surely continue to improve, allowing them to quickly produce authentic-looking content without obvious artifacts or mistakes. “Realize that, to put it mildly, AI can manipulate and fabricate images, videos and audio in under 30 seconds,” Tobac says. “This makes it easy for bad actors looking to mislead people to turn around AI-generated disinformation quickly, getting it onto social media within minutes of breaking news.”
While it's important to hone our ability to spot AI-generated disinformation and learn to ask more questions about what we read, see and hear, ultimately this alone won't be enough to stop the damage, and the responsibility for spotting it can't be placed solely on individuals. Farid is among a number of researchers who argue that government regulators should hold accountable the big tech companies that have developed many of the tools that are flooding the internet with fake, AI-generated content, as well as startups backed by prominent Silicon Valley investors. “Technology is not neutral,” Farid says. “The tech industry is selling itself as not having to take on the responsibilities that other industries take on, and I totally reject that.”
Glossary
Diffusion model: An AI model that learns by first adding random noise to data (such as blurring an image) and then reversing the process to recover the original data.
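To make the noising half of that definition concrete, here is a minimal numpy sketch; the tiny array stands in for an image, and the schedule value is an illustrative choice, not taken from any particular model. A trained diffusion model learns the reverse step, turning pure noise back into a clean image.

```python
# Minimal sketch of the forward (noising) half of a diffusion model.
# Assumes numpy; the array size and schedule value are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.uniform(-1.0, 1.0, size=(8, 8))  # stand-in for a tiny image
alpha_bar = 0.3  # cumulative noise-schedule value at some timestep

# Blend the clean image with Gaussian noise; as alpha_bar shrinks toward 0,
# x_t approaches pure noise.
noise = rng.standard_normal(x0.shape)
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# Training teaches a network to predict `noise` from `x_t`, so that at
# generation time it can start from pure noise and denoise step by step.
print(f"signal weight {np.sqrt(alpha_bar):.2f}, noise weight {np.sqrt(1.0 - alpha_bar):.2f}")
```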
Generative adversarial network: A machine learning technique based on two neural networks that compete: one generates or modifies data, while the other tries to predict whether the data it is shown is genuine or generated.
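A minimal sketch of that competition, assuming the PyTorch library; the "real" data here is a toy one-dimensional distribution, and the network sizes, learning rates and training length are illustrative.

```python
# Minimal sketch of a generative adversarial network on toy 1-D data.
# Assumes PyTorch; sizes, learning rates and step count are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(500):
    real = torch.randn(32, 1) * 0.5 + 2.0  # "real" samples from N(2, 0.5)
    fake = G(torch.randn(32, 4))           # generator turns noise into samples
    # Train the discriminator: label real as 1, generated as 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()
    # Train the generator: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()

print(f"generated mean: {G(torch.randn(256, 4)).mean().item():.2f} (target about 2.0)")
```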
Generative AI: A broad class of AI models that can generate text, images, audio, and video after being trained on similar forms of content.
Large language models: A subset of generative AI models that can produce different forms of written content in response to text prompts and, in some cases, translate between languages.
Voice clone: The use of AI models to create a digital copy of a person's voice and then generate new speech samples in that voice.
Source: www.newscientist.com