The New York mayoral election will likely be remembered not just for the impressive win of a young democratic socialist but also for a trend that could shape future campaigns: the rise of AI-generated campaign videos.
Andrew Cuomo, who lost last week to Zohran Mamdani, notably circulated deepfake videos of his opponent, one of which drew accusations of racism.
Although AI has been used in political campaigns before, primarily in algorithms that target voters or generate policy ideas, it has now evolved into a tool for creating sometimes misleading imagery and video.
“What was particularly innovative this election cycle was the deployment of generative AI to produce content aimed directly at voters,” said New York State Assemblymember Alex Bores, who advocates for regulations governing AI use.
“Whether it was the Cuomo team creating a housing plan with ChatGPT or AI-generated video ads targeting voters, the 2025 campaign cycle felt like an unprecedented approach.”
Incumbent Mayor Eric Adams, who exited the race in September, also leveraged AI, using it to generate a robocall, featured in The New Yorker, in which he speaks Mandarin, Urdu, and Yiddish. An AI video from his camp depicted a dystopian New York and took aim at Mamdani.
In a controversial episode, Mr. Cuomo faced allegations of racism and Islamophobia after his campaign shared a video depicting a fabricated Mamdani eating rice with his fingers, alongside an unrelated depiction of a black man shoplifting. Another campaign video featured a black man in a purple suit appearing to endorse sex trafficking; it was later deleted, with the campaign calling its publication an error.
Bores, who is running for a House seat, remarked that many AI-generated ads from the recent election cycle veered into territory that could be deemed bigoted.
“We need to ask whether this happens because algorithms perpetuate stereotypes in their training data, or simply because it is easier to fabricate content digitally than to coordinate specific scenes with actors,” Bores said.
“Digital creation makes it easier to produce content that would be frowned upon by polite society,” he added.
In New York, campaigns are required to label AI-generated ads, but several, including one from Mr. Cuomo, failed to do so. The New York State Board of Elections oversees potential violations, but Bores pointed out that campaigns may simply accept the risk of penalties, since the cost of a fine can be outweighed by the gains of winning.
“There will likely be campaigns willing to take that risk: if they win, the post-election fines become irrelevant,” Bores said. “We need an enforcement mechanism that can intervene quickly before an election to minimize the damage, rather than simply imposing penalties afterward.”
Robert Weissman, co-president of Public Citizen, a nonprofit that has backed AI regulations across the country, noted that in more than half of US states it is illegal to attempt to deceive the public in this way and that campaigns must label AI-generated material as such. He cautioned, however, that the regulation of AI in politics remains far from settled.
“Deception has historically been part of politics, but the implications of AI-generated misinformation are particularly concerning,” Weissman explained.
“When audiences are shown a convincingly authentic video of someone making a statement, it becomes incredibly difficult for that person to refute it; they are essentially forced to ask viewers to disbelieve their own eyes.”
AI technology can now generate convincing videos, but execution weaknesses remain. A “Zohran Halloween Special” video released by Cuomo was clearly labeled as AI-generated, yet featured a poorly rendered likeness of Mamdani with mismatched audio and nonsensical dialogue.
With midterm elections on the horizon and the 2028 presidential campaign approaching, AI-generated political videos are poised to become a fixture in the landscape.
At the national level, the trend is already evident. Elon Musk shared an AI-generated video in which Kamala Harris appeared to describe herself as a de facto presidential candidate who “knows nothing about running a country.”
While states are advancing efforts to regulate AI's role in elections, there appears to be little appetite for such measures at the federal level.
During the No Kings protests in October, Donald Trump released an AI-generated video depicting himself in a fighter jet dumping brown liquid on protesters, one of his most recent pieces of AI content.
With President Trump’s evident support for this medium, it appears unlikely that Republicans will seek to impose restrictions on AI anytime soon.
Social media has always been a funhouse mirror of society at large. The algorithms and amplification of our always-on online presence highlight the worst parts of our lives while obscuring the best. This is partly why we are so polarized today, with two tribes screaming at each other on social media across a gaping chasm of despair.
This is what makes an announcement from one of the tech giants this week so alarming. Abandon hope, all ye who enter here. With less than two weeks until Donald Trump returns to the White House for his second term as US president, Meta, the parent company of Facebook, WhatsApp, Instagram and Threads, announced major changes to its content moderation. In doing so, it has aligned itself with the president-elect's views.
Meta CEO Mark Zuckerberg announced in a bizarre video message posted to his Facebook page on Tuesday that the platform would be eliminating fact-checkers. Their replacement? Mob rule.
Zuckerberg said the platform, whose apps are logged into by more than 3 billion people around the world every day, plans to adopt Elon Musk-style community notes to police what is and is not acceptable speech. Starting in the United States, the company will in effect shift the Overton window dramatically toward whoever can shout the loudest.
Meta's CEO all but acknowledged that the move was politically motivated. “It's time to get back to our roots around freedom of expression,” he said, adding that restrictions on topics like immigration and gender were “out of touch with mainstream discourse.” He acknowledged past “censorship mistakes,” by which he presumably meant the moderation of political speech during the past four years of a Democratic presidency, and added that the company would “work with President Trump to push back on foreign governments going after American companies and pushing to censor more.”
The most dog-whistle moment was a throwaway line announcing that Meta's remaining trust and safety and content moderation teams would be moved out of liberal California, with its US content moderation arm now based in solidly Republican Texas. The only thing missing from the video was Zuckerberg in a MAGA hat with a shotgun.
To be clear: every business leader makes calculated decisions based on the political climate, and few storms are as violent as the approaching Hurricane Trump. But few people's decisions matter as much as Mark Zuckerberg's.
Over the past 21 years, Meta's CEO has become a central figure in society. He started out overseeing a website for college students; now billions of people from all walks of life use his platforms. What began in the early 2000s as an eccentric pursuit of online fun is now, in Elon Musk's words, the de facto public town square. Where Meta goes, the world follows, online and offline. And Meta has just decided to yank the handbrake and swerve hard to the right.
Don't take my word for it; trust the watchdogs. “Today's announcement from Meta is a retreat from a healthy and safe approach to content moderation,” the Real Facebook Oversight Board, an independent group that positions itself as a watchdog of Meta's moves, said in a statement.
They would say that, because if we have learned one thing from a decade of social media polarization, it is that the angriest voice wins the argument. Anger and lies spread readily on social media, contained only in part by platforms' willingness to intervene when things get out of hand. (Recall that exactly four years ago, Meta suspended Donald Trump from Facebook and Instagram for two years for inciting the violence that stormed the Capitol on January 6, 2021.)
Social networks have always struggled to police speech on their platforms: whatever they decide, they are sure to anger half the population. The platforms have long argued that effective moderation is a problem of scale, yet it is a problem of their own making, created by pursuing growth at all costs while chronically underinvesting in moderation.
To be sure, policing online speech is difficult, and the scale at which companies like Meta attempt content moderation does not work well. But abandoning moderation entirely in favor of community notes is not the answer, and presenting the change as a rational, evidence-based decision masks the reality: it is a politically expedient move by a man who this week accepted the resignation of the self-described “radical centrist” Nick Clegg as head of global policy, replaced him with a Republican-leaning successor, and appointed Dana White, the CEO of Ultimate Fighting Championship and a close Trump ally, to Meta's board of directors.
In many ways, you cannot blame Zuckerberg for bending the knee to Donald Trump. The problem is that his decisions have enormous consequences.
This is an extinction-level event for the idea of objective truth on social media. The creature was already on life support, but it was hanging on in part because Meta funded independent fact-checking organizations that tried to keep some element of social media honest, authentic and free of political bias. No longer. Night is day. Up is down. Meta is X. Mark Zuckerberg is Elon Musk. Strap in for four tumultuous, bitter and fact-free years online.
I considered leaving Twitter shortly after Elon Musk bought it in 2022: I didn't want to be part of a community that could be bought, least of all by a guy like him. Soon, the nasty “long hours at high intensity” bullying of staff began. But I have had some of the most interesting conversations of my life on Twitter, at random, just hanging out or being drawn into a conversation. “Has anyone else been devastatingly lonely during the pandemic?” “Has anyone ended up in a relationship with a boyfriend or girlfriend from middle school?” We used to call Twitter the place you tell the truth to strangers (Facebook being the place you lie to friends), and the breadth of it was reciprocal and wonderful.
After the blue-check fiasco, things got more unpleasant still: identity verification became something you could buy, which made it less trustworthy. So I joined a rival platform, Mastodon, but quickly realized I would never have the 70,000 followers there that I had on Twitter. It wasn't the attention I missed, exactly, but my feed was less diverse and less lively, and my infrequently updated timeline gave me the eerie, slightly depressing feeling of walking into a mall to find half the stores closed and the rest all selling the same thing.
In 2023, the network now known as X began sharing advertising revenue with “premium” users. I then joined Threads (owned by Meta), where all I seem to see is strangers confessing to petty misdemeanors. I stayed on X, where everything is darker. People are paid for engagement, indirectly, through ads, and the terms are vague: it is described as “revenue sharing,” but X does not tell you which ad revenues were shared with you, so you cannot measure revenue per impression. Is X splitting it 50/50? 10/90? Is it, in effect, paying people to generate hate?
Elon Musk: “steeped in far-right politics.” Photograph: Getty Images
“What we've seen is that controversial content drives engagement,” says Ed Saperia, dean of the London College of Political Technologists. “Extreme content drives engagement.” It has become possible to make a living creating harmful content. My 16-year-old son noticed this long before I did, on football X: people will say obviously wrong things to farm hate-clicks (comparing David Cameron to Catherine the Great, say), but that is nothing compared with the engagement you get from attacking, for instance, transgender people. Provocative posts are pushed straight to the top of the “for you” feed by a “black-box algorithm designed to keep you scrolling,” says Rose Wang, COO of another rival, Bluesky, serving up a constant stream of the repetitive topics designed to enrage.
As a result of these changes, “the platform has become flooded with people who were previously banned, ranging from extremely niche accounts to figures like Tommy Robinson and Andrew Tate,” says Joe Mulhall, head of research at Hope Not Hate. We saw the impact of this in August, when misinformation about the identity, ethnicity and religion of the killer of three girls in Southport sparked overtly racist unrest across the UK of a kind not seen since the 1970s. “Not only was X responsible for creating an atmosphere conducive to rioting, it was also a central hub for the organisation and distribution of content that led to the riots,” says Mulhall.
Wayne O'Rourke, a self-styled “keyboard warrior” convicted of inciting racial hatred on social media after the August race riots, was reportedly making £1,400 a month from his activity on X. The actor turned agitator Laurence Fox said last month that he earns a similar amount posting on X. O'Rourke had 90,000 followers; Tommy Robinson has more than a million, and presumably earns far more.
Meanwhile, governments have no surefire remedy, even when, as Mulhall puts it, “decisions made on the US west coast clearly impact our communities.” In April, Brazil's supreme court sought the suspension of fewer than 100 X accounts for hate speech and fake news, many belonging to supporters of former president Jair Bolsonaro who had challenged the legitimacy of his defeat by Luiz Inácio Lula da Silva. X refused, and declined to defend itself in court. On Monday, Brazil's supreme court unanimously upheld a nationwide ban on the platform, saying it “considers itself above the rule of law.” From a business perspective, it is surprising that Musk did not try harder to avoid the ban, but there may be things he values more than money, such as freedom from governmental and democratic constraint.
Tommy Robinson, whose ban from X Musk rescinded. Photograph: James Manning/PA
So is it moral to remain on a platform that has done so much to bring the politics of division and hate from our keyboards into real life? Is X worse than Facebook or TikTok or (gasp!) YouTube? And is it bad by design? In other words, are we watching Musk's master plan unfold?
“This is not the first time extremist content has circulated online,” Saperia says. “There are a lot of bad platforms, and a lot of bad things happening on them.” X's problem may not be bad regulation, he points out, but bad enforcement, and that is not unique to X: “Have you seen the UK court system these days? Cases from five years ago are only now being tried. Without enforcement of the law, society becomes impossible.”
While X may be a catalyst for inciting and rallying civil unrest, from the January 6 storming of the US Capitol to Southport and beyond, Saperia says it is important to remember that “politics is shifting rightward not just because of the media environment, but also for complex economic reasons: the middle-class west is getting poorer.” Donald Trump may have shocked the traditional US media by speaking directly to voters with his crude and increasingly unhinged messaging, but it is naive to think that a contented public, confident of a prosperous future, would have embraced his authoritarian moves. Whether social media monetizes it or not, the anger is there, and “all the mainstream platforms have generally failed at hate speech,” Mulhall says. “They didn't want this content, but they were struggling to deal with it. And they made some progress after Charlottesville [the white supremacist rally in 2017] and Capitol Hill.”
Still, Hope Not Hate divides far-right online activity into three strains: mainstream platforms like X, Instagram and Facebook, which have no interest in fascism but struggle to eradicate it and arguably underinvest in moderation; hijacked platforms like Discord and Telegram, which began as chat and messaging services and became favorites of the far right, probably because of their stronger privacy or encryption; and bespoke platforms such as Rumble (partly funded by the fundamentalist libertarian billionaire Peter Thiel), Gab (which became a hub of mainly antisemitic hate after the gunman in the 2018 Pittsburgh synagogue shooting posted his manifesto there) and Parler (which Kanye West agreed to buy in 2022, in a deal that later collapsed, after he was banned from Instagram and Twitter for antisemitism).
Composite: Guardian Design; X
“Twitter is an anomaly,” Mulhall says. “It's ostensibly a mainstream platform, but it now has its own moderation policies, and Elon Musk himself is steeped in far-right politics, so it behaves like a bespoke platform. That's what makes it so different, and so much more harmful, so much worse. And it's also because, although it has terms of service, it doesn't necessarily enforce them.”
Musk's commitment to free speech is surprisingly unconvincing. He invoked it to defy the Brazilian court's demands, but was happy to oblige Narendra Modi's government in India, suspending hundreds of accounts linked to the Indian farmers' protests in February. “Free speech is a tool for Musk, not a principle,” Mulhall says. “He's a techno-utopian with no attachment to democracy.”
But global civil society finds it very difficult to dismiss the free-speech argument out of hand, because the counterargument is so dark: that many billionaires, not just Musk but Rumble's Thiel, Parler's original backer Rebekah Mercer (daughter of the Breitbart funder Robert Mercer) and, indirectly, sovereign billionaires like Putin, have succeeded in transforming society and destroying the trust we have in each other and in institutions. It is much more comfortable to think they are doing it by accident, simply because they love “free speech,” than to think they are doing it deliberately. “The key to understanding neo-reactionary and ‘dark enlightenment’ movements is that these individuals have no interest whatsoever in maintaining the status quo,” says Mulhall.
“In some jurisdictions, the actions of state rulers and billionaires are pretty closely correlated,” Saperia says. Russia is the obvious case. “Putin is using the state to manipulate social media to create polarization. That's pretty much proven,” Mulhall says. But where tech and politics don't line up, politics rarely prevails; governments seem largely powerless in the face of the tech giants. “Racial hatred and attempted murder are being nurtured on these platforms,” Mulhall says. “And people don't even believe it's possible to get Musk in front of Congress.”
Andrew Tate leaves court in Bucharest. Photograph: Alexandru Dobre/AP
In Paris, Telegram founder Pavel Durov is under formal investigation over allegations that the app facilitates organized crime, and Musk has been named as a defendant in a cyberbullying lawsuit brought by the Olympic gold medallist Imane Khelif. The boxer, who was born female and has never identified as transgender or intersex, faced defamatory claims about her gender on X from a number of public figures, including the author J.K. Rowling and Donald Trump. Meanwhile, Andrew Tate has been charged by Romanian authorities with human trafficking and rape, charges he denies, while his online misogynist fantasy of treating women as a slave class resonates around the world. YouTube, Instagram, TikTok and Facebook banned him from their platforms in condemnation, but his freedom to operate on X has blunted those bans' impact and helped get some of them reversed. The EU has at least been more successful than the US in holding social media giants to the same corporate responsibility as, say, pharmaceutical or oil companies, but regulation is still scrambling to keep up with a sector whose effects are moving from the virtual world into the real one at an ever-increasing rate.
But governments don't need to step in and tell us to stop using X; we can do it ourselves. Brazilians deprived of X are migrating to Bluesky, which grew out of a project Twitter co-founder Jack Dorsey launched in 2019. “It's been a tumultuous four days alone: as of this morning, we've added nearly 2 million new users,” Bluesky's Wang said on Monday. If we all did that (I did!), would the power of X dissipate? Or would the internet simply divide into good places and bad ones?
Bluesky serves a similar purpose to X but is designed quite differently. Wang explains: “No single organization controls the platform. All the code is open source, and anyone can copy the entire codebase. We don't own your data; you can take it wherever you want. We have to keep users through performance, or they'll leave. It's a lot like how search engines work: if we made ours unattractive by plastering ads everywhere, people would switch to another search engine.”
Many AI-generated images look realistic until you take a closer look.
Did you notice that the image above was created by artificial intelligence? It can be difficult to spot AI-generated images, videos, audio, and text as technological advances make them indistinguishable from human-created content, leaving us more susceptible to manipulation by disinformation. But knowing the current state of the AI technologies used to create disinformation, and the range of telltale signs that what you are seeing may be fake, can help you avoid being fooled.
World leaders are concerned. A World Economic Forum report warns that misinformation and disinformation “have the potential to fundamentally disrupt electoral processes in multiple economies over the next two years,” while easier access to AI tools “has already led to an explosion in counterfeit information and so-called ‘synthetic’ content, from sophisticated voice clones to fake websites.”
While the terms misinformation and disinformation both refer to false or inaccurate information, disinformation is information that is deliberately intended to deceive or mislead.
“The problem with AI-driven disinformation is the scale, speed and ease with which it can be deployed,” says Hany Farid, a researcher at the University of California, Berkeley. “These attacks no longer require nation-state actors or well-funded organizations; any individual with modest computing power can generate large amounts of fake content.”
Generative AI (see glossary below), he says, “is polluting our entire information ecosystem, calling into question everything we read, see, and hear,” and his research shows that AI-generated images and audio are often “almost indistinguishable from reality.”
However, Farid and his colleagues' research reveals that there are strategies people can follow to reduce the risk of falling for social media misinformation and AI-created disinformation.
How to spot fake AI images
Remember the photo of Pope Francis wearing a puffer jacket? Fake AI images like this are becoming more common as new tools based on diffusion models (see glossary below) let anyone create images from simple text prompts. A study by Google's Nicolas Dufour and his colleagues found that, since the beginning of 2023, the share of AI-generated images among fact-checked misinformation claims has risen sharply.
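To make the term concrete, here is a minimal, self-contained Python sketch of the core diffusion idea, with illustrative numbers of my own choosing rather than anything from a production image generator: data is progressively corrupted with noise, and a generator is a large neural network trained to run that process in reverse, starting from pure noise.

```python
import numpy as np

# Toy illustration of the idea behind diffusion models (see glossary):
# data is gradually corrupted with noise, and a model learns to reverse
# the corruption step by step. Real image generators train a large neural
# network for the reverse step; here we only show the forward process.

rng = np.random.default_rng(0)

def forward_diffusion(x0, num_steps=10, beta=0.1):
    """Progressively add Gaussian noise to a sample (the 'forward' process)."""
    x = x0.copy()
    trajectory = [x.copy()]
    for _ in range(num_steps):
        noise = rng.normal(size=x.shape)
        x = np.sqrt(1 - beta) * x + np.sqrt(beta) * noise  # one noising step
        trajectory.append(x.copy())
    return trajectory

# A text-to-image model would learn to run this in reverse, starting from
# pure noise and denoising toward something image-like, guided by a prompt.
image = rng.uniform(size=(8, 8))  # stand-in for an 8x8 grayscale image
steps = forward_diffusion(image)
print(f"ran {len(steps) - 1} noising steps; the final sample is mostly noise")
```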
“Today, media literacy requires AI literacy,” says Negar Kamali at Northwestern University in Illinois. In a 2024 study, she and her colleagues identified five categories of errors in AI-generated images (outlined below) and offered guidance on how people can spot them on their own. The good news: their research suggests people are currently about 70% accurate at detecting fake AI images. You can take their online image test to evaluate your own detective skills.
5 common types of errors in AI-generated images:
Socio-cultural impossibilities: Does the scene depict behavior that is rare, unusual, or surprising for a particular culture or historical figure?
Anatomical irregularities: Look closely. Do the hands or other body parts look unusual in shape or size? Do the eyes or mouth look strange? Are any body parts fused together?
Stylistic artifacts: Do the images look unnatural, too perfect, or too stylized? Does the background look odd or missing something? Is the lighting strange or variable?
Functional impossibilities: Do any objects look odd, unreal, or non-functional? For example, a button or belt buckle in an impossible place?
Violations of physics: Do shadows point in different directions? Do mirror reflections match the world depicted in the image?
Strange objects or behaviors can be clues that an image was created by AI.
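One cheap programmatic complement to the visual checklist above (a heuristic of my own, not from Kamali's study, and far from foolproof) is to inspect an image file's metadata: genuine camera photos usually carry EXIF fields such as camera model and capture time, while many AI generators and re-shared screenshots strip them or never write them. Absence of metadata proves nothing on its own, but it is a quick signal worth combining with visual inspection.

```python
from PIL import Image

def exif_summary(path: str) -> dict:
    """Return a few common EXIF fields, or None values if they are absent."""
    exif = Image.open(path).getexif()
    # Tag 271 = camera maker, 272 = camera model, 306 = capture datetime.
    return {tag: exif.get(tag) for tag in (271, 272, 306)}

summary = exif_summary("suspect.jpg")  # hypothetical file name
if not any(summary.values()):
    print("No camera metadata found; inspect the image more carefully.")
else:
    print("Camera metadata present:", summary)
```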
How to spot deepfakes in videos
Since 2014, an AI technology called generative adversarial networks (see glossary below) has enabled tech-savvy individuals to create video deepfakes: digitally manipulating existing videos of people to swap in different faces, create new facial expressions, and insert new audio with matching lip-syncing. This has allowed a growing number of fraudsters, state-backed hackers, and ordinary internet users to produce video deepfakes, in which celebrities such as Taylor Swift and everyday people alike can unwillingly appear in deepfake porn, scams, and political misinformation and disinformation.
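For readers curious what “adversarial” means in practice, here is a minimal, runnable PyTorch sketch on toy 1-D data of my own invention, nothing like a production face-swapping system: a generator learns to imitate a data distribution while a discriminator learns to catch its fakes.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator turns random noise into samples, the discriminator
# tries to tell real samples from generated ones, and each improves against
# the other. Deepfake pipelines build on this adversarial idea at vast scale.
generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, 1) * 0.5 + 2.0  # "real" data drawn from N(2, 0.5)
    fake = generator(torch.randn(32, 4))   # generator's attempted imitations

    # Discriminator update: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near 2.0.
print(generator(torch.randn(5, 4)).detach().squeeze())
```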
The techniques for spotting fake AI images (see above) can also be applied to suspicious videos. In addition, researchers at the Massachusetts Institute of Technology and Northwestern University in Illinois have compiled a few tips for spotting these deepfakes, while acknowledging that no method is foolproof and works every time.
6 tips to spot AI-generated videos:
Mouth and lip movements: Are there moments when the video and audio are not perfectly in sync?
Anatomical defects: Does the face or body look strange or move unnaturally?
Face: Look for inconsistencies in the smoothness of the face, in wrinkles around the forehead and cheeks, and in facial moles.
Lighting: Is the lighting inconsistent? Do shadows behave the way you would expect? Pay particular attention to the person's eyes, eyebrows, and glasses.
Hair: Does facial hair look or move oddly?
Blinking: Too much or too little blinking can be a sign of a deepfake.
A newer category of video deepfakes is based on diffusion models (see glossary below), the same AI technology behind many image generators, which can create entirely AI-generated video clips from text prompts. Companies are already testing and releasing commercial versions of AI video generators, potentially making such clips easy for anyone to create without special technical knowledge. So far, the resulting videos tend to feature distorted faces and odd body movements.
“AI-generated videos are likely easier for humans to detect than images because they contain more motion and are much more likely to have AI-generated artifacts and impossibilities,” Kamali says.
How to spot an AI bot
Social media accounts controlled by computer bots have become commonplace across many social media and messaging platforms. Many of these bots now also leverage generative AI, such as the large language models (see glossary below) that took off in 2022, making it easy and cheap to mass-produce grammatically correct, persuasive, customized AI-written content through thousands of bots for a variety of situations.
“It's now much easier to customize these large language models for specific audiences with specific messages,” says Paul Brenner at the University of Notre Dame in Indiana.
Brenner and his colleagues found that volunteers could distinguish AI-powered bots from humans only about 42 percent of the time, even though participants had been told they might be interacting with a bot. You can test your own bot-detection skills online.
Brenner says some strategies can help identify less sophisticated AI bots; a toy code sketch of a few of them follows the list below.
5 ways to tell if a social media account is an AI bot:
Emojis and hashtags: Overusing these can be a sign.
Unusual phrases, word choices, and analogies: Unusual language can indicate an AI bot.
Repetition and structure: Bots may repeat phrases in a similar, fixed format or overuse certain slang terms.
Ask questions: Questions can reveal a bot's lack of knowledge about a topic, especially local places and situations.
Assume the worst: If a social media account is not a personal contact and its identity has not been clearly verified or confirmed, it may well be an AI bot.
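As promised above, here is a toy sketch of a few of these heuristics in code. The thresholds are hypothetical, chosen purely for illustration; real bot detection relies on much richer signals such as posting cadence, network structure, and account history.

```python
import re

def bot_suspicion_score(posts: list[str]) -> int:
    """Score a batch of posts against three crude bot heuristics (toy only)."""
    score = 0
    text = " ".join(posts)
    words = text.split()
    # Heuristic 1: overuse of emojis and hashtags (0.15 is an arbitrary cutoff).
    emoji_and_hashtags = len(re.findall(r"[#\U0001F300-\U0001FAFF]", text))
    if words and emoji_and_hashtags / len(words) > 0.15:
        score += 1
    # Heuristic 2: verbatim repetition across posts.
    if len(set(posts)) < len(posts):
        score += 1
    # Heuristic 3: low vocabulary diversity suggests fixed, templated phrasing.
    if len(set(words)) / max(len(words), 1) < 0.4:
        score += 1
    return score  # higher = more bot-like

posts = ["Great news! 🚀🚀 #winning #crypto", "Great news! 🚀🚀 #winning #crypto"]
print(bot_suspicion_score(posts))  # prints 2 for this repetitive sample
```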
How to detect voice cloning and audio deepfakes
Voice-cloning AI tools (see glossary below) have made it easy to generate new speech that imitates virtually anyone, leading to a rise in audio-deepfake scams that replicate the voices of family members, business executives, and political leaders such as US President Joe Biden. These are much harder to identify than AI-generated videos and images.
“Voice clones are particularly difficult to distinguish from the real thing because there are no visual cues to help the brain make that decision,” says Rachel Tobac, co-founder of SocialProof Security, a white-hat hacking organization.
Detecting these AI voice deepfakes can be difficult, especially when they are used in video or phone calls, but there are common-sense steps you can take to help distinguish real human voices from AI-generated ones; a rough code sketch of one such comparison follows the list below.
4 steps to recognize whether audio has been cloned or faked:
Public figures: If the audio clip is of an elected official or public figure, review whether what they say is consistent with what has already been publicly reported or shared about that person's views or actions.
Look for inconsistencies: Compare your audio clip to previously authenticated video or audio clips featuring the same person. Are there any inconsistencies in the tone or delivery of the voice?
Awkward silences: If you are listening to a phone call or voicemail and the speaker takes unusually long pauses, that may be a sign of AI-powered voice cloning.
Strange and verbose: Robotic or unusually wordy speech may indicate a combination of voice cloning to mimic a person's voice and a large language model to generate the exact phrasing.
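As a rough companion to the “look for inconsistencies” step, the sketch below compares the average spectral fingerprint (MFCCs) of a suspect clip against a clip known to be the real speaker, using the librosa audio library. The file names are hypothetical, and a large distance is only a hint, not proof; serious detectors use trained models rather than a single hand-picked measure.

```python
import librosa
import numpy as np

def mfcc_profile(path: str) -> np.ndarray:
    """Summarize a clip's timbre as its average MFCC vector."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # average over time

known = mfcc_profile("verified_speech.wav")  # authenticated recording
suspect = mfcc_profile("suspect_clip.wav")   # clip being checked

distance = np.linalg.norm(known - suspect)
print(f"MFCC distance: {distance:.1f} (larger = less similar timbre)")
```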
Out of character behaviour by public figures like Narendra Modi could be a sign of AI
Technology will continue to improve
As it stands, there are no hard-and-fast rules that can consistently distinguish AI-generated content from authentic human content. AI models that generate text, images, video, and audio will surely continue to improve, quickly producing content that looks authentic, without obvious artifacts or mistakes. “Recognize that, to put it politely, AI is manipulating and fabricating images, videos, and audio, and doing so in under 30 seconds,” Tobac says. “That makes it easy for bad actors looking to mislead people to spread AI-generated disinformation, which can land on social media within minutes of breaking news.”
While it is important to hone our ability to spot AI-generated disinformation and to ask more questions about what we read, see, and hear, this alone will not be enough to stop the damage, and the responsibility for spotting it cannot rest solely with individuals. Farid is among the researchers who argue that government regulators should hold to account the big tech companies, along with the startups backed by prominent Silicon Valley investors, that developed many of the tools now flooding the internet with fake AI-generated content. “Technology is not neutral,” Farid says. “The tech industry is selling this idea that it doesn't have to take on the responsibilities other industries take on, and I totally reject that.”
Diffusion Model: An AI model that learns by first adding random noise to data (such as blurring an image) and then reversing the process to recover the original data.
Generative adversarial networks: A machine-learning technique based on two competing neural networks, one of which generates modified data while the other attempts to predict whether that generated data is genuine.
Generative AI: A broad class of AI models that can generate text, images, audio, and video after being trained on similar forms of content.
Large language models: A subset of generative AI models that can generate different forms of written content in response to text prompts and, in some cases, translate between languages.
Voice clone: The use of AI models to create a digital copy of a person's voice and generate new speech samples in that voice.