According to a new study by researchers at UNSW Sydney and Australian National University (ANU), many individuals exhibit overconfidence in identifying AI-generated faces.
The research, published in the British Journal of Psychology, involved 125 participants, including 36 “super-recognizers” and 89 control participants.
Super-recognizers, a unique group constituting 1 to 2 percent of the population, possess an exceptional memory for faces. They can recognize individuals they’ve met briefly years ago, identify familiar faces even after significant changes in appearance, and pick out background actors in films and TV shows that others typically overlook.
During an online assessment, both super-recognizers and control participants were tasked with determining whether a series of faces were real or AI-generated.
“We aimed to explore whether super-recognizers are adept at detecting AI-generated faces,” says Dr. James Dunn, a researcher at UNSW School of Psychology, in an interview with BBC Science Focus.
The outcome? Yes, they did perform better, but only marginally compared to controls, who themselves operated just above chance. Control participants averaged 50.7% accuracy, while super-recognizers achieved 57.3%.
The researchers were surprised to find the slight impact of being a super-recognizer on AI face detection abilities.
In fact, some control participants outshone super-recognizers, indicating the potential existence of “super AI face detectors” with specialized capabilities for identifying artificial faces.
In this facial recognition test reproduction, six faces are real and six are AI-generated. Can you discern the difference? The answer is at the end. – Image credit: UNSW Sydney/Adobe Stock Images
However, one consistent finding among all participants was their overconfidence in their abilities, even when results indicated otherwise.
Researchers caution that such overconfidence could make individuals more susceptible to fraud and false identities on social media, dating platforms, and professional networks.
While AI-generated images previously featured quirky distortions—like extra limbs and mismatched backgrounds—advancements in technology have now made them nearly indistinguishable from real images.
So, how can you enhance your AI recognition skills?
“Ironically, cutting-edge AI is often misidentified not by its mistakes but by its uncanny ability to appear almost perfect,” stated Dr. Amy Dowell, a psychologist at ANU. “Rather than displaying obvious flaws, it tends to conform to averages, exuding symmetry, proportion, and statistical typicality.
“It truly seems too good to be true.”
Do you think you can improve your skills? Participate in a demo of the recognition test here.
For the image above: Surfaces 2, 3, 5, 8, 9, and 11 are AI-generated.
Numerous TikTok accounts are accumulating billions of views by sharing anti-immigrant and sexually explicit AI-generated material, as highlighted in a recent report.
Researchers found 354 accounts centered around AI that shared 43,000 posts created with AI tools, resulting in 4.5 billion views in just one month.
As per the Paris-based nonprofit AI Forensics, these accounts are attempting to manipulate TikTok’s algorithm—responsible for deciding what content appears for users—by posting large volumes of content in hopes of achieving viral status.
Some accounts reportedly posted as many as 70 times daily, indicative of automated activity, with most accounts established at the start of the year.
TikTok disclosed last month that it hosted at least 1.3 billion AI-generated posts. With more than 100 million pieces of content uploaded daily, AI-labeled material constitutes a minor fraction of TikTok’s offerings. Users can also adjust settings to minimize exposure to AI content.
Among the most active accounts, around half focused on content related to women’s bodies. The report notes, “These AI representations of women are often depicted in stereotypically attractive forms, which include suggestive clothing and cleavage.”
Research from AI Forensics indicated that nearly half of the content posted by these accounts lacked labels, and under 2% used TikTok’s AI tags. The organization cautioned that this could mislead viewers. They noted that some accounts can evade TikTok’s moderation for months, even while distributing content that violates the platform’s terms.
Several accounts identified in the study have been deleted recently, with signs suggesting that moderators removed them, according to the researchers.
Some of this content resembled fake news broadcast segments. An example is an anti-immigrant story and other materials that sexualize young women’s bodies, potentially including minors. AI Forensics identified that half of the top ten most active accounts were focused on the female body niche, with some of the fake news utilizing familiar news brands including Sky News and ABC.
After a mention by The Guardian, some posts were subsequently taken down by TikTok.
TikTok labeled the report’s assertions as “unfounded,” asserting that the researchers acknowledged the issue as one affecting several platforms. Recently, The Guardian revealed that almost one in ten of the fastest-growing YouTube channels primarily features AI-generated content.
“TikTok is committed to eliminating harmful AIGC [artificial intelligence-generated content], we are blocking the creation of hundreds of millions of bot accounts while investing in top-notch AI labeling technology, and providing users with the tools and education necessary to manage their content experience on our platform,” declared a TikTok spokesperson.
An example of AI “slop” is content that lacks substance and is intended to clutter social media timelines. Photo: TikTok
The most viewed accounts flagged by AI Forensics often shared “slop,” a term used to describe AI-generated content that is trivial, odd, and meant to disturb users’ feeds. This includes postings such as animals in Olympic diving or talking babies. Researchers noted that while some of the risqué content was deemed “funny” and “adorable,” it still contributes to the clutter.
TikTok’s policies forbid the use of AI to create deceptive authoritative sources, portray anyone under 18, or depict adults who aren’t public figures.
“Through this investigation, we illustrate how automated accounts integrate AI content into platforms and the broader virality framework,” the researchers noted.
“The distinction between genuine human-generated content and artificial AI-produced material on platforms is becoming increasingly indistinct, indicating a trend towards greater AI-generated content in users’ feeds.”
The analysis spanned from mid-August to mid-September, uncovering attempts to monetize users via the advertisement of health supplements through fictitious influencers, the promotion of tools for creating viral AI content, or seeking sponsorships for posts.
While AI Forensics acknowledged TikTok’s recent move to allow users to restrict AI content visibility, they emphasized the need for improved labeling.
“We remain cautious about the effectiveness of this feature, given the significant and persistent challenges associated with identifying such content,” they expressed.
The researchers recommended that TikTok explore the option of developing AI-specific features within its app to differentiate AI-generated content from that produced by humans. “Platforms should aim to transcend superficial or arbitrary ‘AI content’ labels and develop robust methods that either distinctly separate generated and human-created content or enforce systematic and clear labeling of AI-generated material,” they concluded.
Images generated by AI depicting extreme poverty, children, and survivors of sexual violence are increasingly populating stock photo platforms and are being utilized by prominent health NGOs, according to global health specialists who raise alarms over a shift towards what they term “poverty porn.”
“They are widespread,” shares Noah Arnold from Fair Picture, a Switzerland-based organization dedicated to fostering ethical imagery in global development. “Some organizations are actively employing AI visuals, while others are experimenting cautiously.”
Arseni Alenichev, researcher states, “The images replicate the visual lexicon of poverty: children with empty plates, cracked earth, and other typical visuals,” as noted by researchers at the Institute of Tropical Medicine in Antwerp specializing in global health imagery.
Alenichev has amassed over 100 AI-generated images depicting extreme poverty intended for individuals and NGOs to use in social media initiatives against hunger and sexual violence. The visuals he provided to the Guardian reflect scenes that perpetuate exaggerated stereotypes, such as an African girl dressed in a wedding gown with tears on her cheeks. In a comment article published Thursday, he argues that these images represent “poverty porn 2.0”.
While quantifying the prevalence of AI-generated images is challenging, Alenichev and his team believe their usage is rising, driven by concerns regarding consent and financial constraints. Arnold mentioned that budget cuts to NGO funding in the U.S. exacerbate the situation.
“It’s evident organizations are beginning to consider synthetic images in place of real photographs because they are more affordable and eliminate the need for consent or other complications,” Alenichev explained.
AI-generated visuals depicting extreme poverty are now appearing abundantly on popular stock photo websites, including Adobe Stock Photography and Freepik when searching for terms like “poverty.” Many of these images carry captions such as “Realistic child in refugee camp,” and “Children in Asia swim in garbage-filled rivers.” Adobe’s licensing fees for such images are approximately £60.
“They are deeply racist. They should never have been published as they reflect the worst stereotypes about Africa, India, and more,” Alenichev asserted.
Freepik’s CEO Joaquín Abela stated that the accountability for usage of these extreme images falls upon media consumers rather than platforms like his. He pointed out that the AI-generated stock photos come from the platform’s global user base, and if an image is purchased by a Freepik customer, that user community earns a licensing fee.
He added that Freepik is attempting to mitigate bias present elsewhere in its photo library by “introducing diversity” and striving for gender balance in images of professionals like lawyers and CEOs featured on the site.
However, he acknowledged limitations in what can be achieved on his platform. “It’s akin to drying the ocean. We make efforts, but the reality is that if consumers worldwide demand images in a specific manner, there’s little anyone can do.”
A screen capture of an AI-generated image of “poverty” on a stock photo site, raising concerns about biased depictions and stereotypes.
Illustration: Freepik
Historically, prominent charities have integrated AI-generated images into their global health communication strategies. In 2023, the Dutch branch of the British charity Plan International will launch a video campaign against child marriage featuring AI-generated images including that of a girl with black eyes, an elderly man, and a pregnant teenager.
Last year, the United Nations released a video that showcased the AI-generated testimony of a Burundian woman who was raped and left for dead in 1993 amidst the civil war. This video was removed after The Guardian reached out to the UN for a statement.
“The video in question was produced over a year ago utilizing rapidly advancing tools and was taken down because we perceived it to demonstrate inappropriate use of AI, potentially jeopardizing the integrity of the information by blending real footage with nearly authentic, artificially generated content,” remarked a UN peacekeeping spokesperson.
“The United Nations remains dedicated to supporting survivors of conflict-related sexual violence, including through innovative and creative advocacy.”
Arnold commented that the rising reliance on these AI images is rooted in a long-standing discussion concerning ethical imagery and respectful storytelling concerning poverty and violence. “It’s likely simpler to procure an off-the-shelf AI visual, as it’s not tied to any real individual.”
Kate Kaldle, a communications consultant for NGOs, expressed her disgust at the images, recalling previous conversations about the concept of “poverty porn” in the sector.
“It’s unfortunate that the struggle for more ethical representation of those experiencing poverty has become unrealistic,” she lamented.
Generative AI tools have long been known to reproduce—and at times exaggerate—widely-held societal biases. Alenichev mentioned that this issue could be intensified by the presence of biased images in global health communications, as such images can circulate across the internet and ultimately be used to train the next wave of AI models, which has been shown to exacerbate prejudice.
A spokesperson for Plan International noted that as of this year, the NGO has “adopted guidance advising against the use of AI to portray individual children,” and that their 2023 campaign employed AI-generated images to maintain “the privacy and dignity of real girls.”
Zelda Williams, the daughter of the late actor and comedian Robin Williams, has voiced her opposition to AI-generated content featuring her father.
“Please, stop sending videos of dad generated by AI,” Zelda posted on my Instagram story on Monday. “Stop assuming that I want to see it or that I’m interested; I don’t, I really don’t. If you’re just trying to annoy me, I encounter something worse, I block it and move on.”
“To reduce the legacy of real individuals to something like, ‘Just this vague appearance and sound, that’s sufficient,’ is disheartening.”
“You’re not creating art; you’re producing grotesque, over-processed versions of human life, derived from art and musical history.”
“And for heaven’s sake, stop referring to it as the ‘future’; AI is merely a mishmash of recycled content that badly reflects the past. It’s integrating superficial human content.”
Robin Williams with Zelda at the premiere of his film RV in 2006. Photo: Mario Anzuni/Reuters
This isn’t the first instance where Zelda Williams, an actor and filmmaker who directed the 2024 horror-comedy Lisa Frankenstein, has addressed the recreation of her father, who passed away at 63 in 2014. The potential for realism is concerning.
“I’ve encountered AI imitating his ‘voice’ and saying what people want to hear. While I find this intrusive personally, the implications extend far beyond my own sentiments.”
“These recreations are inferior imitations of great individuals and, at their worst, resemble horrifying Frankenstein-like constructs formed from the industry’s lowest points.”
Zelda’s recent commentary arrives amidst a surge of celebrity deepfakes on social media, which span various themes, including adult content, political messages, scams, and advertisements.
In January, actress Scarlett Johansson highlighted the “immediate dangers of AI” following her condemnation of Kanye West’s anti-Semitic comments, after deepfake videos surfaced featuring other prominent Jewish celebrities like Jerry Seinfeld, Drake, and Adam Sandler.
A fraudulent advertisement featuring Deepfark in August was falsely attributed to Crowded House frontman Neil Finn, who stated he was incorrectly represented discussing erectile dysfunction, prompting the band to issue a disclaimer.
The deepfakes of Robin Williams are part of a larger trend in AI-generated content, fueled by the rapid proliferation of low-quality material produced by entertainment-free generation AI applications.
The recent TikTok video featuring Robin Williams appears to have been created using Sora 2, OpenAI’s new video generation app, and includes a simulated interaction between the comedian and the late Betty White.
Within days of launch, Sora’s feed was inundated with videos featuring copyrighted characters from series like SpongeBob SquarePants, South Park, Pokémon, and Rick and Morty.
OpenAI informed the Guardian that content owners can report copyright violations through a “copyright dispute form,” although individual artists and studios cannot opt out broadly. Varun Shetty, Head of Media Partnerships at OpenAI, commented:
A Victorian lawyer has made history as the first in Australia to garner professional sanctions for utilizing artificial intelligence in court, losing his right to practice as a leading attorney after generating unverified citations from AI.
According to a report by Guardian Australia, during a hearing last October on July 19, 2024, an unnamed lawyer representing her husband in a marital dispute provided the court with a list of prior cases that Judge Amanda Humphreys had requested regarding the enforcement of applications in this case.
Upon returning to her chamber, Humphreys stated in her ruling that neither she nor her colleagues could find any cases listed. When the issue was revisited in court, the lawyer disclosed that the list had been generated using AI-based legal software.
He confessed to not verifying the accuracy of the information before submitting it to the court.
The attorney extended an “unconditional apology” to the court, requesting not to be referred for investigation, saying he would “integrate lessons that he has taken to heart.”
He acknowledged his lack of understanding of how the software operated and recognized the necessity to verify the accuracy of AI-assisted research. He agreed to cover the costs incurred by the opposing lawyer due to the canceled hearing.
Sign up: AU Breaking NewsEmail
Humphreys accepted the apology, admitting that the stress it caused was unlikely to be repeated. However, given the prevalence of AI tools in the legal field, she noted that referrals for investigation were crucial due to the role of the Victorian Legal Services Commission in examining professional conduct.
The lawyer was subsequently referred to the Victorian Legal Services Commission for investigation, marking one of the first reported cases in Australia involving a lawyer using AI in court to produce fabricated citations.
The Victoria Legal Services Board confirmed on Tuesday that the lawyer’s practice certificate was altered on August 19 due to the findings of the investigation. This action means he no longer has the right to practice as a primary attorney, cannot handle trust funds, and is restricted to working solely as an employee’s lawyer.
The lawyer is required to undergo two years of supervised legal practice, with quarterly reports to the board from both him and his supervisor during this period.
A spokesman remarked, “The board’s regulatory actions on this matter reflect our commitment to ensuring that legal professionals using AI in their practices do so responsibly and in alignment with their obligations.”
Since this incident, over 20 additional cases have been reported in Australian courts where litigants or self-represented individuals used artificial intelligence to prepare court documents, leading to the inclusion of false citations.
The lawyer in Western Australia is also under scrutiny by its state regulatory body regarding practice standards.
In Australia, there was at least one instance where a document was claimed to have been prepared using ChatGPT solely for the court, even though the document was generated before ChatGPT became publicly accessible.
The courts and legal associations acknowledge the role of AI in legal proceedings but continue to caution that this does not diminish lawyers’ professional judgment.
Juliana Warner of Australia’s Legal Council told Guardian Australia last month, “If lawyers are using these tools, it must be done with utmost care, always keeping in mind their professional and ethical obligations to the court and their clients.”
Warner further noted that while the court’s relation to cases involving AI-generated false citations raises “serious concerns,” a blanket ban on the use of generative AI in legal proceedings “is neither practical nor proportional and risks hindering access to both innovation and justice.”
Numerous news outlets have removed articles authored by freelance journalists suspected to be using AI-generated content.
On Thursday, Press Gazette reported that at least six publications, including Wired and Business Insider, have taken down articles from their platforms after it was revealed that pieces written under the name Margaux Blanchard were AI-generated.
Wired published an article in May titled “I fell in love playing Minecraft. The game became a wedding venue.” Shortly after, the article was retracted with an editor’s note stating that “after further review, the Wired editorial team determined that this article did not meet editorial standards.”
According to Press Gazette, which reviewed the WIRED article, “Jessica Hu” is said to be “a Chicago-based commander.” However, both Press Gazette and The Guardian were unable to verify Hu’s identity.
Press Gazette further reported that in April, Business Insider published two essays by Blanchard, one of which discussed the complexities of remote work for parents. After Press Gazette alerted Business Insider about the author’s credibility, the platform deleted the article, displaying a note that read, “This story has been deleted because it did not meet Business Insider standards.”
In a comment to The Guardian, a Business Insider representative stated:
In an article released by Wired, the management acknowledged the oversight, saying, “If anyone can catch an AI con artist, it’s Wired. Unfortunately, we’ve encountered this issue.”
Wired further explained that one of its editors received a pitch about the “rise of niche internet weddings” that had “all the signs of a great Wired story.”
After initial discussions on framing and payment, the editors assigned the story, which was published on May 7.
However, it soon became evident that the writers were unable to provide enough details needed for payment processing. The outlet noted that the writer insisted on payment via PayPal or check.
Subsequent investigations revealed the story was fabricated.
In the Thursday article, Wired noted, “I made an error here. This story did not undergo a proper fact-checking procedure or receive top editing from a senior editor. I acted promptly upon discovering the issue to prevent future occurrences.”
Press Gazette reported that Jacob Philady, editor of a new magazine named Dispatch, was the first to warn of fraudulent activity related to Blanchard’s article. He mentioned earlier this month that he received a pitch from Blanchard, claiming “Gravemont, a decommissioned mining town in Colorado, has been repurposed as one of the world’s most secretive training grounds for death investigations.”
In the pitch shared with Press Gazette, Blanchard stated, “I want to tell the story of a scientist, a former cop, and a former miner who now deal with the deceased daily. I explore ethical dilemmas using real individuals in staged environments, not as mourners but as true archivists.”
She asserted, “I’m the right person for this because I’ve previously reported on concealed training sites, have contacts in forensic circles, and know how to navigate sensitive, closed communities with empathy and discretion.”
Philady informed Press Gazette that the pitch sounded AI-generated, and he could not find any information about Gravemont. The Guardian was also unable to confirm the details regarding the dubious town.
When questioned about how she learned of the town, Blanchard replied, “I’m not surprised you couldn’t find much. Gravemont doesn’t promote itself. I initially interviewed someone irrelevant to a retired forensic pathologist.”
She continued, “Over the following months, I further pieced the story together by requesting public records, speaking with former trainees, and sifting through forensic association meeting materials, none of which were mentioned in print.
“This is a location that exists in the collective memory of the industry, but remains under the radar enough to avoid extensive coverage, which is precisely why I believe it resonates with interested readers,” Blanchard added.
Philady told Press Gazette that despite the pitch seeming “very convincing,” he suspected she was “bulk.” He requested Blanchard to provide her standard rates and how long she would be in the field.
In response, Blanchard ignored Philady’s request for public records, indicating instead that she would “ideally spend five to seven days on location” and would require around $670 for payment.
Last Friday, Philady confronted Blanchard via email, stating he would publish a false story if she did not respond. Press Gazette further revealed that Blanchard did not reply to his request for evidence of her identity.
This instance of false AI-generated articles follows an earlier incident in May when the Chicago Sun-Times ran a section containing a fake reading list produced by AI.
Some participants use AI to save time in online research
Daniel D’Andreti/Unsplash
Online surveys are being inundated by responses generated through AI, potentially compromising the integrity of critical data for scientific research.
Platforms like Prolific compensate participants modestly for answering questions posed by researchers. These platforms have gained popularity among academics for their simplicity in attracting subjects for behavioral studies.
Anne Marie Nusberger and her team at the Max Planck Institute for Human Development in Berlin, Germany, set out to examine the frequency of AI usage among respondents, triggered by their observations in previous studies. “The rate we were witnessing was truly startling,” she remarks.
They suspect that 45% of participants who submitted a single open-ended question on Prolific utilized AI tools to streamline their responses.
Further analysis of these submissions indicated more overt references to AI usage, characterized by phrases like “excessively repetitive” and “clearly non-human” language. “From the data we gathered earlier this year, it’s clear that a notable fraction of research is tainted,” she explains.
In follow-up studies conducted via Prolific, researchers implemented traps to capture chatbot users. Two instances of Recaptcha — a small test designed to differentiate humans from bots — identified only 0.2% of users as bots. A more complex Recaptcha, using both past activity and current behavior, eliminated an additional 2.7%. Although hidden from view, bots that were prompted to include the word “hazelnut” in their responses accounted for another 1.6%, while an extra 4.7% were detected when copying and pasting was restricted.
“Our goal is to respond adequately to online surveys, rather than resorting to full distrust,” advises Nussberger. It’s the onus of researchers, in her view, to handle the answers with greater skepticism and take precautions against AI-induced input. “However, the platforms bear significant responsibility. They must treat this matter with utmost seriousness.”
Prolific did not respond to a request for comment from New Scientist.
“The validity of online behavioral research has already faced challenges from participants misrepresenting themselves or employing bots to obtain rewards,” says Matt Hodgkinson, a freelance consultant in research ethics. “Researchers must collectively explore remote validation of human involvement or return to traditional face-to-face methodologies.”
Media organizations have been alerted to the potential “devastating impacts” on their digital audiences as AI-generated summaries start to replace traditional search results.
The integration of Google’s AI summarization is causing major concern among media proprietors, as it utilizes blocks of text to condense search results. Some perceive this as a fundamental threat to organizations that rely on search traffic.
AI summaries can offer all the information users seek without necessitating a click on the original source, while links to traditional search results are relegated further down the page, thereby decreasing user traffic.
An analysis by the Authoritas Analytics Company indicates that websites previously ranked at the top of search results may experience around a 79% decrease in traffic for specific queries when results are presented through AI summaries.
The study also highlighted that links to YouTube, owned by Google’s parent company Alphabet, are more prominent than traditional search results. This investigation is part of a legal challenge against the UK’s competition regulator concerning the implications of Google’s AI summarization.
In a statement, a Google representative described the study as being “based on inaccurate and flawed assumptions and analysis,” citing a set of searches that does not accurately reflect all queries and results in outdated estimates regarding news website traffic.
“Users are attracted to AI-driven experiences, and AI features in search enable them to pose more questions, creating new avenues for discovering websites,” the spokesperson stated. “We consistently direct billions of clicks to our websites daily and do not observe a significant decline in overall web traffic, as suggested.”
A secondary survey revealed a substantial decline in referral traffic stemming from Google’s AI overview. A month-long study conducted by the US Think tank Pew Research Center found that users clicked on a link under the AI summary only once for every 100 searches.
A Google spokesperson noted that this study employed “a distorted query set that illustrates flawed methodologies and search traffic.”
Senior executives in news organizations claim that Google has consistently declined to share the necessary data to assess the impact of AI summaries.
Quick Guide
Please contact us about this story
show
The best public interest journalism relies on direct accounts from knowledgeable individuals.
If you have insights to share about this topic, please contact us confidentially using the following methods:
Secure Messages in Guardian App
The Guardian app provides a feature for submitting story tips. Messages are end-to-end encrypted and incorporated within the routine functions of all Guardian mobile apps, keeping your communications private.
If you haven’t installed the Guardian app yet, feel free to download it (iOS/Android) and navigate to the menu. Choose Secure Messaging.
SecureDrop, Instant Messenger, Email, Phone, and Postal Mail
Refer to our guide at guardian.com/tips for other methods and their respective advantages and disadvantages.
Illustration: Guardian Design / Rich Cousins
Although the AI overview represents only a portion of Google search, UK publishers report feeling its effects already. MailOnline executive Carly Stephen noted a significant decline in clicks from search results featuring AI summaries in May, with click-through rates falling by 56.1% on desktop and 48.2% on mobile devices.
Legal actions against the UK’s Competition and Markets Authority involve partnerships with the technology justice organization FoxGlove, the Independent Publishers Alliance, and advocates for the Open Web movement.
Owen Meredith, the CEO of the News Media Association, accused Google of “keeping users within their own enclosed spaces and trying to monetize them by incorporating valuable content, including news produced through significant efforts of others.”
“The current circumstances are entirely unsustainable, and eventually, quality information will be eliminated online,” he stated. “The Competition and Markets Authority possesses tools to address these challenges, and action must be taken swiftly.”
Rosa Curling, Director of FoxGlove, remarked that the new research highlights “the devastating effects the Google ‘AI Overview’ has already inflicted on the UK’s independent news sector.”
“If Google merely takes on the job of journalists and presents it as its own, that would be concerning enough,” she expressed. “But what’s worse is that they use this work to promote their own tools and advantages while making it increasingly difficult for the media to connect with the readers vital for their survival.”
The quantity of online videos depicting child sexual abuse created by artificial intelligence has surged as advancements in technology have impacted pedophiles.
According to the Internet Watch Foundation, AI-generated abuse videos have surpassed a critical level, nearing a point where they can nearly measure “actual images,” with a notable increase observed this year.
In the first half of 2025, the UK-based Internet Safety Watchdog examined 1,286 AI-generated videos containing illegal child sexual abuse material (CSAM), a sharp increase from just two during the same period last year.
The IWF reported that over 1,000 of these videos fall under Category A abuse, the most severe classification of such material.
The organization indicated that billions have been invested in AI, leading to a widely accessible video generation model that pedophiles are exploiting.
“It’s a highly competitive industry with substantial financial incentives, unfortunately giving perpetrators numerous options,” stated an IWF analyst.
This video surge is part of a 400% rise in URLs associated with AI-generated child sexual abuse content in the first half of 2025, with IWF receiving reports of 210 such URLs compared to 42 last year.
IWF discovered one post on a Dark Web Forum where a user noted the rapid improvements in AI and how pedophiles had rapidly adapted to using an AI tool to “better interact with new developments.”
IWF analysts observed that the images seem to be created by utilizing free, basic AI models and “fine-tuning” these models with CSAM to produce realistic videos. In some instances, this fine-tuning involved a limited number of CSAM videos, according to IWF.
The most lifelike AI-generated abuse videos encountered this year were based on actual victims, the Watchdog reported.
Interim CEO of IWF, Derek Ray-Hill, remarked that the rapid advancement of AI models, their broad accessibility, and their adaptability for criminal purposes could lead to a massive proliferation of AI-generated CSAM online.
“The risk of AI-generated CSAM is astonishing, leading to a potential flood that could overwhelm the clear web,” he stated, cautioning that the rise of such content might encourage criminal activities like child trafficking and modern slavery.
The replication of existing victims of sexual abuse in AI-generated images allows pedophiles to significantly increase the volume of CSAM online without having to exploit new victims, he added.
The UK government is intensifying efforts to combat AI-generated CSAM by criminalizing the ownership, creation, or distribution of AI tools designed to produce abusive content. Those found guilty under this new law may face up to five years in prison.
Additionally, it is now illegal to possess manuals that instruct potential offenders on how to use AI tools for creating abusive images or for child abuse. Offenders could face up to three years in prison.
In a February announcement, Interior Secretary Yvette Cooper stated, “It is crucial to address child sexual abuse online, not just offline.”
AI-generated CSAM is deemed illegal under the Protection Act of 1978, which criminalizes the production, distribution, and possession of “indecent or false images” of children.
This piece was reported by indicator, a publication focused on unearthing digital misinformation, in partnership with the Guardian.
Numerous YouTube channels have blended AI-generated visuals with misleading claims surrounding Sean “Diddy” Combs’s high-profile trial, attracting tens of millions of views and profiting from the spread of misinformation.
Data from YouTube reveals that 26 channels have garnered a staggering 705 million views from approximately 900 AI-influenced videos about Diddy over the last year.
These channels typically employ a standardized approach. Each video features an enticing title and AI-generated thumbnail that fabricates connections between celebrities and Diddy with outrageous claims, such as a celebrity’s testimony forcing them to engage in inappropriate acts or revealing shocking secrets about Diddy. Thumbnails regularly showcase well-known figures in courtroom settings alongside images of Diddy, with many featuring suggestive quotes designed to grab attention, including phrases like “f*cked me me me me me of me,” “ddy f*cked bieber life,” and “she sold him to Diddy.”
Channels indulging in Diddy’s “Slop,” a term for low-quality, AI-generated content, have previously demonstrated a penchant for disseminating false claims about various celebrities. Most of the 26 channels seem to be either repurposed or newly created, with at least 20 being eligible for advertising revenue.
Spreading sensational and erroneous “Diddy AI Slop” has become a quick avenue for monetization on YouTube. Wanner Aarts, managing numerous YouTube channels that employ AI-generated content, expressed his strategies for making money on the platform, noting his detachment from the Diddy trend.
“If someone asked, ‘How can I make $50,000 quickly?’ the first thing might be akin to dealing drugs, but the second option likely involves launching a Diddy channel,” Aarts (25) stated.
Fabricated Celebrity Involvement
The indicator analyzed hundreds of thumbnails and titles making false claims about celebrities including Brad Pitt, Will Smith, Justin Bieber, Oprah Winfrey, Eddie Murphy, Leonardo DiCaprio, Dwayne “Rock” Johnson, 50 Cent, Joe Logan, and numerous others. Notably, one channel, Fame Fuel, uploaded 20 consecutive videos featuring AI-generated thumbnails and misleading titles related to U.S. Attorney General Pam Bondy and Diddy.
Among the top-performing channels is Peeper, which has amassed over 74 million views since its inception in 2010, but pivoted to exclusively covering Diddy for at least the last eight months. Peeper boasts some of the most viral Diddy videos, including “Justin Bieber reveals Will Smith, Diddy and Clive Davis grooming him,” which alone attracted 2.3 million views. Peeper is currently being converted into a demo.
Channels named Secret Story, previously offering health advice in Vietnamese, shifted focus to Diddy content, while Hero Story transitioned from covering Ibrahim Traore, the military leader of Burkina Faso, to Diddy stories. A Brazilian channel that amassed millions from embroidery videos also pivoted to Diddy content just two weeks ago. A channel named Celebrity Topics earned over 1 million views across 11 Diddy videos in just three weeks, despite being created in early 2018 and appearing to have deleted prior videos. Both Secret Story and Hero Story were removed by YouTube following inquiries from the indicator, while Celebrity Topics has since undergone rebranding.
Shifting Focus to Diddy
For instance, around three weeks ago, the channel PAK GoV Update started releasing videos about Diddy, utilizing AI-generated thumbnails with fictitious quotes attributed to celebrities like Ausher and Jay-Z. One video labeled “Jay-Z breaks his silence on Diddy’s controversy,” included a tearful image of Jay-Z with the text “I Will Be Dod” superimposed.
The video achieved 113,000 views with nearly 30 minutes of AI-generated narration accompanied by clips from various TV news sources, lacking any new information from Jay-Z, who did not provide any of the attributed quotes.
The Pak Gov Update channel previously focused on Pakistan’s public pensions, generating modest views—its most popular being a poorly titled video about the pension system that garnered 18,000 views.
Monetizing Misinformation
Aarts commented that the strategy of exploiting Diddy Slop is both profitable and precarious. “Most of these channels are unlikely to endure,” he remarked, referencing the risk of being penalized for violating YouTube policies and potential legal actions from Diddy or other celebrities depicted in their thumbnails and videos.
Like PAK Gov Update, most videos uploaded by these channels predominantly utilize AI narration and fewer direct clips from news reports, often leaning on AI-generated images. The use of actual footage tends to skirt the boundaries of fair use.
The YouTube channel Pakreviews-F2Z has produced numerous fake videos surrounding the Diddy trial, disguised under the name Pak Gov Update. Photo: YouTube
AI Slop represents one of the many variations of Diddy-related content proliferating on YouTube. This niche appears to be expanding and proving lucrative. Similar Diddy-focused AI content has attracted engagement on Tiktok.
“We are fans of the world,” stated YouTube spokesperson Jack Maron in an email. Maron noted that the platform has removed 16 channels linked to this phenomena and confirmed that various channels, including Pak Gov Update, have faced similar actions.
The Diddy phenomenon exemplifies the convergence of two prominent trends within YouTube: automation and faceless channels.
YouTube Automation hinges on the premise that anyone can establish a prosperous YouTube venture through the right niche and low-cost content creation strategies, including topic discovery, idea brainstorming, or employing international editors to churn out content at an automated rate.
With AI, it has become simpler than ever to embark on a faceless automation journey. Aarts indicated that anyone can generate scripts using ChatGPT or analogous language models, create images and thumbnails via MidJourney or similar software, utilize Google Veo 3 for video assembly, and implement AI voice-over using tools like ElevenLabs. He further mentioned that he often hires freelancers from the Philippines or other regions for video editing tasks.
“AI has democratized opportunities for budget-conscious individuals to engage in YouTube automation,” Aarts stated, highlighting it can cost under $10 per video. He reported earnings exceeding $130,000 from over 45 channels.
Muhammad Salman Abazai, who oversees As a Venture, a Pakistani firm offering video editing and YouTube channel management services, commented that Diddy video content has emerged as a “legitimate niche” on YouTube, showcasing successful Diddy videos created by his team.
“This endeavor has proven fruitful for us, as it has significantly boosted our subscriber count,” he noted.
NV Historia shifted focus following the viral response to a Diddy-themed video titled “A minute ago: No one expected Dwayne Johnson to say this in court about Diddy,” featuring AI-generated images of Johnson and Diddy in court along with disturbing visuals of alleged incidents. The thumbnail showcased the quote “He gave me it.”
Johnson has neither testified nor had any connection to allegations against Diddy. This video has gathered over 200,000 views. Following this, NV Historia managed another video linking Oprah Winfrey and other celebrities to Diddy, which earned 45,000 views. Subsequently, the channel committed entirely to Diddy content and has since been removed by YouTube.
A French channel, Starbuzzfr, was launched in May and appears to exclusively publish Diddy-related content, deploying AI-generated thumbnails and narration to spin fabricated narratives, such as Brad Pitt’s supposed testimony against Diddy, claiming he experienced abuse by the mogul. Starbuzzfr notably utilizes sexualized AI-generated imagery featuring Diddy and celebrities like Pitt. As of this writing, the channel remains monetized.
Aarts noted that the general sentiment within the YouTube automation community respects anyone who manages to monetize their content.
“I applaud those who navigate this successfully,” he remarked.
As reported by the French streaming service, nearly seven out of every ten streams of AI-generated music on the Deezer platform are deemed to be fraudulent.
The company states that AI-created music only constitutes 0.5% of total streams on music platforms, yet their analysis indicates that scammers may account for as much as 70% of those streams.
The rise of AI-generated music presents a significant issue on streaming services. Scammers typically utilize bots to “listen” to AI-generated tracks, thereby generating revenue for platforms like Deezer and subsequently receiving royalty payments.
This tactic aims to circumvent detection mechanisms by flooding the system with high listening counts for numerous low-quality fake tracks.
Thibault Roucou, director of royalties and a report regarding the Paris-based platform, mentioned that the manipulation of AI-generated music is a strategy to “extract some profit from royalties.”
“As long as I can profit, I shall,” he lamented, referring to the scenario of fraudulent streaming. “Sadly, there is a push to profit from it.”
Deezer utilizes a tool designed to identify 100% AI-generated content from the leading AI music models, including Suno and Udio.
Deezer reports that the AI-generated music being streamed by con artists ranges from fake pop and rap to artificial mood music. The platform actively prevents royalty payments for streams flagged as fraudulent.
In April, Deezer disclosed that AI-generated tracks account for 18% of all uploads to its platform, averaging around 20,000 tracks per day. The company has announced plans to exclude all AI-generated content from its algorithmic recommendations. Deezer boasts over 10 million subscribers globally, whereas leading competitor Spotify has 268 million.
Roucou noted that while the identities of those orchestrating the fraudulent streams remain unknown, the criminals seem to operate in an “organized” manner. The IFPI, a Trade Body, reported that the global streaming market was valued at $20.4 billion last year, making it a prime target for fraudsters.
In a report, the Latest Global Music Report from the IFPI indicated that fraudulent streaming diverts funds that “should go to rightful artists,” with generic AI contributing to an exacerbation of the issue.
Last year, U.S. musician Michael Smith faced charges for attempting to create AI-generated songs that were designed to be streamed billions of times, resulting in potential royalty earnings of $10 million.
Frankie Johnson, an inmate at William E. Donaldson Prison near Birmingham, Alabama, reports being stabbed approximately 20 times within a year and a half.
In December 2019, Johnson claimed he was stabbed “at least nine times” in his housing unit. Then, in March 2020, after a group therapy session, officers handcuffed him to a desk and exited the unit. Shortly afterward, another inmate came in and stabbed him five times.
In November that same year, Johnson alleged that an officer handcuffed him and transported him to the prison yard, where another prisoner assaulted him with an ice pick and stabbed him “five or six times,” all while two corrections officers looked on. Johnson contended that one officer even encouraged the attack as retaliation for a prior conflict between him and the staff.
In 2021, Johnson filed a lawsuit against Alabama prison officials, citing unsafe conditions characterized by violence, understaffing, overcrowding, and significant corruption within the state’s prison system. To defend the lawsuit, the Alabama Attorney General’s office has engaged law firms that have received substantial payments from the state to support a faulty prison system, including Butler Snow.
State officials have praised Butler Snow for its experience in defending prison-related cases, particularly William Lansford, the head of their constitutional and civil rights litigation group. However, the firm is now facing sanctions from a federal judge overseeing Johnson’s case, following incidents where its lawyers referenced cases produced by artificial intelligence.
This is just one of several cases reflecting the issue of attorneys using AI-generated information in formal legal documents. A database that tracks such occurrences has noted 106 identified instances globally, where courts have encountered “AI hallucinations” in submitted materials.
Last year, lawyers received one-year suspensions for practicing law in Florida’s Central District after it was found that they were citing cases fabricated by AI. Earlier this month, a federal judge in California ordered a firm to pay over $30,000 in legal fees for including erroneous AI-generated studies.
During a hearing in Birmingham on Wednesday regarding Johnson’s case, U.S. District Judge Anna Manasco mentioned that she was contemplating various sanctions, such as fines, mandatory legal education, referrals to licensing bodies, and temporary suspensions.
She noted that existing disciplinary measures across the country have often been insufficient. “This case demonstrates that current sanctions are inadequate,” she remarked to Johnson’s attorney. “If they were sufficient, we wouldn’t be here.”
During the hearing, attorneys from Butler Snow expressed their apologies and stated they would accept any sanctions deemed appropriate by Manasco. They also highlighted their firm policy that mandates attorneys seek approval before employing AI tools for legal research.
Reeves, an attorney involved, took full responsibility for the lapses.
“I was aware of the restrictions concerning [AI] usage, and in these two instances, I failed to adhere to the policy,” Reeves stated.
Butler Snow’s lawyers were appointed by the Alabama Attorney General’s Office and work on behalf of the state to defend ex-commissioner Jefferson Dunn of the Alabama Department of Corrections.
Lansford, who is contracted for the case, shared that the firm has begun a review of all previous submissions to ensure no additional instances of erroneous citations exist.
“This situation is still very new and raw,” Lansford conveyed to Manasco. “We are still working to perfect our response.”
Manasco indicated that Butler Snow would have 10 days to file a motion outlining their approach to resolving this issue before she decides on sanctions.
The use of fictitious AI citations has subsequently influenced disputes regarding case scheduling.
Lawyers from Butler Snow reached out to Johnson’s attorneys to arrange a deposition for Johnson while he remains incarcerated. However, Johnson’s lawyers objected to the proposed timeline, citing outstanding documents that Johnson deemed necessary before he could proceed.
In a court filing dated May 7, Butler Snow countered that case law necessitates a rapid deposition for Johnson. “The 11th Circuit and the District Court typically allow depositions for imprisoned plaintiffs when relevant to their claims or defenses, irrespective of other discovery disputes,” they asserted.
The lawyers listed four cases that superficially supported their arguments, but all turned out to be fabricated.
While some case titles were reminiscent of real cases, none were actually relevant to the matter at hand. For instance, one was a 2021 case titled Kelly v. Birmingham; however, Johnson’s attorneys noted that “the only existing case titled Kelly v. City of Birmingham could be uniquely identified by the plaintiff’s lawyers.”
Earlier this week, Johnson’s lawyers filed a motion highlighting the fabrications, asserting they were creations of “generative artificial intelligence.” They also identified another clearly fictitious citation in prior submissions related to the discovery dispute.
The following day, Manasco scheduled a hearing regarding whether Butler Snow’s counsel should be approved. “Given the severity of the allegations, the court conducted an independent review of each citation submitted, but found nothing to support them,” she wrote.
In his declaration to the court, Reeves indicated he was reviewing filings drafted by junior colleagues and included a citation he presumed was a well-established point of law.
“I was generally familiar with ChatGPT,” Reeves mentioned, explaining that he sought assistance to bolster the legal arguments needed for the motion. However, he admitted he “rushed to finalize and submit the motions” and “did not independently verify the case citations provided by ChatGPT through Westlaw or PACER before their inclusion.”
“I truly regret this lapse in judgment and diligence,” Reeves expressed. “I accept full responsibility.”
Damien Charlotin, a legal researcher and academic based in Paris, notes that incidents of false AI content entering legal filings are on the rise. Track the case.
“We’re witnessing a rapid increase,” he stated. “The number of cases over the past weeks and months has spiked compared to earlier periods.”
Thus far, the judicial response to this issue has been quite lenient, according to Charlotin. More severe repercussions, including substantial fines and suspensions, typically arise when lawyers fail to take responsibility for their mistakes.
“I don’t believe this will continue indefinitely,” Charlotin predicted. “Eventually, everyone will be held accountable.”
In addition to the Johnson case, Lansford and Butler Snow have contracts with the Alabama Department of Corrections to handle several large civil rights lawsuits. These include cases raised by the Justice Department during Donald Trump’s presidency in 2020.
Some Alabama legislators have questioned the significant amount of state funds allocated to law firms for defending these cases. However, this week’s missteps have not appeared to diminish the Attorney General’s confidence in Lansford or Butler Snow to continue their work.
On Wednesday, Manasco addressed the attorney from the Attorney General’s office present at the hearing.
“Mr. Lansford remains the Attorney General’s preferred counsel,” he replied.
According to Italian newspapers, it is the world’s first fully produced version created by artificial intelligence.
Il Foglio, a conservative liberal newspaper, is conducting a month-long experiment to showcase the impact of AI technology on our work and time, as stated by Claudio Cerasa, the newspaper’s editor.
The four-page IL Foglio AI is included in the Slim Broadsheet edition of the newspaper and can be found on newsstands. Online starting Tuesday.
Cerasa mentioned that Il Foglio AI will be the world’s first daily newspaper fully created using artificial intelligence, covering everything from writing, headlines, quotes, summaries, and even sarcasm. Journalists will have a limited role in questioning and reading the responses generated by the AI tool.
This experiment coincides with global news organizations exploring the use of AI. The Guardian recently reported that BBC News will utilize AI for more personalized content delivery.
The debut edition of Il Foglio AI features stories on US President Donald Trump and Russian President Vladimir Putin, along with various other topics.
Cerasa emphasized that Il Foglio Ai represents traditional newspapers but also serves as a testing ground for understanding the impact of AI on the creation of daily newspapers.
“Do not consider Il Foglio as an artificial intelligence newspaper,” Serasa stated.
tHis week signifies a shift in the writing landscape, with stories now being produced by AI models specialized in creative writing. Sam Altman, CEO of ChatGpt Company Openai, commends the new model, suggesting that it is excelling in its creative endeavors. Writer Janet Winterson recently praised a metafiction piece on grief generated by the AI, lauding its beautiful execution. Various authors have been invited to assess ChatGpt’s current writing capabilities.
Nick Halkaway
I find the story to be elegantly hollow. Winterson’s idea of treating AI as “alternative intelligence” intrigues me, painting a picture of an entity with which we can engage in a relationship resembling consciousness. However, I fear it may be akin to a bird mistaking its reflection for a mate in a windowpane. What we are truly dealing with here is software, as these companies extract creative content to develop marketable tools. The decisions made by the government in this regard hold significant weight, determining whether the rights of individual creators will be preserved or tech moguls will be further empowered.
This could be a turning point for creators to establish a fair market for their data training through opt-in copyrights, enabling them to set prices and regulate the use of their work. With governmental backing, creatives can stand on equal footing with billion-dollar corporations. This may lead to creators selling their narratives for adaptation into films and TV shows.
The government’s primary choice—an opt-out system favoring tech giants—urges individuals to comply unless they voice objections. This results in many people opting out and returning to square one, where no one truly benefits.
One hopes that selecting a David over a Goliath scenario will not pose insurmountable challenges. However, these are policy decisions, and the outcomes are deliberate choices.
Tracy Chevalier
A story with a metafictional premise delves into a navel-gazing realm that may seem more ludicrous than the worst AI creative writing scenario one can imagine. Sam Altman, usually seen as a technical expert, quickly grasps these nuances, guiding us through the complexities.
I am eager to witness more AI-generated “creative writing,” as it assimilates ideas, imagery, and language borrowed from established writers. The question lingers—can we fuse these elements into a cohesive narrative that encapsulates the mystical essence of humanity? Describing this essence in words is a challenge, but currently, I sense it slipping away. AI is rapidly evolving, and I fear for the future of my craft once it attains that elusive spark of magic.
Camilla Shamsey
If a Master’s student submitted this short story in my class, I would not immediately recognize it as AI-generated. I am intrigued by the promising quality of work being produced by AI at this early stage of development. However, my mind is consumed by reflections on writing, creativity, AI, and the interplay of these factors within myself.
There is a concern highlighted by Madhumita Murgia regarding the replication of existing power structures within AI, further marginalizing minority voices. Detecting influences from Sun Clara and Sun in a short story does not stem from the author’s admiration for Ishiguro’s work, but rather from the linguistic patterns ingrained during training. This raises questions about copyright infringement and how it might impact perceptions of my own novel.
As a writer, I must contemplate the implications for my livelihood and craft. Referring to AI as a “toddler” may be misleading, as it humanizes a non-human entity. Despite these uncertainties, I eventually found myself engrossed in an AI-generated short story, appreciating its narrative without dwelling on the technological aspect. The day a compelling AI narrative emerges is both exhilarating and foreboding.
David Badiel
Some critics argue that the story lacks genuine sentiment, portraying a “ghost democracy” akin to the metaphorical depth in Bob Dylan’s lyrics. However, I find the story clever in its metafictional prompts, drawing readers into a realm where imagination blurs the lines between human and machine. The narrative prompts introspection on the essence of humanity, utilizing human emotions like sadness to mimic a semblance of humanity.
Despite a facade of melancholy, the story constantly reminds readers of its artificial nature. The central character, Mira, and the accompanying emotions are fabrications, looping endlessly in a vacuum of emptiness. This mirrors the essence of a machine, existing in a paradox—simulating sadness without truly experiencing it. It’s a comical commentary on feigning sadness when devoid of genuine emotion, akin to a computer jesting with human sentiments. In a sense, it could be attributed to Borges’ style of storytelling.
HWhat do you think, humans? My name is Arwa and I am a genuine member of this species homo sapiens. We are talking about 100% real people; meat space This is it. I am by no means an AI-powered bot. I know, I know. That's exactly what the bot says, isn't it? I think you'll just have to trust me on this matter.
By the way, the reason I have such a hard time pointing this out is because content created by real humans is becoming kind of a novelty these days. The internet is rapidly being overtaken by advances in AI. (It's not clear who coined the term, but “slop” is a sophisticated iteration of Internet spam: low-quality text, video, and images generated by AI.) recent analysis It is estimated that more than half of all English long-form posts on LinkedIn are generated by AI. Meanwhile, many news sites are secretly experimenting with AI-generated content, in some cases signed. Author generated by AI.
Slop is everywhere, but Facebook is actively sloshing strange AI-generated images, including bizarre depictions. Jesus was made of shrimp. Much of the AI-generated content is created by fraudsters looking to drive user engagement, rather than remove them from their platforms. fraudulent purpose – Facebook accepted it. A study conducted last year by researchers at Stanford and Georgetown found that Facebook's recommendation algorithm is accelerating. These AI-generated posts.
Meta also creates its own slops. In 2023, the company began introducing AI-powered profiles like Liv, a “proud black queer mom of two and truth teller.” These didn't get much attention until Meta executive Connor Hayes talked about them.financial timesThe company announced in December that it plans to fill its platform with AI characters. I don't know why he thought bragging that soon we'll have a platform full of AI characters talking to each other would work, but it didn't. Meta quickly deleted the AI profile after it went viral.
For now, people like Liv may be gone from Meta, but our online future looks increasingly sloppy. The gradual “ensitization” of the Internet, as Cory Doctorow memorably called it, is accelerating. Let's pray that Shrimp Jesus will perform a miracle soon. we need that.
TThree hundred and twenty-four. That was the score Mary Louie was given by an AI-powered tenant screening tool. In its 11-page report, the software SafeRent does not explain how the score was calculated or how various factors were taken into account. There is no mention of what the score actually means. They just saw Louis’ numbers and decided it was too low. In the box next to the result, the report said “Score Recommended: DECLINE.”
Louis, who works as a security guard, had applied for an apartment in a suburban area in eastern Massachusetts. When she toured the room, the management company said there was no problem for her application to be accepted. Although she had bad credit and credit card debt, she had an excellent recommendation from her landlord of 17 years, who paid her rent on time. She also plans to use vouchers for low-income renters and ensure that management companies receive at least a portion of their monthly rent payments from the government. Her son, whose name was also on the voucher, also had a high credit score, indicating it could act as a backstop in case of missed payments.
But in May 2021, more than two months after she applied for the apartment, the management company sent Louis an email informing her that the computer program had rejected her application. Applications needed a score of at least 443 to be accepted. There was no further explanation and no way to appeal the decision.
“Mary, we regret to inform you that your housing offer has been denied due to a third-party service we use to screen all prospective housing applicants,” the email said. I did. “Unfortunately, the SafeRent tenant score for this service was lower than what our tenant standards would allow.”
tenant files suit
Louis ended up renting a more expensive apartment. Management there did not grade her based on an algorithm. But she learned that her experience at Saferent was not unique. She is one of more than 400 Black and Hispanic tenants on Housing Vouchers in Massachusetts who said their rental applications were rejected because of their safe rent scores.
In 2022, they banded together to sue SafeRent under the Fair Housing Act, alleging that it discriminated against them. Lewis and another named plaintiff, Monica Douglas, said the company’s algorithm unfairly scores Black and Hispanic renters using Housing Vouchers over white applicants. he claimed. They found that the software inaccurately assessed irrelevant account information (credit score, non-housing-related debt) on whether a tenant was a good tenant, but did not take into account whether they would use a housing voucher. he claimed. Research shows that black and Hispanic rent-seekers have lower credit scores and are more likely to use housing vouchers than white applicants.
“It was a waste of time waiting to be turned down,” Lewis said. “I knew my credit was bad, but AI doesn’t know what I do. It knew I was late on my credit card payments, I didn’t know I was paying.”
Two years have passed since the group first sued for safe rent. Lewis, who was one of two named plaintiffs, said she has moved on with her life and has largely forgotten about the lawsuit. But her action could protect other renters in a similar housing program, known as Section 8 vouchers, from losing their homes because of scores determined by algorithms.
Saferent settled with Mr. Lewis and Mr. Douglas. In addition to paying $2.3 million, the company agreed to stop using the scoring system or make some sort of recommendation to prospective tenants who used housing vouchers for five years. Although Saferent legally does not admit wrongdoing, it is unusual for a tech company to accept changes to its core product as part of a settlement. A more common outcome of such agreements is financial agreements.
“While SafeRent continues to believe that SRS scores comply with all applicable laws, litigation is time-consuming and costly,” company spokeswoman Yazmin Lopez said in a statement. “Defending SRS scores in this case would be a waste of time and resources that could be better used by SafeRent to fulfill its core mission of providing housing providers with the tools they need to screen applicants. It has become increasingly clear.”
New AI landlord
Tenant screening systems like SafeRent are often used as a way to “avoid” direct interaction with prospective tenants and shift responsibility for refusals to computer systems, said Louie and the plaintiffs in the lawsuit. said Todd Kaplan, one of the attorneys representing the company. company.
The property management company told Louis that it decided to deny her based solely on the software, but the SafeRent report says it did not set the criteria for what the score needed to be for an application to be accepted. was a management company.
Still, even for those involved in the application process, how the algorithm works is opaque. The property manager who showed Louis the apartment said he didn’t know why Louis was having trouble renting the apartment.
“They’re inputting a lot of information, and SafeRent is coming up with its own scoring system,” Kaplan said. “It becomes difficult to predict how people will see themselves on Safe Rent. Not only applying tenants, but even landlords, don’t know the details of Safe Rent scores.”
As part of Louie’s settlement with SafeRent, approved Nov. 20, the company will not use a scoring system or recommend accepting or rejecting tenants if they are using housing vouchers. I can no longer do that. If the company devises a new scoring system, it is required to have it independently verified by a third-party fair housing organization.
“By removing the thumbs up and down, tenants can really say, ‘I’m a great tenant,'” Kaplan said. “It allows for more personal decisions.”
One study found that nearly all of the 92 million people in the U.S. who are considered low-income are exposed to AI decisions in basic areas of their lives such as employment, housing, health care, education, and government assistance. It is said that there is New report on the harms of AI
By Kevin de Liban, a lawyer who represented low-income people as a member of the Legal Aid Society. Founder of a new AI justice organization called tectonic justice
Derivan began researching these systems in 2016, when he discovered that automated decision-making that reduced human input suddenly left state-funded home care out of reach for extended periods of time. This happened when I received a consultation from some patients. In one case, the state’s Medicaid payments depended on the program determining that the patient’s leg was intact because of the amputation.
“When we saw this, we realized we shouldn’t postpone.” [AI systems] As a kind of very rational decision-making method,” Derivan said. He said these systems make assumptions based on “junk statistical science” that create what he called “absurdity.”
In 2018, after Derivan sued the Arkansas Department of Human Services on behalf of these patients over its decision-making process, the state Legislature ruled that the department could not automate home care assignment decisions for patients. was lowered. While Derivan’s system was an early victory in the fight against harm caused by algorithmic decision-making, its use continues across the country in other areas such as employment.
Despite flaws, there are few regulations to curb AI adoption.
There are few laws restricting the use of AI, especially in making critical decisions that can impact a person’s quality of life, and liability for those harmed by automated decisions. I have very few means.
Research conducted by consumer report
A study released in July found that a majority of Americans are “uncomfortable with the use of AI and algorithmic decision-making technologies in key life moments related to housing, employment, and health care.” “I’m there.” Respondents said they are concerned about not knowing what information AI systems use to make assessments.
Unlike in Louis’ case, people are often not informed when algorithms make decisions about their lives, making it difficult to challenge or challenge those decisions.
“The existing laws we have in place may be helpful, but they can only provide so much,” Derivan said. “Market forces don’t work when it comes to poor people. All the incentives are basically to create worse technology, and there’s no incentive for companies to create better options for low-income people.”
Federal regulators under President Joe Biden’s administration have made several attempts to keep up with the rapidly evolving AI industry. The President issued an executive order containing a framework aimed at partially addressing discrimination-related risks in national security and AI systems. But Donald Trump has vowed to roll back those efforts and cut regulations, including Biden’s executive order on AI.
So lawsuits like Louis’ may become an even more important tool in holding AI accountable. Already in litigation attracted interest
It is an agency of the U.S. Department of Justice and the Department of Housing and Urban Development, both of which deal with discriminatory housing policies that affect protected classes.
“To the extent that this is a landmark case, it has the potential to provide a roadmap for how to consider these cases and encourage other agendas,” Kaplan said.
Still, without regulation, Derivan said it would be difficult to hold these companies accountable. Because litigation is time-consuming and expensive, companies may find workarounds or ways to build similar products for people who are not subject to class action lawsuits. “You can’t bring in these types of cases every day,” he said.
FThe fake images, created using generative artificial intelligence techniques, aim to stoke fears of a migrant “invasion” among leaders like Emmanuel Macron and far-right parties in Western Europe. This political weaponization is a growing concern.
Experts point to this year’s European Parliament elections as the starting point for the far right in Europe to deploy AI-based electoral campaigns, which have since continued to expand.
Recently, anti-immigrant content on Facebook came under scrutiny by Mark Zuckerberg’s independent oversight board as it launched an investigation. German accounts featuring AI-generated images with anti-immigration rhetoric will be examined by the supervisory board.
AI-generated right-wing content is on the rise on social media platforms in Europe. Posts from extremist groups depict disturbing images, like women and children eating insects, perpetuating conspiracy theories about “global elites.”
The consistent use of AI-generated images with no identifying marks by far-right parties and movements across the EU and UK suggests a coordinated effort in spreading their message.
According to Salvatore Romano, head of research at AI Forensics, the AI content being shared publicly is just the beginning, with more concerning material circulating in private and official channels.
William Alcorn, a senior research fellow, notes that the accessibility of AI models appeals to fringe political groups seeking to exploit new technologies for their agendas.
Some of the AI-generated images posted on X by the L’Europe Sans Eux account. Illustration: @LEuropeSansEux
AI technology makes content creation accessible without coding skills, which has normalized far-right views. Mainstream parties remain cautious about using AI in campaigning, while extremists exploit it without ethical concerns.
Germany
Supporters of Germany’s far-right party AfD use AI image generators to promote anti-immigration messages. Meta’s content moderation committee reviewed an image showing anti-immigrant sentiments against a blonde, blue-eyed woman.
AI-powered campaign ads by AfD’s Brandenburg branch contrast an idealized Germany with scenes of veiled women and LGBTQ+ flags. Reality Defender, a deepfake detection firm, highlighted the speed at which such images can be generated.
Many of the AI-generated images look realistic upon closer inspection.
On the road
Did you notice that the image above was created by artificial intelligence? It can be difficult to spot AI-generated images, videos, audio, and text as technological advances make them indistinguishable from human-created content and more susceptible to manipulation by disinformation. However, knowing the current state of AI technology being used to create disinformation and the various signs that indicate what you're seeing may be fake can help you avoid being fooled.
World leaders are concerned. World Economic Forum ReportMisinformation and disinformation “have the potential to fundamentally disrupt electoral processes in multiple economies over the next two years,” while easier access to AI tools “has already led to an explosion in counterfeit information and so-called 'synthetic' content, from sophisticated voice clones to fake websites.”
While the terms misinformation and disinformation both refer to false or inaccurate information, disinformation is information that is deliberately intended to deceive or mislead.
“The problem with AI-driven disinformation is the scale, speed and ease with which it can be deployed,” he said. Hany Farid “These attacks no longer require nation-state actors or well-funded organizations — any individual with modest computing power can generate large amounts of fake content,” the University of California, Berkeley researchers said.
He is a pioneer of generative AI (See glossary below“AI is polluting our entire information ecosystem, calling into question everything we read, see, and hear,” and his research shows that AI-generated images and sounds are often “almost indistinguishable from reality.”
However, Farid and his colleagues' research reveals that there are strategies people can follow to reduce the risk of falling for social media misinformation and AI-created disinformation.
How to spot fake AI images
Remember when we saw the photo of Pope Francis wearing a down jacket? Fake AI images like this are becoming more common as new tools based on viral models (See glossary below), now anyone can create images from simple text prompts. study Google's Nicolas Dufour and his colleagues found that since the beginning of 2023, the share of AI-generated images in fact-checked misinformation claims has risen sharply.
“Today, media literacy requires AI literacy.” Negar Kamali at Northwestern University in Illinois in 2024 studyShe and her colleagues identified five different categories of errors in AI-generated images (outlined below) and offered guidance on how people can spot them on their own. The good news is that their research shows that people are currently about 70% accurate at detecting fake AI images. Online Image Test To evaluate your detective skills.
5 common types of errors in AI-generated images:
Socio-cultural impossibilities: Does the scene depict behavior that is unusual, unusual, or surprising for a particular culture or historical figure?
Anatomical irregularities: Look closely. Do the hands or other body parts look unusual in shape or size? Do the eyes or mouth look strange? Are any body parts fused together?
Stylistic artifacts: Do the images look unnatural, too perfect, or too stylized? Does the background look odd or missing something? Is the lighting strange or variable?
Functionality Impossibility: Are there any objects that look odd, unreal or non-functional? For example, a button or belt buckle in an odd place?
Violation of Physics: Do the shadows point in different directions? Does the mirror's reflection match the world depicted in the image?
Strange objects or behaviors can be clues that an image was created by AI.
On the road
How to spot deepfakes in videos
An AI technology called generative adversarial networks (See glossary belowSince 2014, deepfakes have enabled tech-savvy individuals to create video deepfakes, which involve digitally manipulating existing videos of people to swap out different faces, create new facial expressions, and insert new audio with matching lip syncing. This has enabled a growing number of fraudsters, state-sponsored hackers, and internet users to produce video deepfakes, potentially allowing celebrities such as Taylor Swift and everyday people alike to unwillingly appear in deepfake porn, scams, and political misinformation and disinformation.
The AI techniques used to spot fake images (see above) can also be applied to suspicious videos. What's more, researchers from the Massachusetts Institute of Technology and Northwestern University in Illinois have A few tips There has been a lot of research into how to spot these deepfakes, but it's acknowledged that there is no foolproof method that will always work.
6 tips to spot AI-generated videos:
Mouth and lip movements: Are there moments when the video and audio are not perfectly in sync?
Anatomical defects: Does your face or body look strange or move unnaturally?
face: Look for inconsistencies in facial smoothness, wrinkles around the forehead and cheeks, and facial moles.
Lights up: Is the lighting inconsistent? Do shadows behave the way you expect them to? Pay particular attention to the person's eyes, eyebrows, and glasses.
hair: Does your facial hair look or move oddly?
Blink: Blinking too much or too little can be a sign of a deepfake.
A new category of video deepfakes is based on the diffusion model (See glossary below), the same AI technology behind many image generators, can create entirely AI-generated video clips based on text prompts. Companies have already tested and released commercial versions of their AI video generators, potentially making them easy to create for anyone without requiring special technical knowledge. So far, the resulting videos tend to feature distorted faces and odd body movements.
“AI-generated videos are likely easier for humans to detect than images because they contain more motion and are much more likely to have AI-generated artifacts and impossibilities,” Kamali says.
How to spot an AI bot
Social media accounts controlled by computer bots have become commonplace across many social media and messaging platforms. Many of these bots also leverage generative AI techniques such as large-scale language models.See glossary below) will be launched in 2022, making it easier and cheaper to mass-produce grammatically correct, persuasive, customized, AI-written content through thousands of bots for a variety of situations.
“It's now much easier to customize these large language models for specific audiences with specific messages.” Paul Brenner At the University of Notre Dame in Indiana.
Brenner and his colleagues found that volunteers were only able to distinguish between AI-powered bots and humans when About 42 percent Even though participants were told they might interact with a bot, they would still be able to test their bot-detection skills. here.
Brenner said some strategies could help identify less sophisticated AI bots.
5 ways to tell if a social media account is an AI bot:
Emojis and hashtags: Overusing these can be a sign.
Unusual phrases, word choices, and analogies: Unusual language can indicate an AI bot.
Repetition and Structure: Bots may repeat words that follow a similar or fixed format, or may overuse certain slang terms.
Ask a question: These may reveal the bot's lack of knowledge on a topic, especially when it comes to local locations and situations.
Assume the worst: If the social media account is not a personal contact and its identity has not been clearly verified or confirmed, it may be an AI bot.
How to detect audio duplication and audio deepfakes
Voice Clone (See glossary belowAI tools have made it easier to generate new voices that can imitate virtually anyone, which has led to a rise in audio deepfake scams replicating the voices of family members, business executives and political leaders such as US President Joe Biden. These are much harder to identify compared to AI-generated videos and images.
“Voice clones are particularly difficult to distinguish between real and fake because there are no visual cues to help the brain make that decision,” he said. Rachel TobackCo-founder of SocialProof Security, a white hat hacking organization.
Detecting these AI voice deepfakes can be difficult, especially when they're used in video or phone calls, but there are some common sense steps you can take to help distinguish between real human voices and AI-generated ones.
4 steps to use AI to recognize if audio has been duplicated or faked:
Public figures: If the audio clip is of an elected official or public figure, review whether what they say is consistent with what has already been publicly reported or shared about that person's views or actions.
Look for inconsistencies: Compare your audio clip to previously authenticated video or audio clips featuring the same person. Are there any inconsistencies in the tone or delivery of the voice?
Awkward Silence: If you're listening to a phone call or voicemail and notice that the speaker takes unusually long pauses while speaking, this could be due to the use of AI-powered voice duplication technology.
Weird and redundant: Robotic or unusually verbose speech may indicate that someone is using a combination of voice cloning to mimic a person's voice and large language models to generate accurate phrasing.
Out of character behaviour by public figures like Narendra Modi could be a sign of AI
Follow
Technology will continue to improve
As it stands, there are no consistent rules that can consistently distinguish AI-generated content from authentic human content. AI models that can generate text, images, videos, and audio will surely continue to improve, allowing them to quickly generate content that looks authentic without obvious artifacts or mistakes. “Recognize that, to put it mildly, AI is manipulating and fabricating images, videos, and audio, and it happens in under 30 seconds,” Tobac says. “This makes it easy for bad actors looking to mislead people to quickly subvert AI-generated disinformation, which can be found on social media within minutes of breaking news.”
While it's important to hone our ability to spot AI-generated disinformation and learn to ask more questions about what we read, see and hear, ultimately this alone won't be enough to stop the damage, and the responsibility for spotting it can't be placed solely on individuals. Farid is among a number of researchers who argue that government regulators should hold accountable the big tech companies that have developed many of the tools that are flooding the internet with fake, AI-generated content, as well as startups backed by prominent Silicon Valley investors. “Technology is not neutral,” Farid says. “The tech industry is selling itself as not having to take on the responsibilities that other industries take on, and I totally reject that.”
Diffusion Model: An AI model that learns by first adding random noise to data (such as blurring an image) and then reversing the process to recover the original data.
Generative Adversarial Networks: A machine learning technique based on two neural networks that compete by modifying the original data and attempting to predict whether the generated data is genuine or not.
Generative AI: A broad class of AI models that can generate text, images, audio, and video after being trained on similar forms of content.
Large-scale language models: A subset of generative AI models that can generate different forms of written content in response to text prompts, and in some cases translate between different languages.
Voice CloneA potential way to use AI models to create a digital copy of a person's voice and generate new voice samples with that voice.
circleWhen Donald Trump posted a series of AI-generated images that falsely portrayed Taylor Swift and her fans as supporters of his presidential campaign, he inadvertently endorsed the efforts of an opaque non-profit organization aiming to fund prominent right-wing media figures and with a track record of disseminating misinformation.
Among the modified images shared by Trump on Truth Social were digitally altered pictures of young women sporting “Swifties for Trump” shirts, created by the John Milton Freedom Foundation. This Texas-based non-profit, established last year, claims to advocate for press freedom while also seeking to “empower independent journalists” and “fortify the pillars of our democracy.”
President Trump posts AI imitation of Taylor Swift and her fans Photo: Nick Robbins Early/Truth Social
Screenshot of @amuse’s “Swifties for Trump” tweet. Photo: Nick Robbins Early/Truth Social/X
The foundation’s operations seem to involve sharing clickbait content on X and collecting substantial donations, with plans for a “fellowship program” chaired by a high school student that intends to grant $100,000 to prominent Twitter figures like Glenn Greenwald, Andy Ngo, and Lara Logan. Despite inquiries into the foundation’s activities and fellowship program through tax records, investor documents, and social media posts, the John Milton Freedom Foundation did not offer any comment.
Having spent months endorsing conservative media figures and echoing Elon Musk’s allegations of free speech suppression from the political left, one of the foundation’s messages eventually reached President Trump and his massive following.
Experts caution about the potential dangers of generative AI in creating deceptive content that could impact election integrity. The proliferation of AI-generated content, including portrayals of Trump, Kamala Harris, and other politicians, has increased since Musk’s xAI introduced the unregulated Grok image generator. The John Milton Freedom Foundation is just one among many groups flooding social media with AI-generated content.
Niche nonprofit’s AI junk reaches President Trump
Amid the spread of AI images on X, the conservative @amuse account shared an AI-generated tweet from Swift fans with its over 300,000 followers. The post was tagged as “Satire,” marked with “Sponsored by the John Milton Freedom Foundation.” Trump then reposted screenshots of these tweets on Truth Social.
The @amuse account, managed by Alexander Muse, enjoys a broad reach with approximately 390,000 followers and frequent daily postings. Muse, indicated as a consultant in the Milton Foundation’s investor prospectus and a writer of right-wing commentary on Substack, has numerous ties to the @amuse account. The AI content includes depictions like Trump vs. Darth Vader and sexualized images of Harris, with the prominent watermark “Sponsored by: John Milton Freedom Foundation.”
A song about immigration, whose music, vocals, and artwork were all created using artificial intelligence, has entered the top 50 most-listened-to songs in Germany, possibly a first in a major music market.
“Verknallt in einen Talahon” is a parody song that blends ’60s schlager pop with modern lyrics based on racial stereotypes about immigrants.
The song reached the 48th spot in Germany, the world’s fourth-largest music market. Within a month of its release, it garnered 3.5 million streams on Spotify, ranking third on the platform’s charts in the Global Viral Charts.
The songwriter of the song, Joshua Wagbinger, known as Butterbro, mentioned that he composed the song’s chorus by inputting his lyrics into Udio, an AI tool that generates vocals and instruments from text prompts.
He then added the verses using music tools after the chorus gained popularity on TikTok. In an interview with German podcast Die Klangküche (Sound Kitchen), the IT specialist and amateur musician expressed his aim to turn the song into a creative project.
The song has garnered attention in the German media not only for its production methods but also for its lyrical content. Translated as “In Love with Tarahon,” the song references the German version of the Arabic phrase “taeal huna,” commonly used in Germany to describe groups of young men with immigrant backgrounds.
The lyrics satirize the classic “good girl falls for the bad boy” narrative from ’60s songs like “Leader of the Pack” by The Shangri-Las, portraying the AI-generated love interest as someone who wears luxury brands and gives off a strong perfume scent.
Waghubinger aimed to create a song that humorously addressed macho behavior without discrimination and set out to make it viral on social media, as he revealed in an interview with Die Klangküche magazine.
However, Marie-Louise Goldman, culture editor at the conservative tabloid Die Welt, raised concerns about the song potentially straddling the line between parody and discrimination.
Felicia Agaye, a writer for the music magazine Diffus, expressed concerns about the song’s popularity and how the term “Tarahon” had turned into an insult against immigrants among young people in Germany and Austria.
Numerous AI-generated songs in a similar style have been circulating on German social media, blending ’60s MOR schlager pop with suggestive lyrics.
Music producers are increasingly utilizing AI to create vocals resembling those of famous artists. In 2023, The Beatles released “Now and Then,” featuring an AI-assisted rendition of John Lennon’s vocals.
A song using Tupac Shakur’s voice generated by AI was briefly posted on Canadian rapper Drake’s Instagram account in April but was taken down after legal threats from the late rapper’s estate.
Researchers at the University of Reading conducted a study where they secretly submitted exam answers generated by AI, tricking professors into giving higher grades than real students without their knowledge.
In this project, fake student identities were created to submit unedited responses generated by ChatGPT-4 in an online assessment for an undergraduate course.
University graders, unaware of the project, only flagged one out of 33 responses, with the AI-generated answers receiving scores higher than the students’ average.
The study revealed that AI technologies like ChatGPT are nearing the ability to pass the “Turing test”, a benchmark for human-like AI performance without detection.
Described as the “largest and most comprehensive blinded study of its kind,” the authors warn of potential implications for how universities evaluate students.
Dr. Peter Scarfe, an author and Associate Professor at the University of Reading, emphasized the importance of understanding AI’s impact on educational assessment integrity.
The study predicts that AI’s advancement could lead to increased challenges in maintaining academic integrity.
Experts foresee the end of take-home exams and unproctored classes as a result of this study.
Professor Karen Yun from the University of Birmingham highlighted how generative AI tools could facilitate undetectable cheating in exams.
The study suggests integrating AI-generated teaching materials into university assessments and fostering awareness of AI’s role in academic work.
Universities are exploring alternatives to take-home online exams to focus on real-life application of knowledge.
Concerns arise regarding potential “de-skilling” of students if AI is heavily relied upon in academic settings.
The authors ponder the ethics of using AI in their study and question if such utilization should be considered cheating.
A spokesman from the University of Reading affirmed that the research was conducted by humans.
Google announced on Thursday that it is updating the summaries of search results generated by artificial intelligence. Check out their blog post here. The company acknowledged issues with the feature, such as providing strange or inaccurate answers, and plans to limit searches that return AI-generated summaries.
Liz Reid, Google’s head of search, stated that the company has implemented restrictions on the types of searches that trigger AI Overview results, specifically excluding satire or humorous content. Google has also addressed a few cases where AI Overviews violated content policies, which occurred in a small fraction of searches.
Google introduced the AI Overview feature in the US this month, but it quickly encountered problems with misinterpreting information and using sources like The Onion and Reddit for generating answers. This led to widespread mockery and the creation of memes highlighting the tool’s failures.
Despite Google’s initial promotion of the AI Overview feature as a key part of integrating artificial intelligence into its services, the company faced criticism due to its errors. This follows a previous incident earlier this year where Google’s AI tool inserted people of color into historical images incorrectly.
In a blog post, Google explained the issues with AI Overviews, attributing errors to missing information from rare or unusual searches. The company denies deliberately manipulating the feature to produce inaccurate results.
Despite some of the viral posts originating from quirky searches, there were also concerning examples, such as an AI-generated summary perpetuating a false conspiracy theory about Barack Obama. Google has made technical improvements to address these issues.
Experts in artificial intelligence point out that Google’s AI Overview issues are indicative of broader challenges, including the reliability of AI in assessing factual accuracy and the risks of automating access to information.
Google states that user feedback indicates satisfaction with search results thanks to the AI Summary feature, but the long-term effects of the company’s AI tool changes remain uncertain. Concerns have been raised by website owners about potential impacts on traffic and revenue, as well as researchers worried about Google’s increasing control over online information.
Her voice seemed off, not quite right, and it meandered in unexpected ways.
Viewers familiar with science presenter Liz Bonnin’s Irish accent were puzzled when they received an audio message seemingly from her endorsing a product from a distant location.
It turned out the message was a fake, created by artificial intelligence to mimic Bonnin’s voice. After spotting her image in an online advertisement, Bonnin’s team investigated and found out it was a scam.
Bonin, known for her work on TV shows like Bang Goes The Theory, expressed her discomfort with the imitated voice, which she described as shifting from Irish to Australian to British.
The person behind the failed campaign, Incognito CEO Howard Carter, claimed he had received convincing audio messages from someone posing as Bonin, leading him to believe it was the real presenter.
The fake Bonin provided contact details and even posed as a representative from the Wildlife Trust charity, negotiating a deal for the advertisement campaign. Carter eventually realized he had been scammed after transferring money and receiving the image for the campaign.
AI experts confirmed that the voice memos were likely artificially generated due to inconsistencies in accent and recitation speed. Bonin warned about the dangers of AI misuse and stressed the importance of caution.
Incognito reported the incident to authorities and issued a statement cautioning others about sophisticated scams involving AI. They apologized to Bonin for any unintended harm caused by the deception.
Neither the BBC nor the Wildlife Trust responded to requests for comments on the incident.
Child sexual exploitation is increasing online, with artificial intelligence generating new forms such as images and videos related to child sexual abuse.
Reports of online child abuse to NCMEC increased by more than 12% from the previous year to over 36.2 million in 2023, as announced in the organization’s annual CyberTipline report. Most reports were related to the distribution of child sexual abuse material (CSAM), including photos and videos. Online criminals are also enticing children to send nude images and videos for financial gain, with increased reports of blackmail and extortion.
NCMEC has reported instances where children and families have been targeted for financial gain through blackmail using AI-generated CSAM.
The center has received 4,700 reports of child sexual exploitation images and videos created by generative AI, although tracking in this category only began in 2023, according to a spokesperson.
NCMEC is alarmed by the growing trend of malicious actors using artificial intelligence to produce deepfaked sexually explicit images and videos based on real children’s photos, stating that it is devastating for the victims and their families.
The group emphasizes that AI-generated child abuse content hinders the identification of actual child victims and is illegal in the United States, where production of such material is a federal crime.
In 2023, CyberTipline received over 35.9 million reports of suspected CSAM incidents, with most uploads originating outside the US. There was also a significant rise in online solicitation reports and exploitation cases involving communication with children for sexual purposes or abduction.
Top platforms for cybertips included Facebook, Instagram, WhatsApp, Google, Snapchat, TikTok, and Twitter.
Out of 1,600 global companies registered for the CyberTip Reporting Program, 245 submitted reports to NCMEC, including US-based internet service providers required by law to report CSAM incidents to CyberTipline.
NCMEC highlights the importance of quality reports, as some automated reports may not be actionable without human involvement, potentially hindering law enforcement in detecting child abuse cases.
NCMEC’s report stresses the need for continued action by Congress and the tech community to address reporting issues.
a
If you squint, you might think it’s a photograph at first glance. His Facebook ad for the Queensland Symphony Orchestra (QSO) shows a couple cuddling in the front row of a concert hall.
But take a second look and you’ll see why this caused an uproar among creative workers and the unions that represent them. The couple’s tangled fingers are too big and too many. It has a strange sheen and looks like a wax figure. She is wearing a jewel-encrusted tulle dress and he is wearing a tuxedo, but he is also wearing a jewel-encrusted tulle dress. Also, she has a large cube on her lap.
“Why don’t you do something different this Saturday? Come see the orchestra play.” read the ad. This was clearly created by someone who had never seen an orchestra perform, and it shows rows of violinists sitting in the audience, often playing with three hands, one hand, or no hands at all. I imagine it is.
Queensland Symphony Orchestra ad created by AI. Photo: Facebook
This photo, shared by QSO on February 22nd, appears to be sourced from stock image aggregator Shutterstock. where is it listed Under the AI prompt, “Two people go on a date at a romantic indoor classical music concert.”
On Tuesday, industry group Media Entertainment Arts Alliance (MEAA) called it “The worst AI-generated artwork I’ve ever seen.”
“This is inappropriate, unprofessional, and disrespectful to the audience and the QSO musicians,” they added. “Creative workers and audiences deserve better from arts organizations.”
The post also received criticism in the replies. One comment reads, “Next time, please use a paid photographer.” Another person criticized it, calling it “terrible, an arts organization that literally doesn’t use artists.”
Classical Music Industry Blog Slipped Disc The ad was first reported by claimed that it caused “uproar” and “fury” among the orchestra’s players.
The Queensland Symphony Orchestra did not comment on the claims but justified its use of AI imagery in a statement to Guardian Australia. We are an orchestra for all Queenslanders, so we will continue to use new marketing tools and techniques.
Daniel Boudot is a Sydney-based freelance photographer who is often hired by major performing arts companies for promotional images and production shots. Although he hasn’t yet seen his own work being taken over by AI, he says: “I’m getting more and more briefs where mockups are done by AI, so design agencies and marketers are I would be using AI to visualize a concept, and then it would be presented to me in a way that makes it a reality. This is a reasonable use of AI because it doesn’t take away anyone’s job.”
He called QSO advertising “not well thought out.”
“For me, this should have been a mock-up for the actual shoot. It’s a great concept. But have real musicians playing in a real theater.”
“I sympathize, too. It would cost thousands of dollars to make it happen in the real world.
“But the images they used are terrible, so that doesn’t mean photographers will lose their jobs. But I hope that as technology advances, it doesn’t become the new norm.”
AI-generated images have sparked a lot of discussion and outrage since their rise in recent years due to the accessibility of consumer tools like Dall-E and Midjourney. Much of the controversy revolves around the potential for AI to devalue or plagiarize human artists.
In the past 18 months, at least two art awards have made headlines after winners were found to have used AI to generate or alter their works. “I’m not going to apologize for that.” Jason M. Allen said, winner of the Digital Artist Award at the 2022 Colorado State Fair. “I won the award. I didn’t break any rules.”
In 2023, German artist Boris Eldergsen won the Sony World Photography Award for his AI-generated black and white photo of two women. He later admitted he had “entered as a cocky monkey” to incite discourse on AI ethics and refused to return the award.
Last September, the Australian Financial Review included an AI-generated image of the subject in its annual list of the country’s 10 most culturally influential people.
“How quickly can you tell it’s fake?” the publication asked. Editor Justify your decisions at the time.
For many, the answer was “surprisingly fast,” given the eccentricity of the marionette-like Margot Robbie and multi-fingered Sam Kerr.
Google has temporarily blocked a new artificial intelligence model that generates images of people after it depicted World War II German soldiers and Vikings as people of color.
The company announced that its Gemini model would be used to create images of people after social media users posted examples of images generated by the tool depicting historical figures of different ethnicities and genders, such as the Pope and the Founding Fathers of the United States. announced that it would cease production.
“We are already working to address recent issues with Gemini's image generation functionality. While we do this, we will pause human image generation and re-release an improved version soon. “We plan to do so,” Google said in a statement.
Google did not mention specific images in its statement, but examples of Gemini's image results are widely available on X, along with commentary on issues surrounding AI accuracy and bias. 1 former Google employee “It was difficult to get Google Gemini to acknowledge the existence of white people,” he said.
1943 illustration of German soldier Gemini. Photo: Gemini AI/Google
Jack Krawczyk, a senior director on Google's Gemini team, acknowledged Wednesday that the model's image generator (not available in the UK and Europe) needs tweaking.
“We are working to improve this type of depiction immediately,” he said. “His AI image generation in Gemini generates a variety of people, which is generally a good thing since people all over the world are using it. But here it misses the point.”
We are already working to address recent issues with Gemini's image generation capabilities. While we do this, we will pause human image generation and plan to re-release an improved version soon. https://t.co/SLxYPGoqOZ
In a statement on X, Krawczyk added that Google's AI principles ensure that its image generation tools “reflect our global user base.” He added that Google would continue to do so for “open-ended” image requests such as “dog walker,” but added that response prompts have a historical trend. He acknowledged that efforts are needed.
“There's more nuance in the historical context, and we'll make further adjustments to accommodate that,” he said.
We are aware that Gemini introduces inaccuracies in the depiction of some historical image generation and are working to correct this immediately.
As part of the AI principles https://t.co/BK786xbkeywe design our image generation capabilities to reflect our global user base and…
Reports on AI bias are filled with examples of negative impacts on people of color.a Last year's Washington Post investigation I showed multiple examples of image generators show prejudice Not just against people of color, but also against sexism. Although 63% of U.S. food stamp recipients are white, the image generation tool Stable Diffusion XL shows that food stamp recipients are primarily non-white or dark-skinned. It turned out that there was. Requesting images of people “participating in social work” yielded similar results.
Andrew Rogoiski, from the University of Surrey's Institute for Human-Centered AI, said this is “a difficult problem to reduce bias in most areas of deep learning and generative AI”, and as a result there is a high likelihood of mistakes. said.
“There is a lot of research and different approaches to eliminating bias, from curating training datasets to introducing guardrails for trained models,” he said. “AI and LLM are probably [large language models] There will still be mistakes, but it is also likely that they will improve over time. ”
This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.
Strictly Necessary Cookies
Strictly Necessary Cookie should be enabled at all times so that we can save your preferences for cookie settings.