Why Are We Drawn to Fake Lips but Reluctant About Fake Meat?

New Scientist: science news and long reads from expert journalists, covering developments in science, technology, health and the environment.

With the complexities of modern consumer psychology, we are increasingly comfortable with the idea of injecting synthetic substances into our faces, yet we hesitate to consume them.

The cosmetics sector is thriving. Dermal fillers and wrinkle-reducing neurotoxins have become standard procedures, and the injectables market is projected to more than double by 2030.

At the same time, jewelry has also experienced a synthetic makeover. Initially criticized for being artificial, lab-grown diamonds are now gaining market traction, as sales of natural gems are declining. Luxury buyers seem unfazed by the term “fake,” as long as the allure remains.

Yet while we embrace synthetics in beauty, many of us draw the line at lunchtime. Plant-based substitutes and lab-cultured proteins often face public resistance, despite their clear advantages.

This skepticism may stem from our intrinsic respect for “nature,” viewed as a hallmark of purity, credibility, and safety, a tendency psychologists call naturalness bias. It helps explain our aversion to “synthetic meat,” even when its risks are lower than those of industrial agriculture.

This preference isn’t unreasonable. For early humans, avoiding unknown foods was essential for survival, as strong disgust responses helped curb the consumption of harmful items. Yet our instincts have not kept pace with innovation, and what is currently seen as “natural” may harbor significant risks: hormone-laden beef, for instance, carries the heavy environmental costs of animal agriculture.

Unlike jewelry and cosmetics, food continues to provoke visceral reactions, which presents a serious challenge. As we seek to meet the protein needs of a global population projected to approach 10 billion by mid-century, food innovation isn’t just beneficial—it’s crucial. The demands of land, water, and emissions from livestock farming are unsustainable at current scales. Cultivated meat and precision fermentation—bioengineering organisms like yeast to produce proteins—are viable alternatives, yet consumer skepticism stemming from outdated naturalistic biases has hindered their acceptance.

This reluctance isn’t based merely on taste or health. Blind taste tests show that plant-based proteins can often replicate the mouthfeel of meat, frequently matching or exceeding nutritional profiles. Economically, alternative proteins, particularly plant-based options, are becoming more affordable. The real challenges lie in psychological barriers and a fear of technological advancements.

One way to navigate this is through transparency. Educating consumers about alternative protein production processes and comparing them to familiar operations like cheese-making and brewing can help build trust. Presenting alternative proteins as an evolution of tradition rather than a radical departure can also aid acceptance.

Additionally, we need to challenge the myth that today’s meat is somehow “natural.” A typical supermarket pack of sausages results from a lengthy process involving feed additives, pharmaceuticals, genetic manipulation, and large-scale industrial practices. If we’re apprehensive about “synthesis,” perhaps it’s worth considering what conventional meat production truly entails.

Our biases towards “natural” once ensured survival. Now, they may obstruct our embrace of technologies vital for long-term food security, environmental stability, and public health. After all, if we can welcome synthesis in the form of anti-aging injections, lip fillers, and lab-grown diamonds, it might be time to extend that pragmatism to our diets.

Sophie Atwood is a behavioral science consultant at Behavior Global, UK.


Source: www.newscientist.com

Scamazon: Targeting Prime Subscribers with Fake Emails

As a regular shopper on Amazon, I pay £95 annually for my Prime subscription, so I pay close attention to any email warning of a price increase.


However, any emails featuring a “cancel” button are scams created by fraudsters seeking to obtain your account login and payment credentials.

In response to a recent increase in fake messages, Amazon has sent warning emails to its more than 200 million Prime members worldwide. The company says it aims to “protect the trust of our users by safeguarding our brand” and to “educate consumers” in order to prevent impersonation scams.


What does a scam look like?

Fraudulent emails may inform you of an unexpected automatic renewal of your Amazon Prime subscription (currently £95 per year or £8.99 per month in the UK).

These messages might include personal data obtained from other sources to appear legitimate, and may even feature a “Cancel Subscription” button redirecting you to a fake Amazon login page.

This isn’t the only method scammers use to exploit Amazon shoppers. Earlier this year, the retailer highlighted a notable spike in UK phone-spoofing scams, along with fake social media profiles pretending to assist with customer complaints.

What do these messages request?

These communications pressure you to act fast, urging you to click through and hand over personal and payment information.




Last year, Amazon helped shut down over 55,000 phishing websites and 12,000 phone numbers. Photo: Leon Neal/Getty Images

What should you do?

Avoid clicking any links in these emails; scammers aim to steal your logins and other confidential information. You can either disregard the email or report it at amazon.co.uk/reportascam.

When scams occur off the platform, consumer reports help Amazon’s systems identify the parties responsible. Last year, over 55,000 phishing sites and 12,000 fraudulent phone numbers were taken down.

Amazon encourages consumers to report suspicious messages to safeguard their accounts and help refer malicious actors to law enforcement.

If you want to verify your Prime membership status, open the Amazon mobile app or go to Amazon.co.uk directly. Choose Prime from the main menu to check your membership status, renewal dates, and plan details.

To confirm whether a message is truly from Amazon, visit the Message Center under your account tab. Legitimate messages will be displayed there.

If you mistakenly click a dubious link, be vigilant with your credit or debit card statements for unexpected charges and report any fraudulent transactions to your bank immediately.

To avoid falling victim to scams, Amazon recommends using the app or typing Amazon.co.uk into your browser (bookmark it for ease). Remember, the company does not ask for sensitive information outside of its website or app.

Consider enabling two-step verification for additional security. You can set this up in your account’s “Login and Security Settings” or at Amazon.co.uk/2SV. This feature will require you to enter a code each time you log in, along with your password.

Another option is to enable a passkey, which lets you sign in to your account using the PIN, face, or fingerprint that unlocks your device.

Source: www.theguardian.com

AI-Generated Fake Videos of Diddy Trials Go Viral on YouTube, Garnering Millions of Views

This piece was reported by Indicator, a publication focused on unearthing digital misinformation, in partnership with the Guardian.

Numerous YouTube channels have blended AI-generated visuals with misleading claims surrounding Sean “Diddy” Combs’s high-profile trial, attracting tens of millions of views and profiting from the spread of misinformation.

Data from YouTube reveals that 26 channels have garnered a staggering 705 million views from approximately 900 AI-influenced videos about Diddy over the last year.

These channels typically employ a standardized approach. Each video features an enticing title and an AI-generated thumbnail fabricating connections between celebrities and Combs through outrageous claims, such as that a celebrity’s testimony forced them into inappropriate acts or revealed shocking secrets about Combs. Thumbnails regularly showcase well-known figures in courtroom settings alongside images of Combs, many featuring suggestive quotes designed to grab attention, including phrases like “f*cked me me me me me of me,” “ddy f*cked bieber life,” and “she sold him to Diddy.”


Channels peddling Diddy “slop,” a term for low-quality AI-generated content, have previously demonstrated a penchant for disseminating false claims about various celebrities. Most of the 26 channels appear to be either repurposed or newly created, and at least 20 are eligible for advertising revenue.

Spreading sensational and erroneous Diddy AI slop has become a quick route to monetization on YouTube. Wanner Aarts, who manages numerous YouTube channels that use AI-generated content, described his strategies for making money on the platform, though he says he has stayed away from the Diddy trend himself.

“If someone asked, ‘How can I make $50,000 quickly?’ the first thing might be akin to dealing drugs, but the second option likely involves launching a Diddy channel,” Aarts (25) stated.

Fabricated Celebrity Involvement

Indicator analyzed hundreds of thumbnails and titles making false claims about celebrities including Brad Pitt, Will Smith, Justin Bieber, Oprah Winfrey, Eddie Murphy, Leonardo DiCaprio, Dwayne “The Rock” Johnson, 50 Cent, Joe Rogan, and numerous others. Notably, one channel, Fame Fuel, uploaded 20 consecutive videos featuring AI-generated thumbnails and misleading titles relating to US attorney general Pam Bondi and Combs.

Among the top-performing channels is Peeper, which has amassed over 74 million views since its creation in 2010 but has pivoted to covering Combs exclusively for at least the last eight months. Peeper boasts some of the most viral Diddy videos, including “Justin Bieber reveals Will Smith, Diddy and Clive Davis grooming him,” which alone attracted 2.3 million views. Peeper has since been demonetized.

A channel named Secret Story, which previously offered health advice in Vietnamese, shifted its focus to Diddy content, while Hero Story transitioned from covering Ibrahim Traore, the military leader of Burkina Faso, to Diddy stories. A Brazilian channel that amassed millions of views from embroidery videos pivoted to Diddy content just two weeks ago. A channel named Celebrity Topics earned over 1 million views across 11 Diddy videos in just three weeks, despite being created in early 2018 and appearing to have deleted its prior videos. Both Secret Story and Hero Story were removed by YouTube following inquiries from Indicator, while Celebrity Topics has since rebranded.

Shifting Focus to Diddy

For instance, around three weeks ago the channel Pak Gov Update started releasing videos about Combs, using AI-generated thumbnails with fictitious quotes attributed to celebrities like Usher and Jay-Z. One video, titled “Jay-Z breaks his silence on Diddy’s controversy,” included a tearful image of Jay-Z with the text “I Will Be Dod” superimposed.

The video achieved 113,000 views with nearly 30 minutes of AI-generated narration accompanied by clips from various TV news sources, lacking any new information from Jay-Z, who did not provide any of the attributed quotes.

The Pak Gov Update channel previously focused on Pakistan’s public pensions, generating modest views; its most popular video, about the pension system, garnered 18,000 views.

Monetizing Misinformation

Aarts commented that the strategy of exploiting Diddy Slop is both profitable and precarious. “Most of these channels are unlikely to endure,” he remarked, referencing the risk of being penalized for violating YouTube policies and potential legal actions from Diddy or other celebrities depicted in their thumbnails and videos.

Like Pak Gov Update, most of these channels’ videos rely predominantly on AI narration and AI-generated images, with fewer direct clips from news reports; their use of actual footage tends to skirt the boundaries of fair use.




The YouTube channel Pakreviews-F2Z has produced numerous fake videos surrounding the Diddy trial, disguised under the name Pak Gov Update. Photo: YouTube

AI slop is just one of many variations of Diddy-related content proliferating on YouTube. The niche appears to be expanding and proving lucrative; similar Diddy-focused AI content has attracted engagement on TikTok.

“We are fans of the world,” stated YouTube spokesperson Jack Malon in an email. Malon noted that the platform has removed 16 channels linked to this phenomenon and confirmed that other channels, including Pak Gov Update, have faced similar action.


Faceless YouTube Meets Diddy

The Diddy phenomenon exemplifies the convergence of two prominent trends within YouTube: automation and faceless channels.

YouTube Automation hinges on the premise that anyone can establish a prosperous YouTube venture through the right niche and low-cost content creation strategies, including topic discovery, idea brainstorming, or employing international editors to churn out content at an automated rate.

With AI, it has become simpler than ever to embark on a faceless automation journey. Aarts indicated that anyone can generate scripts using ChatGPT or analogous language models, create images and thumbnails via MidJourney or similar software, utilize Google Veo 3 for video assembly, and implement AI voice-over using tools like ElevenLabs. He further mentioned that he often hires freelancers from the Philippines or other regions for video editing tasks.

“AI has democratized opportunities for budget-conscious individuals to engage in YouTube automation,” Aarts stated, highlighting it can cost under $10 per video. He reported earnings exceeding $130,000 from over 45 channels.

Muhammad Salman Abazai, who oversees As a Venture, a Pakistani firm offering video editing and YouTube channel management services, commented that Diddy video content has emerged as a “legitimate niche” on YouTube, showcasing successful Diddy videos created by his team.

“This endeavor has proven fruitful for us, as it has significantly boosted our subscriber count,” he noted.

International Diddy Slop

The pivot towards Diddy isn’t limited to English-speaking audiences. A Spanish-language channel, NV Historia, launched in January, produced sporadic AI-generated celebrity videos before transitioning to Diddy content. Its first breakout was a video titled “Teacher laughs at black girls because his father said it was Chuck Norris until the teacher came to class,” which accumulated 140,000 views.

NV Historia shifted focus following the viral response to a Diddy-themed video titled “A minute ago: No one expected Dwayne Johnson to say this in court about Diddy,” featuring AI-generated images of Johnson and Diddy in court along with disturbing visuals of alleged incidents. The thumbnail showcased the quote “He gave me it.”

Johnson has neither testified nor had any connection to allegations against Diddy. This video has gathered over 200,000 views. Following this, NV Historia managed another video linking Oprah Winfrey and other celebrities to Diddy, which earned 45,000 views. Subsequently, the channel committed entirely to Diddy content and has since been removed by YouTube.

A French channel, Starbuzzfr, was launched in May and appears to exclusively publish Diddy-related content, deploying AI-generated thumbnails and narration to spin fabricated narratives, such as Brad Pitt’s supposed testimony against Diddy, claiming he experienced abuse by the mogul. Starbuzzfr notably utilizes sexualized AI-generated imagery featuring Diddy and celebrities like Pitt. As of this writing, the channel remains monetized.

Aarts noted that the general sentiment within the YouTube automation community respects anyone who manages to monetize their content.

“I applaud those who navigate this successfully,” he remarked.

Source: www.theguardian.com

The Maga-Inspired Faux Pas That Rocked the Gaming Industry | Games

In the modern gaming landscape, many developers agree that generating any buzz for a new project is a challenge without a hefty marketing budget. Last year, nearly 20,000 new titles were released on Steam alone, and most of this deluge has effectively vanished into the vast sea of online content. So, when a small studio snagged a spot on stage at Summer Game Fest, live-streamed to approximately 50 million viewers worldwide, it was quite a significant achievement, and not an opportunity to squander.

This brings us to Ian Proulx, co-founder of 1047 Games. During his brief appearance at the event, he took the stage wielding a baseball bat to promote the online shooter Splitgate 2, saying he was “tired of doing the same things year after year.” Unfortunately, the approach backfired. Gamers and fellow developers alike criticized his decision to take aim at another studio’s game alongside politically charged memes, especially at a time when anti-ICE protests were being met with violence across town. Proulx defended his actions by asserting that the slogan’s use was non-political; however, just four days later, he issued an apology, explaining: “We needed something to capture attention. The truth is, we struggled to come up with something. This is what we settled on.”

What Proulx hadn’t anticipated is that in the fast-evolving memetic culture of 2025, context is everything, with nuances and sociopolitical implications constantly shifting. You can’t just throw around cheeky symbols or memes from platforms like 4chan without understanding what they have come to signify; just look at how embarrassingly out of touch Elon Musk’s mid-2000s-style edgelord shtick now seems. And you can’t present yourself as the vanguard of the FPS genre while peddling battle royale modes recycled from existing games. In 2025? Seriously?

Backlash… Is anyone even playing Split Gate 2 now? Photo: 1047 Games

While I’m not fully aware of 1047 Games’ specifics, I’ve visited numerous game development studios worldwide. Regardless of how progressive they wish to be, they often overlook the fact that the dominant monocultural preferences of middle-class men may not resonate with everyone else. Proulx commented, “We tried to think of something. This is what we came up with.” In a boardroom filled with like-minded individuals, it likely felt humorous, but they should have consulted with someone outside their bubble first.

Splitgate 2 now finds some potential customers turned off by the misguided Maga-themed bit, while another segment resents the apology Proulx issued; it’s a negative spiral. That’s a problem, since multiplayer games depend on enthusiastic communities to promote them.

Proulx could have made smarter use of his 30 seconds of fame on stage. Reflecting on memorable moments from past E3 events, the positive memories include former Xbox chief Peter Moore showcasing his Halo 2 tattoo; game artist and director Ikumi Nakamura charming the audience with her infectious enthusiasm for Ghostwire: Tokyo; and actor Keanu Reeves exclaiming “You’re breathtaking!” at an audience member during the Cyberpunk 2077 presentation. In a climate rife with faux machismo and posturing, these charming, genuine moments shone like beams of sunlight. You don’t need to step on stage brandishing slogans or baseball bats; your most valuable asset in this highly digital, anonymous creative world is your humanity.

What to play

Arcade-y… Rematch. Illustration: Sloclap/Steam

This week brings several intriguing game releases, including Date Everything!, a game in which you can date your toaster; FBC: Firebreak, a spin-off of Control from cult studio Remedy; and Tron: Catalyst, Bithell Games’ take on Disney’s cyberspace classic.

I’m particularly excited about Rematch. It’s an arcade-style 5-v-5 football game influenced by the Rocket League phenomenon. Unlike EA Sports FC, it focuses on individual players, each equipped with flashy skills, meaning you don’t need extensive knowledge of soccer to enjoy it.

Available on: PC, PlayStation 5, Xbox
Estimated playtime:
Whatever you choose

What to read

Elegance… Anna Williams from Tekken 7. Composite: Guardian Design; Bandai Namco

What to click

Question block

Fighting the bonnet… The window so far, Jane. Photo: 3 Turn Production

Reader Adam asks this week’s question:

“As a British literature student, I found this week’s newsletter about the intersection of video games and Shakespeare thoroughly engaging. It got me thinking: what classic literary works could be transformed into video games? I’ve always considered Edmund Spenser’s 16th-century epic, ‘The Faerie Queene,’ a prime candidate.”

Conveniently, this is a subject I have pondered frequently since graduating in British literature myself. When considering classic works that could make great games, I envision ‘The Rime of the Ancient Mariner’ as a dark roguelike take on The Oregon Trail, styled like Return of the Obra Dinn. I imagine Conrad’s ‘Heart of Darkness’ reimagined as a Silent Hill-style psychological horror. Or even ‘Pride and Prejudice’ turned into a rich dating sim (we’re almost there; titles like ‘Tom Jones’ and ‘Middlemarch’ could inspire an incredible open-world adventure).

In the “historical author turned game designer” category, two evident candidates arise: Mary Shelley and H.G. Wells, writers deeply inspired by science and technology. Bertolt Brecht, a playwright known for engaging popular audiences through a variety of methods, along with August Strindberg, who dabbled in photography and the occult, could also have seen themselves crafting iconic RPGs.

If you have a burning question or feedback about the newsletter – Please email pushbuttons@theguardian.com.

Source: www.theguardian.com

Man Who Shared Deepfake Images of Notable Australian Women Risks $450,000 Fine

Australia’s online safety regulator is pursuing the maximum fine of $450,000 against a man who published deepfake images of well-known Australian women on his website, in a landmark case before an Australian court.

The eSafety commissioner has initiated legal action against Anthony Rotondo over his failure to remove “intimate images” of high-profile Australian women from a deepfake pornography site.

The federal court has kept the women’s real names confidential.


The court heard that Rotondo initially defied the order while residing in the Philippines, prompting the commissioner to pursue legal action upon his return to Australia.

Rotondo had posted the images on the MrDeepFakes site.

In December 2023, Rotondo was fined after admitting to breaching the court’s order by failing to remove the image. He subsequently provided the password to delete the Deepfake image.

A representative for the eSafety commissioner indicated that the regulator is seeking a fine of between $400,000 and $450,000 for the violations of online safety law.

The spokesperson emphasized that the proposed penalty reflects the seriousness of the “significant impact on the targeted women.”

“This penalty aims to deter others from partaking in such harmful actions,” they stated.

eSafety highlighted that the creation and distribution of nonconsensual explicit deepfake images cause severe psychological and emotional harm to victims.

The penalty hearing occurred on Monday, and the court has reserved its decision.

Additionally, federal legislation was passed in 2024, strengthening the fight against explicit deepfakes.

eSafety commissioner Julie Inman Grant during Senate estimates. Photo: Mick Tsikas/AAP

In her opening remarks to the Senate committee considering the bill last July, eSafety commissioner Julie Inman Grant noted that deepfakes have surged by 550% since 2019, and that 99% of pornographic deepfake content features images of women and girls.

“Abuse involving deepfake images is not only on the rise, but it is also highly gendered and incredibly distressing for the victims,” Inman Grant stated.

“To my surprise, the number of open-source AI applications like this is rapidly increasing online, often available for free and easy to use for anyone with a smartphone.

“Thus, these apps present a low barrier for perpetrators, while the repercussions for the targets are devastating and often immeasurable.”

Source: www.theguardian.com

Alabama Paid Millions to Law Firms to Defend Its Prisons: AI-Generated Fake Citations Uncovered

Frankie Johnson, an inmate at William E. Donaldson Prison near Birmingham, Alabama, reports being stabbed approximately 20 times within a year and a half.

In December 2019, Johnson claimed he was stabbed “at least nine times” in his housing unit. Then, in March 2020, after a group therapy session, officers handcuffed him to a desk and exited the unit. Shortly afterward, another inmate came in and stabbed him five times.

In November that same year, Johnson alleged that an officer handcuffed him and transported him to the prison yard, where another prisoner assaulted him with an ice pick and stabbed him “five or six times,” all while two corrections officers looked on. Johnson contended that one officer even encouraged the attack as retaliation for a prior conflict between him and the staff.

In 2021, Johnson filed a lawsuit against Alabama prison officials, citing unsafe conditions characterized by violence, understaffing, overcrowding, and significant corruption within the state’s prison system. To defend against the lawsuit, the Alabama Attorney General’s office has engaged law firms, including Butler Snow, that have received substantial payments from the state to defend its troubled prison system.

State officials have praised Butler Snow for its experience defending prison-related cases, particularly William Lunsford, head of the firm’s constitutional and civil rights litigation group. Now, however, the firm faces sanctions from the federal judge overseeing Johnson’s case, after its lawyers cited cases fabricated by artificial intelligence.

This is just one of a growing number of cases in which attorneys have included AI-generated misinformation in formal legal documents. A database that tracks such occurrences has identified 106 instances worldwide in which courts found “AI hallucinations” in submitted materials.

Last year, lawyers were suspended for a year from practicing law in a Florida federal district after they were found to have cited cases fabricated by AI. Earlier this month, a federal judge in California ordered a firm to pay over $30,000 in legal fees for filings that included erroneous AI-generated citations.

During a hearing in Birmingham on Wednesday regarding Johnson’s case, U.S. District Judge Anna Manasco mentioned that she was contemplating various sanctions, such as fines, mandatory legal education, referrals to licensing bodies, and temporary suspensions.

She noted that existing disciplinary measures across the country have often been insufficient. “This case demonstrates that current sanctions are inadequate,” she remarked to Johnson’s attorney. “If they were sufficient, we wouldn’t be here.”

During the hearing, attorneys from Butler Snow expressed their apologies and stated they would accept any sanctions deemed appropriate by Manasco. They also highlighted their firm policy that mandates attorneys seek approval before employing AI tools for legal research.

Reeves, an attorney involved, took full responsibility for the lapses.

“I was aware of the restrictions concerning [AI] usage, and in these two instances, I failed to adhere to the policy,” Reeves stated.

Butler Snow’s lawyers were appointed by the Alabama Attorney General’s Office and work on behalf of the state to defend ex-commissioner Jefferson Dunn of the Alabama Department of Corrections.

Lansford, who is contracted for the case, shared that the firm has begun a review of all previous submissions to ensure no additional instances of erroneous citations exist.

“This situation is still very new and raw,” Lansford conveyed to Manasco. “We are still working to perfect our response.”

Manasco indicated that Butler Snow would have 10 days to file a motion outlining their approach to resolving this issue before she decides on sanctions.

The fictitious AI citations surfaced in a dispute over case scheduling.

Lawyers from Butler Snow reached out to Johnson’s attorneys to arrange a deposition for Johnson while he remains incarcerated. However, Johnson’s lawyers objected to the proposed timeline, citing outstanding documents that Johnson deemed necessary before he could proceed.

In a court filing dated May 7, Butler Snow countered that case law necessitates a rapid deposition for Johnson. “The 11th Circuit and the District Court typically allow depositions for imprisoned plaintiffs when relevant to their claims or defenses, irrespective of other discovery disputes,” they asserted.

The lawyers listed four cases that superficially supported their arguments, but all turned out to be fabricated.

While some of the case titles resembled real cases, none were actually relevant to the matter at hand. One, for instance, was cited as a 2021 case titled Kelly v. Birmingham; however, the only case titled Kelly v. City of Birmingham that Johnson’s attorneys could identify did not support the point.

Earlier this week, Johnson’s lawyers filed a motion highlighting the fabrications, asserting they were creations of “generative artificial intelligence.” They also identified another clearly fictitious citation in prior submissions related to the discovery dispute.

The following day, Manasco scheduled a hearing on whether Butler Snow’s lawyers should be sanctioned. “Given the severity of the allegations, the court conducted an independent review of each citation submitted, but found nothing to support them,” she wrote.

In his declaration to the court, Reeves indicated he was reviewing filings drafted by junior colleagues and included a citation he presumed was a well-established point of law.

“I was generally familiar with ChatGPT,” Reeves mentioned, explaining that he sought assistance to bolster the legal arguments needed for the motion. However, he admitted he “rushed to finalize and submit the motions” and “did not independently verify the case citations provided by ChatGPT through Westlaw or PACER before their inclusion.”

“I truly regret this lapse in judgment and diligence,” Reeves expressed. “I accept full responsibility.”

Damien Charlotin, a legal researcher and academic based in Paris who tracks such cases, notes that incidents of false AI content entering legal filings are on the rise.

“We’re witnessing a rapid increase,” he stated. “The number of cases over the past weeks and months has spiked compared to earlier periods.”

Thus far, the judicial response to this issue has been quite lenient, according to Charlotin. More severe repercussions, including substantial fines and suspensions, typically arise when lawyers fail to take responsibility for their mistakes.

“I don’t believe this will continue indefinitely,” Charlotin predicted. “Eventually, everyone will be held accountable.”

In addition to the Johnson case, Lunsford and Butler Snow hold contracts with the Alabama Department of Corrections to handle several large civil rights lawsuits, including a case brought by the Justice Department in 2020, during Donald Trump’s first presidency.

The contract for that matter was valued at $15 million over two years.

Some Alabama legislators have questioned the significant sums of state money allocated to law firms to defend these cases. However, this week’s missteps do not appear to have diminished the Attorney General’s confidence in Lunsford or Butler Snow.

On Wednesday, Manasco asked the attorney from the Attorney General’s office present at the hearing whether the office wished to keep the firm on the case.

“Mr. Lunsford remains the Attorney General’s preferred counsel,” the attorney replied.

Source: www.theguardian.com

Kennedy urges anti-vaccine groups to take down fake CDC pages

Health Secretary Robert F. Kennedy Jr. on Saturday directed his department to demand that the anti-vaccine nonprofit he founded take down a web page that mimicked the design of the Centers for Disease Control and Prevention’s site and claimed that vaccines cause autism.

The page was published on a site registered to Children’s Health Defense, the anti-vaccine nonprofit. Kennedy acted after the New York Times asked about the page, which had been circulating widely on social media.

The page was taken offline on Saturday night.

“Secretary Kennedy has directed the Office of the General Counsel to send a formal demand to Children’s Health Defense requesting the removal of their website,” the Department of Health and Human Services said in a statement.

“At HHS, we are dedicated to restoring our institutions to a tradition of gold-standard, evidence-based science,” the statement said.

It was not clear why the anti-vaccine group published a page mimicking the CDC. The organization did not respond to requests for comment, and Kennedy has said he cut ties with it during his presidential run in 2023.

The fake vaccine safety page was virtually indistinguishable from the CDC’s own site. The layout, typeface and logo were the same, and likely violated federal copyright law.

The CDC’s own website refutes any relationship between vaccines and autism; the fake page left the possibility open. At the bottom were links to video testimonials from parents who believed their children had been harmed by vaccines.

The page was first reported on Substack by E. Rosalie Li, founder of the Information Epidemiology Lab. The nonprofit did not immediately respond to requests for comment.

For many years, Kennedy has argued that there is a link between vaccines and autism. He held to that stance during his Senate confirmation hearing despite extensive research debunking the theory.

Under his direction, the CDC recently announced plans to re-examine the evidence, a move Senator Bill Cassidy, a Louisiana Republican and chairman of the Senate health committee, has called a waste of money.

The mock web page carried the CDC’s familiar blue banner, the agency’s blue-and-white logo and the words “vaccine safety.” The headline read “Vaccinations and autism.”

The text supported the idea of a link between vaccines and autism, laying out discredited research while only allowing for the possibility that scientists had countered it.

This included citations to research by Brian S. Hooker, chief science officer for child health defense, as well as other studies critical of vaccination.

“This is a mix of legitimate peer-reviewed work and fakes,” said Dr. Bruce Gellin, who oversaw HHS’s vaccine programs during the Bush and Obama administrations.

“Footnotes give the impression that it’s a legitimate scientific work,” he added.

The series of testimonials at the bottom of the page featured videos of parents describing what they believed were vaccine injuries to their children.

This stands in stark contrast to the CDC’s official autism and vaccines web page, which is devoted chiefly to debunking the idea of a connection, stating plainly that studies show no link.

Children’s Health Defense has also drawn attention recently for its response to the measles outbreak in West Texas.

The organization’s CHD.TV channel posted an on-camera interview with the parents of a six-year-old girl whose death the state health department attributed to measles.

According to the health department, the child was not vaccinated and had no underlying medical conditions. Children’s Health Defense, however, claimed to have obtained hospital records that conflicted with that cause of death.

The organization’s video also featured the girl’s siblings and an interview with Dr. Ben Edwards, one of two Texas doctors who promoted alternative treatments during the outbreak.

In response to the video, Covenant Children’s Hospital in Lubbock, Texas, said in a statement this week that “recent videos circulating online contain misleading and inaccurate claims,” adding that patient confidentiality laws prevent the hospital from providing information about specific cases.

Source: www.nytimes.com

Google vows to tackle fake reviews for UK businesses

Google has committed to taking additional measures to identify and remove fake reviews, as confirmed by the UK competition watchdog. The Competition and Markets Authority (CMA) stated that Google will implement sanctions against individuals and UK companies that have manipulated star ratings. Furthermore, Google will issue “warning” alerts on profiles of companies using fake reviews to inflate their ratings.

The agreement follows an investigation launched by the CMA in 2021 into Google’s potential violation of consumer law by not adequately protecting users from fraudulent reviews on its platform. A similar investigation on Amazon is currently ongoing.

The CMA estimates that £23 billion of UK consumer spending is influenced by online reviews annually. A survey conducted by Which? revealed that 89% of consumers rely on online reviews when researching products and services.

Sarah Cardell, chief executive of the CMA, praised Google for taking a proactive approach to combating fake reviews, emphasizing the importance of maintaining public trust and fairness for businesses and consumers.

According to the CMA, any company found publishing fake reviews will be subject to investigation to determine whether changes to its practices are necessary to comply with the agreement. Google will report to the CMA over a three-year period to ensure compliance.

Starting in April, CMA will have enhanced powers to independently assess violations of consumer law without court intervention. Violating companies could face fines up to 10% of their global turnover.

The watchdog has intensified its scrutiny of major tech firms, launching investigations into Google’s search and advertising practices, as well as Apple and Google’s mobile platforms.

Amidst these actions, the appointment of former Amazon executive Doug Gurr as the watchdog’s interim chairman prompted denials from business minister Justin Madders of government favoritism towards big tech.

A Google spokesperson said the company’s investments in combating fraudulent content allow it to block millions of fake reviews annually, and that it continues to work with regulators globally to tackle fake content and malicious actors.

Source: www.theguardian.com

Exploring the Dark World of Sexual Deepfakes: Women Fighting Back against Fake Representations

It started with an anonymous email. “I'm sorry to have to contact you,” it read. Below were three links to internet forums. “HUGE trigger warning…they contain vile photoshopped images of you.”

Jodi (not her real name) froze. The 27-year-old from Cambridgeshire had had problems in the past with her photos being stolen to set up dating profiles and social media accounts. She had called the police, but was told there was nothing they could do, and pushed it to the back of her mind.

But she couldn't ignore this email, which arrived on March 10, 2021. She clicked on the links. “It was like time stood still,” she said. “I remember screaming so loud. I just completely broke down.”

The forum, on an amateur porn website, held hundreds of photos of her, alone, on holiday and with friends and housemates, alongside captions labelling her a “slut”. Commenters called her a “slut” and a “prostitute”, asked people to rate her, and described the things they would do to her.

The person who posted the photos also extended an invitation to other members of the forum: to use artificial intelligence to create sexually explicit “deepfakes”, digitally altered content, from fully clothed photos of Jodi taken from her private Instagram.

“I've never done anything like this before but I love seeing her being faked…happy to chat and show more of her too…:D,” they wrote. In response, users posted hundreds of composite images and videos pairing another woman's body with Jodi's face. One posted an image of her dressed in schoolgirl clothes, being raped by a teacher in a classroom. Others showed her fully “nude”, having sex in every room. “The shock and devastation still haunts me,” she said.

The now-deleted fakes illustrate how a growing volume of synthetic, sexually explicit photos and videos is being created, traded and sold across social media apps, private messages and gaming platforms, as well as adult forums and porn sites, in the UK and around the world.




Inside the helpline office. Photo: Jim Wileman/Observer

Last week, the government announced a “crackdown” on explicit deepfakes, promising to expand current laws, under which sharing such images has been a criminal offence since January 2024, so that creating them without consent is also illegal. Asking someone to make them for you, however, would not be covered. Nor has the government confirmed whether the offence will be based on consent, as campaigners say it must be, or whether victims will have to prove that the perpetrator acted with malicious intent.

At the Revenge Porn Helpline's headquarters in a business park on the outskirts of Exeter, senior practitioner Kate Worthington, 28, says stronger laws with no loopholes are desperately needed.

Launched in 2015, the helpline is a dedicated service for victims of intimate image abuse, part-funded by the Home Office. Deepfake incidents are at an all-time high, with reports of synthetic image abuse up 400% since 2017. The numbers remain small compared with intimate image abuse overall: there were 50 incidents last year, about 1% of the total caseload. The main reason is that it is vastly underreported, Worthington says. “Victims often don't know their images are being shared.”

The researchers found that many perpetrators of deepfake image abuse appear to be motivated by “collector culture”. “A lot of the time it's not done with the intention of the person knowing,” Worthington said. Images are bought, sold, exchanged and traded for sexual gratification or for status: “If you're finding this content and sharing it alongside someone's Snap handle, Insta handle or LinkedIn profile, you may receive glory.” Many are created using “nudify” apps. In March, SWGFL, the charity that runs the Revenge Porn Helpline, reported 29 such services to Apple, which removed them.

There have also been cases where composite images were used to directly threaten or humiliate people. The helpline has heard of boys creating fake incestuous images of female relatives; of a man addicted to porn creating composite photos of his partner appearing in non-consensual sex acts; of people photographed at the gym whose images were turned into deepfake videos that made it look as if they were having sex. Most, but not all, of those targeted are women: approximately 72% of the deepfake incidents identified by the helpline involved women. The oldest was in her 70s.

There have also been cases where Muslim women have been targeted with deepfake images of themselves wearing revealing clothing or without their hijabs.

Regardless of intent, the impact is often extreme. “Many of these photos are so realistic that your coworkers, neighbors, and grandma won't be able to tell the difference,” says Worthington.




Kate Worthington, Senior Helpline Practitioner. Photo: Jim Wileman/Observer

The Revenge Porn Helpline helps people remove abusive images. Amanda Dashwood, 30, who has worked at the helpline for two years, says this is usually a caller's priority. “It's: ‘Oh my God, help me. I need this taken down before people see it,’” she says.

She and her colleagues on the helpline team, eight women, most under 30, have a variety of tools at their disposal. If the victim knows where the content was posted, the team will issue a takedown request directly to the platform. Some platforms ignore such requests completely. But the helpline has partnerships with most of the major ones, from Instagram and Snapchat to Pornhub and OnlyFans, and achieves a removal success rate of 90%.

If victims don't know where the content was posted, or suspect it is being shared more widely, they can consent to a selfie being run through facial recognition and reverse image search tools. Though not foolproof, these can detect material being shared on the open web.

The team can also advise on steps to stop content from being posted online again, directing people to a service called StopNCII. The tool was created by the online safety charity SWGFL, which also runs the Revenge Porn Helpline, with funding from Meta.

Users can upload real or synthetic photos, and the technology creates a unique hash and shares it with partner platforms such as Facebook, Instagram, TikTok, Snapchat, Pornhub, and Reddit (but not X or Discord). If someone tries to upload that image, it will be automatically blocked. As of December, 1 million images had been hashed and 24,000 uploads were proactively blocked.
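At its core, this is a shared blocklist of image fingerprints: platforms exchange hashes, never the images themselves. The sketch below illustrates that idea using a plain SHA-256 digest; note this is a deliberate simplification, since a system like StopNCII relies on perceptual hashes so that re-encoded or lightly edited copies still match, whereas a cryptographic hash only catches byte-identical files. The class and byte strings here are invented for illustration.

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Fingerprint of the raw image bytes. A real system would use a
    perceptual hash so near-duplicates match too; SHA-256 is exact-match only."""
    return hashlib.sha256(data).hexdigest()

class HashBlocklist:
    """Illustrative shared blocklist: only hashes are stored and compared."""
    def __init__(self) -> None:
        self._blocked: set[str] = set()

    def register(self, data: bytes) -> str:
        """Hash an image (conceptually, on the victim's own device) and record it."""
        h = image_hash(data)
        self._blocked.add(h)
        return h

    def is_blocked(self, data: bytes) -> bool:
        """Check an attempted upload against the blocklist."""
        return image_hash(data) in self._blocked

blocklist = HashBlocklist()
blocklist.register(b"...victim's image bytes...")
print(blocklist.is_blocked(b"...victim's image bytes..."))  # True: upload refused
print(blocklist.is_blocked(b"...some other image..."))      # False: unrelated image passes
```

The privacy property is the point of the design: because only the digest leaves the user's device, participating platforms can block re-uploads without ever holding a copy of the intimate image.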




Alex Woolf was found guilty over posting the images, not soliciting them. Photo: Handout

Some victims go to the police, but responses vary widely depending on the force. Victims who have tried to report the abuse of composite images have been told that police cannot act on edited images, or that prosecution is not in the public interest.

Helpline manager Sophie Mortimer recalls another incident in which police said: “No, that's not you. It's someone who looks like you,” and refused to investigate. “I feel like police sometimes look for reasons not to pursue these types of cases,” Mortimer said. “We know it's difficult, but that doesn't negate the real harm that's being caused to people.”

In November, Sam Miller, assistant chief constable and director of the violence against women and girls strategy at the National Police Chiefs' Council, told a parliamentary inquiry into intimate image abuse that she was worried by inconsistencies in police understanding of the law and in how it is applied. “Yesterday, one victim told me that out of the 450 victims of deepfake images she has spoken to, only two have had a positive experience with law enforcement,” she said.

For Jodi, it is clear that there is a need to raise awareness of the misuse of deepfakes, not only among law enforcement but also the general public.

After being alerted to her deepfake, she spent hours scrolling through posts trying to piece together what happened.

She noticed that the photos had been shared not by a stranger but by a close friend: Alex Woolf, a Cambridge University graduate and former BBC Young Composer of the Year. He had posted a photo of her with himself cropped out. “I knew I hadn't posted that photo on Instagram and had only sent it to him. That's when the penny dropped.”


Source: www.theguardian.com

SpaceX to Launch Fake Satellites on Seventh Starship Test Flight

The Starship rocket preparing for its seventh flight, December 2024

SpaceX

SpaceX's next Starship test flight will be its most ambitious yet, and for the first time will include a new “Block 2” version with a number of design updates.

What is a starship?

Starship is the most powerful rocket ever flown. SpaceX aims to develop it into a rapidly reusable vehicle that can carry large payloads into orbit, land back on Earth, and launch another mission within hours.

It's a bit confusing, but Starship is the name given both to the entire vehicle, consisting of a Super Heavy booster and the ship it launches, and to the ship alone once it has separated from the booster.

SpaceX is rapidly iterating on both Super Heavy and Starship, taking a Silicon Valley approach to design that considers regular testing and dramatic failures simply part of the process. However, this will be the first test of the so-called Block 2 Starship upper stage.

What's new in Starship Block 2?

The company says on its website that Starship's electronics have been “completely redesigned” and now include more than 30 cameras. The vehicle also carries 25% more propellant, is 3.1 meters taller, and has repositioned front flaps.

Also included for the first time is an early version of the catch pins the ship will need in order to be captured and reused by ground towers. However, SpaceX currently has only one tower, which is used to capture boosters, so there will be no attempt to catch the Starship upper stage for reuse this time. A second tower is under construction.

What does a test flight involve?

SpaceX expects the upper stage to reach space, complete a partial orbit of Earth, safely re-enter the atmosphere, and descend in a controlled manner into the Indian Ocean. The Super Heavy first stage is meant to return to the launch site and be caught by the launch tower's Mechazilla or “chopstick” arms. If successful, this will be the second such capture.

The launch marks a milestone for SpaceX as the first time Starship hardware will be reused: one of the Super Heavy's 33 Raptor engines previously flew on Starship's fifth test flight. That was the only test to date in which the booster was safely recovered, so it was the company's first opportunity to reuse anything.

Another first is Starship's deployment of 10 fake Starlink satellites. These mock satellites are comparable in size and weight to the company's upcoming third-generation internet satellites and will test Starship's ability to safely launch payloads into orbit. No previous Starship flight has carried a payload, apart from a toy banana on flight six.

A number of smaller tests will also be performed during the seventh flight to provide engineers with valuable data. For example, one of Starship's Raptor engines will be reignited in space, and some heat-resistant tiles have been removed as a stress test. Several new types of thermal tile are also being tested, including some with active cooling.

When will the launch take place?

SpaceX has not officially announced a launch date, but the company's controversial owner, Elon Musk, has pointed to a target of January 10 in a tweet.

According to several NOTAMs (notices to airmen, which warn pilots of unusual or potentially hazardous activity) issued by the US Federal Aviation Administration, the company's launch window on January 10 opens at 4pm Central Standard Time (10pm UK time).

The launch period runs until January 16, giving the company some leeway in the event the launch is postponed due to technical issues or bad weather.

Like all Starship launches, Flight 7 will lift off from SpaceX's property in Boca Chica, Texas, and will be streamed live online.

What happened on previous Starship launches?

During the first test flight on April 20, 2023, three of the 33 engines in the booster stage failed to ignite. The rocket then lost control and self-destructed.

During the second test flight on November 18, 2023, the rocket progressed further, gaining enough altitude to separate the booster and upper stage as planned. The booster stage exploded before reaching the ground, however, and the upper stage self-destructed shortly after reaching space.

Test Flight 3 on March 14, 2024 was at least partially successful as the upper stage reached space again, but it did not return to Earth unscathed.

The next flight was on June 6, when the upper stage reached an altitude of more than 200 kilometers and flew at speeds of more than 27,000 kilometers per hour. Both the booster and upper stage completed a soft landing at sea.

In Test Flight 5, the Super Heavy booster returned to the launch pad and was caught safely by the “chopstick” arms of SpaceX's launch tower, known as Mechazilla.

During Test Flight 6, Starship reached an altitude of 228 kilometers and splashed down in the Indian Ocean. Super Heavy aborted its landing on the launch tower due to a communications failure and instead made a controlled water landing in the Gulf of Mexico.

Source: www.newscientist.com

What was the reason behind Donald Trump sharing an AI-generated fake video of Taylor Swift?

When Donald Trump posted a series of AI-generated images that falsely portrayed Taylor Swift and her fans as supporters of his presidential campaign, he inadvertently endorsed the efforts of an opaque non-profit organization that aims to fund prominent right-wing media figures and has a track record of disseminating misinformation.

Among the modified images shared by Trump on Truth Social were digitally altered pictures of young women sporting “Swifties for Trump” shirts, created by the John Milton Freedom Foundation. This Texas-based non-profit, established last year, claims to advocate for press freedom while also seeking to “empower independent journalists” and “fortify the pillars of our democracy.”




President Trump posts AI imitations of Taylor Swift and her fans. Photo: Nick Robins-Early/Truth Social



Screenshot of @amuse’s “Swifties for Trump” tweet. Photo: Nick Robins-Early/Truth Social/X

The foundation’s operations appear to consist of sharing clickbait content on X and collecting substantial donations, with plans for a “fellowship program,” chaired by a high school student, that intends to grant $100,000 to prominent X figures such as Glenn Greenwald, Andy Ngo, and Lara Logan. Despite inquiries into its activities and the fellowship program based on tax records, investor documents, and social media posts, the John Milton Freedom Foundation did not offer any comment.

After months spent endorsing conservative media figures and echoing Elon Musk’s allegations that the political left suppresses free speech, the foundation finally saw one of its messages reach Trump and his massive following.

Experts caution about the potential dangers of generative AI in creating deceptive content that could impact election integrity. The proliferation of AI-generated content, including portrayals of Trump, Kamala Harris, and other politicians, has increased since Musk’s xAI introduced the unregulated Grok image generator. The John Milton Freedom Foundation is just one among many groups flooding social media with AI-generated content.


Niche nonprofit’s AI junk reaches President Trump

Amid the spread of AI images on X, the conservative @amuse account shared the AI-generated images of Swift fans with its more than 300,000 followers. The post was labelled “satire” and marked “Sponsored by the John Milton Freedom Foundation.” Trump then reposted screenshots of the tweets on Truth Social.

The @amuse account, managed by Alexander Muse, enjoys a broad reach with approximately 390,000 followers and frequent daily postings. Muse, indicated as a consultant in the Milton Foundation’s investor prospectus and a writer of right-wing commentary on Substack, has numerous ties to the @amuse account. The AI content includes depictions like Trump vs. Darth Vader and sexualized images of Harris, with the prominent watermark “Sponsored by: John Milton Freedom Foundation.”

Source: www.theguardian.com

Scientists discover a previously unknown species of fake scorpion trapped in 50-million-year-old amber

Paleontologists have reported fossils of a new genus and species of pseudoscorpion from the Eocene Cambay amber of western India.



Geogaranya variensis. Image credit: Agnihotri et al., doi: 10.26879/1276.

Pseudoscorpions are among the earliest orders of arthropods to colonize Earth’s land, during the early Devonian period.

This diverse order accounts for more than 3% of all known arachnid species.

“Pseudoscorpions are an ancient lineage of terrestrial arachnids that are morphologically similar to real scorpions, but lack the tail and stinger,” said Dr. Priya Agnihotri of DST’s Birbal Sahni Institute of Paleosciences and colleagues.

“Certain families have unique venom devices in the serrated digits of their palps, which evolved independently of the venom devices of scorpions and spiders.”

“Recent research also supports the inclusion of pseudoscorpions as a sister group to scorpions.”

“Due to their delicate bodies and small size, these fossils are mainly found in amber deposits around the world rather than in sediments,” they added.

“Forty-nine pseudoscorpion species have been recorded from Eocene Baltic amber and Rovno amber.”

The newly discovered pseudoscorpion species belongs to the family Geogarypidae.

Named Geogaranya variensis, it shows strong similarities with the extant genus Geogarypus from Sri Lanka, India, and New Guinea.

“Geogarypidae is one of a group of bark-dwelling and leaf-litter-dwelling families similar to Garypidae, with a distinctive subtriangular carapace and eyes located near its leading edge,” the paleontologists said.

“This family includes more than 70 species, most in tropical and subtropical regions, with some reported from temperate biomes.”

“Geogarypidae are more common in Baltic and Rovno amber, and there are some records from Cretaceous Burmese amber.”

“Unlike the sparse record of fossils, their modern-day counterparts have been recorded in all major biogeographic regions, including Europe, Central Asia, North America, and North Africa.”

The 50-million-year-old Cambay amber containing Geogaranya variensis was recovered from the open-pit Valia lignite mine, part of the Cambay Shale Formation in the Cambay Basin of Gujarat, India.

“The Cambay Shale Formation overlies the Deccan Trap, and below it is the Paleocene to lower Eocene Vagadkol Formation,” the researchers said.

According to the team, Geogaranya variensis is one of the smallest known adult pseudoscorpion fossils in amber from the Cambay Basin.

The discovery adds to the known biodiversity of bark-dwelling arthropods identified in Eocene amber from western India.

“The discovery of the smallest known adult pseudoscorpion in Cambay Basin amber aligns it with fossil taxa recorded in Baltic and Bitterfeld amber from the early Eocene, providing insight into similar bark-dwelling arthropod taxa,” the scientists concluded.

“Scanning electron microscopy revealed diagnostic features in the fossil, such as unusually enlarged palps. This strengthens the idea of phoresy: a species from a non-arboreal habitat may have been carried into the resin through its association with a flying host.”

The discovery of Geogaranya variensis is reported in a paper in the journal Palaeontologia Electronica.

_____

Priya Agnihotri et al. 2024. A new genus and species of fossil pseudoscorpion (Arachnida: Pseudoscorpiones) discovered in Eocene amber from western India. Palaeontologia Electronica 27(2): a26; doi: 10.26879/1276

Source: www.sci.news

Podcast reveals how reality show deceived women into believing fake Prince Harry was real

A new retrospective podcast series has emerged, delving into the gritty and boundary-pushing world of early 2000s reality TV.

One shocking example featured on the podcast is “There’s Something About Miriam,” where six men unknowingly went on a date with a transgender woman, sparking controversy and discussion. This series gained renewed attention following the tragic death of star Miriam Rivera a decade after filming.

Pandora Sykes and Shirin Kale’s investigative series “Unreal” sheds light on the ethics and exploitation behind era-defining reality shows like Big Brother, The X Factor, The Swan, and Love Island. Similarly, Jack Peretti’s exploration of shows like “The Bachelor” and “Married at First Sight” delves into the questionable practices within the genre.

Another standout from the early 2000s, “I Want to Marry Harry,” featured single American women vying for the affection of a man they believed to be Prince Harry, but turned out to be an imposter named Matt with dyed ginger hair.

In “The Bachelor at Buckingham Palace,” TV expert Scott Bryan interviews former contestants to reveal how easily they were deceived by the absurd concept of the show.

The podcast also features insights into the competitive world of educational scholarships and a scripted drama about AI and grief from Idris and Sabrina Elba.

Holly Richardson
Assistant Television Editor

This week’s picks

Sir Lenny Henry, star of Halfway. Photo: David Bintiner/Guardian

Competition
All episodes available on Wondery+ starting Monday
Sima Oriei’s journey for a high-paying scholarship in Mobile, Alabama, is revisited, showcasing a grueling competition where one girl is crowned America’s Outstanding Young Woman and wins a $40,000 education.

Letter: Ripple Effect
Weekly episodes available
Amy Donaldson’s true crime podcast explores the mysterious murder of a young father in Utah in 1982, delving into the impact on loved ones and the quest for answers.

Incomplete
Audible, all episodes now available
Idris and Sabrina Elba’s scripted podcast raises ethical questions about AI and grief, featuring a stellar cast led by Lenny Henry.

The Long Shadow: In the Guns We Trust
Weekly episodes available
Garrett Graf’s exploration of the right to bear arms in the US, 25 years after the Columbine shooting, sheds light on the voices of gun violence survivors.

Bachelor of Buckingham Palace
Wondery+, all episodes now available
Scott Bryan’s in-depth interviews with former contestants from “I Want to Marry Harry” reveal the surprising reality behind the show’s deceptive premise.

There’s a podcast for that

Dua Lipa, host of “At Your Service.” Photo: JMEternational/Getty Images

Hannah Verdier curates the five best podcasts hosted by pop stars, from Tim Burgess’s listening party to Sam Smith’s poignant exploration of HIV history.

Source: www.theguardian.com

Iran-affiliated hackers disrupt UAE TV streaming service by creating fake news using deepfake technology

According to Microsoft analysts, Iranian state-backed hackers disrupted a television streaming service in the United Arab Emirates and broadcast a deepfake newsreader distributing reports on the Gaza war.

Microsoft announced that a hacking operation by the Islamic Revolutionary Guards Corps disrupted streaming platforms in the UAE with an AI-generated news broadcast dubbed “For Humanity.”

The fake news anchors introduced unverified images showing wounded and killed Palestinians in Israeli military operations in Gaza. The hacker group known as Cotton Sandstorm hacked three online streaming services and published a video on the messaging platform Telegram showing them disrupting a news channel with fake newscasters, according to Microsoft analysts.

Dubai residents using HK1RBOXX set-top boxes received a message in December that read, “To get this message to you, we have no choice but to hack you,” according to a UAE-based news service. The AI-generated anchor then introduced graphic images and captions showing the number of casualties in Gaza so far.

Microsoft also noted reports of disruptions in Canada and the United Kingdom, where channels including the BBC were affected, although the BBC was not directly hacked.

In a blog post, Microsoft said, “This is the first Iranian influence operation where AI plays a key element in messaging, and is an example of the rapid and significant expansion of the Iranian operation’s scope since its inception.”

“The confusion was also felt by viewers in the UAE, UK, and Canada.”

Breakthroughs in generative AI technology have led to an increase in deepfake content online, raising concerns about its potential to disrupt elections.

Experts are concerned that AI-generated materials could be deployed at scale to disrupt elections this year, including the US presidential election. Iran targeted the 2020 US election with a cyber campaign that included sending threatening emails to voters while posing as members of the far-right Proud Boys group, launching a website inciting violence against FBI Director Christopher Wray and others, and spreading disinformation about voting infrastructure.

Microsoft said that since the Oct. 7 Hamas attack, Iranian state-backed forces have engaged in a series of cyberattacks and attempts to manipulate public opinion online, including attacks on targets in Israel, Albania, Bahrain (a signatory to the Abraham Accords formalizing relations with Israel), and the US.

Source: www.theguardian.com

The Proliferation of Fake AI Images Persists – 8 Notable Examples | Science & Technology Updates

Fact-checkers highlighted some notorious examples of AI-generated images that went viral this year, such as Prince William and Prince Harry embracing at the royal coronation.

With tools such as Midjourney and OpenAI’s DALL-E 3, anyone can now create realistic images faster and more easily than ever using only text prompts.

While proponents say generative artificial intelligence can empower artists, the technology has also raised concerns about its potential to spread false information.

The charity Full Fact has selected eight examples from 2023 that were each shared thousands of times.

They have since been marked as AI-generated or removed by social media platforms.

Prince William and Prince Harry reunite

A slideshow of eight images purporting to show the Prince of Wales and the Duke of Sussex at the King’s coronation spread widely on Facebook, attracting more than 78,000 likes.

In one of the photos, they appear to be hugging each other with teary eyes, but none of the photos are real.

According to a Full Fact investigation, these photos were originally published in a blog post in which the author explained how to use Midjourney’s image generator to “imagine a heartfelt reconciliation” between two people.

Julian Assange goes to prison

An image of the WikiLeaks founder inside Belmarsh Prison was created using Midjourney.

The creator confirmed as much in an interview with Germany’s Bild newspaper, but not before the image had been shared on Facebook and reposted 29,000 times on X.

Donald Trump’s portrait

Before the former US president posted a genuine photo of himself on X, many fake versions were already circulating.

Some were viewed more than a million times, even though the jumble of letters behind him was a giveaway: AI generators often struggle to recreate text within images.

Mr Trump had previously been the subject of an AI-generated image that appeared to show the moment of his arrest.

President Emmanuel Macron during the French riots

Meanwhile, during the riots in France, an image of Emmanuel Macron sitting in the street as rubbish burned behind him became a hot topic.

The image was widely shared, with one post garnering more than 55,000 views and comments suggesting the media was ignoring the story, according to Full Fact.

Pope Francis’ large audience

A photo of the Pope addressing a large crowd in Lisbon was viewed tens of thousands of times on social media.

But a closer look revealed it wasn’t real: one of the Pope’s hands had only three fingers.

It comes months after an eerily convincing AI image of the Pope wearing a down jacket went viral.

Elon Musk’s “Robot Wife”

The SpaceX billionaire makes no secret of his desire to build humanoid robots, but not a “robot wife.”

A post featuring an image of him kissing one such model was created by a digital artist and shared on Facebook and X.

Titanic submarine wreckage

During the search for the Titan submersible, Midjourney was used to create an image purporting to show debris.

It showed a game controller floating in the water, with the caption: “Breaking news: Exploded Titanic submarine controller found floating near the surface.”

The real submersible was piloted using a modified game controller; the fake image was reportedly viewed more than 300,000 times on X.

Rishi Sunak’s Bad Pint

Image posted by Karl Turner MP (L) and original photo posted to Number 10’s Flickr account (R)

Critics of the prime minister seized on a shot of him pouring a bad pint as an example of how he is portrayed as out of touch.

The image is a doctored version of a photo taken at a beer festival in August: the pint’s appearance was made worse and an onlooker was edited to look unimpressed.

It received over 78,000 views on X, helped along by Labour MP Karl Turner sharing it.

Full Fact said the government and the regulator Ofcom must prioritize public media literacy ahead of the next election, helping people recognize fake images and question what they see online.

Chief executive Chris Morris said: “Failure to take action risks reducing people’s trust in what they see online, which risks undermining democracy, particularly during elections.”

Source: news.sky.com

Google’s AI demo was fake, Grand Theft Auto VI captures attention, Spotify reduces workforce

Welcome to the Week in Review (WiR)

Welcome, everyone, to Week in Review (WiR), TechCrunch’s regular newsletter recapping the past few days in technology. AI is back in the headlines, with tech giants from Google to X (formerly Twitter) taking on OpenAI for chatbot supremacy. But much more happened. In this issue of WiR: Google fakes a demo of a new AI model (and hands out offensive notebooks to Black Summit attendees), defense startup Anduril unveils a new jet-powered interceptor, the continuing aftermath of the 23andMe hack, and the Grand Theft Auto VI trailer. Other stories include patient scans and health records leaked online, Meta’s new AI-powered image generator, Spotify layoffs, and a self-driving truck startup pulling out of the US. There’s a lot to get through, so don’t delay. But first, if you haven’t already, here’s a reminder to subscribe so you can receive WiR in your inbox every Saturday.

Google fakes a demo of its new AI model (and hands out offensive notebooks to Black Summit attendees)

Google this week announced its new flagship AI model, Gemini. However, the complete model, Gemini Ultra, was not released; only a “lite” version called Gemini Pro is available. Google touted Gemini’s coding and multimodal capabilities in press briefings and blog posts, claiming the model can understand not only text but also images, audio, and video. But Gemini Pro is strictly text-in, text-out, and it has proven error-prone. To make matters worse for Google, the company was caught faking its Gemini demo, using carefully tuned text prompts and still image frames rather than the fluid, real-time interaction the demo video implied. In another Google PR failure, attendees of the company’s K&I Black Summit in August were given third-party notebooks containing highly insensitive language: as my colleague Dominic-Madori Davis reported, the inside of each notebook was printed with a phrase joking that it had recently been cotton but had come back to take your notes. Needless to say, this did not go over well with the mostly Black audience in attendance. Google has promised to “avoid similar situations.”

Anduril’s new weapons

Anduril, the controversial defense company co-founded by Oculus founder Palmer Luckey, has developed a new product designed to counter the proliferation of low-cost, high-powered aerial threats. Called the Roadrunner, the modular, twin-jet-powered, autonomous vertical take-off and landing aircraft (one version of which can carry a warhead) can launch, track, and intercept targets; if no intercept is needed, it can autonomously maneuver back to base, refuel, and be reused.

More 23andMe victims

Last Friday, genetic testing company 23andMe said hackers had accessed the personal data of 0.1% of its customers, or about 14,000 people. But the company initially declined to say how many other users may have been affected by the breach, which 23andMe first disclosed in October. In all, 6.9 million people had their names, years of birth, relationship labels, percentage of DNA shared with relatives, ancestry reports, and self-reported locations exposed.

Grand Theft Auto VI trailer goes viral

The first trailer for Grand Theft Auto VI racked up 85 million views in just 22 hours, breaking MrBeast’s record for the most YouTube views in 24 hours. Excitement for Grand Theft Auto VI has been building for a decade. The previous installment in Rockstar Games’ long-running series, Grand Theft Auto V, remains the second-best-selling video game of all time, behind only Minecraft.

Patient records leaked

A security weakness in a decades-old industry standard for storing and sharing medical images has left thousands of exposed servers leaking the medical records and personal health information of millions of patients. The standard, known as Digital Imaging and Communications in Medicine (DICOM), is the internationally recognized format for medical imagery. But as German cybersecurity consultancy Aplite discovered, security flaws in DICOM deployments are allowing many healthcare facilities to unintentionally make personal data accessible from the open web.

Meta generates images

Not to be outdone by the launch of Google’s Gemini, Meta has launched a new standalone generative AI experience, Imagine with Meta AI, on the web. This allows users to create images by describing them in natural language. Similar to OpenAI’s DALL-E, Midjourney, and Stable Diffusion, Imagine with Meta AI leverages Meta’s existing Emu image generation model to create high-resolution images from text prompts.

Spotify makes layoffs

Spotify will cut around 1,500 jobs, or about 17% of its workforce, in its third round of layoffs this year, as the music streaming giant aims to “increase both productivity and efficiency.” In a memo to employees on Monday, Spotify founder and CEO Daniel Ek cited slowing economic growth and the rising cost of capital, saying the company needs a staff size appropriate to the challenges ahead.

TuSimple will exit

When TuSimple went public in 2021, it was emerging as the leading self-driving truck developer in the US. Now, after a series of internal disputes and the loss of a key partnership with truck manufacturer Navistar, TuSimple is withdrawing from the US entirely.

ZestMoney will shut down

ZestMoney, a buy-now-pay-later startup that underwrote small loans to first-time internet customers and attracted a number of high-profile investors including Goldman Sachs, is shutting down after efforts to find a buyer failed. At its peak, the Bangalore-based startup employed around 150 people and raised more than $130 million over its eight-year run.

TechCrunch’s latest podcast episodes

TechCrunch’s list of podcast episodes continues to grow, just in time for your weekend listening. On Equity, we featured a retrospective conversation from TechCrunch Disrupt 2023: Alex spoke with Serhii Bohoslovskyi, founder of Trible, a no-code app builder for creating online courses. The two talked about the current state of the creator economy, how no-code tools are being used today (and embraced by non-technical creators), and the safety of startups with Ukrainian roots. Over on Found, the crew spoke with David Rogier, CEO and founder of MasterClass, a streaming platform where you can learn from world experts on a variety of topics. Before launching MasterClass, Rogier worked as a VC, and through those connections he secured a $500,000 seed round before the company even had an idea. And on Chain Reaction, Jacqueline interviewed David Pakman, managing partner and head of venture investments at CoinFund. Prior to CoinFund, David spent 14 years at venture capital firm Venrock, where he led the Series A and B rounds of Dollar Shave Club, which was acquired by Unilever for $1 billion. He also co-created Apple Music in 1991, when he was in Apple’s Systems Software Product Marketing Group.

Source: techcrunch.com