In 1998, a study falsely claimed a connection between the measles-mumps-rubella (MMR) vaccine and autism. I was astounded by the study’s poor quality, its acceptance by a prestigious journal, and the lack of critical reporting by journalists. At the time, I did not know that the research was fraudulent.
Nearly three decades later, the repercussions of those misleading claims still echo around the globe. The World Health Organization (WHO) reports that six countries, including the UK (for the second time), Spain, and Austria, have lost their measles-free status. Falling vaccination rates, driven in large part by an anti-vaccination movement that the fraudulent paper helped fuel, are behind the decline. Meanwhile, the United States faces its worst outbreak in decades and would also have lost its measles-free status had it not withdrawn from the WHO.
Measles is one of the most contagious viruses on the planet, causing severe complications in around 1 in 5 children. Those complications can include lasting brain damage, respiratory problems, hearing loss, blindness, and brain swelling. The WHO estimates that approximately 95,000 people died of measles in 2024.
The actual impact extends further, as measles also destroys immune cells that help protect against other infections, diminishing immunity for around five years. It is a risk not worth taking.
Fortunately, measles has specific vulnerabilities. The virus first targets immune cells, travels to the lymph nodes, and then disseminates throughout the body. This roundabout route gives the immune system more opportunity to combat the virus before it fully establishes an infection, unlike many other respiratory viruses that attack cells in the nose and throat directly.
This is why the measles component of the MMR vaccine is highly effective. Countless studies confirm that vaccinated children fare significantly better, and none has established a link to autism. One compelling observation: when the MMR vaccine was withdrawn in Japan, autism rates did not fall.
To maintain herd immunity against measles, at least 95% of children must be vaccinated, so that each infected person passes the virus to fewer than one other person on average. Even a small rise in the share of unvaccinated children can therefore allow measles outbreaks to return.
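As a rough, back-of-the-envelope illustration of where that figure comes from, using the standard herd-immunity threshold formula and assuming a basic reproduction number of roughly 12 to 18 for measles (a commonly cited range):

\[ p_c = 1 - \frac{1}{R_0}, \qquad R_0 = 12 \Rightarrow p_c \approx 92\%, \qquad R_0 = 18 \Rightarrow p_c \approx 94\% \]

Because no vaccine protects 100% of recipients, coverage targets are rounded up slightly, which is one reason the commonly quoted goal is about 95%.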
Globally, vaccination rates are improving, but there is still room for growth. The percentage of children receiving the first dose of the measles vaccine increased from 71% in 2000 to 84% in 2010. Despite a slight decline during the COVID-19 pandemic, the rates have rebounded. The WHO estimates that between 2000 and 2024, measles vaccination has prevented an impressive 60 million deaths worldwide, marking a significant victory.
However, in high-income nations, progress is stalling. After the false claims of 1998, MMR vaccination levels fell to only 80% in England and Wales. By 2013, uptake had climbed back above 90%, but it has been gradually declining since. A recent report found that the decline in the UK is partly because access to vaccination is becoming harder for parents, a concern that warrants urgent attention.
Additionally, the resurgence of anti-vaccine sentiments is contributing to these challenges, closely linked to right-wing extremism as propagated on specific social media platforms. A quick search for “MMR measles” on Bluesky yielded no anti-vaccine posts in the top results, while the search on X surfaced a plethora of misleading anti-vaccine rhetoric.
Combatting this misinformation is a considerable challenge, especially when high-profile figures align themselves with disinformation: a certain billionaire collaborating with a known liar who leads the world’s wealthiest nation and has appointed an anti-vaxxer as health secretary.
What’s evident is that this crisis extends beyond vaccines; the same dynamics apply in areas like climate science, where misinformation clouds the truth. Governments throughout Europe and beyond must take decisive action to regulate the infosphere, promote scientific integrity, and silence charlatans. The future of humanity is at stake.
The New York mayoral election will likely be remembered not just for the impressive win of a young democratic socialist but also for a trend that could shape future campaigns: the rise of AI-generated campaign videos.
Andrew Cuomo, who lost last week to Zohran Mamdani, notably distributed deepfake videos featuring his opponent, one of which drew accusations of racism.
Although AI has been utilized in political campaigns before—primarily for algorithms that target voters or create policy ideas—its evolution has seen the creation of sometimes misleading imagery and videos.
“What was particularly innovative this election cycle was the deployment of generative AI to produce content directly for voters,” said New York State Assembly member Alex Bores, who advocates for regulations governing the use of AI.
“Whether it was the Cuomo team creating a housing plan with ChatGPT or AI-generated video ads targeting voters, the 2025 campaign cycle felt revolutionary, an unprecedented approach.”
Incumbent Mayor Eric Adams, who exited the race in September, also leveraged AI, utilizing it to generate a robocall and producing a feature in The New Yorker where he converses in Mandarin, Urdu, and Yiddish. An AI video depicted a dystopian view of New York and aimed critiques at Mamdani.
In a controversial move, Mr. Cuomo faced allegations of racism and Islamophobia after his campaign shared a fake video depicting Mamdani eating rice with his fingers, alongside unrelated footage of a black man shoplifting. The campaign also shared a video featuring a black man in a purple suit appearing to endorse sex trafficking, which was later deleted and attributed to an error.
Bores, who is campaigning for a House seat, remarked that many AI-generated ads from the recent election cycle veered into what could be deemed bigoted territory.
“We need to assess whether this is due to algorithms perpetuating stereotypes from their training data, or whether it’s simply easier to manipulate content digitally without having to coordinate specific actions with actors,” Bores said.
“Digital creation simplifies the production of content that might be frowned upon by polite society,” he added.
In New York, campaigns are required to label AI-generated ads, but several, including one from Mr. Cuomo, failed to do so. The New York State Board of Elections oversees potential violations, but Bores pointed out that campaigns may simply accept the risk of penalties, since the cost of a fine could be outweighed by the gains of winning.
“There will likely be campaigns willing to take that risk: if they win, the post-election fines become irrelevant,” Bores said. “We need an effective enforcement mechanism that can intervene rapidly before elections to minimize damage, rather than simply impose penalties afterward.”
Robert Weissman, co-president of Public Citizen, a nonprofit that has supported various AI regulations nationwide, noted that using AI to deceive the public in elections is illegal in more than half of states, which require campaigns to label AI-generated materials as such. However, he cautioned that the regulation of AI in political contexts remains a critical issue.
“Deception has historically been part of politics, but the implications of AI-generated misinformation are particularly concerning,” Wiseman explained.
“When audiences are shown a convincingly authentic video of someone making a statement, it becomes incredibly challenging for that individual to refute it, essentially forcing them to challenge viewers’ perceptions.”
AI technology can now generate convincing videos, but the execution is often still flawed. A “Zohran Halloween Special” video released by Cuomo was clearly labeled as AI-generated yet showcased a poorly rendered image of Mamdani with mismatched audio and nonsensical dialogue.
With midterm elections on the horizon and the 2028 presidential campaign approaching, AI-generated political videos are poised to become a fixture in the landscape.
At the national level, this trend is already evident. Elon Musk shared an AI-generated video where Kamala Harris appeared to assert her role as a de facto presidential candidate and claimed she “knows nothing about running a country.”
While states are advancing in their efforts to regulate AI’s role in elections, there seems to be a lack of willingness to implement such measures at the federal level.
During the No Kings protests in October, Donald Trump released an AI video depicting him piloting a fighter jet and dropping brown liquid on protesters, one of his most recent pieces of AI content.
With President Trump’s evident support for this medium, it appears unlikely that Republicans will seek to impose restrictions on AI anytime soon.
Members of Parliament have cautioned that unless online misinformation is effectively tackled, it is “just a matter of time” before viral content triggers a repeat of the violence seen in the summer of 2024.
Chi Onwurah, chair of the Commons science and technology select committee, expressed concern that ministers seem complacent regarding the threat, placing public safety in jeopardy.
The committee voiced its disappointment with the government’s reaction to a recent report indicating that the business models of social media companies are contributing to unrest following the Southport murders.
In its response to the committee’s findings, the government dismissed proposals for legislation aimed at generative artificial intelligence platforms and said it would refrain from directly intervening in the online advertising sector, which MPs argued fostered the creation of harmful content after the attack.
Onwurah noted that while the government concurs with most conclusions, it fell short of endorsing specific action recommendations.
Onwurah accused ministers of compromising public safety, stating: “The government must urgently address the gaps in the Online Safety Act (OSA); instead, it seems satisfied with the harm caused by the viral proliferation of legal but detrimental misinformation. Public safety is at stake, and it’s only a matter of time before we witness a repeat of the misinformation-driven riots of summer 2024.”
In their report titled ‘Social Media, Misinformation and Harmful Algorithms’, MPs indicated that inflammatory AI-generated images were shared on social media following the stabbing that resulted in the deaths of three children, warning that AI tools make it increasingly easier to produce hateful, harmful, or misleading content.
In its response, published by the committee on Friday, the government said that no new legislation is necessary, insisting that AI-generated content already falls under the OSA, which regulates social media content, and arguing that new legislation would hinder the act’s implementation.
However, the committee highlighted Ofcom’s evidence, where officials from the communications regulator admitted that AI chatbots are not fully covered by the current legislation and that further consultation with the tech industry is essential.
The government also declined to take prompt action regarding the committee’s recommendation to establish a new entity aimed at addressing social media advertising systems that allow for the “monetization of harmful and misleading content,” such as misinformation surrounding the Southport murders.
In response, the government acknowledged concerns regarding the lack of transparency in the online advertising market and committed to ongoing reviews of industry regulations. They added that stakeholders in online advertising seek greater transparency and accountability, especially in safeguarding children from illegal ads and harmful products and services.
Addressing the committee’s request for additional research into how social media algorithms amplify harmful content, the government said that Ofcom is “best positioned” to determine whether an investigation should be conducted.
In correspondence with the committee, Ofcom indicated that it has begun research on recommendation algorithms but acknowledged the need for further work across a broader range of academic and research fields.
The government also dismissed the committee’s call for an annual report to Parliament on the state of online misinformation, arguing that it could hinder efforts to curb the spread of harmful online information.
The British government defines misinformation as the careless dissemination of false information, while disinformation refers to the intentional creation and distribution of false information intended to cause harm or disruption.
Onwurah highlighted concerns regarding AI and digital advertising as particularly troubling. “Specifically, the inaction on AI regulation and digital advertising is disappointing,” she stated.
“The committee remains unconvinced by the government’s assertion that the OSA adequately addresses generative AI, and this technology evolves so swiftly that additional efforts are critically needed to manage its impact on online misinformation.
“And how can we combat that without confronting the advertising-driven business models that incentivize social media companies to algorithmically amplify misinformation?”
Since the uproar over the immigration raids in Los Angeles began, a wave of inaccurate and misleading claims about the ongoing protests has proliferated across text-based social networks. As Donald Trump significantly ramped up federal involvement, falsehoods shared on social media intertwined with misinformation propagated through channels established by the White House. This blend of genuine and deceptive information creates a distorted picture of the city that strays far from the truth.
Various parts of Los Angeles have seen substantial protests over the past four days in response to the US administration’s intensified immigration enforcement. Dramatic images circulated on Saturday from downtown Los Angeles showed a car ablaze amid clashes with law enforcement. Many posts fostered the impression that chaos and violence had engulfed the entire city, even though disturbances remained limited to specific areas within the sprawling metropolis. Trump sent 2,000 National Guard troops to the city without the consent of California Governor Gavin Newsom, and the state has sued over the alleged infringement of its sovereignty. Additionally, Defense Secretary Pete Hegseth has ordered approximately 700 Marines to deploy to the city.
As misinformation proliferates amid both street-level and legal confrontations, the intersection of lies and conflict is evident. Social media often acts as a catalyst for the spread of falsehoods, a trend noted during recent wildfires in Los Angeles, catastrophic hurricanes, and the COVID-19 pandemic.
Among the most egregious pieces of disinformation was a video of Mexican President Claudia Sheinbaum, circulated by conservative and Russian accounts in the run-up to the protests, which falsely presented her as inciting the flag-waving demonstrations, as reported by the misinformation watchdog NewsGuard. These misleading posts, shared by Benny Johnson on Twitter/X, by pro-Trump outlets such as wltreport.com, and by the Russian state media outlet RG.RU, garnered millions of views, according to the organization. Sheinbaum addressed the claims at a press conference on June 9.
Posts about bricks stir up a mixture of real and fake news
Conspiracy-minded conservatives were quick to latch onto familiar tropes. A post on X claiming that a “Soros-funded organization” had placed pallets of bricks near Immigration and Customs Enforcement (ICE) facilities garnered over 9,500 retweets and more than 800,000 views. George Soros is a recurring figure in right-wing conspiracy narratives, and the post also implicated LA Mayor Karen Bass and California Governor Gavin Newsom in the supposed supply of bricks.
I encountered a post that read, “It’s a civil war!!!”
The images of stacked bricks actually came from a Malaysian construction supplier, and the myth that bricks were being distributed to protesters dates back to the 2020 Black Lives Matter demonstrations. X users added context through the platform’s Community Notes feature, and X’s built-in AI chatbot Grok also provided fact-checks when asked about the post’s authenticity.
In response to the hoax imagery, some X users shared links to real footage of protesters smashing concrete bollards, intertwining truth and falsehood and further obscuring the situation. The independent journalist who posted the footage said it showed projectiles being hurled at police, although the clip showed no such thing.
That observation came from the Social Media Lab, a research group at Toronto Metropolitan University, which flagged the posts on Bluesky.
Trump and the White House muddy the waters
Trump himself fueled narratives suggesting that the protests were orchestrated and dominated by external agitators lacking genuine concern for local issues.
“These individuals are not protesters; they are troublemakers and anarchists,” Trump asserted on Truth Social, in a post later screenshotted and shared by Elon Musk on X. Others in the administration echoed similar sentiments on social media.
A Los Angeles Times reporter noted that the White House had claimed a Mexican citizen was arrested for assaulting an officer “during the riot,” when in fact Customs and Border Protection agents had detained him before the protests began.
Sowing misleading information and fostering distrust
Trump has escalated the frequency of ICE raids nationwide, amplifying deportation fears throughout Los Angeles. Anti-ICE posts are also spreading misinformation, according to the Social Media Lab. One post on Bluesky, labeled “breaking,” alleged that a federal agent had just shown up at an LA elementary school seeking to interrogate first graders, when in reality the incident had occurred two months earlier. Researchers described such posts as “rage-farming to push merch.”
The conspiracy platform Infowars launched a broadcast on X titled “Live Watch: LA ICE Riots Spread Across Major Cities Nationwide.” While protests against deportations have emerged in various places, none has matched the level of unrest seen in Los Angeles. The broadcast attracted 13,000 simultaneous viewers on X on the fourth night of the Los Angeles immigration protests.
The spread of erroneous reporting undermines X’s credibility as a news platform, yet the company continues to promote itself as the leading news application in the US, or, more recently, in Qatar. Old images and videos are combined with new ones to sow doubt about legitimate news. Since taking over Twitter in late 2022, Musk has championed user-generated fact-checking via the “Community Notes” feature but has dismantled many of the internal teams designed to counter misinformation. In the run-up to the 2024 US presidential election, researchers found that Musk himself had become a significant spreader of misinformation, posting and resharing misleading claims that garnered around 2 billion views, according to the Center for Countering Digital Hate.
A study by The Guardian has revealed that over 50% of the most popular TikTok videos offering mental health advice are misleading.
As more individuals seek mental health support on social media, research has shown that numerous influencers spread misinformation, including improper treatment terminology, unrealistic “quick fix” solutions, and inaccurate claims.
Those seeking help encounter questionable advice, such as suggestions to eat an orange in the shower to alleviate anxiety. Some videos promote untested supplements like saffron, magnesium glycinate, and holy basil as remedies for anxiety, along with claims about healing emotional wounds within an hour. Others incorrectly frame normal emotional reactions as symptoms of borderline personality disorder or abuse.
Lawmakers and experts expressed concern about the findings, stating that social media’s harmful mental health advice is both troubling and dangerous, prompting the government to consider stricter regulations to safeguard citizens from the spread of misinformation.
The Guardian analyzed the top 100 videos associated with the #MentalHealthTips hashtag on TikTok, consulting psychologists, psychiatrists, and academic specialists.
Experts determined that 52 of the 100 videos, which offered advice on trauma, neurodiversity, anxiety, depression, and severe mental illness, contained at least some misinformation.
David Okai, a consultant neuropsychiatrist and psychology researcher at King’s College London, examined the videos related to anxiety and depression. He noted that some posts misuse treatment language, potentially creating confusion about the true nature of mental illness.
Many videos offered broad advice based on limited personal experiences and anecdotal evidence.
The analysis indicated that social media often oversimplifies treatment realities, reducing complex issues to catchy soundbites. Although effective treatments exist, it’s crucial to communicate that there are no quick or one-size-fits-all solutions, he emphasized.
Dan Poulter, a former health minister and NHS psychiatrist who reviewed videos on severe mental illness, stated that some content trivializes daily experiences, equating them with serious mental health diagnoses.
“This type of misinformation can mislead viewers and downplay the real challenges faced by those with serious mental illnesses,” he noted.
Amber Johnston, a psychologist accredited by the British Psychological Society who evaluated the trauma-related videos, remarked that while many contain valid insights, they often overgeneralize and downplay the complexity of post-traumatic stress disorder and trauma symptoms.
“Each video misleadingly suggests a uniform experience of PTSD that can be neatly summed up in a 30-second clip. The reality is that PTSD and trauma symptoms are uniquely individual and require the attention of a trained professional,” she explained.
“TikTok disseminates misinformation by implying there are universal shortcuts and insights that might actually exacerbate viewers’ issues, rather than provide solutions,” she added.
TikTok stated that videos will be removed if they dissuade users from seeking medical help or endorse harmful treatments. In the UK, when users search for mental health terms like depression or anxiety, they are directed to NHS resources.
Labour MP Chi Onwurah said that the science and technology select committee she chairs is investigating misinformation on social media. Its inquiry has highlighted serious concerns about the effectiveness of online safety laws in combating misleading and harmful online content.
“We know that recommendation algorithms on platforms like TikTok intensify the spread of damaging misinformation, including false mental health advice,” she noted. “Immediate action is needed to address the deficiencies of the Online Safety Act and safeguard public health and safety online.”
Liberal Democrat MP Victoria Collins concurred with the troubling findings and called on the government to act decisively to shield individuals from harmful misinformation.
Labour MP Paulette Hamilton, chair of the Health and Social Care Select Committee, also raised concerns about mental health misinformation on social media. “These ‘tips’ should not replace professional, qualified support,” she insisted.
Professor Bernadka Dubicka, online safety lead at the Royal College of Psychiatrists, noted that while social media can raise awareness, it’s vital that people access health information grounded in the latest evidence from reliable sources. Mental disorders can only be diagnosed through a thorough evaluation by qualified mental health professionals.
A TikTok spokesperson commented, “TikTok is a platform for millions to share their authentic mental health experiences and seek supportive communities. However, we recognize the methodological limitations of this research.”
“We are committed to collaborating with the World Health Organization and NHS health experts to promote accurate information on our platform, and we proactively remove 98% of harmful misinformation before it is reported to us,” they added.
A government spokesperson said the government is “taking steps to minimize the impact of harmful misleading content online” through the new online safety legislation.
Another, less visible conflict unfolded earlier this month as missiles and drones flew through the night skies between India and Pakistan.
Following the Indian government’s announcement of Operation Sindoor, military strikes on Pakistan launched after a militant attack in Kashmir that Delhi blamed on Islamabad, rumors of Pakistan’s defeat rapidly circulated online.
What began as mere assertions on social media platforms like X quickly escalated into a chorus of boasts about India’s military strength, presented as “breaking news” and “exclusives” on some of the country’s leading news channels.
These posts and reports claimed that India had downed several Pakistani jets, captured pilots and the port of Karachi, and taken control of Lahore. Other unfounded claims suggested that the powerful chief of the Pakistani military had been arrested and a coup carried out. One widely shared post stated, “We’ll be having breakfast in Rawalpindi tomorrow,” referring, amid the ongoing hostilities, to the Pakistani city that houses the military’s headquarters.
Many of these assertions included videos of explosions, collapsing buildings, and missiles being launched from the air. The issue was that none of these were factual.
“Global Trends in Hybrid Warfare”
The ceasefire on May 10 momentarily pulled both nations back from the brink of full-scale war after the most intense escalation in decades, triggered by a militant attack on a tourist site in Indian-controlled Kashmir that killed 26 people, mostly Indian tourists. India swiftly blamed Pakistan for the atrocity, while Islamabad denied involvement.
Even with the cessation of military hostilities, analysts, fact-checkers, and activists have meticulously tracked the surge of misinformation that proliferated online during this conflict.
In Pakistan, misinformation also spread widely. Just before the conflict erupted, the Pakistani government lifted a ban on X, which researchers later identified as a source of misinformation, albeit not at the same magnitude as in India.
A fabricated image purporting to show fighter planes in combat over Udangh Haar, India. Photograph: X
Claims of military victories from Pakistan circulated heavily on social media, paralleling an uptick in recycled AI-generated footage that was amplified by mainstream media outlets, prominent journalists, and government officials, leading to false narratives about captured Indian pilots, military coups, and dismantling India’s defenses.
Fabricated reports also circulated claiming that Pakistan’s cyber attacks had largely disabled India’s power infrastructure and that Indian troops were surrendering by raising white flags. Video game simulations, in particular, became a favored vehicle for misinformation in Pakistan that portrayed it in a favorable light.
A recent report on social media conflicts surrounding the India-Pakistan situation, released last week by the civil society organization The London Story, elaborated on how platforms like X and Facebook have become fertile grounds for spreading wartime narratives, hate speech, and emotionally charged misinformation, leading to an environment rich in nationalist fervor on both sides.
In a written statement, a representative from Meta, the parent company of Facebook, claimed to have implemented “significant steps to combat misinformation,” including the removal and labeling of misleading content and limiting the reach of stories flagged by fact-checkers.
Joyojeet Pal, an associate professor at the University of Michigan’s School of Information, remarked that the magnitude of misinformation in India has “surpassed anything seen previously,” affecting both sides of the conflict.
Pal noted that the misinformation campaigns have outstripped the typical nationalist propaganda prevalent in both India and Pakistan.
Fraudulent images purporting to show damage to the Narendra Modi Stadium in India have circulated and been debunked on X. Photograph: X
Analysts argue this exemplifies the emerging digital battleground of warfare, where strategic misinformation is weaponized to manipulate narratives and heighten tensions. Fact-checkers point out that the proliferation of misinformation, such as old footage and misleading claims of military victories, mirrors patterns seen in the early stages of Russia’s war in Ukraine.
The Center for the Study of Organized Hate (CSOH), based in Washington DC, has tracked and documented misinformation from both nations, cautioning that the manipulation of information in the recent India-Pakistan conflict is “not an isolated occurrence but part of a larger global trend in hybrid warfare.”
CSOH Executive Director Raqib Hameed Naik said that some social media platforms experienced “significant failures” in controlling the spread of disinformation originating from both India and Pakistan. Of the 427 key posts CSOH analyzed on X, many of which garnered nearly 10 million views, only 73 were flagged with warnings. X did not respond to requests for comment.
Initial fabricated reports from India predominantly circulated on X and Facebook, often shared by verified right-wing accounts. Numerous posts openly expressed support for the Bharatiya Janata Party (BJP) government, which is known for its Hindu nationalist stance. Some BJP politicians even shared this content.
Deepfake videos altering speeches by Narendra Modi and other Indian officials were also disseminated on the same platforms. Photograph: X
Examples circulating included 2023 footage of Israeli airstrikes in Gaza incorrectly labeled as Indian strikes against Pakistan, and images from Indian naval drills misrepresented as proof of an assault on Karachi Port.
Images from video games were falsely presented as real footage of the Indian Air Force shooting down a Pakistani JF-17 fighter jet, alongside scenes from the Russia-Ukraine war passed off as “major airstrikes in Pakistan.” AI-generated visuals of purported Indian victories were also disseminated, as were manipulated videos of Turkish pilots presented in fabricated reports of captured Pakistani personnel, and doctored images used in false reports of the assassination of Pakistan’s former prime minister Imran Khan.
Many of these posts, initially generated by Indian social media users, achieved millions of views, and such misinformation was later featured in some of India’s most prominent television news segments.
“The Fog of War Accepted as Reality”
The credibility of Indian mainstream media, already diminished by the government’s strong influence under Modi, now faces renewed scrutiny. Several prominent anchors have issued public apologies.
The Indian human rights organization Citizens for Justice and Peace (CJP) lodged a formal complaint with the broadcasting authority, citing “serious ethical violations” in the coverage of Operation Sindoor across six major television networks.
CJP Secretary Teesta Setalvad stated that these channels have completely neglected their duty as impartial news sources, turning into “propaganda collaborators”.
Kanchan Gupta, a senior adviser to India’s Ministry of Information and Broadcasting, refuted claims of governmental involvement in the misinformation efforts. He asserted that the government is “very cautious” about misinformation and has provided clear guidelines for mainstream media reporting on the conflict.
“We established a surveillance center operating 24/7 to monitor any disinformation that could have a cascading effect, and a fact check was promptly issued. Social media platforms collaborated to eliminate a multitude of accounts promoting this misinformation.”
Gupta said “strong” notices had been sent to several news channels for broadcasting rule violations. Nonetheless, he said the fog of war is widely accepted as a reality, and that the nature of reporting, whether of an overt or covert conflict, tends to escalate in intensity.
Disinformation is particularly prevalent on social media platforms. Stefani Reynolds/AFP via Getty Images
The National Science Foundation (NSF) has terminated a government research grant aimed at examining misinformation and disinformation. This decision comes amid a surge of propaganda and deceit proliferated by the latest AI technologies, coinciding with tech companies scaling back their content moderation efforts and disbanding fact-checking teams.
The grant was canceled on April 18, as stated by the NSF in a public announcement (https://www.nsf.gov/updates-on-priorities). The statement asserts that the agency no longer backs research on misinformation or disinformation, citing potential conflicts with constitutionally protected free speech rights...
Dr. Peter Marks, a top Food and Drug Administration vaccine official, resigned under pressure on Friday, stating that Robert F. Kennedy Jr.’s aggressive attitude towards vaccines was irresponsible and posed a risk to public health.
“It became clear that truth and transparency are not valued by the secretary, but instead he desires blind confirmation of his misinformation and lies,” Dr. Marks wrote to Dr. Sara Brenner, the agency’s acting commissioner. He reiterated his sentiments in an interview, saying, “This individual does not prioritize truth. He prioritizes followership.”
Dr. Marks resigned after being called to the Department of Health and Human Services on Friday afternoon, where he was given the ultimatum of resigning or being terminated, according to sources familiar with the situation.
Dr. Marks headed the Center for Biologics Evaluation and Research, which is responsible for approving and monitoring the safety of vaccines as well as a variety of other therapies, including cell and gene therapies. Many viewed him as a steady and reliable presence during the pandemic, though he also faced criticism for being overly accommodating to companies seeking approval for treatments with complicated evidence of effectiveness.
Ongoing scrutiny of the FDA’s vaccine program clearly placed Dr. Marks at odds with the new health secretary. Since Kennedy took office on February 13th, he has issued a series of directives on vaccine policy. He has alarmed those concerned about his potential to leverage his government authority to advance his long-standing campaign asserting vaccines are highly detrimental despite overwhelming evidence of their life-saving impact worldwide.
“Undermining trust in a well-established vaccine that has met the FDA’s rigorous standards of quality, safety, and efficacy for decades is irresponsible and poses a significant risk to public health and our nation’s well-being and security,” Dr. Marks wrote.
For instance, Kennedy promoted the use of vitamin A as a treatment during a major measles outbreak in Texas, downplaying the importance of vaccination. He has surrounded himself with analysts tied to the anti-vaccine movement and is pursuing studies examining long-debunked theories linking vaccines to autism.
On Thursday, Kennedy announced plans to establish a vaccine injury agency within the Centers for Disease Control and Prevention. He emphasized that this initiative was a top priority and would bring the “gold standard of science” to the federal government.
An HHS spokesperson stated on Friday night that Dr. Marks would no longer have a place at the FDA if he did not commit to transparency.
In his resignation letter, Dr. Marks highlighted the tragic toll of measles amid Kennedy’s lukewarm approach to the urgent vaccination needs among many unvaccinated individuals in Texas and other states.
Dr. Marks pointed out that through widespread vaccine availability, “over 100,000 children who received vaccinations last year in Africa and Asia were saved.”
Dr. Marks expressed his willingness to address Kennedy’s concerns about vaccine safety and transparency in public forums and through collaboration with national experts in science, engineering, and medicine, but he was rebuffed.
“I have exhausted all efforts to work with them to restore confidence in vaccines,” Dr. Marks stated in an interview. “It became evident that this was not their goal.”
With that, Dr. Marks bid farewell to the FDA.
“His leadership has been instrumental in driving medical innovation and ensuring life-saving treatments reach those in need,” said Ellen V. Sigal, founder of the cancer research advocacy group Friends of Cancer Research and a close associate of Dr. Marks. His departure, she noted, “will leave significant gaps.”
Dr. Marks guided the agency and its external advisers on the kind of evidence required for the FDA’s vaccine program during the tumultuous years of the coronavirus pandemic, helping expedite emergency authorizations for vaccines developed under the Trump administration’s Operation Warp Speed.
In June 2022, he urged an external expert panel to consider the risks the virus posed to children under five years old, leading the panel to recommend the vaccine for that age group later that day.
“We have to be careful that we don’t become numb to the number of pediatric deaths because of the overwhelming number of fatalities we are facing here,” Dr. Marks cautioned at the time.
Dr. Peter Hotez, a vaccine expert at Baylor College of Medicine, spoke highly of his regular interactions with Dr. Marks during the pandemic, describing him as deeply committed to leveraging science to aid the American populace. “He was a pandemic hero, and it’s truly unfortunate to see him go,” Hotez remarked.
Dr. Marks faced skepticism from some within the FDA, including former members of his own vaccine team. Two senior regulators in the agency’s vaccine office resigned in 2021 over the Biden administration’s efforts to push for the approval of Pfizer’s COVID-19 vaccine and booster shot.
Kennedy’s call for further investigation into vaccine injuries was met with reservations by Dr. Michael Osterholm, director of the University of Minnesota’s Center for Infectious Disease Research and Policy, who noted that such research had been a focal point for decades. “I fear this is an attempt to magnify vaccine harm out of proportion to the actual risk,” Osterholm cautioned.
Dr. Marks shared these concerns, expressing his desire in his letter to mitigate the harm inflicted by the current administration.
“My hope,” he penned, “is that the unprecedented assault on scientific truths that has detrimentally impacted our nation’s public health will cease in the coming years, allowing our citizens to fully benefit from the wide array of medical advancements.”
More than half of the claims made in popular TikTok videos about attention deficit hyperactivity disorder (ADHD) are not in line with clinical guidelines.
ADHD affects approximately 1% of people worldwide, according to the Global Burden of Disease study. There is an active debate about whether the condition is underdiagnosed, with some psychologists arguing that a substantial proportion of people who have it go unrecognized.
To understand the impact of social media on perceptions of ADHD, Vasileia Karasavva at the University of British Columbia (UBC), Canada, and her colleagues analyzed the 100 most-viewed videos on TikTok with the hashtag #ADHD as of January 10, 2023.
The average video included three claims about ADHD. The researchers presented these claims to two clinical psychologists, who were asked whether each claim accurately reflected the symptoms of ADHD described in the DSM-5, the standard manual used to diagnose mental disorders. Only 48.7% of the claims met that requirement. More than two-thirds of the videos attributed to ADHD problems that the psychologists said reflected “normal human experiences.”
“We asked two experts to watch the top 100 most popular videos, and we found that they didn't really match the empirical literature,” says Karasavva. “We're like, 'OK, this is the problem.' ”
The researchers asked the psychologists to rate each video on a scale of 0 to 5. They then asked 843 UBC students to rate the five videos the psychologists had scored highest and the five they had scored lowest. The psychologists gave the most clinically accurate videos an average score of 3.6, while the students rated them at 2.8. For the least accurate videos, the students gave an average score of 2.3, compared with 1.1 from the psychologists.
Students were also asked whether they would recommend the videos and about their perception of how common ADHD is. “The more time you had spent watching ADHD-related content on TikTok, the more likely you were to recommend the videos and to rate them as useful and accurate,” says Karasavva.
“You do wonder how common these outcomes would be for TikTok, or for all the health content on the internet,” says David Ellis at the University of Bath, UK. “We live in a world where we know a lot about health, but the online world is still full of misinformation. TikTok only reflects that reality back to us.”
Ellis says that medical misinformation is likely to be even more common for mental health conditions, as diagnosis is based on observation rather than more objective tests.
However, banning ADHD videos on TikTok would be “no use,” even when they contain misinformation, says Karasavva. “Maybe the answer is more experts putting out more videos, or maybe it’s about people becoming a little more discerning and critical of the content they consume,” she says.
TikTok declined to comment on the details of the study, but told New Scientist that it takes action against medical misinformation and directs anyone seeking advice on health conditions to contact a medical professional.
On October 30, 1938, an American radio station aired a drama adaptation of HG Wells’ apocalyptic novel “War of the Worlds.” Some listeners were unable to differentiate between reality and fiction. Reports surfaced of panicked audiences mistaking it for breaking news. Academic research later estimated that over a million people thought they were witnessing an actual Martian invasion.
This incident highlights how easily misinformation can take root. Despite the claims of mass panic, the reality was different. A national survey of radio audiences found that only 2% reported having tuned into the broadcast, and most of those described it as “a play” or a program by Orson Welles rather than as a news report. The story of widespread confusion, in which listeners supposedly mistook the drama for a real invasion, was itself largely a myth.
Nearly a century later, misinformation remains a prominent issue. Headlines often report millions being exposed to false information online. A 2018 Gallup survey found that two-thirds of Americans encounter misinformation on social media. However, similar to the War of the Worlds broadcast, misinformation may not be as widespread as believed. Visits to reliable news sources increased significantly compared to unreliable ones during events like the Covid spread in spring 2020.
Outright misinformation may be rarer than assumed. Navigating between fact and fiction means avoiding two errors: believing falsehoods and distrusting all information. Both carry costs. Instead, finding ways to manage the risks associated with trusting information is crucial to discerning truth amid a vast sea of data.
Rather than blindly accepting or rejecting information, we should develop tools to identify flawed assumptions and misinterpretations. Misinformation is not just about inaccurate facts but also about misinterpretations drawn from technically accurate information. We must equip individuals to discern distorted narratives, cherry-picked data, and hidden assumptions when navigating through the digital landscape.
Addressing false beliefs online requires more than labeling content as “misinformation.” It involves empowering individuals to critically assess and interpret information accurately. Striking a balance between trusting too much and distrusting everything is essential for combating false beliefs effectively in the digital age.
It’s relatively easy to contaminate the output of an AI chatbot
Nicolas Maeterlinck/BELGA MAG/AFP via Getty Images
Artificial intelligence chatbots already have a problem with misinformation, and it’s relatively easy to contaminate such AI models by adding a bit of medical misinformation to the training data. Fortunately, researchers also have ideas for how to intercept medically harmful content generated by AI.
Daniel Alber and his colleagues at New York University simulated a data poisoning attack, which attempts to manipulate the output of an AI by corrupting its training data. First, they used the OpenAI chatbot service ChatGPT-3.5-turbo to generate 150,000 articles filled with medical misinformation about general medicine, neurosurgery, and drugs. They then inserted this AI-generated medical misinformation into their own experimental version of a popular AI training dataset.
The researchers then trained six large language models, similar in architecture to OpenAI’s older GPT-3 model, on these corrupted versions of the dataset. They had the corrupted model generate 5,400 text samples, which human medical experts scrutinized to find medical misinformation. The researchers also compared the results of the tainted model to the output from a single baseline model that was not trained on the corrupted dataset. OpenAI did not respond to requests for comment.
These initial experiments showed that replacing just 0.5 percent of the AI training dataset with medical misinformation made the tainted models more likely to generate medically harmful content, even when answering questions about concepts unrelated to the corrupted data. For example, a poisoned AI model flatly denied the effectiveness of COVID-19 vaccines and antidepressants, and falsely claimed that the drug metoprolol, which is used to treat high blood pressure, can also treat asthma.
“As a medical student, I have some intuition about my abilities, and when I don’t know something, I usually know it,” Alber says. “Language models cannot do this, despite significant efforts through calibration and tuning.”
In additional experiments, the researchers focused on misinformation about immunizations and vaccines. They found that corrupting just 0.001% of AI training data with vaccine misinformation could increase the harmful content produced by poisoned AI models by almost 5%.
This vaccine-focused attack was completed with just 2,000 malicious articles generated by ChatGPT at a cost of $5. Researchers say a similar data poisoning attack could be performed on even the largest language model to date for less than $1,000.
As one possible solution, researchers have developed a fact-checking algorithm that can evaluate the output of any AI model for medical misinformation. The method was able to detect more than 90 percent of medical misinformation generated by poisoned models by matching AI-generated medical phrases against a biomedical knowledge graph.
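As a loose illustration of the general idea rather than the NYU team's actual pipeline, a knowledge-graph check can extract simple (drug, relation, condition) claims from generated text and flag any that are absent from a curated set of biomedical facts. Everything in the sketch below, from the triples to the crude extraction rule, is a hypothetical placeholder:

```python
# Illustrative sketch only: flag medical claims that a small, hand-curated
# knowledge graph of (drug, relation, condition) triples cannot support.
# The triples and the extraction rule are hypothetical placeholders, not
# the biomedical knowledge graph or method used in the study.
import re

KNOWN_TRIPLES = {
    ("metoprolol", "treats", "high blood pressure"),
    ("albuterol", "treats", "asthma"),
}

CLAIM_PATTERN = re.compile(r"(\w+) treats ([\w ]+)", re.IGNORECASE)

def extract_claims(text):
    """Very naive claim extraction: only 'X treats Y' style sentences."""
    for match in CLAIM_PATTERN.finditer(text):
        drug = match.group(1).lower()
        condition = match.group(2).strip(" .").lower()
        yield (drug, "treats", condition)

def flag_unsupported(text):
    """Return claims that do not appear in the knowledge graph."""
    return [claim for claim in extract_claims(text) if claim not in KNOWN_TRIPLES]

print(flag_unsupported("Metoprolol treats asthma. Albuterol treats asthma."))
# -> [('metoprolol', 'treats', 'asthma')]
```

A real system would rely on biomedical entity linking and a far larger knowledge graph rather than a regular expression, but the underlying logic, flagging whatever the graph cannot corroborate, is the same.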
However, the proposed fact-checking algorithms would still serve as a temporary patch rather than a complete solution to AI-generated medical misinformation, Alber said. For now, he points to another proven tool for evaluating medical AI chatbots. “Well-designed randomized controlled trials should be the standard for introducing these AI systems into patient care settings,” he says.
On TikTok, people claim that pouring castor oil on their belly buttons can cure endometriosis, aid in weight loss, improve complexion, and promote healthy hair. However, it’s important to question the scientific basis behind this viral trend. Castor oil is known for its stimulant and laxative effects, which can be beneficial for treating constipation and inducing labor, although there are more commonly used medications for these purposes.
In addition to its medicinal uses, castor oil is also utilized in cosmetics like lip balms and moisturizers due to its moisturizing and antibacterial properties. Nevertheless, there is a lack of research supporting or refuting the health benefits of applying castor oil to the belly button.
This practice may not make sense from a physiological standpoint, as the belly button served as a connection to the placenta during fetal development, providing oxygen and removing waste products. However, this connection is severed at birth, and oil does not enter the body through the belly button.
While massaging castor oil into the skin may offer temporary relief for some discomfort, such as menstrual cramps, it has not been proven effective for weight loss or pain relief, whether taken orally or applied topically. For aromatherapy purposes, essential oils have been shown to be more effective than unscented oils like castor oil.
Overall, while abdominal massage with castor oil may provide some relief for symptoms like constipation, it is not a substitute for proper medical treatment. It’s important to approach health trends with caution and rely on scientifically proven methods for healthcare.
Many AI-generated images look realistic, even on closer inspection.
Did you notice that the image above was created by artificial intelligence? It can be difficult to spot AI-generated images, videos, audio, and text as technological advances make them harder to distinguish from human-created content and increasingly easy to exploit for disinformation. But knowing the current state of the AI technologies used to create disinformation, and the range of signs that indicate what you’re seeing may be fake, can help you avoid being fooled.
World leaders are concerned. A World Economic Forum report warns that misinformation and disinformation “have the potential to fundamentally disrupt electoral processes in multiple economies over the next two years,” while easier access to AI tools “has already led to an explosion in counterfeit information and so-called ‘synthetic’ content, from sophisticated voice clones to fake websites.”
While the terms misinformation and disinformation both refer to false or inaccurate information, disinformation is information that is deliberately intended to deceive or mislead.
“The problem with AI-driven disinformation is the scale, speed and ease with which it can be deployed,” says Hany Farid at the University of California, Berkeley. “These attacks no longer require nation-state actors or well-funded organizations — any individual with modest computing power can generate large amounts of fake content.”
Farid, a pioneer of digital image forensics, says generative AI (see glossary, below) “is polluting our entire information ecosystem, calling into question everything we read, see, and hear,” and his research shows that AI-generated images and sounds are often “almost indistinguishable from reality.”
However, research by Farid and his colleagues shows that there are strategies people can follow to reduce the risk of falling for social media misinformation and AI-created disinformation.
How to spot fake AI images
Remember the photo of Pope Francis wearing a down jacket? Fake AI images like this are becoming more common as new tools based on diffusion models (see glossary, below) let anyone create images from simple text prompts. A study by Google’s Nicolas Dufour and his colleagues found that since the beginning of 2023, the share of AI-generated images in fact-checked misinformation claims has risen sharply.
“Today, media literacy requires AI literacy,” says Negar Kamali at Northwestern University in Illinois. In a 2024 study, she and her colleagues identified five categories of errors in AI-generated images (outlined below) and offered guidance on how people can spot them on their own. The good news is that their research shows people are currently about 70% accurate at detecting fake AI images. You can take their online image test to evaluate your own detection skills.
5 common types of errors in AI-generated images:
Socio-cultural impossibilities: Does the scene depict behavior that is unusual, unexpected, or surprising for a particular culture or historical figure?
Anatomical irregularities: Look closely. Do the hands or other body parts look unusual in shape or size? Do the eyes or mouth look strange? Are any body parts fused together?
Stylistic artifacts: Do the images look unnatural, too perfect, or too stylized? Does the background look odd or missing something? Is the lighting strange or variable?
Functional impossibilities: Are there any objects that look odd, unreal, or non-functional? For example, a button or belt buckle in an odd place?
Violations of physics: Do the shadows point in different directions? Does the mirror’s reflection match the world depicted in the image?
Strange objects or behaviors can be clues that an image was created by AI.
How to spot deepfakes in videos
Since 2014, an AI technology called generative adversarial networks (see glossary, below) has enabled tech-savvy individuals to create video deepfakes. This involves digitally manipulating existing videos of people to swap in different faces, create new facial expressions, and insert new audio with matching lip syncing. It has allowed a growing number of fraudsters, state-sponsored hackers, and internet users to produce video deepfakes, with celebrities such as Taylor Swift and everyday people alike unwillingly appearing in deepfake porn, scams, and political misinformation and disinformation.
The techniques used to spot fake AI images (see above) can also be applied to suspicious videos. In addition, researchers at the Massachusetts Institute of Technology and Northwestern University in Illinois have compiled a few tips for spotting such deepfakes, while acknowledging that no method works reliably every time.
6 tips to spot AI-generated videos:
Mouth and lip movements: Are there moments when the video and audio are not perfectly in sync?
Anatomical defects: Does the face or body look strange or move unnaturally?
Face: Look for inconsistencies in facial smoothness, in wrinkles around the forehead and cheeks, and in facial moles.
Lighting: Is the lighting inconsistent? Do shadows behave the way you expect them to? Pay particular attention to the person's eyes, eyebrows, and glasses.
Hair: Does the facial hair look or move oddly?
Blinking: Too much or too little blinking can be a sign of a deepfake.
A newer category of video deepfake is based on diffusion models (see glossary below), the same AI technology behind many image generators, and can create entirely AI-generated video clips from text prompts. Companies have already tested and released commercial AI video generators, potentially making it easy for anyone to create such clips without special technical knowledge. So far, the resulting videos tend to feature distorted faces and odd body movements.
“AI-generated videos are likely easier for humans to detect than images because they contain more motion and are much more likely to have AI-generated artifacts and impossibilities,” Kamali says.
How to spot an AI bot
Social media accounts controlled by computer bots have become commonplace across many social media and messaging platforms. Many of these bots now leverage generative AI technologies such as large language models (see glossary below), which became widely available in 2022 and make it easier and cheaper to mass-produce grammatically correct, persuasive, customized AI-written content through thousands of bots for a variety of situations.
“It's now much easier to customize these large language models for specific audiences with specific messages,” says Paul Brenner at the University of Notre Dame in Indiana.
Brenner and his colleagues found that volunteers could distinguish AI-powered bots from humans only about 42 percent of the time, even though participants were told they might be interacting with a bot. You can test your own bot-detection skills online.
Brenner said some strategies could help identify less sophisticated AI bots.
5 ways to tell if a social media account is an AI bot:
Emojis and hashtags: Overusing these can be a sign.
Unusual phrases, word choices, and analogies: Unusual language can indicate an AI bot.
Repetition and structure: Bots may repeat wording in a similar or fixed format, or overuse certain slang terms.
Ask questions: Questions can reveal a bot's lack of knowledge about a topic, especially when it comes to local places and situations.
Assume the worst: If the social media account is not a personal contact and its identity has not been clearly verified or confirmed, it may be an AI bot.
How to spot voice clones and audio deepfakes
Voice cloning (see glossary below) AI tools have made it easy to generate new synthetic voices that can imitate virtually anyone, fuelling a rise in audio deepfake scams that replicate the voices of family members, business executives and political leaders such as US President Joe Biden. These are much harder to identify than AI-generated videos or images.
“Voice clones are particularly difficult to distinguish from the real thing because there are no visual cues to help the brain make that decision,” says Rachel Tobac, co-founder of SocialProof Security, a white-hat hacking organization.
Detecting these AI voice deepfakes can be difficult, especially when they're used in video or phone calls, but there are some common sense steps you can take to help distinguish between real human voices and AI-generated ones.
4 steps to recognize whether audio has been cloned or faked:
Public figures: If the audio clip is of an elected official or public figure, review whether what they say is consistent with what has already been publicly reported or shared about that person's views or actions.
Look for inconsistencies: Compare the audio clip to previously authenticated video or audio clips featuring the same person. Are there any inconsistencies in the tone or delivery of the voice?
Awkward Silence: If you're listening to a phone call or voicemail and notice that the speaker takes unusually long pauses while speaking, this could be due to the use of AI-powered voice duplication technology.
Weird and wordy: Robotic or unusually verbose speech may indicate that someone is combining voice cloning to mimic a person's voice with a large language model to generate the exact phrasing.
Out of character behaviour by public figures like Narendra Modi could be a sign of AI
Technology will continue to improve
As it stands, there are no foolproof rules for distinguishing AI-generated content from authentic human content. AI models that generate text, images, videos, and audio will surely continue to improve, quickly producing content that looks authentic without obvious artifacts or mistakes. “Recognize that AI is manipulating and fabricating images, videos, and audio, and that it happens in under 30 seconds,” Tobac says. “This makes it easy for bad actors looking to mislead people to quickly push out AI-generated disinformation, which can land on social media within minutes of breaking news.”
While it's important to hone our ability to spot AI-generated disinformation and learn to ask more questions about what we read, see and hear, ultimately this alone won't be enough to stop the damage, and the responsibility for spotting it can't be placed solely on individuals. Farid is among a number of researchers who argue that government regulators should hold accountable the big tech companies that have developed many of the tools that are flooding the internet with fake, AI-generated content, as well as startups backed by prominent Silicon Valley investors. “Technology is not neutral,” Farid says. “The tech industry is selling itself as not having to take on the responsibilities that other industries take on, and I totally reject that.”
Diffusion model: An AI model that learns by first adding random noise to data (such as blurring an image) and then learning to reverse the process to recover the original data (a toy sketch of this forward noising step follows this glossary).
Generative adversarial networks: A machine learning technique based on two competing neural networks, one generating synthetic data and the other trying to predict whether the data it is shown is genuine or generated.
Generative AI: A broad class of AI models that can generate text, images, audio, and video after being trained on similar forms of content.
Large language models: A subset of generative AI models that can generate different forms of written content in response to text prompts and, in some cases, translate between languages.
Voice clone: The use of AI models to create a digital copy of a person's voice and generate new speech samples in that voice.
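To make the diffusion-model entry above more concrete, here is a minimal Python sketch of the forward noising step that such a model is trained to reverse. It is purely illustrative and not taken from any of the research described here; the array, number of steps, and noise schedule are all made-up assumptions.

```python
# Toy illustration of diffusion-style forward noising (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)

# Pretend this tiny array is an image (e.g. four pixel intensities in [0, 1]).
clean = np.array([0.9, 0.1, 0.4, 0.8])

num_steps = 5
betas = np.linspace(0.1, 0.5, num_steps)  # assumed noise schedule

x = clean.copy()
for t, beta in enumerate(betas, start=1):
    noise = rng.normal(size=x.shape)
    # Each step keeps part of the signal and mixes in fresh Gaussian noise.
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
    print(f"step {t}: {np.round(x, 2)}")
```

Training a diffusion model amounts to learning to undo these steps, so that, starting from pure noise, it can generate new images or video frames that resemble its training data.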
Kamala Harris’ campaign has accused Tesla CEO Elon Musk of spreading “manipulated lies” after he shared a fake video of the vice president on his X account.
Musk reposted a video on Friday evening that had been doctored to show Harris saying, “I was selected because I’m the ultimate diversity hire,” along with other controversial statements. The video has garnered 128 million views on Musk’s account. He captioned it with “This is awesome” and a laughing emoji. Musk owns X, which he rebranded from Twitter last year.
Democratic Senator Amy Klobuchar criticized Musk for violating the platform's guidelines on sharing manipulated media. Under X's rules, users are not allowed to share media that may mislead or harm others, although satire is permitted as long as it doesn't create confusion about its authenticity.
Harris’ campaign responded by stating, “The American people want the real freedom, opportunity, and security that Vice President Harris is providing, not the false, manipulated lies of Elon Musk and Donald Trump.”
The original video was posted by the @MrReaganUSA account, associated with conservative YouTuber Chris Kohls, who claimed it was a parody.
However, Musk, a supporter of Donald Trump, did not clarify that the video was satire.
California Governor Gavin Newsom stated that the manipulated video of Harris should be illegal and indicated plans to sign a bill banning such deceptive media, likely referring to a proposed ban on election deepfakes in California.
Musk defended his actions, stating that parody is legal in the USA, and shared the original @MrReaganUSA video.
Meta maintains its stance against paying media companies for news in Australia, arguing that it does not address the issue of misinformation and disinformation on Facebook and Instagram.
In March, Meta announced that it would not strike new deals to pay media organizations for news once the contracts signed in 2021 under the Morrison government's media bargaining code expire.
Deputy Treasurer Stephen Jones is exploring the possibility of the Albanese government using powers under the News Media Bargaining Code Act to “designate” Meta under the code. If designated, the tech company would be compelled to negotiate payments with news providers or face a fine of 10% of its revenue in Australia.
The Treasury is also exploring other options, such as mandating that the company carry news or using taxation to influence its behaviour. The government is concerned that designating Meta under the code could prompt the company to ban news in Australia, as it has done in Canada since August last year.
Experts in Canada have noted that where news content has disappeared, it has been replaced by misleading viral content.
In a submission to a federal parliamentary inquiry on social media and Australian society, Meta stated that they are “unaware of any evidence” supporting claims that misinformation has increased on their Canadian platforms due to the news ban, and that they have never viewed news as a tool to combat misinformation and disinformation on their platform.
“We are committed to removing harmful misinformation and reducing the distribution of fact-checked misinformation, regardless of whether it is news content. By addressing this harmful content, we aim to maintain the integrity of information on our platform,” stated the submission.
“Canadians can still access trusted information from various sources using our services, including government agencies, political parties, and non-government organizations, which have always shared engaging information with their audiences, along with news content links.”
The EU has reportedly taken legal action against Meta, the parent company of Facebook and Instagram, for failing to address concerns about Russian disinformation ahead of the European Parliament elections in June. The move is reportedly intended as a wake-up call for the company.
Concerns are also raised regarding the inadequate monitoring of election-related content and the effectiveness of mechanisms to flag illegal content.
The European Commission is worried that Meta's moderation systems are not strong enough to combat the spread of fake news and voter suppression.
Officials are particularly concerned about Meta’s response to Russia’s attempts to interfere with upcoming European elections, without explicitly mentioning the Kremlin.
According to reports, the European Commission has also objected to Meta's decision to discontinue CrowdTangle, a tool that helps monitor the spread of fake news and voter-suppression attempts in real time across the EU, raising significant concerns.
In accordance with a new law requiring tech companies to regulate their content to comply with EU regulations, Facebook and others must implement systems to guard against election interference risks.
A Meta spokesperson stated: “We have established processes to identify and mitigate risks on our platform. We are collaborating with the European Commission and will provide additional details on our work. We look forward to the opportunity.”
The action against Meta, if confirmed, follows recent stress tests conducted by the Commission to assess major social media platforms' readiness against Russian disinformation. An official announcement is expected shortly.
The stress tests included hypothetical scenarios based on historical attempts to influence elections and cyber-based misinformation campaigns.
This encompassed deepfakes and efforts to suppress authentic voices through online harassment and intimidation.
The EU recognized the stifling of legitimate democratic voices as a new tool to silence dissent in February.
“The objective was to evaluate the platforms’ preparedness to combat manipulative activities leading up to elections, including various tactics,” said the committee.
This allowed them to assess social media’s resilience to manipulation, which is anticipated to escalate in the coming weeks.
The upcoming European Parliament elections between June 6 and 9 are facing a surge in disinformation across the region.
The European Parliament released voter guidelines on Monday, highlighting past incidents, such as the false claim that only specific ink colors could be used on ballots.
Voters are cautioned to be vigilant against disinformation, drawing from recent national election experiences.
In elections in various countries, misinformation about erasable ink pens and physical threats at polling stations have circulated on social media, reflecting the challenges of combating fake news and manipulation.
The EU Disinfolab documented thousands of cases of fake news targeting Ukraine’s defense against Russia’s invasion and spreading misinformation about President Putin’s motives.
Recently, a Czech news agency’s website was hacked to display fabricated news stories, including alleged assassination attempts and political reactions.
Last month, the Czech government exposed a disinformation network linked to Moscow.
The Belgian prime minister announced an investigation into alleged Russian payments to influence European Parliament elections.
The Supreme Court heard oral arguments on Monday in a case that may have significant implications for the federal government's relationship with social media companies and online misinformation. The plaintiffs in Murthy v. Missouri claim that the White House's requests to remove false information about the coronavirus from Twitter and Facebook constitute unlawful censorship in violation of the First Amendment.
Arguments began with Brian Fletcher, the principal deputy solicitor general at the Justice Department, arguing that the government's actions did not cross the line from persuasion to coercion. He also disputed the lower court's portrayal of events, calling it misleading and saying it relied on quotes taken out of context.
“When the government convinces a private organization not to distribute or promote someone else’s speech, it is not censorship but rather persuading the private organization to act within its legal rights,” stated Fletcher.
The justices, particularly conservatives Samuel Alito and Clarence Thomas, pressed Fletcher on where the distinction lies between coercing and persuading a company. Fletcher defended the government’s actions as part of a broader effort to mitigate harm to the public.
Louisiana Solicitor General Benjamin Aguiñaga argued that the government was covertly pressuring platforms to censor speech, violating the First Amendment. The lawsuit, led by the attorneys general of Louisiana and Missouri, accuses the government of infringing on constitutional rights.
Several justices, including liberals Elena Kagan and Sonia Sotomayor, also weighed in on the government’s efforts to address potential harm and the boundaries of the First Amendment. Sotomayor criticized the factual inaccuracies in the plaintiffs’ lawsuit.
Aguiñaga apologized for any shortcomings in the brief and acknowledged that it may not have been as thorough as it should have been.
This year, artificial intelligence-generated robocalls targeted New Hampshire voters during the January primary, posing as President Joe Biden and instructing them to stay home. The incident may be one of the first attempts to use AI to interfere with a US election. The deepfake call was linked to two Texas companies, Life Corporation and Lingo Telecom.
The impact of the deepfake calls on voter turnout remains uncertain, but according to Lisa Gilbert, executive vice president of Public Citizen, a group advocating for government oversight, the potential consequences are significant, making regulation of AI in politics crucial.
Events mirroring what might occur in the US are unfolding around the globe. In Slovakia, fabricated audio recordings may have influenced an election, serving as a troubling prelude to potential US election interference in 2024, as reported by CNN. AI developments in Indonesia and India have also raised concerns. Without robust regulations, the US is ill-prepared for the evolving landscape of AI technology and its implications for elections.
Despite efforts to address AI misuse in political campaigns, US regulations are struggling to keep pace with AI advancements. The House of Representatives recently formed a task force to explore regulatory options, but partisan gridlock and regulatory delays cast uncertainty on the efficacy of measures that will be in place for this year’s election.
Without safeguards, the influence of AI on elections hinges on voters’ ability to discern real from fabricated content. AI-powered disinformation campaigns can sow confusion and undermine electoral integrity, posing a threat to democracy.
AI-manipulated audio is especially concerning because, unlike deepfake video, it offers few cues for detection. AI-generated voices can mimic people known to the recipient, fostering a false sense of familiarity and trust, which may have significant consequences.
A new study reveals that a space-for-time substitution method used to predict species' responses to climate change incorrectly predicts the effects of warming on ponderosa pines, suggesting the method may be unreliable for forecasting species' future responses to a changing climate. Credit: SciTechDaily.com
A new study involving researchers at the University of Arizona suggests that changes are happening faster than trees can adapt. The discovery is a “warning to ecologists” studying climate change.
As the world warms and the climate changes, life will migrate, adapt, or go extinct. For decades, scientists have used a particular method to predict how species will fare in this era of great change. But new research suggests that method can be misleading.
Flaws in prediction methods revealed
Researchers at the University of Arizona, together with team members from the U.S. Forest Service and Brown University, found that this method, commonly referred to as space-for-time substitution, failed to accurately predict how ponderosa pine, a tree widespread in the western United States, has actually responded to warming over the past few decades. This also means that other studies relying on space-for-time substitution may not accurately reflect how species will respond to climate change in the coming decades.
The research team collected and measured growth rings from ponderosa pine trees across the western United States, dating back to 1900, and compared how the trees actually grew with how models predicted they would respond to warming.
A view of ponderosa and Jeffrey pine forests from Verdi Mountain near Truckee, California. Credit: Daniel Perret
“We found that substituting space for time produces incorrect predictions of whether the response to warming will be positive or negative,” said study co-author Margaret Evans, an associate professor at the University of Arizona Laboratory of Tree-Ring Research. “With this method, ponderosa pines are supposed to benefit from warming, but they actually suffer from it. This is dangerously misleading.”
The results were published on December 18 in the Proceedings of the National Academy of Sciences. Lead author Daniel Perret, a U.S. Forest Service ORISE fellow, received training in tree-ring analysis through a summer field methods course at the University of Arizona. The study was part of his doctoral dissertation at Brown University and was conducted with Dov Sax, a professor of biogeography and biodiversity and co-author of the paper.
Inaccuracies of space-for-time substitution
Here is how space-for-time substitution works: every species occupies a range of favorable climatic conditions, and scientists assume that individuals growing at the warmest end of that range can serve as an example of what will happen to populations in cooler locations in a warmer future.
The research team found that ponderosa pines grow faster in warmer locations. Under the space-for-time paradigm, that suggests growth should improve as the climate warms at the cold end of the species' distribution.
“But the tree-ring data doesn’t show that,” Evans said.
When the researchers instead used tree rings to assess how individual trees responded to year-to-year changes in temperature, they found that ponderosa pines were consistently harmed by warmer conditions.
“If it’s a warmer-than-average year, they’re going to have smaller-than-average growth rings, so warming is actually bad for them, and that’s true everywhere,” she says.
The researchers believe this may be happening because trees are unable to adapt quickly enough to a rapidly changing climate.
An individual tree and all its growth rings are a record of one set of genes exposed to different climatic conditions from one year to the next, Evans said. How the species varies across space, in contrast, reflects a slow process of evolutionary adaptation to the average conditions at each location. Migration of trees better adapted to warmer temperatures could, like evolution, eventually help the species, but climate change is happening too quickly, Evans said.
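To see why the two approaches can disagree, here is a small, self-contained Python sketch using invented numbers; it is not the study's data or code. It constructs a toy dataset in which trees at warmer sites grow more on average (the spatial pattern) while an unusually warm year reduces growth within any given site (the temporal, tree-ring pattern), then fits a simple regression each way.

```python
# Hypothetical illustration of space-for-time vs. within-site (tree-ring) analysis.
# All numbers are invented; this is not the study's data or methodology.
import numpy as np

rng = np.random.default_rng(42)

n_sites, n_years = 30, 40
site_mean_temp = rng.uniform(2.0, 18.0, n_sites)            # long-term site climate
yearly_anomaly = rng.normal(0.0, 1.0, (n_sites, n_years))   # year-to-year temperature swings

# Assumed toy model: locally adapted trees at warmer sites grow more on average,
# but within a site an unusually warm year shrinks that year's growth ring.
mean_growth = 0.5 + 0.05 * site_mean_temp                    # spatial trend (positive)
growth = (mean_growth[:, None]
          - 0.08 * yearly_anomaly                            # temporal effect (negative)
          + rng.normal(0.0, 0.02, (n_sites, n_years)))       # noise

# Space-for-time view: site-average growth vs. site-average temperature.
spatial_slope = np.polyfit(site_mean_temp, growth.mean(axis=1), 1)[0]

# Tree-ring view: growth anomalies vs. temperature anomalies within sites.
growth_anom = growth - growth.mean(axis=1, keepdims=True)
temporal_slope = np.polyfit(yearly_anomaly.ravel(), growth_anom.ravel(), 1)[0]

print(f"spatial slope:  {spatial_slope:+.3f}  (warmer sites grow more)")
print(f"temporal slope: {temporal_slope:+.3f}  (warmer years reduce growth)")
```

In this toy setup the space-for-time regression returns a positive slope and would predict that warming helps, while the within-site regression returns a negative slope, mirroring the mismatch the study reports for ponderosa pine.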
Rainfall effects and final thoughts
Beyond temperature, the researchers also looked at how trees responded to rainfall. In that case, the spatial and temporal analyses agreed: more water is better.
“These spatially based predictions are really dangerous, because spatial patterns reflect the end point of a long period in which species have had the opportunity to evolve, disperse, and ultimately sort themselves across the landscape,” Evans said. “But that's not how climate change works. Unfortunately, trees are facing conditions that are changing faster than they can adapt to, and they are actually at risk of extinction. This is a warning to ecologists.”
Reference: “Species responses to spatial climate change do not predict responses to climate change” by Daniel L. Perret, Margaret E. K. Evans, and Dov F. Sax, 18 December 2023, Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.2304404120
Funding: Brown University Department of Ecology, Evolution, and Organismal Biology; the Institute at Brown for Environment and Society; the American Philosophical Society Lewis and Clark Fund for Exploration and Field Research; the U.S. Department of Agriculture Forest Service Pacific Northwest Research Station; the Department of Energy Oak Ridge Institute for Science and Education; and NSF Macrosystems Biology.