Short-form videos are dominating social media, prompting researchers to explore their impact on engagement and cognition. Your brain may even be changing.
From TikTok to Instagram Reels to YouTube Shorts, short videos have become integral even to platforms like LinkedIn and Substack. But emerging research links heavy short-form video consumption to problems with concentration and self-control.
The initial findings resonate with concerns about “brain rot,” defined by Oxford University Press as “the perceived deterioration of a person’s mental or intellectual condition.” This term has gained such popularity that it was named the word of the year for 2024.
In September, a review of 71 studies involving nearly 100,000 participants found that extensive short-form video use was correlated with cognitive decline, especially in attention span and impulse control. Published in the American Psychological Association’s journal Psychological Bulletin, the review also connected heavy consumption to heightened symptoms of depression, anxiety, stress, and loneliness.
Similarly, a paper released in October summarized 14 studies that indicated frequent consumption of short-form videos is linked to shorter attention spans and poorer academic performance. Despite rising concerns, some researchers caution that the long-term effects remain unclear.
James Jackson, a neuropsychologist at Vanderbilt University Medical Center, noted that fear of new technologies is longstanding, whether directed at video games or rock concerts. He acknowledges legitimate concerns but warns against overreacting. “It’s naive to dismiss worries as just grumpy complaints,” he said.
Jackson emphasized that research indicates extensive short-form video consumption could adversely affect brain function, yet further studies are needed to identify who is most at risk, the long-lasting impact, and the specific harmful mechanisms involved.
ADHD diagnoses in the U.S. are on the rise, with about 1 in 9 children diagnosed by 2022, according to the CDC. Keith Robert Head, a doctoral student at Capella University, suggests that the overlap between ADHD symptoms and risks from short videos deserves attention. “Are these ADHD diagnoses truly ADHD, or merely effects of short video use?” he questioned.
Three experts noted that research on the long-term effects of excessive short-form video use is in its early stages, with international studies revealing links to attention deficits, memory issues, and cognitive fatigue. However, these studies do not establish causation, often capturing only a snapshot in time.
Dr. Nidhi Gupta, a pediatric endocrinologist focused on the effects of screen time, argues that more research is necessary, particularly concerning older adults, who may be more vulnerable. Gupta cautions that the cognitive changes associated with short-form media may amount to a new addiction, likening it to “video games and TV on steroids.” She speculated that, just as research on alcohol and drugs took decades to mature, a clearer picture of short-form video’s effects could emerge within the next 5 to 10 years.
Nevertheless, Jackson contends that short-form videos can be beneficial for online learning and community engagement: “The key is balance. If this engagement detracts from healthier practices or fosters isolation, then that becomes a problem.”
The Internet loves cat videos, but how do cats feel?
Westend61 GmbH/Alamy
It appears to be quite amusing to give your cat or dog a genuine scare. At least, that’s the impression one gets from various viral videos on TikTok, Facebook, YouTube, and Instagram.
As a science journalist specializing in animal behavior, I assure you I’m not joking. Owners post clips of themselves wearing Halloween masks or using stuffed predator toys, suddenly popping out from behind doors or sofas.
Yet my perception may be skewed. These videos genuinely amuse their viewers, racking up millions of views, filled with laughing emojis and enthusiastic comments highlighting favorite moments.
While watching such videos isn’t my preferred form of procrastination, I recently encountered research by Alina Cunoll of the University of Veterinary Medicine Hannover in Germany and her colleagues, who analyzed 162 “funny” pet videos on social media. They found that 82% of the clips showed animals exhibiting clear signs of stress, and 30% showed signs of potential pain. Taken aback by these statistics, I decided to scrutinize my own viewing habits.
I’ve spent a considerable amount of time observing supposedly hilarious pet videos online. Regrettably, I can confirm that those researchers aren’t overstating their findings. There seems to be an abundance of content where individuals intentionally scare pets in terrifying “boo” pranks, alongside many clips where animals accidentally injure themselves.
In one video, a rescue kitten slipped off the back of a couch and landed badly, suffering paralysis that led to euthanasia. I’ve watched dogs endure horrific injuries: concussions, nerve damage, collisions with glass doors. But hey, isn’t it funny?
There’s also a trend of showcasing dogs’ “mysterious” misdeeds. People photograph piles of stuffing torn from the couch while asking, “What did you do?” Owners may revel in the social media success and their pets’ so-called “guilt,” but the harsh truth is that destructive behavior often signals poor welfare, and a guilty expression in a dog is most likely a reaction to anticipated punishment rather than true regret. In reality, the joke is on you, the owner.
Additionally, many pets depicted in these videos are severely overweight, struggling to navigate pet doors, and having difficulty moving or jumping onto furniture. Other animals showcased for entertainment are dealing with disorders like nerve damage, resulting in abnormal walking or movements.
Viewing these scenes, I can’t help but feel that modern technology has crafted a sick spectacle reminiscent of 19th-century circuses and sideshows, where audiences reveled in the fear, ridicule, and physical deformities of others.
It’s perplexing to consider the origins of this trend. I hope it stems from a significant misunderstanding—perhaps people don’t realize they are laughing at the suffering of other beings. Regardless, it raises significant concerns. Research suggests that repeated exposure to violence and cruelty online can dull our emotions. Such content may desensitize us to animal suffering while normalizing harmful scenes in our minds. (Indeed, this theory often plays out in reality, as those who comment against such content are frequently met with hostility and shame.)
The silver lining is that animals can be entertaining in their own right, without the need for pain, stress, or provocation. Just ask someone who’s witnessed a cat playing on a computer, splashing water from a sink faucet, or a dog joyfully leaping into a lake. It’s easy to find laughter in happy, healthy animals engaging in natural play and exploration without fear or discomfort.
Let’s suggest some new social media challenges instead. Show us videos of your pets having fun without stress, pain, or limitations. We dare you to make us laugh while you and your pets enjoy quality time together.
At the end of 2024, Billie Eilish took the stage, sat down, and began to play “What Was I Made For?”. Her fans meowed the melody back at her in off-key unison, nearly drowning her out. Is this what Eilish’s Oscar-winning track was made for? The lachrymose Barbie ballad about adulthood’s ennui has become the quintessential soundtrack for an entirely new genre of cat video.
You may recognize it: the song is often featured in AI-generated fantasies of cats with improbably muscular, vein-streaked human bodies, rendered in a cartoonish style. The cats cheat on their lovers, fall pregnant, and seek vengeance in bizarrely condensed melodramas. Much like traditional soap operas, these videos are incredibly addictive.
Take this one. While diligently performing his warehouse duties, Mr. Whiskers, in a red flannel shirt, accidentally severs his arms while showing off his woodworking. He gets fired (the signs around the warehouse explicitly state that all workers must “work with both hands”), his wife divorces him, and he struggles to piece together the fragments of his hard-knock life, all within 30 seconds. Things look up until his ex-wife plots to kill him, only to fall into a puddle and face humiliation.
In another video, a baby tumbles into shark-infested waters, and a buff cat in capri pants comes to the rescue, adopting the child and whisking it back to a mansion in Beverly Hills. Each narrative is neatly packaged, kitschily over-the-top, and unnaturally swift, and racks up millions of views.
The oversized felines in these videos suggest that Rev. Whisker and Mr. Whiskers aren’t just ordinary cats; they also embody distinctly human traits. A moggy may reside in a luxurious mansion, drive a convertible, and sport a rugged, athletic physique, seemingly living the high life. Yet many face constant struggles, illness, or danger despite their glamorous existences and comically pristine appearances, all set to eerie pop music underscoring their decline.
They are tossed from ships, moan amid house fires, battle substance problems, get arrested, and endure bullying. This prompts the question: are these videos merely 30-second cautionary tales? Are they about excess, betrayal, and redemption? Shakespeare crossed with Euripides? Modern-day parables?
Each video is marked by unfortunate domestic disasters: accidentally launching kittens into ceiling fans, or a cheating wife who neglects her husband’s pleas for attention. It’s an epic quest, entirely revolving around fur.
And it doesn’t take long for things to take a darker turn. In one unsettling video, a cat picking cotton in a Southern field is assaulted by a white cat clad in overalls. Each clip is disturbing, occasionally violent, and perpetually melodramatic, often garnished with bizarre AI-generated extras, from erratic eagles to underwater sharks, while the cats maintain their human-like physiques and bipedal swagger.
In another, “Luigi Meowgione” suffers as he watches his cat grandmother collapse in a grocery store. With her health insurance claim denied, he confronts the “Evil Corp Insurance” company, taking matters into his own hands and filling the building with catnip gas. As a security guard falls victim to a grotesque case of the munchies, Luigi Meowgione cleverly hacks the system, seemingly poised to confront the CEO… but we’re left hanging, as part two hasn’t yet been released.
Ultimately, the internet has always been enamored with felines possessing human characteristics. “I Can Has Cheezburger?” became iconic because it captured a moment of feline discontent and the personality beneath. So are these miaow-miaow videos the final evolution of the anthropomorphized kitten? Or are they merely reproducing age-old motifs?
Feedback delivers the latest in science and technology news, providing insights into what captivates readers. Email Feedback@newscientist.com to share items you think might intrigue our audience.
Cleaning Chronicles
While at times seemingly unproductive, Feedback finds a way to engage with what may appear as idleness. Recently, we spent more time than expected watching online videos, and here’s what we gleaned.
Diving into the depths of YouTube, we ventured down a path filled with carpet and rug cleaning videos. This might sound dull, yet one company’s cleaning machines, dubbed R2-Clean2 and Dirt Reynolds, drew us in.
Strangely enough, we found a soothing pleasure in watching dirt layers being lifted and the rug’s patterns emerge once again. Time faded away. The stress dissipated. Feedback embraced a Zen-like state; our minds were clear and receptive. There was no demand, no stress, just the simple act of cleaning a rug.
Once we snapped back to reality, we observed the fascinating interplay between humans and technology. Amidst the myriad cleaning YouTubers, there’s an army dedicated to capturing the messiest rugs in the most dramatic ways possible. After all, if your rug-cleaning video doesn’t attract millions of views, the revenue won’t match the effort.
Consequently, it’s tough to find a video that merely shows a dirty rug. A typical cleaning video appears as if a rug was pulled from a muddy abyss, taken over by fungi, and processed through the digestive system of a stray animal. Sometimes, maggots make an appearance. One can watch hours of labor spent with buckets and sprays to restore cleanliness.
This quaint subculture reflects our society: even mundane tasks like rug cleaning become exaggerated to the extreme, driven not by their inherent value but by the quest for attention and profit.
Enough philosophy. Not that Feedback plans on watching someone speedrun Super Mario Odyssey next.
Rumblings
Alongside many readers, Feedback has been following the ongoing controversy surrounding bestselling author Raynor Winn. Her book The Salt Path and other accounts of long-distance walking came under scrutiny after the Observer published accusations that she misled the public about her and her husband’s period of homelessness, and about his health, during their trek along England’s south-west coast. Winn denies any wrongdoing.
This revelation surfaced shortly after the film adaptation of Salt Path caused embarrassment for all parties involved, but in the realm of Feedback, the most shocking realization was that Winn’s real name is Sally Walker.
Literary Innovations
In July, Feedback considered the potential of generative AI tools like ChatGPT to simplify challenging literary texts, observing gentle rewrites of renowned opening lines that retain their essence. This resonated with many of you.
Eric Bignell highlighted Macbeth’s poignant soliloquy from Act 5, scene 5: “Tomorrow, and tomorrow, and tomorrow, / Creeps in this petty pace from day to day, / To the last syllable of recorded time… It is a tale / Told by an idiot, full of sound and fury, / Signifying nothing.” Eric had ChatGPT simplify it to: “Life is short, meaningless and full of noise.”
Numerous readers contributed suggestions for how AI might reinterpret famous passages. Consider George Orwell’s foreboding opener in 1984: “It was a bright cold day in April, and the clocks were striking thirteen.” David Aldred aptly proposed, “It was a well-defined afternoon on a bright, cold April day.” Nothing essential was lost!
The favorite rewrites included the opening line of Charles Dickens’ A Tale of Two Cities: “It was the best of times, it was the worst of times…” Ian Glendon comically suggested a literal version: “When I bought it, the watch was fine, but it doesn’t work anymore.” However, Simon Byrd, David Strachan, and Rod Newberry each proposed a variation with the same essence: “On average, it was fine.”
Simon even came up with an alternative to Edward Bulwer-Lytton’s infamous first line from Paul Clifford: “It was a dark and stormy night,” suggesting the creative twist: “Welcome to Scotland.”
Finally, Stuart Bell boldly suggested loosening an AI’s constraints on James Joyce’s famously perplexing Ulysses, not to improve the text, but because the work should “break the AI, or at the very least, induce a headache.”
Have thoughts on Feedback?
Feel free to reach out by emailing feedback@newscientist.com. Remember to include your home address. You can find this week’s feedback and past editions on our website.
Deepfake video of Australian prime minister Anthony Albanese shown on a smartphone
Australian Associated Press/Alamy
A universal deepfake detector has demonstrated record accuracy in identifying videos that have been altered or entirely generated by AI. The technology could help flag non-consensual adult content, deepfake scams, and misleading political videos produced with unregulated AI.
The rise of accessible deepfake creation tools powered by inexpensive AI has led to rampant online distribution of synthetic videos. Many involve non-consensual depictions of women, including celebrities and students. Deepfakes are also used to sway elections and to run financial scams targeting everyday consumers and corporate leaders.
Nevertheless, most AI models designed to spot synthetic videos focus primarily on faces. That means they excel at identifying one specific type of deepfake, in which a person’s face is swapped into existing footage. “We need a model that can handle more than a single manipulated face: one capable of detecting background alterations or entirely synthetic videos,” says Rohit Kundu at the University of California, Riverside. “Our approach tackles that issue, considering that the entire video could be synthetically produced.”
Kundu and his team have developed a universal detector that uses AI to analyze both facial features and background elements within a video, picking up subtle spatial and temporal inconsistencies in deepfake content. It spots irregular lighting on people inserted into face-swapped videos, as well as discrepancies in the background details of fully AI-generated videos. The detector can even recognize AI manipulation in synthetic videos with no human faces at all, and it flags realistic scenes from video games such as Grand Theft Auto V, which are synthetic without being AI-generated.
“Most traditional methods focus on AI-generated facial videos, such as face swaps and lip-synced content,” says Siwei Lyu at the University at Buffalo in New York. “This new method is broader in its applications.”
The universal detector reached 95% to 99% accuracy in recognizing four sets of test videos featuring manipulated faces, surpassing all previously published methods for detecting this type of deepfake. On fully synthetic videos, it likewise yielded more precise results than any other detector assessed to date. The researchers presented their findings at the 2025 IEEE Conference on Computer Vision and Pattern Recognition in Nashville, Tennessee, on June 15.
Several researchers from Google also contributed to the development of these new detectors. Though Google has not responded to inquiries regarding whether this detection method would be beneficial for identifying deepfakes on platforms like YouTube, the company is among those advocating for watermarking tools that help label AI-generated content.
The universal detector still has room for enhancement. For instance, it would be advantageous to detect deepfakes used during live video conference calls, a tactic some scammers are now employing.
“How can you tell if the individual on the other end is genuine or a deepfake-generated video, even with network factors like bandwidth affecting the transmission?” asks Amit Roy-Chowdhury from the University of California Riverside. “This is a different area we’re exploring in our lab.”
The number of online videos depicting child sexual abuse created by artificial intelligence has surged as pedophiles exploit advances in the technology.
According to the Internet Watch Foundation, AI-generated abuse videos have crossed a critical threshold, becoming nearly indistinguishable from “actual images,” with a notable increase observed this year.
In the first half of 2025, the UK-based internet safety watchdog examined 1,286 AI-generated videos containing illegal child sexual abuse material (CSAM), a sharp increase from just two during the same period last year.
The IWF reported that over 1,000 of these videos fall under Category A abuse, the most severe classification of such material.
The organization indicated that billions have been invested in AI, producing widely accessible video-generation models that pedophiles are exploiting.
“It’s a highly competitive industry with substantial financial incentives, unfortunately giving perpetrators numerous options,” stated an IWF analyst.
This video surge is part of a 400% rise in URLs associated with AI-generated child sexual abuse content in the first half of 2025, with IWF receiving reports of 210 such URLs compared to 42 last year.
The IWF discovered one dark-web forum post in which a user remarked on the rapid improvement of AI and on how quickly pedophiles had adapted AI tools to exploit the new developments.
IWF analysts observed that the videos appear to be created by taking freely available basic AI models and “fine-tuning” them with CSAM to produce realistic footage. In some instances, only a small number of CSAM videos were needed for this fine-tuning, according to the IWF.
The most lifelike AI-generated abuse videos encountered this year were based on real victims, the watchdog reported.
Interim CEO of IWF, Derek Ray-Hill, remarked that the rapid advancement of AI models, their broad accessibility, and their adaptability for criminal purposes could lead to a massive proliferation of AI-generated CSAM online.
“There is an incredible risk of AI-generated CSAM leading to a flood that could overwhelm the clear web,” he stated, cautioning that the rise of such content might also encourage criminal activities like child trafficking and modern slavery.
The replication of existing victims of sexual abuse in AI-generated images allows pedophiles to significantly increase the volume of CSAM online without having to exploit new victims, he added.
The UK government is intensifying efforts to combat AI-generated CSAM by criminalizing the ownership, creation, or distribution of AI tools designed to produce abusive content. Those found guilty under this new law may face up to five years in prison.
Additionally, it is now illegal to possess manuals that instruct potential offenders on how to use AI tools for creating abusive images or for child abuse. Offenders could face up to three years in prison.
In a February announcement, Home Secretary Yvette Cooper stated, “It is crucial to address child sexual abuse online, not just offline.”
AI-generated CSAM is already illegal under the Protection of Children Act 1978, which criminalizes the production, distribution, and possession of indecent photographs or “pseudo-photographs” of children.
This piece was reported by Indicator, a publication focused on investigating digital deception, in partnership with the Guardian.
Numerous YouTube channels have blended AI-generated visuals with misleading claims surrounding Sean “Diddy” Combs’s high-profile trial, attracting tens of millions of views and profiting from the spread of misinformation.
Data from YouTube reveals that 26 channels have garnered a staggering 705 million views from approximately 900 AI-influenced videos about Diddy over the last year.
These channels typically employ a standardized formula. Each video pairs an enticing title with an AI-generated thumbnail fabricating connections between celebrities and Diddy through outrageous claims, such as a celebrity testifying that they were forced into inappropriate acts or revealing shocking secrets about Diddy. Thumbnails regularly show well-known figures in courtroom settings alongside images of Diddy, many featuring suggestive quotes designed to grab attention, including phrases like “f*cked me me me me me of me,” “ddy f*cked bieber life,” and “she sold him to Diddy.”
Channels peddling Diddy “slop,” a term for low-quality AI-generated content, have previously shown a penchant for spreading false claims about various celebrities. Most of the 26 channels appear to be either repurposed or newly created, and at least 20 were eligible for advertising revenue.
Spreading sensational, erroneous “Diddy AI slop” has become a quick avenue for monetization on YouTube. Wanner Aarts, who manages numerous YouTube channels that use AI-generated content, described how money is made on the platform, though he says he has stayed out of the Diddy trend himself.
“If someone asked, ‘How can I make $50,000 quickly?’, the first answer might be dealing drugs, but the second is probably launching a Diddy channel,” said Aarts, 25.
Fabricated Celebrity Involvement
Indicator analyzed hundreds of thumbnails and titles making false claims about celebrities including Brad Pitt, Will Smith, Justin Bieber, Oprah Winfrey, Eddie Murphy, Leonardo DiCaprio, Dwayne “The Rock” Johnson, 50 Cent, Joe Rogan, and numerous others. Notably, one channel, Fame Fuel, uploaded 20 consecutive videos featuring AI-generated thumbnails and misleading titles about U.S. Attorney General Pam Bondi and Diddy.
Among the top-performing channels is Peeper, which has amassed over 74 million views since its creation in 2010 but has pivoted to exclusively covering Diddy for at least the last eight months. Peeper boasts some of the most viral Diddy videos, including “Justin Bieber reveals Will Smith, Diddy and Clive Davis grooming him,” which alone attracted 2.3 million views. Peeper has since been demonetized.
A channel named Secret Story, which previously offered health advice in Vietnamese, shifted focus to Diddy content, while Hero Story transitioned from covering Ibrahim Traore, the military leader of Burkina Faso, to Diddy stories. A Brazilian channel that had amassed millions of views from embroidery videos pivoted to Diddy content just two weeks ago. A channel named Celebrity Topics earned over 1 million views across 11 Diddy videos in just three weeks, despite having been created in early 2018 and apparently deleting its prior videos. Both Secret Story and Hero Story were removed by YouTube following inquiries from Indicator, while Celebrity Topics has since rebranded.
Shifting Focus to Diddy
For instance, around three weeks ago the channel Pak Gov Update started releasing videos about Diddy, using AI-generated thumbnails with fictitious quotes attributed to celebrities like Usher and Jay-Z. One video, titled “Jay-Z breaks his silence on Diddy’s controversy,” featured a tearful image of Jay-Z with the text “I Will Be Dead” superimposed.
The video drew 113,000 views with nearly 30 minutes of AI-generated narration over clips from various TV news sources, containing no new information from Jay-Z, who said none of the quotes attributed to him.
The Pak Gov Update channel previously focused on Pakistan’s public pensions, generating modest views; its most popular video, about the pension system, garnered 18,000 views.
Monetizing Misinformation
Aarts said the strategy of exploiting Diddy slop is profitable but precarious. “Most of these channels are unlikely to endure,” he remarked, citing the risk of penalties for violating YouTube policies and of legal action from Diddy or the celebrities depicted in the thumbnails and videos.
Like Pak Gov Update, most of these channels rely predominantly on AI narration and AI-generated images, with fewer direct clips from news reports; the use of actual footage tends to push the boundaries of fair use.
The YouTube channel Pakreviews-F2Z has produced numerous fake videos about the Diddy trial under the name Pak Gov Update. Photograph: YouTube
AI slop is one of many variations of Diddy-related content proliferating on YouTube, a niche that appears to be expanding and proving lucrative. Similar Diddy-focused AI content has attracted engagement on TikTok.
YouTube spokesperson Jack Maron said in an email that the platform has removed 16 channels linked to the phenomenon and confirmed that additional channels, including Pak Gov Update, have faced similar enforcement.
The Diddy phenomenon exemplifies the convergence of two prominent trends within YouTube: automation and faceless channels.
YouTube automation hinges on the premise that anyone can build a prosperous YouTube venture with the right niche and low-cost content creation strategies: outsourcing topic discovery, idea brainstorming, and editing to cheap international workers to churn out content at a near-automated rate.
With AI, starting a faceless automation channel has become simpler than ever. Aarts said anyone can generate scripts with ChatGPT or similar language models, create images and thumbnails with Midjourney or comparable software, assemble video with Google Veo 3, and add AI voice-over with tools like ElevenLabs. He often hires freelancers from the Philippines and elsewhere for video editing.
“AI has democratized opportunities for budget-conscious individuals to engage in YouTube automation,” Aarts stated, highlighting it can cost under $10 per video. He reported earnings exceeding $130,000 from over 45 channels.
Muhammad Salman Abazai, who oversees As a Venture, a Pakistani firm offering video editing and YouTube channel management services, commented that Diddy video content has emerged as a “legitimate niche” on YouTube, showcasing successful Diddy videos created by his team.
“This endeavor has proven fruitful for us, as it has significantly boosted our subscriber count,” he noted.
NV Historia shifted focus following the viral success of a Diddy-themed video titled “A minute ago: No one expected Dwayne Johnson to say this in court about Diddy,” featuring AI-generated images of Johnson and Diddy in court along with disturbing visuals of alleged incidents. The thumbnail carried the quote “He gave me it.”
Johnson has neither testified nor been connected to the allegations against Diddy. The video has gathered over 200,000 views. NV Historia followed with another video linking Oprah Winfrey and other celebrities to Diddy, which earned 45,000 views. The channel then committed entirely to Diddy content and has since been removed by YouTube.
A French channel, Starbuzzfr, was launched in May and appears to publish exclusively Diddy-related content, deploying AI-generated thumbnails and narration to spin fabricated narratives, such as Brad Pitt supposedly testifying that he was abused by the mogul. Starbuzzfr notably uses sexualized AI-generated imagery featuring Diddy and celebrities like Pitt. As of this writing, the channel remains monetized.
Aarts noted that the general sentiment within the YouTube automation community respects anyone who manages to monetize their content.
“I applaud those who navigate this successfully,” he remarked.
Astronomers at the NSF’s National Solar Observatory and the New Jersey Institute of Technology have developed a new “coronal adaptive optics” system that eliminates atmospheric blurring to produce high-resolution images and movies of the sun’s corona.
This image is a still from a 16-minute time-lapse movie showing the formation and collapse of a complex plasma stream, roughly 100 km by 100 km, in front of a coronal loop system. It marks the first observation of such flows, referred to as plasmoids, raising questions about the dynamics involved. The movie was taken with the Goode Solar Telescope at Big Bear Solar Observatory using the new coronal adaptive optics system CONA, in hydrogen-alpha light emitted by the solar plasma. The coloring is artificial but close to the real color of hydrogen-alpha light; darker shades indicate brighter light. Image credit: Schmidt et al. / NJIT / NSO / AURA / NSF.
The solar corona represents the outermost layer of the solar atmosphere, visible only during a total solar eclipse.
Astronomers have long been fascinated by its extreme temperatures, violent eruptions, and towering prominences.
However, Earth’s atmospheric turbulence has historically caused blurred images, obstructing the observation of the corona.
“Atmospheric turbulence severely degrades images of objects in space, like our Sun, seen through our telescopes. But we can correct for that,” stated Dr. Dirk Schmidt, an adaptive optics scientist at the National Solar Observatory.
CONA, the adaptive optics system responsible for these advancements, corrects the atmospheric blurring affecting image quality.
This cutting-edge technology was funded by the NSF and deployed at the 1.6-meter Goode Solar Telescope (GST) at Big Bear Solar Observatory in California.
“Adaptive optics works much like the autofocus and optical image stabilization in smartphone cameras, except that it corrects for atmospheric distortion rather than a shaky hand,” explained Dr. Nicolas Gorceix, optical engineer and lead observer at Big Bear Solar Observatory.
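The correction described above is typically done in a closed loop: a wavefront sensor measures the residual distortion, and a deformable mirror is nudged toward the opposite shape on each cycle. The toy sketch below illustrates that feedback idea with a simple integrator controller; it is my own illustrative simplification, not CONA's actual control code, and the modal representation and gain value are assumptions.

```python
import numpy as np

# Toy closed-loop adaptive optics integrator (illustrative only, not
# CONA's real control software). A wavefront sensor measures the residual
# phase error; the deformable-mirror command is updated with a fixed gain.
rng = np.random.default_rng(0)

n_modes = 50                                # corrected wavefront modes (assumed)
gain = 0.4                                  # integrator loop gain (assumed)
turbulence = rng.normal(0, 1.0, n_modes)    # static aberration, simplified
dm_command = np.zeros(n_modes)              # deformable-mirror shape

def rms(x):
    """Root-mean-square wavefront error."""
    return float(np.sqrt(np.mean(x ** 2)))

history = []
for step in range(20):
    residual = turbulence - dm_command      # what the wavefront sensor sees
    dm_command += gain * residual           # integrator update toward the error
    history.append(rms(residual))

print(f"initial RMS error: {history[0]:.3f}")
print(f"final RMS error:   {history[-1]:.5f}")
```

Each pass shrinks the residual by a factor of (1 − gain), so the error decays geometrically; real systems must repeat this hundreds to thousands of times per second because atmospheric turbulence changes constantly.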
The second film depicts the rapid creation and collapse of a finely detailed plasma stream.
“These observations are the most detailed of their kind, highlighting features that were previously unobserved, and their nature remains unclear,” remarked Vasyl Yurchyshyn, a professor at the New Jersey Institute of Technology.
“Creating an instrument that allows us to view the sun like never before is incredibly exciting,” Dr. Schmidt commented.
Another film captures the dynamic movements across the solar surface, influenced by solar magnetism.
“The new coronal adaptive optics system closes a decades-old gap, delivering images of coronal features at a resolution of 63 km. This is the theoretical limit achievable with the 1.6-meter Goode Solar Telescope,” Dr. Schmidt stated.
“This technological leap is transformative. Discoveries await as we improve resolution tenfold,” he emphasized.
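As a rough check of the 63 km figure quoted above (my own back-of-envelope arithmetic, not a calculation from the paper): the angular resolution λ/D for a 1.6 m aperture at the hydrogen-alpha wavelength, projected to the Earth-Sun distance, gives a scale of about this size.

```python
import math

# Back-of-envelope check of the ~63 km resolution figure (my arithmetic,
# not from the paper): angular scale lambda/D at the H-alpha wavelength,
# projected onto the Sun at the mean Earth-Sun distance.
wavelength = 656.28e-9      # H-alpha wavelength, meters
aperture = 1.6              # GST primary mirror diameter, meters
earth_sun = 1.496e11        # mean Earth-Sun distance, meters

theta = wavelength / aperture           # angular resolution, radians
scale_km = theta * earth_sun / 1e3      # projected size on the Sun, km

print(f"{scale_km:.0f} km")
```

This lands within a few kilometers of the quoted 63 km; the exact value depends on the Sun-Earth distance at the time of observation and on which resolution criterion is used.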
The team’s findings are detailed in a paper published today in Nature Astronomy.
____
D. Schmidt et al. Observation of fine coronal structures with higher-order solar adaptive optics. Nature Astronomy, published online May 27, 2025; doi: 10.1038/s41550-025-02564-0
More than half of the claims made in popular TikTok videos about attention deficit hyperactivity disorder (ADHD) are not in line with clinical guidelines.
ADHD affects approximately 1 per cent of people worldwide, according to Global Burden of Disease research, though there is an active debate about whether the condition is underdiagnosed; some psychologists argue that a substantial proportion of people who have it go unrecognized.
To understand the impact of social media on perceptions of ADHD, Vasileia Karasavva at the University of British Columbia (UBC), Canada, and her colleagues analysed the 100 most-viewed videos on TikTok under the hashtag #ADHD as of January 10, 2023.
The average video included three claims about ADHD. The researchers presented these claims to two clinical psychologists, who were asked whether each accurately reflected the symptoms of ADHD as described in the DSM-5, the standard manual used to diagnose mental disorders. Only 48.7 per cent of the claims met that requirement. More than two-thirds of the videos attributed to ADHD problems that the psychologists said reflected “normal human experiences.”
“We asked two experts to watch the top 100 most popular videos, and we found that they didn't really match the empirical literature,” says Karasavva. “We're like, 'OK, this is the problem.' ”
The researchers also asked the psychologists to rate each video on a scale of 0 to 5. They then showed 843 UBC students the five videos the psychologists had rated best and the five rated worst, and asked the students to score them on the same scale. The psychologists gave the most clinically accurate videos an average of 3.6, while students rated them 2.8. For the least accurate videos, students gave an average score of 2.3, compared with 1.1 from the psychologists.
Students were also asked whether they would recommend the videos, and about their perception of the prevalence of ADHD in society. “The more time participants had spent watching ADHD-related content on TikTok, the more likely they were to recommend the videos and to rate them as useful and accurate,” says Karasavva.
“One wonders how common such outcomes are for TikTok, or for all the health content on the internet,” says David Ellis at the University of Bath, UK. “We live in a world where we know a lot about health, but the online world is still full of misinformation. TikTok only reflects that reality back to us.”
Ellis says that misinformation is likely to be even more prevalent around mental health conditions, as diagnosis is based on observation rather than more objective tests.
However, banning ADHD videos on TikTok would be “no use,” even if some are misinformation, says Karasavva. “Maybe more experts should put out videos, or maybe people just need to be a little more discerning and critical of the content they consume,” she says.
TikTok declined to comment on the details of the study, but told New Scientist that it takes action against medical misinformation and advises anyone seeking guidance on a health condition to contact a medical professional.
YouTube is taking steps to stop recommending videos to teenagers that promote certain fitness levels, weights, or physical characteristics after experts warn about the potential harm of repeated viewing.
Although 13- to 17-year-olds can still watch videos on the platform, YouTube will no longer automatically lead them to a “maze” of related content through algorithms.
While this type of content does not violate YouTube’s guidelines, the platform recognizes the negative impact it can have on the health of some users if viewed repeatedly.
Dr Garth Graham, YouTube’s head of global health, stated that repeated exposure to idealized standards could lead teenagers to develop unrealistic self-perceptions and negative beliefs about themselves.
Experts from YouTube’s Youth and Family Advisory Board advised that certain categories of videos, harmless individually, could become troubling when viewed repeatedly.
YouTube’s new guidelines, being rolled out globally, target content that idealizes certain physical features, fitness, weight, or social aggression, among others.
Teenagers who have registered their age on the platform will no longer be repeatedly recommended such topics, following a safety framework already implemented in the US.
Clinician and YouTube advisor Allison Briscoe-Smith emphasized the importance of setting “guardrails” to help teens maintain healthy self-perceptions when exposed to idealized standards.
In the UK, new online safety legislation mandates technology companies to protect children from harmful content and consider the risks their algorithms may pose to under-18s by exposing them to harmful content.
Road Town, British Virgin Islands, March 13, 2024 – Chainwire
Lif3 (LIF3/USD, LIF3/USDT), an innovative multi-chain DeFi Layer-1 ecosystem operating across Ethereum, Polygon, BNB Chain, and Fantom, is pleased to announce a strategic partnership with BitGo, an industry-leading secure and qualified institutional custodian. This collaboration represents a major step forward in securing and democratizing access to blockchain technology for users around the world. Lif3.com will leverage BitGo’s pioneering multi-signature technology for custody and cold storage of the LIF3 token, LSHARE token, and L3USD.
“We are excited to support Lif3’s goal of increasing access to DeFi with our industry-leading secure custody solution. This partnership will allow Lif3 users to participate in the DeFi ecosystem securely and with confidence,” said Mike Belshe, CEO of BitGo.
“This strategic partnership not only strengthens the security of digital assets for institutional customers, but also instills new confidence in secure storage and transaction capabilities within the Lif3 ecosystem, creating a new gold standard for asset protection in the DeFi space. As a supporter of the Lif3 ecosystem, I am very excited to leverage BitGo’s renowned multi-signature qualified custody solution to fully protect its core assets. By partnering with BitGo, recognized as the industry standard for security, we can use its cutting-edge cold storage technology to provide an innovative and unparalleled layer of security for the LIF3 token, LSHARE token, and L3USD. My relationship with BitGo spans over 10 years – I have used their products since 2013 and watched their offerings evolve from protecting Bitcoin to creating Wrapped Bitcoin (WBTC). Choosing BitGo to protect the Lif3 ecosystem was an easy decision,” said Harry Yeh, Managing Director of Quantum Fintech Group.
This partnership supports Lif3’s vision of a simpler, more secure, and more interactive user experience, and facilitates seamless consumer DeFi adoption through the Lif3 Wallet, available for download on the App Store and Google Play.
This BitGo announcement follows Lif3’s recent Ethereum migration and its strategic partnership with LayerZero, an alliance designed to address the challenges associated with token bridging for a more secure and efficient blockchain experience.
Lif3.com and the Lif3 Wallet continue to be compelling platforms for those investing in the future of decentralized finance and blockchain technology. With a commitment to continuous improvement and innovation, Lif3 has established itself as a frontrunner in shaping the future landscape of the digital economy. Through the Lif3 mobile app – a one-stop solution for adoption, investment, trading, earning, gaming, and off-ramping – Lif3 is realizing its vision of breaking down barriers to adoption.
About Lif3.com
Lif3.com is a complete omnichain DeFi ecosystem that includes curated Layer-1 blockchains and self-custodial wallets. The Lif3 Wallet is available on the App Store and Google Play, unlocking the potential of Web3 across the consumer DeFi, iGaming, and entertainment sectors.
LIF3
LIF3 (LIF3) is an ERC-20 token that powers the Lif3 ecosystem, providing a comprehensive suite of features for managing digital assets across multiple blockchains while allowing users to benefit from staking. To access $LIF3 on Bitfinex, please visit: https://trading.bitfinex.com/t/LIF3:UST – the API symbol for LIF3 is LIFIII.
About BitGo
Founded in 2013, BitGo is a leading provider of secure digital asset wallet solutions, offering institutional-grade custody, staking, trading, and core wallet infrastructure. Notably, the company pioneered multi-signature wallets and, in 2018, launched BitGo Trust Company, the first qualified custodian of digital assets. With a $250 million insurance policy, SOC 1 Type 2 and SOC 2 Type 2 certifications, and strict regulatory compliance, BitGo maintains high standards of security and confidentiality. BitGo has expanded its services to include institutional-grade DeFi, NFT, and Web3 products, as well as the Go Network. In 2023, the company secured $100 million in Series C funding at a valuation of $1.75 billion. BitGo supports over 700 digital assets, processes 20% of on-chain Bitcoin transactions, and serves over 1,500 institutional clients in 50 countries.
Disclaimer
Custody services are provided through BitGo Trust Company, a South Dakota chartered trust company. BitGo is not registered with the SEC and does not provide legal, tax, investment, or other advice. Please consult your legal/tax/investment professional with any questions regarding your specific situation.
About Quantum Fintech Group
Quantum Fintech Group is a private investment group founded in 2020 that focuses on providing superior returns in the alternative asset space with a particular focus on blockchain investments.
OpenAI on Thursday announced a tool that can generate videos from text prompts.
The new model, called Sora after the Japanese word for “sky,” can create up to a minute of realistic footage that follows the user’s instructions for both subject matter and style. The model can also create videos based on still images or enhance existing footage with new material, according to a company blog post.
“We teach AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction,” the blog post says.
One video included among the company’s first few examples was based on the following prompt: “A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.”
The company announced that it has opened up access to Sora to a number of researchers and video creators. According to the blog post, expert “red teamers” will test whether the product can be made to evade OpenAI’s terms of service, which prohibit “extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others.” For now, the company is granting only limited access to researchers, visual artists, and filmmakers, but CEO Sam Altman took to Twitter after the announcement to answer questions from users and posted videos he said were created by Sora. The videos contain a watermark indicating they were created by AI.
The company debuted its still-image generator Dall-E in 2021 and its generative AI chatbot ChatGPT in November 2022, the latter quickly gaining 100 million users. Other AI companies have also debuted video generation tools, but those models could only generate a few seconds of footage that often bore little relation to the prompt. Google and Meta have said they are developing video generation tools, though neither is publicly available. On Wednesday, OpenAI announced an experiment that adds deeper memory to ChatGPT, allowing it to remember more of users’ chats.
OpenAI would not tell the New York Times how much footage was used to train Sora, except to say that the corpus includes videos that are publicly available or licensed from copyright holders. The company also did not reveal the sources of the training videos. It has been sued multiple times for alleged copyright infringement over the training of generative AI tools that digest vast amounts of material collected from the internet and mimic the images and text contained in those datasets.
Meta’s oversight board has reversed the social media company’s decision to remove two videos about the Israel-Hamas war from its platform.
One of the videos in question was posted on Facebook of an Israeli woman who was taken hostage in the October 7 attack on Israel and begs her kidnappers not to kill her.
The other case concerns a video posted on Instagram during Israel’s ground offensive in the northern Gaza Strip that appears to show the aftermath of a strike on or near Al-Shifa Hospital in Gaza City, the oversight board said.
The semi-independent 22-member board that oversees the Meta-owned sites Facebook and Instagram ruled that the posts alerted the world to “human suffering on both sides,” it announced in a statement on Monday.
Although “the posts show deaths and injuries to Palestinians, including children,” the board said Meta must uphold “freedom of expression and freedom of access to information.”
Meta reinstated two videos from the Israel-Hamas war that were circulating on its platforms after the company’s oversight board said the posts alerted the world to “human suffering on both sides.” Reuters
In both cases, the board approved the company’s subsequent decision to reinstate the posts with a warning screen. The cases were the first the board handled under its “expedited review” process.
In an expedited review, the oversight committee must make a decision within 30 days instead of the usual 90 days.
In this case, board members reached a conclusion on the two videos in just 12 days, highlighting how quickly social media companies must act when handling content related to conflicts.
Oversight board co-chair Michael McConnell said the board “remains focused on protecting the right to free expression of people on all sides about these horrific events, while ensuring that none of the testimonies incite violence or hatred.”
“These testimonies are important not only to the speakers, but also to users around the world who are seeking timely and diverse information about groundbreaking events.”
One of the videos, which Meta deleted from Facebook and later restored, showed an Israeli woman taken hostage in the October 7 attack on Israel pleading with her kidnappers not to kill her, the board explained. Getty Images
Commenting on the ruling in a blog post on Tuesday, Meta confirmed that the two posts had been reinstated, adding that “no further action will be taken.”
“We welcome the Oversight Board’s decision on this matter today,” Meta said, adding that “expression and safety are both important to us and the people who use our services.”
The move comes amid increased scrutiny of social media platforms’ moderation policies.
The second video in question was posted on Instagram and “shows what appears to be the aftermath of a strike at or near Al-Shifa Hospital in Gaza City,” the oversight board said. Getty Images
The European Union recently opened an investigation into X, the site formerly known as Twitter and owned by Elon Musk, to determine whether it has complied with rules requiring social media platforms to combat illegal content and disinformation.
In its proceedings, the European Commission said it would assess whether X had met its obligations under the new rules.
This is the first investigation of its kind under the new law. It was opened after the site submitted a risk assessment report in September and a transparency report a month later, and after it responded to a request for information concerning the dissemination of illegal content in the context of Hamas’s terrorist attacks against Israel, according to a press release.
The commission specifically noted that Musk’s social media platform may not have taken effective measures to “counter the manipulation of information” on its platform.
YouTube today announced a new comment moderation setting, “Pause,” which allows creators and moderators to keep existing comments visible on a video while preventing viewers from adding new ones.
Instead of turning off comments completely or holding all comments for manual review, creators can temporarily pause comments until they have enough time to filter out trolls and negativity. The “Pause” option can be found in the video-level comment settings on the app’s watch page, or in the top-right corner of the comments panel in YouTube Studio. When pause is turned on, viewers see a notice below the video that comments are paused, while comments that were already published remain visible.
Introducing new moderation settings for channels: Pause comments ⏸️
In addition to turning comments “on” and “off,” you can now “pause” comments. Existing comments will remain visible, but new comments will be disabled, giving you more control and flexibility 🌟 Learn more → https://t.co/wNAspRiR4s
The video-sharing platform has been experimenting with the pause function since October. According to YouTube, creators in the experimental group reported feeling more flexible and no longer overwhelmed by managing too many comments.
As part of today’s announcement, YouTube also renamed some of its comment moderation settings, giving them more descriptive names that make it easier to tell what each tool does. Some, such as “On” and “Off,” are self-explanatory; “Basic” holds potentially inappropriate comments for review, while “Strict” holds a broader range of potentially harmful comments.
In related news, YouTube is also testing a new feature that summarizes topics within comments.