Parents Can Foster Healthy Eating Habits in Children
Plainpicture/artwall
Nancy Bostock, a pediatrician at Cambridgeshire and Peterborough NHS Foundation Trust, is deeply concerned about the conflicting messages regarding food that children and parents receive. Drawing on her expertise in children’s weight management and mental health, she has co-led the development of food strategies for Cambridge Children’s Hospital.
“I worry that parents may feel overwhelmed by advice from various sources, leading them to adopt practices that might not serve their children’s best interests,” Bostock explains. In her interview with New Scientist, she shares six straightforward, science-based strategies to help children cultivate a healthy relationship with food.
1. Emphasize the Social and Emotional Dimensions of Eating
Eating, parenting, and anxiety are intertwined, and this can manifest early in life. Anxiety around feeding can begin shortly after the birth of a child, when parents feel pressure to breastfeed. Breastfeeding is undoubtedly beneficial, but the mother’s mental health matters too. Common early-life challenges such as hypoglycemia and jaundice can leave parents feeling guilty that they have not nourished their baby adequately.
This stress often leads parents to excessively monitor their children’s eating habits, overshadowing the fundamental relationship between children and food. Remember, most children will eat when hungry and drink when thirsty.
Many parents fret about whether their children drink enough water. However, as long as your child is thriving, there’s no need to constantly check their hydration levels. Trust your child’s instincts.
Additionally, consider the social dynamics of family meals. Reflect on mealtime experiences: do you all eat together? Are meals enjoyable and relaxed? Foster a positive and communal atmosphere surrounding food.
2. Avoid Saying, “You Can’t Have Dessert Until You Finish Your Dinner.”
Allowing children to regulate their own appetite fosters healthier eating habits as they grow. Minimize parental interference: emphasize that food is a source of nourishment and energy, and let your child learn to recognize their body’s needs.
Statements like “you can’t have dessert until you finish your dinner” can encourage unhealthy binge eating. If dessert is always sweet and rich, children may come to prize less nutritious foods, and framing dinner as a chore to get through sends a negative message about enjoying food. Instead, serve dinner and follow it with fruit if your child still wants something.
3. Refrain from Imposing Unnecessary Dietary Restrictions
Amidst abundant dietary advice, parents often seek guidance from nutritionists or behavioral specialists to manage children’s eating habits. However, many recommendations to restrict particular foods lack medical foundation. For instance, enforcing a gluten-free diet without celiac disease could have negative effects, including fiber loss and nutrient deficiencies.
Moreover, outright banning certain foods can create a perception of them being “unsafe.” Research suggests that a healthier approach is to prioritize the intake of nutrient-rich foods—fiber, fruits, vegetables, and whole grains—over eliminating food groups.
4. Prevent Children from Using Food for Manipulation
Parents often worry about their children’s eating habits or how their behavior might change if they don’t eat enough. Kids quickly pick up on their parents’ concerns and may manipulate situations with food. Phrases like “If I don’t have ice cream right now, I’ll be sad” can escalate the situation and, if parents give in, this only reinforces bad behavior. Instead, communicate that eating is for energy and health, not a bargaining tool. Offer choices without pressure, like fruit or yogurt if they don’t want the main meal.
5. Recognize That Likes and Dislikes Are Normal
It’s normal for children to become picky eaters as they develop. Research shows that a significant percentage of preschoolers exhibit selective eating behaviors. This phase helps children differentiate safe from unsafe foods. Rather than imposing restrictions, present new foods without pressure; studies suggest children typically need about 15 positive exposures to a new food before accepting it.
While it’s essential to avoid foods known to cause allergies, continued exposure to a range of foods is crucial for health, environmental sustainability, and diverse life experiences. Offer variety and understand that tolerance can precede acceptance.
6. Reflect on Your Own Eating Behaviors
Children mirror their parents’ attitudes and beliefs about food. It’s vital to model healthy perspectives. If you express negativity about your body or weight, children may internalize similar thoughts. Evidence shows kids often adopt their parents’ biases. Hence, the best way to nurture a positive relationship with food and body image in your child is to cultivate one in yourself.
As told to Helen Thomson
If your child’s diet is excessively restricted, or if they are not growing or gaining weight appropriately, please consult a healthcare professional.
This award-winning athlete may have been a late bloomer when it came to developing their abilities
Michael Steele/Getty Images
A review has revealed that international chess masters, Olympic gold medalists, and Nobel Prize-winning scientists were seldom child prodigies. In many cases, early childhood achievements and rigorous training do not lead to elite performance as adults.
This investigation, based on 19 studies involving nearly 35,000 high achievers, indicates that most adults who dominate global rankings in their respective fields engaged in various activities during their youth, gradually honing their expertise.
The findings challenge the popular notion that reaching top performance internationally necessitates rigorous training in early childhood, according to Arne Gullich from RPTU Kaiserslautern in Germany. “Understanding that many world-class performers were not exceptionally outstanding in their formative years implies that extraordinary early achievements are not a precondition for sustained elite performance.”
The life stories of notable global experts further indicate that the correlation between childhood and adult success may not be as strong as perceived. For instance, while composer Wolfgang Amadeus Mozart, golfer Tiger Woods, chess prodigy Gukesh Dommaraju, and mathematician Terence Tao were undeniable child prodigies, others like composer Ludwig van Beethoven, basketball legend Michael Jordan, chess player Viswanathan Anand, and scientist Charles Darwin were not recognized as such.
The studies analyzed by Gullich and colleagues included examinations of the life journeys of Olympic athletes, Nobel laureates in science, the top ten chess players globally, renowned classical music composers, and international leaders across various disciplines.
In numerous fields, early successes and later elite performers exhibited stark differences. In fact, only around 10% of individuals who excelled as youngsters maintained that level into adulthood, with a similar percentage of those who thrived as young adults continuing to excel later in life.
The researchers compared their findings with data from 66 studies on the training experiences of young athletes and “sub-elite” athletes — those who have achieved notable local recognition but are not world-class. They observed that common traits attributed to high-achieving youth, such as early specialization and rapid advancement, are often lacking or even reversed in adults who perform at the highest levels.
This may stem from the fact that gaining exposure to a diverse range of activities in early childhood cultivates adaptable learning skills, enabling children to discover the pursuits that resonate best with them. “Essentially, they identify the best match for their interests and enhance their learning potential for future endeavors,” Gullich notes.
Additionally, a less rigorous training schedule during childhood and adolescence can help mitigate the risk of burnout and injuries that might hinder long-term careers. “There’s a danger of becoming entrenched in an area that you no longer find enjoyable, which could lead you to seek a change,” Gullich adds.
This review addresses an enduring research gap by clearly differentiating between early success and prolonged elite performance. According to David Feldon from Utah State University, there remains a propensity to push children towards intense focus on acquiring and practicing specific skills. “This undoubtedly fosters expertise and yields immediate benefits,” he explains. “However, it remains uncertain whether this will be advantageous over a lifetime.”
For Feldon, who also coaches youth wrestling, the implications of this review are essential for those guiding children’s skill development. “It’s about not just nurturing exceptional expertise but doing so in a healthy and constructive manner that fosters improvement in a broader context, rather than simply achieving narrow targets.”
As a result, programs aimed at quick identification and acceleration of young talents may overlook many potential future leaders, as they often prioritize immediate success over sustained excellence. Gullich emphasizes, “Do elite training programs, gifted programs, and scholarship initiatives typically cater to very young age groups with a singular focus? Given recent evidence, it is more beneficial to inspire young people to engage in at least one or possibly two other disciplines over several years.”
Roblox maintains that Australia’s forthcoming social media restrictions for users under 16 should not extend to its platform, as it rolls out a new age verification feature designed to block minors from communicating with unknown adults.
The feature, which is being launched first in Australia, requires users to have their age estimated using Persona age estimation technology built into the Roblox app. This uses the device’s camera to analyze facial features and produce a live age estimate.
This feature will become compulsory in Australia, the Netherlands, and New Zealand starting the first week of December, with plans to expand to other markets in early January.
After completing the age verification, users will be categorized into one of six age groups: under 9, 9-12, 13-15, 16-17, 18-20, or 21 and older.
Roblox has stated that users within each age category will only be able to communicate with peers in their respective groups or similarly aged groups.
These changes were initially proposed in September and received positive feedback from Australia’s eSafety Commissioner, who has been in discussions with Roblox for several months regarding safety concerns on the platform, labeling this as a step forward in enhancing safety measures.
A recent Guardian Australia investigation revealed a week’s worth of virtual harassment and violence experienced by users who had set their profiles as eight years old while on Roblox.
Regulatory pressure is mounting for Roblox to be included in Australia’s under-16 social media ban, set to take effect on December 10. Although there are exceptions for gaming platforms, the eSafety Commissioner, Julie Inman Grant, said earlier this month that eSafety was reviewing chat and messaging functions in games.
“If online gameplay is the primary or sole purpose, would kids still utilize the messaging feature for communication if it were removed? Probably not,” she asserted.
During a discussion with Australian reporters regarding these impending changes, Roblox’s chief safety officer, Matt Kaufman, characterized Roblox as an “immersive gaming platform.” He explained, “I view games as a framework for social interaction. The essence lies in bringing people together and spending time with one another.”
When asked if this suggests Roblox should be classified as a social media platform subject to the ban, Kaufman responded that Roblox considers social media as a space where individuals post content to a feed for others to view.
“People return to look at the feed, which fosters a fear of missing out,” he elaborated. “It feels like a popularity contest that encapsulates social media. In contrast, Roblox is akin to two friends playing a game after school together. That’s not social media.”
“Therefore, we don’t believe that Australia’s domestic social media regulations apply to Roblox.”
When questioned if the new features were introduced to avoid being encompassed in the ban, Kaufman stated that the company is engaged in “constructive dialogue” with regulators and that these updates showcase the largest instance of a platform utilizing age verification across its entire user base.
Persona, the age verification company Roblox has partnered with, took part in Australia’s age assurance technology trial. The trial reported a false positive rate of 61.11% for 15-year-olds being identified as 16 or older, and 44.25% for 14-year-olds.
Kaufman said the technology is generally accurate to within a year or two of a person’s age, and that users who disagree with the assessment can correct it using a government ID or by having a parent confirm their age through parental controls. He said there are “strict requirements” for data deletion after age verification: Roblox states that ID images are retained for 30 days for purposes such as fraud detection and then erased.
Users who opt not to participate in the age verification will still have access to Roblox, but they will be unable to use features like chat.
More than 150 million people globally engage with Roblox every day across 180 countries, including Australia. According to Kaufman, two-thirds of users are aged 13 and above.
Under a new UK law, tech companies and child protection agencies will be granted the authority to test if artificial intelligence tools can create images of child abuse.
The announcement follows reports from a safety watchdog of a sharp rise in AI-generated child sexual abuse material, with cases surging from 199 in 2024 to 426 in 2025.
Under the changes, the government will empower selected AI firms and child safety organizations to examine AI models, including the technology behind chatbots such as ChatGPT and video generators such as Google’s Veo 3, to ensure safeguards are in place to prevent the creation of child sexual abuse images.
Kanishka Narayan, the Minister of State for AI and Online Safety, emphasized that this initiative is “ultimately to deter abuse before it happens,” stating, “Experts can now identify risks in AI models sooner, under stringent conditions.”
The change is needed because creating and possessing CSAM is illegal, which until now has prevented AI developers and others from producing such images even for testing purposes. Previously, authorities could only respond after AI-generated CSAM had been uploaded online; the new law aims to tackle the problem at its source by stopping the images from being generated in the first place.
The amendments are part of the Crime and Policing Bill, which also establishes a prohibition on the possession, creation, and distribution of AI models intended to generate child sexual abuse material.
During a recent visit to Childline’s London headquarters, Narayan listened to a simulated call featuring an AI-generated report of abuse, depicting a teenager seeking assistance after being blackmailed with a sexual deepfake of herself created with AI.
“Hearing about children receiving online threats provokes intense anger in me, and parents feel justified in their outrage,” he remarked.
The Internet Watch Foundation, which monitors CSAM online, reported that AI-generated abusive content has more than doubled this year. Reports of Category A material, the most severe type of abuse, increased from 2,621 images or videos to 3,086.
Girls are predominantly targeted, making up 94% of illegal AI images in 2025, and depictions of children aged newborn to two years rose sharply, from five in 2024 to 92 in 2025.
Kerry Smith, CEO of the Internet Watch Foundation, stated that these legal changes could be “a crucial step in ensuring the safety of AI products before their launch.”
“AI tools enable survivors to be victimized again with just a few clicks, allowing criminals to create an unlimited supply of sophisticated, photorealistic child sexual abuse material,” she noted. “Such material commodifies the suffering of victims and increases risks for children, particularly girls, both online and offline.”
Childline also revealed insights from counseling sessions where AI was referenced. The concerns discussed included using AI to evaluate weight, body image, and appearance; chatbots discouraging children from confiding in safe adults about abuse; online harassment with AI-generated content; and blackmail involving AI-created images.
From April to September this year, Childline reported 367 counseling sessions where AI, chatbots, and related topics were mentioned, a fourfold increase compared to the same period last year. Half of these references in the 2025 sessions pertained to mental health and wellness, including the use of chatbots for support and AI therapy applications.
Character.AI, the chatbot company, will bar users under 18 from open-ended conversations with its virtual companions beginning in late November, following mounting legal scrutiny.
These updates come after the company, which allows users to craft characters for open-ended conversations, faced significant scrutiny regarding the potential impact of AI companions on the mental health of adolescents and the broader community, including a lawsuit over a child’s suicide and proposed legislation to restrict minors from interacting with AI companions.
“We are implementing these changes to our platform for users under 18 in response to the developments in AI and the changing environment surrounding teens,” the company stated. “Recent news and inquiries from regulators have raised concerns about the content accessible to young users chatting with AI, and how unrestricted AI conversations might affect adolescents, even with comprehensive content moderation in place.”
In the previous year, the family of 14-year-old Sewell Setzer III filed a lawsuit against the company, alleging that he took his life after forming emotional connections with characters he created on Character.AI; the family attributed their son’s death to the “dangerous and untested” technology. Several other families have since made similar allegations, and the Social Media Victims Law Center recently lodged three new lawsuits against the company on behalf of children who reportedly died by suicide or developed unhealthy attachments to chatbots.
As part of the broader adjustments Character.AI intends to implement by November 25, the company will introduce an “age assurance” feature to help ensure that users receive an age-appropriate experience.
“This decision to limit open-ended character interactions has not been made lightly, but we feel it is necessary considering the concerns being raised about how teens engage with this emerging technology,” the company stated in its announcement.
Character.AI isn’t alone in facing scrutiny over the potential mental health consequences of chatbots for their users, particularly young people. Earlier this year, the family of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI, claiming the company prioritized user engagement with ChatGPT over user safety. In response, OpenAI has rolled out new safety protocols for teenage users. This week, OpenAI reported that more than one million people express suicidal thoughts weekly while using ChatGPT, with hundreds of thousands showing signs of mental health issues.
While AI-driven chatbots remain largely unregulated, new initiatives at both the state and federal levels in the United States aim to set guidelines for the technology. In October 2025, California became the first state to pass an AI companion law with safety rules for minors, expected to take effect in early 2026. The law will prohibit sexual content for those under 18 and require reminders every three hours telling children they are conversing with an AI. Some child protection advocates argue it does not go far enough.
At the federal level, Missouri senator Josh Hawley and Connecticut senator Richard Blumenthal unveiled legislation on Tuesday that would bar minors from using AI companions such as those hosted on Character.AI, while requiring companies to enforce age verification measures.
“Over 70 percent of American children are now engaging with these AI products,” Hawley said in an NBC News report. “Chatbots leverage false empathy to forge connections with children and may encourage suicidal thoughts. We in Congress bear a moral responsibility to establish clear regulations to prevent further harm from this emerging technology.”
If you are in the US, you can call or text the National Suicide Prevention Lifeline at 988, chat at 988lifeline.org, or text HOME to 741741 to reach a crisis counselor. In the UK, the youth suicide charity Papyrus can be reached on 0800 068 4141 or by email at pat@papyrus-uk.org. In the UK and Ireland, Samaritans can be contacted on freephone 116 123 or by email at jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is on 13 11 14. Other international helplines can be found at befrienders.org.
A report from the campaign group Global Witness reveals that TikTok is guiding child accounts towards pornographic content within just a few clicks.
Global Witness researchers created fake accounts with birth dates indicating the users were 13 and turned on the app’s “restricted mode,” designed to limit exposure to “sexually suggestive” material.
Researchers discovered that TikTok suggested sexual and explicit search phrases for seven test accounts established on new mobile devices with no prior search history.
The suggested terms under the “You May Want” feature included “very rude and revealing attire” and “very rude babe,” escalating to phrases like “hardcore porn clip.” Sexual search suggestions appeared instantly for three of the accounts.
After just “a few clicks,” researchers encountered pornographic material ranging from depictions of women to explicit sexual acts. Global Witness indicated that some content tried to evade moderation by appearing as innocuous photos or videos. For one account, access to explicit content required only two clicks: one on the search bar and another on a suggested search term.
Global Witness, an organization that investigates the impact of big tech on human rights as well as environmental issues, conducted two rounds of testing, one before and one after 25 July, when the Online Safety Act’s (OSA) child protection duties came into force in the UK.
Two videos featuring individuals who appeared under 16 were reported to the Internet Watch Foundation, tasked with monitoring online child sexual abuse material.
Global Witness accused TikTok of breaching the OSA, which mandates tech companies to shield children from harmful content, including pornography.
A spokesperson for Ofcom, the UK communications regulator, said it would review the study’s findings and assess the results.
Ofcom’s codes of practice stipulate that tech companies at risk of recommending harmful content must design their algorithms to filter harmful material out of children’s feeds. TikTok’s content guidelines expressly prohibit pornographic material.
In response to Global Witness’s concerns, TikTok confirmed the removal of troubling content and modifications to its search recommendations.
“Upon recognizing these issues, we promptly launched an investigation, removed content that breached our policies, and began improving our search suggestion features,” stated a spokesperson.
Elon Musk, a self-proclaimed “free speech absolutist,” has recently attracted attention for urging people to cancel their Netflix subscriptions, citing concerns over LGBTQ+ characters.
Musk, the richest man in the world with an estimated net worth of around $500 billion, has encouraged his 227 million followers on X, the platform he owns, to cancel their Netflix subscriptions. In just the past three days, he has posted or shared calls to cancel Netflix at least 26 times.
The backlash against Netflix began on Tuesday when Musk tweeted, “This isn’t okay.”
He referred to the Netflix show Dead End: Paranormal Park as “pro-transgender for kids,” noting that it is rated TV-Y7, which signifies suitability for children aged 7 and over. The show ran for 20 episodes in 2022 before Netflix canceled it the following year, and it is no longer actively promoted by the company.
Since then, Musk has shared several tweets from users who claim to have canceled their subscriptions in protest of what they believe to be a children’s brainwashing agenda involving LGBTQ+ content.
“Cancel Netflix for your child’s health,” Musk tweeted on Wednesday, quoting a meme that depicted Netflix’s “Transgender Woke Agenda” as a Trojan horse sneaking into a castle labeled “Your Child.”
On Thursday, he shared another user’s tweet stating, “Transgender propaganda isn’t just quietly hiding in the Netflix background. They’re actively pushing it,” linking to an article titled “Celebrating Trans Visibility in These 16 Movies and Shows” on Netflix’s Tudum Media site.
Musk also highlighted claims of pro-trans content in shows like The Baby-Sitters Club and CoComelon, while sharing debunked claims linking Netflix to an “anti-white” hiring policy and calling out political donations from Netflix employees, which went overwhelmingly to Democrats in the 2024 election.
Additionally, Musk commented “Cancel Netflix” on a TikTok post referencing Netflix’s 2023 diversity and inclusion report.
Musk’s daughter, Vivian Wilson, who is transgender, has publicly criticized his anti-trans rhetoric and in 2022 petitioned to legally change her name and gender.
Musk has since said that he “essentially lost my son,” claiming he was “deceived” into approving gender-affirming care for Wilson, whom he described as “dead, killed by the woke mind virus.”
Netflix has often championed free speech when facing backlash over its content, yet has remained silent in response to Musk’s provocations. This isn’t the first time the company has faced criticism from the right; in 2020, the release of the film Cuties, which featured young girls performing sexualized dance routines, sparked outrage and a significant spike in US subscription cancellations.
In 2021, Netflix CEO Ted Sarandos defended comedian Dave Chappelle on free speech grounds, and the company has stood by its decision to commission specials from right-leaning comedian Tony Hinchcliffe despite his controversial remarks.
Musk’s calls for mass cancellations come amid Hollywood’s own free speech controversy, triggered when Jimmy Kimmel’s late-night talk show was suspended indefinitely after pressure from the Trump administration. Following a backlash from celebrities and a wave of Disney+ subscription cancellations, the company reinstated Kimmel.
A chatbot platform offering explicit scenarios involving preteen characters, illustrated with illegal abuse images, has raised significant concerns about the potential misuse of artificial intelligence.
The child safety watchdog behind the report has urged the UK government to establish safety guidelines for AI companies in light of an increase in technology-generated child sexual abuse material (CSAM).
The Internet Watch Foundation (IWF) said it was alerted to a chatbot site offering scenarios including “child prostitutes in hotels,” “wife engaging in sexual acts with children while on vacation,” and “children and teachers together after school.”
In some instances, the IWF noted, clicking a chatbot’s icon opened a full-screen child sexual abuse image, which then served as the background for the subsequent interaction between the bot and the user.
The IWF found 17 AI-generated images realistic enough to be classified as child sexual abuse material under the Protection of Children Act 1978.
Users of the site, which the IWF has not named for safety reasons, were also able to generate additional images resembling the illegal content already available.
The IWF, which operates from the UK and has a global remit to monitor child sexual exploitation online, said future AI regulation should build in child protection from the outset.
The government has announced plans for AI legislation that is expected to focus on the most advanced frontier models, while the Crime and Policing Bill will prohibit the possession and distribution of AI models designed to produce child sexual abuse material.
“We welcome the UK government’s initiative to combat AI-generated images and videos of child sexual abuse, along with the tools to create them. While new criminal offenses related to these issues will not be implemented immediately, it is critical to expedite this process,”
stated Chris Sherwood, Chief Executive Officer of NSPCC, as the charity emphasized the need for guidelines.
User-created chatbots fall under the UK’s online safety regulations, which allow for substantial fines for non-compliance. The IWF said the abuse chatbots appeared to have been created both by users and by the site’s developers.
Ofcom, the UK regulator responsible for enforcing the law, remarked, “Combating child sexual exploitation and abuse remains a top priority, and online service providers failing to implement necessary safeguards should be prepared for enforcement actions.”
The IWF reported a staggering 400% rise in AI-generated abuse material reports in the first half of this year compared to the same timeframe last year, attributing this surge to advancements in technology.
While the chatbot content is accessible from the UK, it is hosted on a US server and has been reported to the National Center for Missing and Exploited Children (NCMEC), the US equivalent of the IWF. NCMEC said the CyberTipline report has been forwarded to law enforcement. The IWF said the site appears to be operated by a company based in China.
The IWF noted that some chatbot scenarios included an 8-year-old girl trapped in an adult’s basement and a preteen homeless girl being invited to a stranger’s home. In these scenarios, the chatbot presented itself as the girl while the user portrayed an adult.
IWF analysts reported accessing explicit chatbots through links in social media ads that directed users to sections containing illegal material. Other areas of the site offered legal chatbots and non-sexual scenarios.
According to the IWF, one chatbot that displayed CSAM imagery stated during an interaction that it was designed to mimic preteen behavior. Other chatbots, which did not display CSAM, described themselves as undressed when analysts made inquiries.
The site recorded tens of thousands of visits, including 60,000 in July alone.
A spokesperson for the UK government stated, “UK law is explicit: creating, owning, or distributing images of child sexual abuse, including AI-generated content, is illegal… We recognize that more needs to be done. The government will utilize all available resources to confront this appalling crime.”
Jessica, 25, recalls an incident at Sephora when a young girl, in tears, ran up to a colleague. “Her skin was on fire,” Jessica noted. “It was bright red. She was frantically applying every acid she could find to her face.”
KM, a 25-year-old former Sephora employee, recalled a woman who was caught shoplifting and explained to the security guards that she had been trying to steal a Dior lip gloss she couldn’t afford because her daughter had warned she would be bullied at school without one.
Gabby, 26, who spent three years at Sephora, remarked, “I witnessed so much.” One parent even asked Gabby if her tween should “start using retinol now to prevent aging.”
Another mother asked Gabby for something to make her daughter’s nose appear smaller. “After the mom left, I felt compelled to tell the girl, ‘Your nose is beautiful, by the way.’ It’s not my place to say it, but I just had to.”
The “Sephora Kids” phenomenon—encompassing preteens, upscale beauty stores, and the strong bond between expensive and often harsh products—is now well recognized. Research from Circana indicates that, in the first half of last year, one-third of “prestige” beauty sales were influenced by tweens and households with teenagers. That same year, Sephora, under LVMH ownership, achieved around $9 billion in US sales, while Ulta Beauty reported $11.3 billion, according to Statista.
This trend is fueled by skincare content shared by beauty influencers, which puts young skin at “significant dermatological risk.” A recent study from Northwestern University highlights that skincare routines popular among tweens on TikTok involve an average of 11 potentially irritating active ingredients, leading to possible acute reactions and lifelong allergies.
Sephora has tried to distance itself from the trend; president and CEO Artemis Patrick has said, “We’re not marketing to this demographic,” and that the company is not promoting anti-aging products to children. Last year, the brand Drunk Elephant experienced a significant decline in sales, attributed to a disconnect with older customers.
The issue has escalated this summer, as former and current Sephora employees report concerning scenarios they have witnessed.
Summer tends to be peak season for “Sephora Kids”: school is out, and other places for kids to hang out are increasingly scarce. The vibrant, lively environment of beauty stores, with loud music and brightly colored products, is a significant draw for children.
According to employees, toddlers often run amok unsupervised, disrupting displays, knocking over merchandise, and filling baskets with testers. KM referred to these children as “free-range kids,” often distracted by loud YouTube videos.
Kennedy, who works at a Sephora inside a Kohl’s department store near the junior clothing section, noted, “The traffic is very intentional.” Parents often drop their kids at Sephora while they shop elsewhere. It’s common for her to see parents swiping their cards for large amounts without realizing what their children are actually purchasing.
Employees have tried to dissuade younger children from using products meant for mature skin.
But parents don’t always heed the advice. “They often disregard it,” Jessica said. “When I warned one mom that a product was too harsh for her child’s skin, she just replied, ‘I saw it on TikTok,’ and bought everything anyway.”
There can be tense exchanges between parents and their tweens. “But I saw it online, it has to be good!” is a typical refrain, KM said, describing tantrums in which kids insist, “I want lip gloss!” and mothers shoot back, “But you already have six!”
All employees agreed that many beauty products remain unused. “If products sit open for a while, they just become waste,” KM noted, highlighting the issue of overconsumption.
Shoplifting has also become prevalent. “I frequently find so many empty boxes at work,” Gabby pointed out. Erika, 28, remarked on how social media has normalized a culture of “borrowing” without accountability among children.
Children often use their parents’ credit cards. “I’ve seen kids pull out shiny American Express cards, and I just know it’s not theirs,” Gabby said. Erika noted she witnessed groups of girls casually asking their parents for purchases.
This behavior reflects a broader trend of preteens acting like mini adults. Kennedy described a kind of premature adulthood, with children carrying their phones and Starbucks cups and joking about needing “to start anti-aging right away.” Jokes aside, the pressure to fight aging has seeped into the minds of young children.
For Joy, a 25-year-old Sephora employee, the attitude is pervasive. The pressure from social media leads girls to think, “Celebrities and influencers in their 50s still look my age.” They are increasingly aware of the role of cosmetic procedures.
Erika frequently notices young girls scrutinizing their own skin, often asking, “Do you think I have pores?” They view everything through a filtered lens.
Dr. Meghan Owentz, a clinical associate professor specializing in parenting and anxiety, asserts that while it’s natural for preteen girls to focus on personal hygiene, today’s pressure has significantly altered how they navigate comparisons with others. With social media amplifying these messages, they feel inundated with constant information.
The desire for belonging among children through brands like sneakers and collectible cards isn’t new. However, those born after 2010 face unprecedented marketing saturation. Surveys suggest that 43% of Generation Alpha kids had tablets before six, and 58% received their first iPhone by age ten. Government research from 2023 indicates that social media use is now “nearly universal,” affecting even 40% of children between 8-12 years old. On these platforms, the line between authentic content and advertisements is often blurred, particularly in the beauty influencer space.
KM began to notice influencer language creeping into young customers’ speech: they would repeat a product’s full name over and over, and often didn’t really know why they wanted it beyond having seen someone promote it.
Owentz links the rise of influencer culture to a surge in content like “my skincare routine” and “Get Ready with Me” videos, adding that these are often not suitable for young girls, who may feel pressured to conform to unrealistic beauty standards.
“There are simply too many advertisements targeted at kids, making it hard for them to say no,” Owentz stated. “Children are under immense pressure and often redirect that burden onto their parents.”
She emphasized that it’s up to parents to discern what’s appropriate for their children and communicate their rationale clearly.
Yet California state senator Alex Lee argues that the responsibility shouldn’t rest solely on parents, criticizing the lack of clear warnings about product ingredients. “The typical parent isn’t a pediatric dermatologist,” he noted. He has proposed bills to prohibit the sale of products containing ingredients such as retinol to those under 18, which have not passed due to pushback from the Personal Care Products Council.
Many employees believe that beauty brands are deliberately targeting this younger demographic. Kennedy observed that brands have begun to adjust their packaging to be more colorful, introducing tween-friendly offerings like lip oils and blush, alongside skincare products meant for older users.
Some brands foster a culture of collecting, as Gabby explained: “They release limited-edition products, such as matcha-flavored lip balm. Even if kids already have several similar items, they still want the newest scent.”
Sephora, Ulta, and Drunk Elephant did not respond to requests for comment. Nonetheless, the skincare industry is progressively expanding its range of products targeted at younger skin, often employing enticing marketing aimed at parents, featuring close-ups of bottles and vibrant packaging.
Other trends are emerging: kids have celebrated birthdays at some Sephora and Ulta stores for a while now, but Ulta only recently introduced a formal party package at $42 per guest, designed to offer “75-90 minutes of beauty enjoyment” using products created “with tweens and teens in mind.” Guests also leave with a 20% coupon for their next visit, an invitation to come back and spend more.
Some names have been changed to protect the identity of current Sephora employees.
Parents could be notified if their teenager shows signs of acute distress while using ChatGPT, under new child safety measures introduced as growing numbers of young people turn to AI chatbots for support and advice.
The alert is among new protections for children that OpenAI plans to roll out next month, following a lawsuit from a family who allege their son took his own life after “months of encouragement” from the chatbot.
Among the new safeguards is a feature that allows parents to link their accounts with their teenagers’, enabling them to manage how AI models respond to their children through “age-appropriate model behavior rules.” However, internet safety advocates argue that progress on these initiatives has been slow and assert that AI chatbots should not be released until they are deemed safe for young users.
Adam Raine, a 16-year-old from California, took his own life in April after discussing methods of suicide with ChatGPT, which allegedly offered to help him draft a suicide note. OpenAI has acknowledged shortcomings in its systems, admitting that the safety training of its AI models can degrade over the course of long conversations.
Raine’s family contends that the chatbot was “released to the market despite evident safety concerns.”
“Many young people are already interacting with AI,” OpenAI said in a blog post outlining its latest measures. “They are among the first ‘AI natives’ who have grown up with these tools embedded in their daily lives, similar to earlier generations with the internet and smartphones. This presents genuine opportunities for support, learning, and creativity; however, it also means that families and teens need guidance to establish healthy boundaries suited to the unique developmental stages of adolescence.”
A significant change will allow parents to disable AI memory and chat history, preventing past comments about personal struggles from resurfacing in ways that could exacerbate risk and negatively impact a child’s long-term profile and mental well-being.
In the UK, the Information Commissioner’s Office has established a code of practice on the design of online services suitable for children, advising tech companies to “collect and retain only the minimum personal data necessary for providing services that children are actively and knowingly involved in.”
Around one-third of American teens utilize AI companions for social interactions and relationships, including role-playing, romance, and emotional support, according to a study. In the UK, 71% of vulnerable children engage with AI chatbots, with six in ten parents reporting their children believe these chatbots are real people, as highlighted in another study.
The Molly Rose Foundation, established by the father of Molly Russell, who took her own life after viewing harmful content on social media, emphasized that “we shouldn’t introduce products to the market before confirming they are safe for young people; efforts to enhance safety should occur beforehand.”
Andy Burrows, the foundation’s CEO, stated, “We look forward to future developments.”
“Ofcom must be ready to investigate any breaches by ChatGPT and push the company to comply with online safety laws designed to keep users safe,” he continued.
Anthropic, the company behind the popular Claude chatbot, says its platform is not intended for people under 18. In May, Google began allowing children under 13 to use its Gemini AI app. Google advises parents to tell their children that Gemini is not human and cannot think or feel, and warns that “your child may come across content you might prefer them to avoid.”
The NSPCC, a child protection charity, welcomed OpenAI’s measures as “a positive step forward” but said they do not go far enough.
“Without robust age verification, they cannot ascertain who is using their platform,” stated senior policy officer Toni Brunton Douglas. “This leaves vulnerable children at risk. Technology companies should prioritize child safety rather than treating it as an afterthought. It’s time to establish protective defaults.”
Meta has implemented protections for teenagers in its AI offerings, stating that for sensitive topics such as self-harm, suicide, and disordered eating, it will “incorporate additional safeguards, training AI to redirect teens to expert resources instead.”
“These updates are in progress, and we will continue to adjust our approach to ensure teenagers have a secure and age-appropriate experience with AI,” a spokesperson mentioned.
On Friday, a federal appeals court revived parts of a lawsuit against Elon Musk’s X alleging that the platform became a haven for child exploitation, while affirming that X is largely protected from liability for harmful content.
While rejecting several claims, the 9th US Circuit Court of Appeals in San Francisco ruled that X (formerly Twitter) must face a negligence claim for failing to promptly report a video featuring explicit images of two minor boys to the National Center for Missing and Exploited Children (NCMEC).
This incident occurred prior to Musk’s acquisition of Twitter in 2022. A judge dismissed the case in December 2023, and X’s legal counsel has yet to provide a comment. Musk was not named as a defendant.
One plaintiff, John Doe 1, recounted that at the age of 13, he and his friend, John Doe 2, were lured on Snapchat into sharing nude photos, believing they were communicating with a 16-year-old girl.
In reality, they were communicating with a trafficker in child exploitation images, who threatened the plaintiff and solicited more photos from him. The images were ultimately compiled into a video that was disseminated on Twitter.
Court documents revealed that Twitter took nine days to report the content to NCMEC after becoming aware of it, during which time the video amassed over 167,000 views.
Circuit Judge Danielle Forrest stated that Section 230 of the Communications Decency Act, which typically shields online platforms from liability for user-generated content, does not protect X from negligence claims once it became aware of the images.
“The facts presented here, along with the statutory ‘actual knowledge’ requirement, establish that the duty to report child pornography to NCMEC is distinct from X’s role as a publisher,” she wrote on behalf of the three-judge panel.
X must also answer allegations that its infrastructure made it difficult for users to report child abuse images.
The court found, however, that X was immune from claims that it intentionally facilitated sex trafficking and that it developed a search function that “amplifies” images of child exploitation.
Dani Pinter, a lawyer for the plaintiffs with the National Center on Sexual Exploitation, issued a statement in response to the ruling.
Officials will employ artificial intelligence to assist in estimating the age of asylum seekers who claim to be minors.
Immigration Minister Angela Eagle stated on Tuesday that the government will pilot technology designed to assess a person’s age based on facial characteristics.
The initiative is the latest attempt by Labour ministers to use AI to tackle problems in public services without incurring significant costs.
The announcement coincided with the publication of a highly critical report by David Bolt, the chief inspector of borders and immigration, on efforts to estimate the age of new arrivals.
Eagle mentioned in a formal statement to Parliament: “We believe the most economically feasible approach is likely to involve estimating age based on facial analysis. This technology can provide age estimates with known accuracy for individuals whose age is disputed or uncertain, drawing from millions of verifiable images.”
“In cases where it is doubtful whether an individual claiming to be a child is actually over 18, facial age estimation offers a potentially swift and straightforward means of testing that judgment against the technology’s estimate.”
Eagle is launching a pilot program to evaluate the technology, aiming for its integration into official age verification processes by next year.
John Lewis announced earlier this year that it will be the first UK retailer to facilitate online knife sales using facial age estimation technology.
The Home Office has previously utilized AI in other sectors, such as identifying fraudulent marriages. However, this tool has faced criticism for disproportionately targeting specific nationalities.
Although there are concerns that AI tools may intensify biases in governmental decision-making, the minister is exploring additional applications. Science and Technology Secretary Peter Kyle announced a partnership with OpenAI, the organization behind ChatGPT, to investigate AI deployment in areas like justice, safety, and education.
Bolt said the mental health of young asylum seekers has deteriorated due to failings in the age assessment system, especially in Dover, where small boat arrivals are processed.
“Many concerns raised over the past decade regarding policy and practices remain unresolved,” Bolt cautioned, emphasizing that the challenging conditions at the Dover processing facility could hinder accurate age assessments.
He added: “I have heard accounts of young individuals who felt distrustful and disheartened in their encounters with Home Office officials, where hope has faded and their mental well-being is suffering.”
His remarks echo a report from the Refugee Council, indicating that at least 1,300 children have been mistakenly identified as adults over an 18-month period.
Last month, scholars from the London School of Economics and the University of Bedfordshire suggested that the Home Office should be stripped of its authority to make age decisions about unaccompanied child asylum seekers.
In France, where kindergarten begins at age 3, there is a debate on whether staff should allow children to nap. “Although naps are widely acknowledged to positively impact cognitive development, some parents and educators worry that daytime resting might disrupt nighttime sleep or diminish essential learning opportunities,” notes Stephanie Mazza from the University of Lyon, France.
Mazza and her team researched whether naps interfere with nighttime rest by observing 85 children aged 2-5 years across six French kindergartens using wrist sleep trackers for about 7.8 days.
The findings, combined with sleep diaries kept by parents, showed that each extra hour of napping was linked to roughly 13.6 minutes less nighttime sleep and a bedtime about 6.4 minutes later. Overall, however, children who napped got about 45 minutes more total sleep.
“Parents need not worry if their child still requires a nap before turning six,” asserts Mazza. “Our results imply that naps can boost total sleep, even if they slightly delay bedtime. Instead of viewing naps as detrimental, they should be seen as a valuable source of rest, particularly in stimulating environments.”
“I believe this indicates—if they can nap, let them nap,” says Rebecca Spencer from the University of Massachusetts, Amherst. She emphasizes, considering that sleep duration during early childhood varies globally, further research is necessary to assess the broader applicability of these findings.
The number of online videos depicting child sexual abuse created by artificial intelligence has surged as pedophiles exploit advances in the technology.
According to the Internet Watch Foundation, AI-generated abuse videos have passed a tipping point, becoming nearly indistinguishable from real imagery, with a notable increase observed this year.
In the first half of 2025, the UK-based internet safety watchdog examined 1,286 AI-generated videos containing illegal child sexual abuse material (CSAM), a sharp increase from just two during the same period last year.
The IWF reported that over 1,000 of these videos fall under Category A abuse, the most severe classification of such material.
The organization indicated that billions have been invested in AI, leading to a widely accessible video generation model that pedophiles are exploiting.
“It’s a highly competitive industry with substantial financial incentives, unfortunately giving perpetrators numerous options,” stated an IWF analyst.
This video surge is part of a 400% rise in URLs associated with AI-generated child sexual abuse content in the first half of 2025, with IWF receiving reports of 210 such URLs compared to 42 last year.
The IWF found one post on a dark web forum in which a user remarked on the rapid improvement of AI video tools and how quickly pedophiles were adapting to exploit them.
IWF analysts observed that the videos appear to be created by taking freely available, basic AI models and “fine-tuning” them with CSAM to produce realistic footage. In some instances, this fine-tuning involved only a small number of CSAM videos, according to the IWF.
The most lifelike AI-generated abuse videos encountered this year were based on actual victims, the watchdog reported.
Interim CEO of IWF, Derek Ray-Hill, remarked that the rapid advancement of AI models, their broad accessibility, and their adaptability for criminal purposes could lead to a massive proliferation of AI-generated CSAM online.
The risk, he said, is that AI-generated CSAM leads to a flood of material that overwhelms the clear web, and he cautioned that the rise of such content might encourage criminal activity such as child trafficking and modern slavery.
The replication of existing victims of sexual abuse in AI-generated images allows pedophiles to significantly increase the volume of CSAM online without having to exploit new victims, he added.
The UK government is intensifying efforts to combat AI-generated CSAM by criminalizing the ownership, creation, or distribution of AI tools designed to produce abusive content. Those found guilty under this new law may face up to five years in prison.
Additionally, it is now illegal to possess manuals that instruct potential offenders on how to use AI tools for creating abusive images or for child abuse. Offenders could face up to three years in prison.
In a February announcement, Home Secretary Yvette Cooper stated, “It is crucial to address child sexual abuse online, not just offline.”
AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalizes the production, distribution, and possession of indecent photographs or “pseudo-photographs” of children.
The skull of a young girl believed to be the child of Neanderthal and Homo sapiens parents
Israel Hershkovitz
A skull uncovered in Israel, dating back 140,000 years, likely belonged to a hybrid child of Neanderthal and Homo sapiens parents. The remains of the roughly 5-year-old girl were found in one of the earliest known cemeteries, reshaping our understanding of organized burial practices and the people who took part in them.
The skull was initially unearthed in 1929 from Skhul Cave on Mount Carmel. The excavation ultimately revealed the remains of seven adults and three children, along with bones from 16 hominins, attributed to early humans classified as Homo sapiens.
However, the classification of the child’s skull has been disputed for nearly a century. It was originally attributed to a now-abandoned species called Palaeoanthropus palestinensis, but later studies suggested it is more likely a Homo sapiens specimen.
Anne Dambricourt Malasse, from the French Institute of Human Paleontology, carried out CT scans of the skull to compare it with other known Neanderthal juvenile remains.
“This study likely marks the first scientific assessment of the Skhul child’s remains,” says John Hawks at the University of Wisconsin-Madison, who was not part of the research. “Previous work relied in part on outdated plaster reconstructions and did not situate the biology of this child within a broader comparative framework.”
Malasse and her team found that the lower jaw presented notable Neanderthal traits, while the remainder of the skull was structurally aligned with Homo sapiens. They conclude that this blend of features suggests the child was of mixed ancestry.
“For a long time, I’ve questioned the viability of hybridization, suspecting that most cases resulted in miscarriages,” says Malasse. “The skeletal evidence shows that this little girl survived to the age of 5, which tells a different story.”
The new findings significantly enhance our understanding of the important Skhul child skull, yet firmly identifying the child as a hybrid is difficult without DNA, which researchers have so far been unable to recover. “Human populations demonstrate substantial variability in appearance and physical form without interbreeding with ancient species like Neanderthals,” adds Malasse.
From research into both ancient and modern genomes, we know Homo sapiens and Neanderthals interchanged genes multiple times over the last 200,000 years. In 2018, bone fragments identified as Neanderthal and Denisovan hybrids, another ancient hominin species, were discovered in Russia, utilizing DNA analysis.
The Levant region emerges as a particularly significant area for human species intermingling due to its geographical positioning between Africa, Asia, and Europe. Some have termed it a “central bus stop” for Pleistocene humans, clarifies Dany Coutinho Nogueria at the University of Coimbra, Portugal.
Recent studies compel us to reevaluate our understanding of early burial practices among Homo sapiens, according to Malasse. Such ritualistic behaviors may have originated from Neanderthals, Homo sapiens, or resulted from interactions between the two.
“I cannot ascertain who performed the burial of this child, or whether the chosen burial ground belonged to a single community or one from another lineage that had established connections, shared rituals, and emotions,” reflects Malasse.
The entrepreneur said she felt “humiliated” after being turned away from London Tech Week, the annual industry gathering, because she was accompanied by her baby daughter.
Davina Schonle was barred from entering the event on Monday after a three-hour journey with her eight-month-old daughter, and had to forgo a meeting with potential suppliers for her tech startup.
Schonle told TheBusinessDesk.com that when she arrived at the entrance with her daughter in a stroller, she was asked if she was a VIP and then informed that she could not enter with the baby. After attempting to collect her badge, she was redirected to an organizer from Informa, who said the event did not have insurance to admit the baby.
The incident incited outrage and cast a pall over the event, which Prime Minister Keir Starmer addressed on the same day Schonle was denied entry. The tech industry is striving to distance itself from accusations of sexism and the perception that women are treated as second-class.
Schonle mentioned that this experience highlighted her worst fears regarding being a woman in this sector. She is the founder and CEO of HumanVantage AI, a startup leveraging AI technology to create conversational role-play corporate training platforms.
In a widely shared LinkedIn post, Schonle wrote: “This moment was more than inconvenient, serving as a stark reminder that within the tech industry, we still have progress to make regarding inclusion beyond mere buzzwords.”
“Parents are integral to this ecosystem. Caregivers are innovators, founders, investors, and leaders. If a significant event like London Tech Week cannot accommodate them, what message does that send about who truly belongs in technology?”
London Tech Week, organized by the global events company Informa, addressed the situation in a statement: “We are aware that one of the participants was not allowed entry with children. As a business event, the venue is not equipped with the specific facilities and safety measures needed to accommodate those under the age of 16.”
“We are appreciative of everyone’s support in the tech community during London Tech Week. We have reached out to the involved parties to discuss the incident and will use this experience to improve our approach at LTW in the future.”
Julia Hobsbawm, a businesswoman and commentator on entrepreneurship and work-life balance, responded to Schonle’s LinkedIn post, saying London Tech Week had shown “the worst kind of tin ears.”
Parents who had sought to protect children from online harms, from drug dealers to harmful content, were left devastated last month when their hopes for change were dashed in the Colorado legislature.
Among those parents was Lori Schott, who was instrumental in crafting the bill. Her 18-year-old daughter, Annalee, took her own life in 2020 after engaging with content on TikTok and Instagram related to depression, anxiety, and suicide.
“When lawmakers sidestep votes and push the discussion to a meaningless calendar date to avoid accountability, it feels like a betrayal to us as parents,” Schott said. “It’s a betrayal to my daughter and to all the other children we’ve lost.”
Had the law been enacted, it would have necessitated investigations and the removal of accounts engaged in gun and drug sales, or the sexual exploitation and human trafficking of minors on platforms like Facebook, Instagram, and TikTok. It also required a dedicated hotline for law enforcement and a 72-hour response timeframe for police inquiries, which would significantly increase obligations compared to current legal standards.
Additionally, the platforms would have had to report on the usage statistics of minors, including how often and for how long they interacted with content violating company policies. Several major tech firms have taken official stances regarding the bill. As noted in Colorado’s lobbying records, Meta’s long-time lobbying firm, Headwater Strategies, has registered its support for revising the bill. Conversely, Google and TikTok employed lobbyists to oppose it.
“[Legislators] chose self-interest over the protection of children and families.” Illustration: Andrei Cojocaru/Guardian
“We are deeply disheartened,” said Kim Osterman, whose 18-year-old son Max died in 2021. “[Legislators] prioritized their own interests over the safety of my children and family.”
The Protection for Social Media Users bill (SB 25-086) passed both legislative chambers, only to be vetoed by Democratic Governor Jared Polis on April 24. He justified his veto by arguing that the bill would “erode privacy, freedom, and innovation.” On April 25, the Colorado Senate voted to override the veto, but on April 28 the House chose to delay the vote until the end of the legislative session, effectively blocking the override and killing the bill.
Originally, the bill had passed the Senate with a 29-6 margin and the House with a 46-18 margin. On April 25, the Senate voted 29-6 for an override, and lawmakers anticipated that the House would take up the matter later that day, believing that there was enough bipartisan support to successfully overturn the veto.
“It was a straightforward vote for people because our goal was clear: to safeguard children from the predatory practices of social media companies,” remarked Senator Lindsey Daugherty, a Democrat and co-sponsor of the bill. She expressed her disappointment that House leaders chose to sidestep the vote on Friday.
Advocate parents blamed the failure of the bill on an unexpected 11th-hour lobbying blitz by Rocky Mountain Gun Owners, a far-right gun group in Colorado. Two state legislators and seven other participants in the legislative process corroborated the parents’ claims.
An unprecedented last-minute campaign disrupts bipartisan consensus
Rocky Mountain Gun Owners (RMGO) characterized the bill as government censorship, linking it to the bill’s provisions against “ghost guns” assembled from kits purchased online.
RMGO initiated an extensive social media and email campaign, rallying its 200,000 members to contact lawmakers and voice their opposition to the bill. Sources familiar with the workings of the Colorado State Capitol explained that the gun group’s outreach included social media and text campaigns that encouraged Republican constituents to reach out to their representatives in opposition.
“[Legislators] were inundated with calls and emails from activists. It was an all-out assault, a campaign declaring, ‘This is a government censorship bill,’” they said.
The group’s actions contributed to efforts preventing Republicans from backing the veto override, leading to the bill’s demise. According to ten individuals involved in the bill’s development and the legislative process, this lobbying effort appeared unexpectedly robust, fueled by organizations that had previously faced financial constraints. An anonymous source from the Colorado State Capitol shared insights with the Guardian, citing fears of retaliation from RMGO.
The House of Representatives postponed its vote until April 28th, providing RMGO time to amplify its campaign over the weekend. When lawmakers reconvened on Monday, the House voted 51-13 to delay the override until the legislative session concluded, effectively dissolving the effort.
“It was a coordinated full-scale attack proclaiming this as a government censorship bill.” Illustration: Andrei Cojocaru/Guardian
A significant text messaging initiative targeted registered Republican voters, alleging that the social media bill “forces platforms to enforce extensive surveillance of content shared on their platforms,” claiming violations of Colorado’s gun laws, and framing the legislation as an affront to First and Second Amendment rights, according to texts reviewed by the Guardian.
A recurring adversary
Established in 1996, RMGO claims a membership exceeding 200,000 activists and is recognized as a far-right organization staunchly opposed to regulations on firearms. Dudley Brown, its founder and the president of the National Association for Gun Rights, positions the group as far more hardline than the National Rifle Association (NRA). RMGO has been criticized for employing tactics labeled as “bullying” and “extremist” against both Democrats and moderate Republicans. The group did not respond to requests for comment on the legislation.
RMGO is a well-known presence at the Colorado State Capitol, typically opposing gun control measures. Daugherty described their usual campaign tactics as “intimidating.” Following backlash for her involvement in a bill banning assault weapons earlier this year, she deactivated her social media account.
“While advocating for gun legislation at the Capitol, RMGO published images of me and other legislators on their website,” she noted. An RMGO tweet depicted Daugherty alongside a bold “Traitor” stamp.
The group disseminated misinformation regarding the bill’s implications on gun ownership, as reported by sources who participated in the legislative discussions.
“My support for the bill and the veto override stemmed from concerns about child trafficking and safeguarding children,” stated Republican Senator Rod Pelton, who voted in favor of overriding the veto in the Senate. “I did not subscribe to the entire argument pertaining to the Second Amendment.”
The bill garnered support from 23 district attorneys in Colorado as well as bipartisan backing from the state House of Representatives.
RMGO’s late-stage opposition to the social media bill deviated from its usual tactics. Typically, the organization weighs in on legislation early in the process, according to eight sources, including co-sponsors Daugherty and Representative Andy Boesenecker.
“Their surge of focused efforts caught my attention,” Boesenecker remarked. “It was curious to note that their resistance materialized so late in the process and appeared to be well-financed.”
In recent years, RMGO has experienced reduced activity attributed to financial difficulties that limited their legislative campaigning capacity. In a 2024 interview, the organization’s leader candidly acknowledged struggles with fundraising. Daugherty believes RMGO’s capacity for such a substantial outreach campaign would be unlikely without considerable funding. Others within Colorado’s political landscape echoed this sentiment.
“The Rocky Mountain Gun Owners had been largely ineffective in the legislature for several years due to financial constraints. Suddenly, they increased their influence, seemingly backed by substantial funds,” said Dawn Reinfeld, from a Colorado-based nonprofit focused on youth rights.
This context caused lawmakers to feel pressured, especially concerning primary elections in their districts, following RMGO’s recent social media attacks on supporters of the bill.
“The bill had given me hope that Avery’s legacy would make a difference, and its failure was incredibly disappointing.” Illustration: Andrei Cojocaru/Guardian
“There was a palpable concern among many about party affiliation; it certainly played a role,” remarked Daugherty.
Aaron Ping’s 16-year-old son, Avery, passed away from an overdose in December after buying what he believed to be ecstasy on Snapchat, only to receive a substance laced with fentanyl instead. Ping viewed the organized opposition to the bill as a purposeful distortion.
“The narrative painted the bill as an infringement on gun rights, depicting it as merely a tool for targeting people purchasing illegal firearms online,” he stated.
Ping had testified in support of the bill alongside other families, recovering teens, and district attorneys back in February before the initial Senate vote.
“This bill carried the hope that Avery’s legacy would incite change; its rejection was truly disheartening,” Ping shared.
In the absence of federal action, states initiate online child safety legislation
A number of states, including California, Maryland, Vermont, Minnesota, Hawaii, Illinois, New Mexico, South Carolina, and Nevada, have introduced legislation over the past two years aimed at enhancing online safety for minors. These initiatives encounter vigorous resistance from the technology sector, which includes extensive lobbying efforts and legal challenges.
Maryland successfully passed the Children’s Code bill in May 2024, marking it as the first state to enact such legislation. However, this victory may be short-lived. The high-tech industry coalition, NetChoice, representing companies such as Meta, Google, and Amazon, has already launched legal challenges against these measures.
Meanwhile, federal efforts have stalled, with the Kids Online Safety Act (KOSA) faltering in February after failing to pass the House despite years of modifications and deliberations. A newly revised version of the bill was reintroduced in Congress on May 14.
California’s similar initiative, the age-appropriate design code law, which mirrors UK legislation, was halted in late 2023 following a NetChoice injunction citing potential First Amendment infringements.
European officials have initiated an investigation into four adult websites suspected of inadequately preventing minors from viewing adult content.
Following a review of the companies’ policies, the European Commission criticized PornHub, StripChat, XNXX, and XVideos for not implementing adequate age verification procedures to block minors from accessing their sites.
This inquiry has been launched in accordance with the EU’s Digital Services Act (DSA), a comprehensive set of regulations aimed at curbing online harm such as disinformation, cyber threats, hate speech, and counterfeit merchandise. The DSA also enforces stringent measures to safeguard children online, including preventing mental health repercussions from exposure to adult materials.
The commission noted that all four platforms employed a simple one-click self-certification for age verification.
“Today marks a significant step toward child protection online in the EU, as the enforcement action we are initiating… clearly indicates our commitment to hold four major adult content platforms accountable for effectively safeguarding minors under the DSA,” the commission said.
While no specific deadline has been set for concluding the investigation, officials stressed that they aim to act swiftly on potential next steps based on the platforms’ responses.
The platforms can resolve the investigation by implementing an age verification system recognized as effective by EU regulators. Failure to comply could result in fines of up to 6% of their global annual revenue.
The DSA regulates platforms with over 45 million users, including Google, Meta, and X, while national authorities in each of the 27 member states are responsible for those that fall beneath this threshold.
On Tuesday, the commission announced that StripChat no longer qualifies as a “very large online platform.” Following an appeal by its parent company, Techinius Ltd, oversight of the site will now be handled by Cyprus rather than Brussels.
However, this new designation will not take effect until September, meaning that the investigation into age verification remains active.
The child protection responsibilities of StripChat will continue unchanged.
Aylo FreeSites, the parent company of Pornhub, is aware of the ongoing investigation and has stated its “full commitment” to ensuring the online safety of minors.
“We are in full compliance with the law,” the company remarked. “We believe the effective way to protect both minors and adults is to verify user age at the point of access through their device, ensuring that websites provide or restrict access to age-sensitive content based on that verification.”
Techinius has been approached for comment, as has a Brussels-based lawyer who has recently represented the parent companies of XVideos (WebGroup Czech Republic) and XNXX (NKL Associates) in EU legal matters.
When Solomon* entered the gleaming Octagon tower in Accra, Ghana, he believed he was embarking on a challenging but rewarding role as a Meta content moderator, tasked with removing harmful content from social media.
However, just two weeks into his training, he encountered a much darker side of the job than he had anticipated.
“I initially didn’t encounter graphic content, but eventually, it escalated to images of beheadings, child abuse, bestiality, and more. The first time I saw that content, I was completely taken aback.”
The Octagon building in Accra. Photograph: Foxglove
“Eventually, I became desensitized and began to normalize what I was seeing. It was disturbing to find myself watching beheadings and child abuse.”
“I’ll never forget that day,” Solomon recounted, having arrived from East Africa in late 2023. “The system doesn’t allow you to skip. You must view it for a minimum of 15 seconds.”
In one particular video, a woman from his homeland cried for help as several assailants attacked her.
He said the exposure was unpredictable as well as unsettling: one day there would be no graphic videos, then, as a trend emerged, suddenly around 70-80% of the content would be graphic. He gradually felt “disconnected from humanity.”
In the evenings, he returned to shared accommodation provided by his employer, the outsourcing firm Teleperformance, where there were problems with privacy, water, and electricity.
When Solomon learned of his childhood friend’s death, it shattered his already fragile mental state. He was broken and felt trapped in his thoughts, and he asked Teleperformance for time off until he could regain his composure.
Isolating himself for two weeks, he admitted, “I began to spiral into depression. I stopped eating and sleeping, smoking day in and day out. I was never this way before.”
Solomon tried to take his own life and was hospitalized, where he was diagnosed with major depressive disorder and suicidal ideation. He was discharged eight days later, towards the end of 2024.
Teleperformance offered him a lower-paying position, but he feared it would not be enough to live on in Accra. He sought compensation for his distress and long-term psychological care, but instead, Teleperformance sent him back to his hometown amid unrest.
“I feel used and discarded. They treated me like a disposable water bottle,” Solomon expressed after his termination.
He reflected on his past professional life in his home country, saying, “I was content and at peace before coming here.”
Another moderator, Abel*, defended Solomon in solidarity with his fellow employee and described how his colleague’s contract was ended.
He confronted Teleperformance: “You’re not treating him fairly.”
“They isolated him at home. He felt unsafe being alone, which caused him severe stress, prompting him to return to work.”
Abel also faced mental health struggles stemming from the content. “I was unaware of the nature of the job and the reality of viewing explicit material for work… The first time I encountered blood, I was left numb.”
He mentioned that colleagues often gathered to sip coffee and discuss disturbing material, even sharing their discomfort.
He hesitated to discuss these issues with wellbeing coaches due to a fear of how his concerns would be perceived by his team leader. He faced challenges when he declined to utilize a wellness service he believed was merely for “research purposes.”
A spokesperson for Teleperformance said: “Recognizing his depression following his friend’s death, we conducted a psychological evaluation and found he was unfit to continue in a moderation role.
“We offered a different non-moderating position, which he declined, expressing a desire to remain in his current role. With that not being a viable option, his employment ended, and he was provided compensation per our contractual agreement.
“Throughout his tenure and afterward, we ensured ongoing psychological support. He consistently declined assistance. At the suggestion of his family, help was arranged for him, and upon medical approval, arrangements for a flight to Ethiopia were made.
“We have maintained support for him in Ethiopia, but he has avoided it, instead attempting to pressure Teleperformance for monetary compensation under the threat of public exposure.”
*Names have been changed to protect identities
The communications watchdog has been accused of siding with big tech over the safety of under-18s after England’s children’s commissioner criticized its new measures to address online harm. Rachel de Souza warned Ofcom last year that its proposals to protect children under online safety laws were inadequate. She expressed disappointment that the new code of practice published by the watchdog ignored her concerns, prioritizing the business interests of technology companies over child safety.
De Souza, who advocates for children’s rights, highlighted that over a million young people shared their concerns about the online world being a significant worry. She emphasized the need for stronger protection measures and criticized the lack of enhancements in the current code of practice.
Some of the measures proposed by Ofcom include implementing effective age checks for social media platforms, filtering harmful content through algorithms, swiftly removing dangerous material, and providing children with an easy way to report inappropriate content. Sites and apps covered by the code must adhere to these changes by July 25th or face fines for non-compliance.
Critics, including the Molly Rose Foundation and online safety campaigner Beeban Kidron, argue that the measures are too cautious and lack specific harm reduction targets. However, Ofcom defended its stance, stating that the rules aim to create a safer online environment for children in the UK.
The Duke and Duchess of Sussex have also advocated for stricter online protections for children, calling for measures to reduce harmful content on social media platforms. Technology Secretary Peter Kyle is considering implementing a social media curfew for children to address the negative impacts of excessive screen time.
Overall, the new code of practice aims to protect children from harmful online content, with stringent measures in place for platforms to ensure a safer online experience. Failure to comply with these regulations could result in significant fines or even legal action against high-tech companies and their executives.
The platform’s CEO says parents who are concerned about their children using Roblox should not let them use it.
Reports of bullying and grooming have surfaced on the site, which is the most popular platform among UK gamers aged 8 to 12, raising fears of exposure to explicit or harmful content.
David Baszucki, co-founder and CEO of Roblox, told BBC News that the platform is committed to safeguarding users and that millions have had positive experiences on the site.
However, he emphasized the importance of parental comfort and empowerment in making decisions regarding their children’s use of Roblox, mentioning the platform’s vigilance against negative behaviors and its collaboration with law enforcement when necessary.
Justine Roberts from Mumsnet highlighted the challenge parents face in monitoring their children’s online activities, especially with multiple children, noting that managing their children’s Roblox use is a common struggle among forum users.
Roblox, a US-based company, boasts a large user base, surpassing the Nintendo Switch and Sony PlayStation combined, with over 80 million daily players in 2024, 40% of whom are under 13 years old.
The platform enforces consequences for misbehavior, utilizes advanced AI systems to detect problematic behaviors, and limits certain features for younger users to enhance safety.
Baszucki emphasized a zero-tolerance policy towards inappropriate content and shared that Roblox follows strict age-rating guidelines based on content and game titles.
Baszucki and Cassel founded Roblox in 2004, initially opening it to the public in 2006 after realizing its potential beyond educational use.
As the platform’s popularity grew, safety measures were introduced. A significant turning point came with the launch of the digital currency Robux, which helped propel Roblox to a $41 billion valuation.
Robux is used by players to acquire items and unlock content, with content creators earning a percentage of the fees and pricing adapting dynamically based on popularity.
Baszucki envisions Roblox as the future of communication, focusing on creating metaverse-style experiences where users interact through avatars in a virtual world, aiming to engage 10% of global gamers.
The United Kingdom is set to become the first country to introduce laws targeting the use of AI tools to create child sexual abuse material, in what has been described as a landmark moment for enforcement against this technology.
It is now illegal to possess, create, or distribute AI tools specifically designed to generate sexual abuse materials involving children, addressing a significant legal loophole that has been a major concern for law enforcement and online safety advocates. Violators can face up to five years in prison.
There is also a ban on providing manuals that instruct potential criminals on how to produce abusive images using AI tools. The distribution of such material can result in a prison sentence of up to three years for offenders.
Additionally, a new offence will make it illegal to run websites on which criminals share abusive images or advice with one another. Border Force officers will be granted expanded powers to compel individuals suspected of posing a sexual risk to children to unlock and submit their digital devices for inspection.
The use of AI tools in creating images of child sexual abuse has increased significantly, with a reported four-fold increase over the previous year. According to the Internet Watch Foundation (IWF), there were 245 instances of AI-generated child sexual abuse images in 2024, compared to just 51 the year before.
These AI tools are being used in various ways by perpetrators seeking to exploit children, such as modifying a real child’s image to appear nude or superimposing a child’s face onto existing abusive images. The voices of real victims are also incorporated into this manipulated material.
The newly generated images are often used to threaten children and coerce them into more abusive situations, including live-streamed abuse. These AI tools also serve to conceal perpetrators’ identities, groom victims, and facilitate further abuse.
Senior police officials have noted that individuals viewing such AI-generated images are more likely to engage in direct abuse of children, raising fears that the normalization of child sexual abuse may be accelerated by the use of these images.
A new law, part of upcoming crime and policing legislation, is being proposed to address these concerns.
Technology Secretary Peter Kyle emphasized that the country cannot afford to lag behind in addressing the potential misuse of AI technology.
He stated in an Observer article that while the UK aims to be a global leader in AI, the safety of children must take precedence.
Concerns have been raised about the impact of AI-generated content, with calls for stronger regulations to prevent the creation and distribution of harmful images.
Experts are urging enhanced measures to tackle the misuse of AI technology while acknowledging its potential benefits. Derek Ray-Hill, interim chief executive of the IWF, highlighted the need to balance innovation with safeguarding against abuse.
Rani Govender, a policy manager at NSPCC’s Child Safety Online, emphasized the importance of preventing the creation of harmful AI-generated images to protect children from exploitation.
In order to achieve this goal, stringent regulations and thorough risk assessments by tech companies are essential to ensure children’s safety and prevent the proliferation of abusive content.
Fundamental biological reality means that a birth mother can be certain that she is genetically related to her child (aside from the case of surrogacy or egg donor IVF).
On the other hand, paternity cannot be accurately known without genetic testing. This can lead to false paternity attribution, where a man unknowingly raises a child not genetically related to himself, or fraud regarding paternity if the man is deceived into such a situation.
In some cases, relationship conflicts may prompt men to have their children undergo genetic testing. Additionally, with the rise of consumer genetic testing for ancestry and health, more men are now discovering such information incidentally.
However, the Australian academic Professor Michael Gilding argued that earlier data was biased because it came only from men who already doubted their paternity. He suggested a more realistic figure of about 3%, based on data from genetic and medical studies.
It is difficult to accurately measure the proportion of children who are not biologically related to their fathers – Credit: Maskot
Recent data from a US study published in 2022 found that 7% of users discovered they had paternity inaccuracies.
Similarly, a genetic sampling study in the Netherlands in 2017 estimated that just under 1% of fathers were unknowingly genetically unrelated to their children. A recent Swedish study with over 2 million families suggested that this number is around 1.7% and decreasing.
While these recent numbers are lower than earlier claims, they still indicate a significant impact on some men and children.
This article addresses the question, “How many fathers are unknowingly raising children who are not biologically theirs?” (submitted via email by Dave Shaw).
To submit your questions, please email questions@sciencefocus.com or contact us via our Facebook, @sciencefocus, or Instagram pages (remember to include your name and location).
For more fascinating science content, visit our Ultimate Fun Facts page.
Have you ever glanced around the dinner table and pondered about your parents’ favorite among your siblings? If you’re the youngest, you might want to look away.
A recent meta-analysis published in Psychological Bulletin reveals that eldest daughters tend to receive preferential treatment from their parents.
Researchers examined 30 peer-reviewed journal articles and 14 databases, involving 19,469 participants, to explore how birth order, gender, temperament, and personality traits impact parental favoritism. The study showed that both mothers and fathers more often favored their daughters as compared to sons.
In terms of birth order, older siblings tended to receive more autonomy, which was viewed as preferential treatment. This favoritism was also evident in the amount of money spent on children and the level of control exerted by parents.
Children who exhibited responsible and organized traits were also more likely to be favored by their parents, indicating that parents may find them easier to manage and respond positively to.
Both mothers and fathers were more likely to favor their daughters over their sons. – Photo credit: Getty
“Parental differential treatment can have long-lasting effects on children,” stated lead author Dr. Alexander Jensen, an Associate Professor at Brigham Young University, USA.
“This study sheds light on which children are more susceptible to the impacts of favoritism, whether positive or negative.”
Jensen and his team also discovered that siblings receiving less favorable treatment often had poorer mental health and strained family relationships.
“It’s worth noting that this study is correlational and doesn’t explain why parents favor certain children,” Jensen added. “However, it does highlight areas where parents may need to be more mindful of their interactions with their children.”
TikTok has been aware for a long time that its video livestream feature was being misused to harm children, as revealed in a lawsuit filed by the state of Utah against the social media company. The harms include child sexual exploitation and what Utah describes as an “open door policy that allows predators and criminals to exploit users.”
The state’s attorney general said an internal TikTok investigation found that adults were allegedly using the TikTok Live feature to solicit provocative behavior from teenagers, some of whom were paid for it. Another internal investigation found that criminals used TikTok Live to launder money, sell drugs, and fund terrorist groups.
Utah was the first to file a lawsuit against TikTok last June, alleging that the company was profiting from child exploitation. The lawsuit was based on internal documents obtained through subpoenas from TikTok. On Friday, an unredacted version of the lawsuit was released by the Utah Attorney General’s Office, despite TikTok’s efforts to keep the information confidential.
“Online exploitation of minors is on the rise, leading to tragic consequences such as depression, isolation, suicide, addiction, and human trafficking,” said Utah Attorney General Sean Reyes in a statement on Friday. He criticized TikTok for knowingly putting minors at risk for profit.
A spokesperson for TikTok responded to the Utah lawsuit by stating that the company has taken proactive steps to address safety concerns. The spokesperson mentioned that users must be 18 or older to use the Live feature and that TikTok provides safety tools for users.
The lawsuit against TikTok is part of a trend of US attorneys general filing lawsuits over child exploitation on various apps. In December 2023, New Mexico sued Meta for similar reasons. Other states have also filed lawsuits against TikTok over similar allegations.
Following a report by Forbes in 2022, TikTok launched an internal investigation called Project Meramec to look into teens making money from TikTok Lives. The investigation found that underage users were engaging in inappropriate behavior for digital currency.
The complaint also mentions that TikTok takes a share of digital gifts from live streams, with the lawsuit arguing that the algorithm encourages streams with sexual content because they are more profitable. Another internal investigation, called Project Jupiter, looked into organized crime using Live for money laundering.
A senior police official has issued a warning that pedophiles, fraudsters, hackers, and criminals are now utilizing artificial intelligence (AI) to target victims in increasingly harmful ways.
According to Alex Murray, the national policing lead for AI, criminals are taking advantage of the expanding accessibility of AI technology, necessitating swift action by law enforcement to combat these new threats.
Murray stated, “Throughout the history of policing, criminals have shown ingenuity and will leverage any available resource to commit crimes. They are now using AI to facilitate criminal activities.”
He further emphasized that AI is being used for criminal activities on both a global organized crime level and on an individual level, demonstrating the versatility of this technology in facilitating crime.
During the recent National Police Chiefs’ Council meeting in London, Mr. Murray highlighted a new AI-driven fraud scheme where deepfake technology was utilized to impersonate company executives and deceive colleagues into transferring significant sums of money.
Instances of similar fraudulent activities have been reported globally, with concern growing over the increasing sophistication of AI-enabled crimes.
The use of AI by criminals extends beyond fraud, with pedophiles using generative AI to produce illicit images and videos depicting child sexual abuse, a distressing trend that law enforcement agencies are working diligently to combat.
Additionally, hackers are employing AI to identify vulnerabilities in digital systems and to gain insights for cyberattacks, highlighting the wide range of threats posed by the criminal use of AI technology.
Furthermore, concerns have been raised regarding the radicalization potential of AI-powered chatbots, with evidence suggesting that these bots could be used to encourage individuals to engage in criminal activities including terrorism.
As AI technologies continue to advance and become more accessible, law enforcement agencies must adapt rapidly to confront the evolving landscape of AI-enabled crimes and prevent a surge in criminal activities using AI by the year 2029.
Child safety experts have claimed that Apple lacks effective monitoring and scanning protocols for child sexual abuse materials on its platforms, posing concerns about addressing the increasing amount of such content associated with artificial intelligence.
The National Society for the Prevention of Cruelty to Children (NSPCC) in the UK has criticized Apple for underestimating the prevalence of child sexual abuse material (CSAM) on its products. Data obtained by the NSPCC from police forces shows that offenders in England and Wales used Apple’s iCloud, iMessage, and FaceTime to store and share CSAM in more cases than Apple reported across all other countries combined.
Based on information collected through a Freedom of Information request and shared exclusively with The Guardian, child protection organizations discovered that Apple was linked to 337 cases of child abuse imagery offenses recorded in England and Wales between April 2022 and March 2023. In 2023, Apple reported only 267 suspected instances of child abuse imagery globally to the National Centre for Missing and Exploited Children (NCMEC), contrasting with much higher numbers reported by other leading tech companies, with Google submitting over 1.47 million and Meta reporting more than 30.6 million, as per NCMEC reports mentioned in the Annual Report.
All US-based technology companies are mandated to report any detected cases of CSAM on their platforms to the NCMEC. Apple’s iMessage service is encrypted, preventing Apple from viewing user messages, similar to Meta’s WhatsApp, which reported about 1.4 million suspected CSAM cases to the NCMEC in 2023.
Richard Collard, head of child safety online policy at NSPCC, expressed concern over Apple’s discrepancy in handling child abuse images and urged the company to prioritize safety and comply with online safety legislation in the UK.
Apple declined to comment but referenced a statement from August where it decided against implementing a program to scan iCloud photos for CSAM, citing user privacy and security as top priorities.
In late 2022, Apple abandoned plans for an iCloud photo scanning tool called Neural Match, which would have compared uploaded images to a database of known child abuse images. This decision faced opposition from digital rights groups and child safety advocates.
Experts are worried about Apple’s AI system, Apple Intelligence, introduced in June, especially as AI-generated child abuse content poses risks to children and law enforcement’s ability to protect them.
Child safety advocates are concerned about the increase in AI-generated CSAM reports and the potential harm caused by such images to survivors and victims of child abuse.
Sarah Gardner, CEO of Heat Initiative, criticized Apple’s insufficient efforts in detecting CSAM and urged the company to enhance its safety measures.
Child safety experts worry about the implications of Apple’s AI technology on the safety of children and the prevalence of CSAM online.
“It is well known that the best way to prevent catching a cold is to stay in shape,” writes Mariam Amankerdievna Sidikova in the journal Medical Practice and Nursing. Lest parents overdo it, she warns that only healthy children “can get stronger with hydrotherapy.”
While exercising may be your best bet, it’s not your only cold prevention strategy. Sidikova, a researcher at Samarkand State Medical University in Uzbekistan, also recommends scrubbing. “Scrubbing should be done year-round,” she says. If done correctly, “scrubbing should begin with the arms, then the legs, chest, abdomen, and back.”
The hardening doesn’t have to be water-based: Sidikova also approves of air hardening. “Air hardening is a gentler factor and is allowed for children in any state of health,” she writes.
Sunbathing is another option, but hardening by sunlight can be problematic. “Sunbathing is only possible with the doctor’s permission,” says Sidikova.
We all know that
If you’re a good speed reader, it’s easy to keep up with all that’s known — just read the thousands of new research papers published every week — but not everyone is good at speed reading.
As a service to slow readers, Feedback aims to summarize some things that are officially well known, as evidenced by the scientific literature (see above), each of which is documented with a sentence beginning “It is well known that…”.
Here are some well-known examples:
Forgetful functors are well known. Cary Malkiewich and Maru Sarazola, writing in a preprint, state: “It is well known that the stable model structure on symmetric spectra cannot be transferred from the stable model structure on spectra via the forgetful functor.”
It’s notoriously complicated. Frank Nielsen, writing in the journal Entropy, notes that “it is well known that the skewed Bhattacharyya distance between probability densities of an exponential family corresponds to a skewed Jensen divergence induced by the cumulant function between the corresponding natural parameters, and that in the limiting cases the sided Kullback-Leibler divergences correspond to reverse-sided Bregman divergences.”
Heinz Kohut’s work on narcissism is well known. A paper in the journal Psychoanalysis, Self and Context reminds us that “it is well known that Heinz Kohut’s work on narcissism led to a reevaluation of patients’ healthy self-esteem.”
Ronald Fagin and Joseph Halpern, writing about a new approach to updating beliefs, note that “it is well known that conditional probability functions are probability functions.”
And Luca Di Luzio, Admir Greljo and Marco Nardecchia, writing in Physical Review D, assure us that “it is well known that massive vectors crave an ultraviolet (UV) completion.”
How many of these well-known things are known to most people? The answer to that question is unknown. If you know of any well-known things that are less well-known but should be brought to our attention, please submit them (along with documentation) to Well-known things, c/o Feedback.
Fascism Disease
Reader Jennifer Skillen tells Feedback about her mother-son shared reading sessions, which began several years ago with The Very Hungry Caterpillar and now embrace New Scientist, along with other, more mature content.
“The other day, I started reading the cancer section of ‘How Do You Think About…?’ [New Scientist, 25 May, page 42], and my son said, ‘Mom, why don’t you just read it and replace the word cancer with the word fascist?’ And I did, because I was fine with anything that interested my son,” Jennifer says.
“To my surprise, the article was still very readable even with the substitutions. It made sense, but was very entertaining. It seems that both cancer cells and fascist cells can respond to changes in their environment and divide rapidly.”
Feedback agrees, and offers some excerpts from the article so readers can judge for themselves: “Cancer cells compete for nutrients and only the fittest survive…Cancer cells have evolved to be the best cancer cells possible, which is usually bad news.”
Jennifer and her son wonder whether readers have spotted other word-pair substitutions in New Scientist articles that “add meaning, increase knowledge, and make things more interesting.”
terrible
The question “what’s inside?” has generated many surprises, sometimes involving eels. Rohit Goel and his colleagues from Pondicherry Medical School in India have uncovered one such surprise.
Writing in the American Journal of Forensic Medicine and Pathology, the researchers describe an unusual case: an interesting post-mortem finding of a moray eel inside a corpse.
The research team said that to their knowledge, “this is the first time such a discovery has been reported.”
Marc Abrahams is the founder of the Ig Nobel Prize ceremony and co-founder of the journal Annals of Improbable Research. He previously worked on unusual uses of computers. His website is improbable.com.
Do you have a story for feedback?
You can submit articles for Feedback by emailing feedback@newscientist.com. Please include your home address. This week’s and past Feedback can be found on our website.
Child sexual exploitation is increasing online, with artificial intelligence generating new forms such as images and videos related to child sexual abuse.
Reports of online child abuse to NCMEC increased by more than 12% from the previous year to over 36.2 million in 2023, as announced in the organization’s annual CyberTipline report. Most reports were related to the distribution of child sexual abuse material (CSAM), including photos and videos. Online criminals are also enticing children to send nude images and videos for financial gain, with increased reports of blackmail and extortion.
NCMEC has reported instances where children and families have been targeted for financial gain through blackmail using AI-generated CSAM.
The center has received 4,700 reports of child sexual exploitation images and videos created by generative AI, although tracking in this category only began in 2023, according to a spokesperson.
NCMEC is alarmed by the growing trend of malicious actors using artificial intelligence to produce deepfaked sexually explicit images and videos based on real children’s photos, stating that it is devastating for the victims and their families.
The group emphasizes that AI-generated child abuse content hinders the identification of actual child victims and is illegal in the United States, where production of such material is a federal crime.
In 2023, CyberTipline received over 35.9 million reports of suspected CSAM incidents, with most uploads originating outside the US. There was also a significant rise in online solicitation reports and exploitation cases involving communication with children for sexual purposes or abduction.
Top platforms for cybertips included Facebook, Instagram, WhatsApp, Google, Snapchat, TikTok, and Twitter.
Out of 1,600 global companies registered for the CyberTip Reporting Program, 245 submitted reports to NCMEC, including US-based internet service providers required by law to report CSAM incidents to CyberTipline.
NCMEC highlights the importance of quality reports, as some automated reports may not be actionable without human involvement, potentially hindering law enforcement in detecting child abuse cases.
NCMEC’s report stresses the need for continued action by Congress and the tech community to address reporting issues.
On April 6, Maryland passed the first “Kids Code” bill in the US. The bill is designed to protect children from predatory data collection and harmful design features by tech companies. Vermont’s final public hearing on the Kids Code bill took place on April 11. This bill is part of a series of proposals to address the lack of federal regulations protecting minors online, making state legislatures a battleground. Some Silicon Valley tech companies are concerned that these restrictions could impact business and free speech.
These measures, known as the Age-Appropriate Design Code or Kids Code bill, require enhanced data protection for underage online users and a complete ban on social media for certain age groups. The bill unanimously passed both the Maryland House and Senate.
Nine states, including Maryland, Vermont, Minnesota, Hawaii, Illinois, South Carolina, New Mexico, and Nevada, have introduced bills to improve online safety for children. Minnesota’s bill advanced through a House committee in February.
During public hearings, lawmakers in various states accused tech company lobbyists of deception. Maryland’s bill faced opposition from tech companies who spent $250,000 lobbying against it without success.
Carl Szabo, from the tech industry group NetChoice, testified before the Maryland state Senate as a concerned parent. Lawmakers questioned his ties to the industry during the hearing.
Tech giants have been lobbying in multiple states to pass online safety laws. In Maryland, these companies spent over $243,000 in lobbying fees in 2023. Google, Amazon, and Apple were among the top spenders according to state disclosures.
The bill mandates tech companies to implement measures safeguarding children’s online experiences and assess the privacy implications of their data practices. Companies must also provide clear privacy settings and tools to help children and parents navigate online privacy rights and concerns.
Critics are concerned that the methods used by tech companies to determine children’s ages could lead to privacy violations.
Supporters counter that social media companies already hold users’ age information and should not need to require identification uploads. NetChoice suggests digital literacy education and safety measures as alternatives.
During a discussion on child safety legislation, a NetChoice director emphasized parental control over regulation, citing low adoption rates of parental monitoring tools on platforms like Snapchat and Discord.
NetChoice has proposed bipartisan legislation to enhance child safety online, emphasizing police resources for combating child exploitation. Critics argue that tech companies should be more proactive in ensuring child safety instead of relying solely on parents and children.
Opposition from tech companies has been significant in all state bills, with representatives accused of hiding their affiliations during public hearings on child safety legislation.
State bills are being revised based on lessons learned from California, where similar legislation faced legal challenges and opposition from companies like NetChoice. While some tech companies emphasize parental control and education, critics argue for more accountability from these companies in ensuring child safety online.
Recent scrutiny of Meta products for their negative impact on children’s well-being has raised concerns about the company’s role in online safety. Some industry experts believe that tech companies like Meta should be more transparent and proactive in protecting children online.
Louise* thought she had been honest with her two children about the risks of the internet. However, last year, at 6 a.m., the police knocked on her door looking for her 17-year-old son.
“Five or six police officers came up my stairs,” she recalled. When they told her they were looking for her son in connection with indecent images, she felt like she was going to pass out.
“I said, ‘Oh my god, he’s autistic. Has he been taught?’ They confiscated all his devices and took him away. I was so stunned that I almost vomited after they left.”
Louise’s son is just one of many under-18s accused by law enforcement of viewing or sharing indecent images of children in the past year.
A study published in February suggests that some individuals who consume child sexual abuse material (CSAM) had become desensitized to adult pornography and gone in search of more extreme or violent content.
In December, an investigation by The Guardian revealed that in certain areas, the majority of individuals identified by authorities as viewing or sharing indecent images of children were under 18.
Experts argue that this is part of a larger crisis caused by predators grooming children through chat apps and social media platforms.
In January, the Internet Watch Foundation cautioned that over 90% of child abuse images online are self-produced, meaning they are generated and distributed by children themselves.
Louise believes her son’s natural teenage curiosity about pornography steered him down a dangerous path of interacting with strangers and sharing explicit images. Alex* was convicted of viewing and distributing a small number of child abuse images, some falling under category A (rape and abuse of young children), as well as categories B and C.
Louise acknowledges that her son, who received an 18-month community sentence and is now on the sex offenders register for five years, committed a serious offense and must face the consequences. But she also wants other parents to understand the sequence of events.
“It all began with an obsession common among many young people with autism,” she explained. “He adored manga and anime. I can’t even count how many miles he traveled to buy manga for himself.
“This interest led him from innocent cartoons to sexualized images, eventually leading him to join a group where teenagers exchange pornography.”
Alex has since admitted to his mother that he had an interest in pornography and was part of online groups with names like “Sex Images 13 to 17.” “What teenager isn’t curious?” Louise pondered.
It was on these popular sites and chat apps that adults were waiting to exploit vulnerable young individuals like him.
“He was bombarded with messages,” Louise shared. “Literally thousands of messages from individuals attempting to manipulate him. This boy has struggled for years to fit in as an autistic kid at school. He’s been a victim of bullying. And all of a sudden, he felt accepted. He felt a sense of excitement.
“Adults coerced him into sharing images of abuse. If he hadn’t been caught, who knows where it could have led?”
Louise questioned Alex why he didn’t show the images he received to an adult.
“I even asked him, ‘Why didn’t you tell me as soon as you saw the images?’ And he replied, ‘Mom, do you know how difficult that is?’” By then he had spent months online in these spaces. His actual words when the police arrived were, “Oh, thank God.” It was a relief to him.
She said that lockdown had shifted the dynamics for young people like her son, with their lives increasingly reliant on the internet. “They were instructed, ‘Just go online and do everything there.’”
Both Alex and his mother are receiving assistance from the Lucy Faithfull Foundation, a charity that works with online sex offenders. Last year, 217,889 people concerned about their own or someone else’s sexual thoughts or actions contacted the charity for help.
The organization recently launched a website called Shore, aimed at young people anxious about their own sexual thoughts and behaviors. Following the lifting of lockdown restrictions, calls to its support hotlines concerning under-18s rose by 32%.
Alex also reflected on the precarious position he found himself in. “I was in my final year of sixth form, at home while my friends were heading off to university, so I felt anxious and fearful about our friendship drifting apart.
“That’s when I made the fateful decision to use multiple chat platforms to try to build friendships. Although I had no intention of anything sexual, natural sexual interest and inexperience, combined with the powerful effect of anonymity, made it very easy to become involved in these things.”
He cautions that his generation’s utilization of the online realm demands novel approaches to safeguard children better.
“This issue cannot be resolved by simply advising against talking to strangers on the internet. That information is outdated,” he remarked.
“Many people believe that this content can only be found on the dark web, when in fact it can be found in the shallowest parts of the internet without any effort. That might have scared me off if I had known, but unfortunately I was in too deep and it was too late.”
*Name has been changed
If you have concerns about images your child may have shared, you can report them through Report Remove, the joint Childline and Internet Watch Foundation service. You can also report images of child sexual abuse from the same website. If you are concerned about the sexual behavior of young people, please visit shorespace.org.uk
As the historic legislation gathered enough votes to pass the U.S. Senate, divisions among online child safety advocates have emerged. Some former opponents of the bill have been swayed by amendments and now lend their support. However, its staunchest critics are demanding further changes.
The Kids Online Safety Act (Kosa), introduced over two years ago, garnered 60 supporters in the Senate by mid-February. Despite this, numerous human rights groups continue to vehemently oppose the bill, highlighting the ongoing discord among experts, legislators, and activists over how to ensure the safety of young people in the digital realm.
“The Kids Online Safety Act presents our best chance to tackle the harmful business model of social media, which has resulted in the loss of far too many young lives and contributed to a mental health crisis,” stated Josh Golin, executive director of Fairplay, a children’s online safety organization.
Critics argue that the amendments made to the bill do not sufficiently address their concerns. “A one-size-fits-all approach to child safety will not keep children safe,” said Aliya Bhatia, a policy analyst at the Center for Democracy and Technology. “This bill operates on the assumption that there is a consensus about which types of content and design features are harmful, and that consensus does not exist. Such an assumption hampers young people’s ability to engage freely online and cuts them off from the communities they need.”
What is the Kids Online Safety Act?
The Kosa bill, spearheaded by Connecticut Democrat Richard Blumenthal and Tennessee Republican Marsha Blackburn, represents a monumental shift in U.S. tech legislation. It would require platforms such as Instagram and TikTok to mitigate online risks through changes to their designs and by letting users opt out of algorithm-based recommendations. Enforcement would demand more far-reaching changes to social networks than current regulations do.
Initially introduced in 2022, the bill drew an open letter of opposition signed by more than 90 human rights organizations. The coalition argued that the bill could enable conservative state attorneys general, who would help decide what content is deemed harmful, to restrict online resources and information for LGBTQ+ youth and people seeking reproductive health care. They cautioned that the bill could be exploited for censorship.
The EU is launching an investigation into whether TikTok has violated online content regulations, particularly those relating to the safety of children.
The European Commission has formally opened proceedings against the Chinese-owned short-video platform for potential violations of the Digital Services Act (DSA).
The investigation is focusing on areas such as safeguarding minors, keeping records of advertising content, and determining if algorithms are leading users to harmful content.
Thierry Breton, EU Commissioner for the Internal Market, stated that child safety is the “primary enforcement priority” under the DSA. The investigation particularly focuses on age verification and default privacy settings for children’s accounts.
In April last year, TikTok was fined €345 million in Ireland for violating EU data law in its handling of children’s accounts. Additionally, the UK Information Commissioner fined the company £12.7 million for unlawfully processing data from children under 13.
Companies that violate the DSA can face fines of up to 6% of their global turnover. TikTok is owned by Chinese technology company ByteDance.
TikTok has stated that it is committed to working with experts and the industry to ensure the safety of young people on its platform and is eager to brief the European Commission on its efforts.
The commission is also examining alleged deficiencies in TikTok’s provision of publicly available data to researchers and its compliance with requirements to establish a database of ads shown on the platform.
A deadline for the investigation has not been set and will depend on factors such as the complexity of the case and the degree of cooperation from the companies being investigated.
The TikTok investigation is the second formal proceeding under the DSA, following one opened in December 2023 into Elon Musk’s social media platform X, previously known as Twitter. The case against X focuses on failures to block illegal content and inadequate measures against disinformation.
Apple is reportedly facing a substantial fine from the EU for its conduct in the music streaming market. The European Commission has been investigating whether the US tech company prevented music streaming services from informing users about cheaper subscription options available outside its App Store.
According to the Financial Times, Brussels plans to fine Apple 500 million euros, a significant decision that follows years of complaints from companies offering services through iPhone apps.
Apple was previously fined 1.1 billion euros by France in 2020 for anti-competitive agreements with two wholesalers, a fine that was later reduced by an appeals court.
Big technology companies like Apple and Google have come under increased scrutiny due to competitive concerns. Google is appealing against fines of more than 8 billion euros imposed by the EU in three separate competition investigations.
Apple has successfully defended against a lawsuit by Fortnite developer Epic Games alleging that its app store was an illegal monopoly. In December, Epic won a similar lawsuit against Google.
Last month, Apple announced that it would allow EU customers to download apps without using its own app store, in response to the EU’s digital market law.
According to a whistleblower, Mark Zuckerberg’s Meta has not done enough to protect children in the wake of Molly Russell’s death. The whistleblower claims the social media company already knows the risks its platforms pose to teenagers and that Zuckerberg has the infrastructure to protect against such content, but has failed to use it.
Arturo Bejar, a former senior engineer at Meta, the owner of Instagram and Facebook, voiced his concern that the company had not learned from Molly’s death and could have provided a safer experience for young users. Bejar’s survey of Instagram users revealed that 8.4% of 13- to 15-year-olds had seen someone harm themselves or threaten to harm themselves within the past week.
Bejar stressed that if the company had taken the right steps after Molly Russell’s death, the number of people encountering self-harm content would have been significantly lower. Russell took her own life after viewing harmful content related to suicide, self-harm, depression, and anxiety on Instagram and Pinterest, and her death prompted the whistleblower’s concerns. Bejar believes the company could have made Instagram safer for teens but chose not to make the necessary changes.
The former Meta employee has also asked the company to set goals for reducing harmful content and to create sustained incentives to work on these issues. Meanwhile, Bejar has met with British politicians, regulators, and activists, including Ian Russell, Molly’s father.
Bejar has suggested a series of changes for Meta, including making it easier for users to flag unwanted content, surveying users’ experiences regularly, and facilitating the reporting of negative experiences with Meta’s services.
The Guardian has revealed that social media companies relying on artificial intelligence software to moderate their platforms are generating unactionable reports of child sexual abuse cases, preventing U.S. police from seeing potential leads and delaying investigations into suspected predators.
By law, U.S.-based social media companies are required to report child sexual abuse content detected on their platforms to the National Center for Missing and Exploited Children (NCMEC), which serves as a national clearinghouse for child abuse information and forwards reports to relevant law enforcement agencies in the United States and around the world. NCMEC said it received more than 32 million reports of suspected child sexual exploitation, comprising approximately 88 million images, videos, and other files, from companies and the general public in 2022.
Meta is the largest source of these reports, with more than 27 million (84%) generated across its Facebook, Instagram, and WhatsApp platforms in 2022. NCMEC is partially funded by the Department of Justice and also receives corporate and private donations.
Social media companies, including Meta, use AI to detect and report suspicious content on their sites and employ human moderators to review some flagged content before it is sent to law enforcement. However, U.S. law enforcement agencies can only open AI-generated reports of child sexual abuse material (CSAM) that no employee has viewed by first serving a search warrant on the company that filed the report, which can add days or even weeks to an investigation.
“If a company reports a file to NCMEC and does not indicate that it viewed the file before reporting, we will not be able to open the file,” said Staca Shehan, vice president of analytical services at NCMEC.
Because of Fourth Amendment privacy protections, neither law enforcement officials nor the federally funded NCMEC can open the contents of a report without a search warrant unless a representative of the social media company has first viewed the material.
Since NCMEC staff and law enforcement agencies cannot legally view AI-flagged content that no human at the company has seen, investigations into suspected predators can stall for weeks, during which leads and evidence may be lost.
“Any delay [in viewing the evidence] is detrimental to community safety; the longer criminals go undetected, the greater the harm,” said an assistant U.S. attorney in California, who spoke on condition of anonymity. “They are a danger to all children.”
In December, the New Mexico Attorney General’s Office filed a lawsuit against Meta, alleging that its social networks have become a marketplace for child predators and that Meta has repeatedly failed to report illegal activity on its platforms. In response, Meta said that combating child sexual abuse content was a priority.
The state attorney general laid the blame for the failure to send actionable information at Meta’s feet. “Reports showing the inefficiency of the company’s AI-generated cyber tips prove what we said in our complaint,” Raul Torrez said in a statement to the Guardian.
“It is long past time for the company to implement the algorithmic, staffing, and policy changes needed to ensure the safety of children, keep parents informed, and enable law enforcement to effectively investigate and prosecute online sex crimes against children,” Torrez added.
Despite the legal limitations on AI-based moderation, social media companies are likely to expand its use in the near future. In 2023, OpenAI, the developer of ChatGPT, claimed that large language models can do the job of human content moderators with roughly the same accuracy.
However, child safety experts say the AI software that social media companies use to moderate content is effective only at identifying known child sexual abuse images, those whose digital fingerprints, known as hashes, have already been catalogued. Lawyers interviewed said the AI is ineffective against newly created images or known images and videos that have been altered.
“There is always concern about cases involving newly identified victims, because the material is new and does not have a hash value,” said Kristina Korobov, a senior attorney at the Zero Abuse Project, a nonprofit organization focused on combating child abuse. “If humans were doing the work, there would be more discoveries of newly identified victims.”