Roblox Prohibits Children from Communicating with Unknown Adults Following Legal Actions

The online gaming platform Roblox will begin restricting chat between children and unknown adults and older teenagers next month. The decision follows a new lawsuit alleging that predators have exploited the platform to groom children as young as seven.

Roblox, known for popular games like “Grow a Garden” and “Steal a Brainrot,” boasts 150 million daily players. However, it now faces legal action claiming that its system design facilitates the predation of minors.

Beginning next month, a facial age estimation feature will be implemented, allowing children to communicate with strangers only if they are within a certain age range.


Roblox claims it will be the first gaming or communication platform to enforce age verification for chats. Similar measures were enacted in the UK this summer for adult sites, ensuring that under-18s cannot access explicit content.

The company likened its new approach to the age structures found in schools, differentiating elementary, middle, and high school levels. The initiative will be launched first in Australia, New Zealand, and the Netherlands, where children will be prohibited from having private conversations with unknown adults starting next month, with a global rollout planned for early January.

Users will be classified into categories: under 9, 9-12, 13-15, 16-17, 18-20, or 21 and older. Children will only be allowed to chat with peers in their age group or a similar age range. For instance, a child whose age is estimated at 12 can only interact with users under 16. Roblox stated that any images or videos used during the age verification process will not be stored.
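In effect, the announced rule is a band-matching check: users are bucketed into the six bands above, and a child can chat only with users in the same or an adjacent band. The following is a minimal sketch of that logic, assuming an "adjacent band" interpretation consistent with Roblox's 12-year-old example; the function names and structure are illustrative, not Roblox's implementation.

```python
# Sketch of the announced age-band chat rule. Band boundaries come from
# Roblox's announcement; the adjacency rule and all names here are
# illustrative assumptions, not Roblox code.

BANDS = [(0, 8), (9, 12), (13, 15), (16, 17), (18, 20), (21, 200)]

def band_index(age: int) -> int:
    """Return the index of the age band containing `age`."""
    for i, (lo, hi) in enumerate(BANDS):
        if lo <= age <= hi:
            return i
    raise ValueError(f"age out of range: {age}")

def can_chat(age_a: int, age_b: int) -> bool:
    """Allow chat only between users in the same or an adjacent age band."""
    return abs(band_index(age_a) - band_index(age_b)) <= 1

assert can_chat(12, 15)        # a 12-year-old (9-12 band) may reach 13-15
assert not can_chat(12, 16)    # ...but not 16-17 or above
assert not can_chat(8, 13)     # an under-9 may not reach 13-15
```

Under this reading, an adult estimated at 21 or older could reach no one younger than 18.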

“We view this as a means to enhance user confidence in their conversations within the game,” stated Matt Kaufman, Roblox’s chief safety officer. “We see it as a genuine chance to foster trust in our platform and among our community.”

This lawsuit arrives amid growing concern among family attorneys about the “systematic predation of minors” on Roblox. Florida attorney Matt Dolman said he has filed 28 lawsuits against the platform, which expanded rapidly during the pandemic, asserting that “the primary allegations pertain to the systematic exploitation of minors.”

One of the more recent lawsuits, filed in U.S. District Court in Nevada, involves the family of a 13-year-old girl who claims that Roblox conducted its operations “recklessly and deceptively,” facilitating her sexual exploitation.


The alleged perpetrator, described as a “dangerous child predator,” posed as a child, built an emotional connection with the girl, and manipulated her into handing over her phone number and engaging in graphic exchanges, eventually coercing her into sending explicit photos and videos.

The lawsuit claims that had Roblox implemented user screening measures prior to allowing access, the girl “would not have encountered the numerous predators that litter the platform,” and if age and identity checks had been conducted, the abuse could have been prevented.

Other recent cases in the Northern District of California include a 7-year-old girl from Philadelphia and a 12-year-old girl from Texas, both of whom were reportedly groomed and sent explicit materials by predators on Roblox.

“We are profoundly concerned about any situation that places our users at risk,” a Roblox spokesperson remarked. “The safety of our community is our highest priority.”

“This is why our policies are intentionally more stringent than those on many other platforms,” they added. “We have filters aimed at protecting younger users, prohibit image sharing, and restrict sharing personal information.

“While no system is flawless, we are continually striving to enhance our safety features and platform restrictions, having launched 145 new initiatives this year to assure parents that we prioritize their children’s safety online.”

“One platform’s safety standards alone aren’t sufficient; we genuinely hope others in the industry will adopt some of the practices we’re implementing to ensure robust protections for children and teens across the board,” Kaufman commented.

Beeban Kidron, the UK-based founder of the children’s digital rights charity 5Rights Foundation, stated: “It’s imperative for game companies to prioritize their responsibility toward children within their services.

“Roblox’s announcement asserts that their forthcoming measures will represent best practices in this sector, but it is a bold statement from a company that has historically been slow to tackle predatory behavior and granted unverified adults and older children easy access to millions of young users. We sincerely hope they are correct.”

Source: www.theguardian.com

NIH Prohibits New Funding from US Scientists to Overseas Partners

The National Institutes of Health has implemented a policy that prevents American scientists from allocating their funds to international research collaborators, raising concerns about the implications for studies on critical issues like malaria and pediatric cancer.

The new NIH director, Dr. Jay Bhattacharya, announced the policy on Thursday. In an internal email shared with the New York Times, the agency’s deputy director, Dr. Matthew J. Memoli, criticized the so-called sub-awards.

Dr. Memoli argued that work funded overseas must be clearly justified: researchers should show both that it cannot be done anywhere else and that it benefits Americans.

The impending restrictions will also extend to domestic sub-awards in the future, coinciding with executive orders aimed at reshaping the nation’s scientific priorities amidst declining NIH funding and stalled federal grants at numerous premier universities.

On Monday, President Trump enacted an executive order to restrict experiments that could enhance the risks posed by pathogens and limit support for so-called gain-of-function research in nations like China.

Researchers receiving NIH grants have frequently employed sub-awards to foster international collaboration, a crucial component of studying diseases such as childhood cancer, malaria, and tuberculosis, the latter two of which are far less prevalent in the U.S.

Sub-awards are legal and financial arrangements between grant recipients and their international counterparts. This practice is widespread across the federal government and not exclusive to the NIH.

However, there has been increased scrutiny in recent years due to lax reporting and tracking of funds. Following a critical report from the Government Accountability Office (GAO) in 2023, the NIH introduced more stringent oversight requirements.

Proponents of scientific and medical research argue that as science grows more complex, collaborative efforts that engage participants and researchers globally are becoming increasingly vital.

“Competitiveness in science necessitates a collaborative approach,” stated Dr. E. Anders Kolb, CEO of the Leukemia & Lymphoma Society. “No single lab, agency, or investigator possesses all the necessary tools to address the complex questions we’re facing.”

Many of these studies require large numbers of subjects. Scientists can now classify pediatric cancers far more precisely, for example, which means each subtype comprises ever fewer patients; as Dr. Kolb put it, “we’re entering a niche of diseases that are becoming progressively smaller.”

“Thus, if you’re aiming to conduct clinical trials for new treatments that could aid these children, attempting to only enroll U.S. children might prolong the trial duration by decades,” he added. “Collaborating with international partners allows us to expedite these trials and deliver treatments to our children much sooner.”

In unveiling the new directive, Dr. Bhattacharya referenced a GAO report criticizing the funding awarded to international universities, research institutes, and firms.

Dr. Bhattacharya added that the issues raised by the GAO “could undermine trust and safety for U.S. biomedical research entities.”

Tracking NIH expenditures that flow to these international organizations is challenging, an obstacle the GAO has pointed out. The journal Nature has estimated the total funding at about $500 million annually.

Dr. Monica Gandhi, a professor of medicine at the University of California, San Francisco, is utilizing NIH funding for HIV prevention and treatment research in Kenya and South Africa.

Researchers like her are required to furnish detailed information when applying for international sub-awards, she explained.

Under current rules, international partners must provide access to lab notebooks, data, and other documents at least once a year, Dr. Gandhi noted, and all expenses must be accounted for under foreign award and component tracking systems.

Dr. Gandhi described the requirements as “extremely stringent,” as is fitting, she said, for the use of taxpayer funds.

“Each year, when submitting your progress report, you must account for every dollar spent on international locations. You’ll detail where it was allocated, how much laboratory testing costs, and who the principal investigators are—every facet.”

It remains unclear how the new policy will be implemented. The NIH has not responded to requests for further information.

The NIH stated it will not retroactively reverse foreign sub-awards that are already in effect “at this time,” and will continue to grant funding to international organizations.

However, under the new policy, new competitive awards will not be issued if they include proposals for sub-awards to foreign institutions.

“If the project is unfeasible without foreign sub-awards, the NIH will collaborate with the recipient to negotiate the bilateral termination of the project,” stated the agency.

The new policy seems to be slightly less comprehensive than what Dr. Memoli outlined in his internal email.

“Sub-awards to foreign sites cannot proceed,” he wrote. “This has been mismanaged horrendously in recent years and is utterly irresponsible. We must act immediately. If there is a foreign site involved in our research, we need to either start closing it or devise another method to track it properly.”

GAO reports indicate that several federal departments are seeking improved surveillance following criticism regarding lax reporting. However, the office did not advocate for the complete termination of such funding.

The 2023 GAO report reviewed $2 million in direct awards and sub-awards, the majority from the NIH, given to three Chinese research institutions, including the Wuhan Institute of Virology, between 2014 and 2021.

The Wuhan institute received sub-awards from the University of California, Irvine, and the nonprofit EcoHealth Alliance. The Alliance’s collaboration with Chinese scientists led the Biden administration to suspend its funding last year. Recently, the Trump administration updated its government COVID-19 information portal to suggest that the virus emerged from a lab in Wuhan.

According to a GAO report, NIH oversight has not consistently ensured that foreign agencies comply with requirements, including biosafety regulations.

Another GAO report indicated that one reason spending is difficult to track is a federal policy that requires reporting only of sub-awards of $30,000 or more, leaving smaller payments unreported.

The report examined approximately $48 million in NIH and State Department funding provided to Chinese companies and research institutions between 2017 and 2021.

“The full extent of these sub-awards remains unknown,” the report concluded, finding the data it retrieved incomplete and inaccurate, with numerous expenditures exempt from reporting.

Apoorva Mandavilli contributed reporting.

Source: www.nytimes.com

Landmark UK Ruling Prohibits Sex Offender from Using AI Tools

A convicted sex offender who created over 1,000 indecent images of children has been forbidden from using any “AI creation tools” for the next five years, in the first known case of its kind.

Anthony Dover, 48, was ordered by a UK court not to use artificial intelligence generation tools without prior police authorization, a condition of a sexual harm prevention order imposed in February.

The prohibition covers tools such as text-to-image generators, which produce realistic-looking photos from written prompts, as well as “nudifying” websites used to create explicit “deepfake” content.

Mr. Dover, who received a community order and a £200 fine, was specifically directed not to use Stable Diffusion, software that has reportedly been exploited by pedophiles to create hyper-realistic child sexual abuse material.

This case is part of a series of prosecutions where AI-generated images have come to the forefront, prompting warnings from charities regarding the proliferation of such images of sexual abuse.

Last week, the government announced the creation of a new crime that makes it illegal to produce sexually explicit deepfakes of individuals over 18 without their consent, with severe penalties for offenders.

Making sexual abuse images of children, whether real or computer-generated, has been illegal since the 1990s, and recent prosecutions have involved lifelike images produced using tools like Photoshop.

Recent court cases involving the distribution of such images indicate that these tools are increasingly being used to create sophisticated synthetic abuse content.

The Internet Watch Foundation (IWF) emphasized the urgent need to address the production of AI-generated child sexual abuse images, warning about the rise of such content and its chilling realism.

Law enforcement agencies and charities are working to tackle this growing trend of AI-generated images, with concerns rising about the production of deepfake content and the impact on victims.

Efforts are underway to address the growing concern over AI-generated images and deepfake content, with calls for technology companies to prevent the creation and distribution of such harmful material.

The decision to restrict adult sex offenders from using AI tools may pave the way for increased surveillance of those convicted of indecent image offenses, highlighting the need for proactive measures to safeguard against future violations.

While restrictions on internet use for sex offenders have existed, limitations on AI tools have not been common, underscoring the gravity of this case and its implications for future legal actions.

The company behind Stable Diffusion, Stability AI, has taken steps to prevent abuse of their software, emphasizing the importance of responsible technology use and compliance with legal guidelines.

Source: www.theguardian.com

Airbnb Prohibits Hosts from Using Indoor Surveillance Cameras in Rental Properties

Airbnb has announced that it will prohibit the use of indoor surveillance cameras in rental properties worldwide by the end of next month.

The online rental platform, based in San Francisco, stated that it aims to “simplify” its security camera policies while emphasizing privacy. This policy change will be implemented on April 30th.

Juniper Downs, Airbnb’s head of community policy and partnerships, stated in a prepared statement: “These changes were made in consultation with guests, hosts, and privacy experts, and we continue to solicit feedback to ensure our policies work for our global community.”

Previously, Airbnb permitted indoor surveillance cameras in common areas like hallways and living rooms, as long as their location was disclosed on the property page. Under the new policy, hosts may still use doorbell cameras and noise decibel monitors in common areas, but they must disclose the devices’ presence and location. Outdoor cameras may not be used to monitor indoor spaces.
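The new rule set reduces to a simple device-by-device check. Here is a hypothetical restatement of the announced policy as code; the device categories and field names are invented for illustration and are not Airbnb's API.

```python
from dataclasses import dataclass

# Hypothetical restatement of Airbnb's announced camera rules; the
# device categories and field names are illustrative assumptions.

@dataclass
class Device:
    kind: str               # "indoor_camera", "outdoor_camera",
                            # "doorbell_camera", or "noise_monitor"
    disclosed: bool         # presence/location made known to guests
    views_indoors: bool = False

def is_allowed(device: Device) -> bool:
    """Apply the announced policy, effective April 30."""
    if device.kind == "indoor_camera":
        return False                                   # banned outright
    if device.kind == "outdoor_camera":
        return device.disclosed and not device.views_indoors
    if device.kind in ("doorbell_camera", "noise_monitor"):
        return device.disclosed                        # allowed if disclosed
    return False

assert not is_allowed(Device("indoor_camera", disclosed=True))
assert is_allowed(Device("doorbell_camera", disclosed=True))
assert not is_allowed(Device("outdoor_camera", disclosed=True, views_indoors=True))
```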

Reports from Airbnb guests have highlighted instances of hidden cameras in rental rooms. Downs anticipates that this policy change will impact only a small number of hosts, as most Airbnb properties do not have indoor surveillance cameras. Any host found to violate the new indoor camera policy risks losing their Airbnb account.

In its fourth-quarter earnings report last month, Airbnb stated that demand remained strong, with bookings and revenue on the rise.

Source: www.theguardian.com

OpenAI Prohibits Bot Mimicking US Presidential Candidate Dean Phillips from Its Platform

OpenAI has taken down the account of the developer of an AI-powered bot that pretended to be US presidential candidate Dean Phillips, citing a violation of company policies.

Phillips, who is challenging Joe Biden for the Democratic nomination, was impersonated by a ChatGPT-powered bot on the Dean.Bot website.

The bot was backed by Silicon Valley entrepreneurs Matt Krisiloff and Jed Somers, who have established a super PAC called “We Deserve Better” to fund and support Phillips’s candidacy.

San Francisco-based OpenAI announced it has removed developer accounts that violated its policies against political campaigning and impersonation.

“We recently terminated developer accounts that knowingly violated our API Usage Policy, which prohibits political campaigning, or that impersonated individuals without their consent,” the company said.

The Phillips bot, created by AI company Delphi, is currently disabled. Delphi has been contacted for comment.

OpenAI’s usage policy says developers who use the company’s technology to build their own applications must not engage in “political campaigning or lobbying.” It also prohibits “impersonating another person or entity without their consent or legal right to do so,” and it is unclear whether Minnesota Congressman Phillips consented to the bot.

A pop-up notification on the Dean.Bot website described the “AI voice bot” as “a fun educational tool, but not perfect,” adding that although the bot was programmed to sound like Phillips and convey his ideas, “it may say things that are wrong, incorrect, or shouldn’t be said.”

The Washington Post, which first reported the ban, said Krisiloff had asked Delphi to remove ChatGPT from the bot and rely instead on freely available open-source technology. Krisiloff, a former OpenAI employee, has been contacted for comment.

We Deserve Better received $1 million in funding from billionaire hedge fund manager Bill Ackman, who said in a post that it was “the biggest investment I’ve ever made” in someone running for office.

Mr. Phillips, 55, announced his candidacy for president in October, citing Mr. Biden’s age and arguing that he should pass the torch to a younger generation of leaders. Campaigning in New Hampshire on Saturday, Mr. Phillips described Mr. Biden as “unelectable” and weak.

There are concerns that deepfakes and AI-generated disinformation could disrupt elections around the world this year, with the US, EU, UK and India all planning to vote. On Sunday, the Observer reported that 70% of British MPs are concerned that AI will increase the spread of misinformation and disinformation.

Source: www.theguardian.com

Rite Aid Prohibited from Using Facial Recognition Software After Falsely Identifying Shoplifters

Rite Aid, the US drugstore giant, has been banned from using facial recognition software for five years, after the Federal Trade Commission (FTC) found that its “reckless use of facial surveillance systems” humiliated customers and compromised their confidential information.

The FTC’s order, which requires approval from the U.S. Bankruptcy Court because Rite Aid filed for Chapter 11 bankruptcy protection in October, directs Rite Aid to delete the images it collected during its facial recognition rollout, along with any products built from those images. The company must also implement a robust data security program to protect the personal data it collects.

A 2020 Reuters report detailed how the drugstore chain secretly installed facial recognition systems in about 200 U.S. stores over an eight-year period starting in 2012, with “primarily low-income, non-white neighborhoods” serving as testbeds for the technology.

With the FTC increasingly focused on the abuse of biometric surveillance, Rite Aid became a prime target. Among the allegations: Rite Aid partnered with two contractors to create a “watch list database” containing images of customers the company said had engaged in criminal activity at one of its stores. These images, often of low quality, were captured from CCTV or employees’ mobile phone cameras.

When a customer who appeared to match an existing image in the database entered a store, employees received an automated alert instructing them to take action, in most cases to “approach and identify,” meaning verifying the customer’s identity and asking them to leave. According to the FTC, these “matches” were often false positives, leading employees to falsely accuse customers of wrongdoing and causing “embarrassment, harassment, and other harm.”
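At a technical level, a watch-list system like the one described typically compares a face embedding from a store camera against every enrolled image and raises an alert when similarity crosses a threshold. The sketch below is purely illustrative (it is not Rite Aid's system; the embedding size, threshold, and names are assumptions) and shows why low-quality enrollment images combined with a permissive threshold produce exactly the false positives the FTC describes.

```python
import numpy as np

# Illustrative watch-list matcher; NOT Rite Aid's system. The embedding
# size, threshold, and names are assumptions for demonstration only.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def watchlist_hits(probe: np.ndarray,
                   watchlist: dict[str, np.ndarray],
                   threshold: float) -> list[str]:
    """Return watch-list entry IDs whose similarity to the probe exceeds
    the threshold; each hit would trigger an employee alert."""
    return [entry_id for entry_id, emb in watchlist.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy demo: random vectors stand in for embeddings from a real face model.
rng = np.random.default_rng(0)
watchlist = {f"entry-{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # an innocent customer, unrelated to any entry

# A threshold set low enough to catch poor-quality enrollment images will
# also fire on unrelated faces; every hit below is a false positive.
print(len(watchlist_hits(probe, watchlist, threshold=0.2)), "false alert(s)")
```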

“Acting on false positive alerts, employees followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends and family, of shoplifting or other wrongdoing,” the complaint says.

Additionally, the FTC said Rite Aid failed to inform customers that facial recognition technology was in use, and specifically instructed employees not to reveal this information to customers.

Face-off

Facial recognition software has emerged as one of the most controversial aspects of the AI-powered surveillance era. In recent years, cities have issued broad bans on the technology while politicians have fought to regulate how police use it. Meanwhile, companies like Clearview AI have been hit with lawsuits and fines around the world for massive data privacy violations involving facial recognition technology.

The FTC’s latest findings regarding Rite Aid also shed light on the biases inherent in AI systems. For example, the FTC says Rite Aid failed to mitigate risks to certain consumers based on race: the technology was “more likely to generate false positives in stores located in predominantly Black and Asian communities than in predominantly white communities,” the order notes.

Additionally, the FTC said Rite Aid failed to test or measure the accuracy of its facial recognition system before or after its implementation.

In a press release, Rite Aid said it was “pleased to reach an agreement with the FTC” but disagreed with the core of the allegations.

“The allegations relate to a pilot program for facial recognition technology that we implemented in a limited number of stores,” Rite Aid said in a statement. “Rite Aid stopped using the technology at this small group of stores more than three years ago, before the FTC’s investigation into the company’s use of the technology began.”

Source: techcrunch.com