Why Safety Bills in the US Didn’t Pass, Leaving Desperate Parents to Protect Their Children on Social Media

When Congress broke for the holidays in December, it left behind a groundbreaking bill aimed at overhauling how technology companies protect their youngest users. The Kids Online Safety Act (KOSA), introduced in 2022, was meant to be a massive reckoning for Big Tech. Instead, the bill languished and died in the House despite sailing through the Senate in July with a 91-3 vote.

KOSA was passionately championed by families who say their children fell victim to the harmful policies of social media platforms, and by advocates who argue that bills curbing the unchecked power of Big Tech are long overdue. They are deeply disappointed that a strong chance to rein in Big Tech failed amid congressional indifference. Civil liberties groups, however, argued that the law could have had unintended consequences for freedom of speech online.

What is the Kids Online Safety Act?

KOSA was introduced nearly three years ago in the aftermath of bombshell revelations by the former Facebook employee Frances Haugen about the extent of social media platforms' impact on younger users. It would have required platforms like Instagram and TikTok to mitigate harms to children through design changes and to address online risks, including allowing younger users to opt out of algorithmic recommendations.

“This is a basic product liability bill,” said Alix Fraser, director of the Council for Responsible Social Media at Issue One. “It's complicated because the internet is complex and social media is complex, but essentially it's just an effort to create basic product safety standards for these companies.”

The central and most controversial element of the bill was its “duty of care” clause, which declared that companies have an obligation to act in the best interests of minors using their platforms, a requirement left open to interpretation by regulators. The bill would also have required platforms to implement measures to reduce harm by establishing “safeguards for minors.”

Critics argued that the lack of clear guidance on what constitutes harmful content would encourage companies to filter content more aggressively, with unintended consequences for free speech. Sensitive but important topics such as gun violence and racial justice could be deemed potentially harmful and preemptively suppressed by the companies themselves. These censorship concerns were especially pronounced for the LGBTQ+ community, which KOSA's opponents said could be disproportionately targeted by conservative regulators, cutting off access to critical resources.

“With KOSA, we see a well-intentioned but ultimately vague bill that requires online services to take unspecified actions to keep children safe,” said Bhatia, a policy analyst at the Center for Democracy & Technology, which opposed the law and receives funding from technology donors such as Amazon, Google, and Microsoft.

The complex history of KOSA

When the bill was first introduced, more than 90 civil society groups signed letters against it, highlighting these and other concerns. In response to such criticism, the bill's authors published revisions in February 2024. Most notably, they shifted enforcement of the “duty of care” provision from state attorneys general to the Federal Trade Commission. Following these changes, many organizations, including Glaad, the Human Rights Campaign, and the Trevor Project, withdrew their opposition, saying the amendments “significantly mitigate the risk of [KOSA] being misused to suppress LGBTQ+ resources or stifle young people's access to online communities.”

Other civil liberties groups maintained their opposition, however, including the Electronic Frontier Foundation (EFF), the ACLU, and Fight for the Future, calling KOSA a “censorship bill” that would harm vulnerable users and freedom of speech. They argued that the duty-of-care provision could be weaponized against LGBTQ+ youth by a conservative FTC chair just as easily as by state attorneys general. Those concerns were reflected in Trump's appointment of Republican Andrew Ferguson as FTC chairman, who said in leaked statements that he planned to use his role to “fight the trans agenda.”

Concerns about how Ferguson will police online content are “exactly what LGBTQ+ youth wrote and called Congress about hundreds of times over the past few years of this fight,” said Sarah Philips of Fight for the Future. “The situation they were afraid of has come to fruition. Anyone who ignores it is really just putting their head in the sand.”

Opponents say that even though KOSA did not pass, it has already had a chilling effect on the content available on certain platforms. A recent report by User Mag found that hashtags for LGBTQ+-related topics were classified as “sensitive content” and restricted from search. Laws like KOSA, said Bhatia of the Center for Democracy & Technology, do not take into account the complexity of the online landscape and are likely to lead platforms to censor preemptively to avoid litigation.

“Children's safety occupies an interesting and paradoxical position in technology policy: children benefit greatly from the internet, but they are also among its most vulnerable users,” she said. “Using blunt policy instruments to protect them can often lead to consequences that don't really take this into consideration.”

Supporters attribute the backlash against KOSA to aggressive lobbying from the tech industry, though Fight for the Future and the EFF, two of its top opponents, are not backed by large tech donors. Big tech companies themselves were split on KOSA, with X, Snap, Microsoft, and Pinterest quietly supporting the bill while Meta and Google opposed it.


“KOSA was a very robust bill, but what's more robust is the power of Big Tech,” said Fraser of Issue One. “They hired all the lobbyists in town to take it down, and they succeeded.”

Fraser added that supporters are disappointed KOSA didn't pass, but “will not rest until federal legislation is passed to protect children online.”

Potential revival of KOSA

Beyond Ferguson's role as FTC chairman, it is unclear what the changing composition of the new Trump administration and Congress will mean for the future of KOSA. Trump has not directly expressed his views on the bill, but some in his inner circle revealed their support after last-minute amendments to the 2024 version promoted by Elon Musk's X.

KOSA's death in Congress may seem like the end of a winding and controversial path, but advocates on both sides of the fight say it's too early to write its legislative obituary.

“We shouldn't expect KOSA to go quietly,” said Prem Trivedi, policy director at the Open Technology Institute, which opposed KOSA. “Whether it is reintroduced or a different incarnation emerges, the broader focus on children's online safety will continue.”

Senator Richard Blumenthal, who co-authored the bill with Senator Marsha Blackburn, has promised to reintroduce it in a future legislative session, and other defenders of the bill say they won't give up.

“I've worked with a lot of parents who are willing to talk about the worst days of their lives over and over again, in front of lawmakers, in front of staff, in front of the press, because they want something to change,” Fraser said. “They don't intend to stop.”

Source: www.theguardian.com

The “Godfather” of AI warns that Deepseek’s advancements may heighten safety concerns.

A groundbreaking report by AI experts suggests that the risk of artificial intelligence systems being used for malicious purposes is on the rise. Researchers are concerned that advances by DeepSeek and similar organizations may escalate safety risks.

Yoshua Bengio, a prominent figure in the AI field, views the progress of China’s DeepSeek startup with apprehension as it challenges the dominance of the United States in the industry.

“This leads to a tighter competition, which is concerning from a safety standpoint,” voiced Bengio.

He cautioned that the race by American companies and their competitors to overtake DeepSeek could lead them to prioritize staying ahead over safety. OpenAI, known for ChatGPT, has already responded by hastening the release of a new virtual assistant to keep up with DeepSeek's advancements.

In a wide-ranging discussion on AI safety, Bengio stressed the importance of understanding the implications of the latest safety report on AI. The report, spearheaded by a group of 96 experts and endorsed by renowned figures like Geoffrey Hinton, sheds light on the potential misuse of general-purpose AI systems for malicious ends.

One highlighted risk is the development of AI models capable of generating instructions for hazardous substances that exceed the expertise of human experts. While these advancements have potential benefits in medicine, there is also concern about their misuse.

Although AI systems have become more adept at identifying software vulnerabilities independently, the report emphasizes the need for caution in the face of escalating cyber threats orchestrated by hackers.

Additionally, the report discusses the risks associated with AI technologies such as deepfakes, which can be exploited for fraudulent activities, including financial scams, misinformation, and the creation of explicit content.

Furthermore, the report flags the vulnerability of closed-source AI models to security breaches, highlighting the potential for malicious use if not regulated effectively.

In light of recent advancements like OpenAI's o3 model, Bengio underscores the need for thorough risk assessment to comprehend the evolving landscape of AI capabilities and associated risks.

While AI innovations hold promise for transforming various industries, there is a looming concern about their potential misuse, particularly by malicious actors seeking to exploit autonomous AI for nefarious purposes.

It is essential to address these risks proactively to mitigate the threats posed by AI developments and ensure that the technology is harnessed for beneficial purposes.

As society navigates the uncertainties surrounding AI advancements, there is a collective responsibility to shape the future trajectory of this transformative technology.

Source: www.theguardian.com

FDA Urges Pet Food Companies to Review Safety Plans in Light of Bird Flu Outbreak

An increasing number of cats have died or become ill after consuming raw pet food and raw milk contaminated with the H5N1 virus, prompting health authorities to urge pet food companies to take special precautionary measures against bird flu. They are advising pet food makers to follow food safety plans, such as sourcing ingredients from healthy flocks and applying heat treatments to inactivate the virus, as suggested in recent guidance from the Food and Drug Administration.

Since the H5N1 virus started spreading in 2022, there have been outbreaks among birds across the country. Cats appear to be particularly susceptible to the virus, with many household cats and wild cats becoming infected since its emergence. Some farm cats have fallen ill after consuming raw milk, while others have died after consuming contaminated raw pet food.

Despite the FDA guidance, some experts, like Dr. Jane Sykes of the University of California, Davis School of Veterinary Medicine, have raised concerns about the lack of detailed instructions for guaranteeing the absence of H5N1 in food. The FDA has advised pet owners to cook raw pet food to eliminate risks and to follow USDA guidelines for safe food handling.

In response to the situation, some raw pet food companies have implemented safety measures such as sourcing quality ingredients and using processes like high-pressure pasteurization. However, experts emphasize that cooking is the only certain way to eliminate the risk of H5N1 in pet food.

Overall, both the Centers for Disease Control and Prevention and the American Veterinary Medical Association recommend against feeding companion animals raw or undercooked meat due to the potential risks associated with pathogens like H5N1.

While high-pressure pasteurization is advertised as a method to kill pathogens, experts caution that cooking to a safe internal temperature is the most reliable way to ensure food safety. Consumers are advised to cook raw pet food thoroughly before feeding it to their pets to reduce the risk of bird flu transmission.

For those who prefer raw pet food brands, experts suggest cooking the food before feeding it to ensure the safety of pets.

Source: www.nbcnews.com

Reminder: Online safety is not one-size-fits-all – John Naughton

London Fixed Gear and Single Speed (LFGSS) is a great online community of fixed-gear and single-speed cyclists in and around London. Unfortunately, this columnist is not eligible for membership: he doesn't live in (or near) the big city, and he needs a lot of gears to climb even the gentlest slopes, which is why he admires more rugged cyclists who disdain the assistance of Sturmey-Archer or Campagnolo hardware.

But bad news is on the horizon: as of Sunday 16 March, LFGSS will be retired. Dee Kitchen, the software wizard (and cyclist) who developed Microcosm, a platform for running non-commercial, privacy-friendly, accessible online forums such as LFGSS, announced that on that day he would “remove the virtual servers hosting LFGSS and other communities, effectively immediately terminating the approximately 300 small communities I run, as well as a small number of larger communities such as LFGSS.”

Why is Kitchen doing this? Answer: he read the statement issued on 16 December by Ofcom, the regulator appointed by the government to enforce the provisions of the Online Safety Act (OSA). “Providers now have a duty to assess the risk of illegal harms on their services, with a deadline of 16 March 2025. Subject to the codes completing the parliamentary process, from 17 March 2025 providers will need to take the safety measures set out in the codes or use other effective measures to protect users from illegal content and activity. We are ready to take enforcement action if providers do not act promptly to address the risks on their services.”

Wait a moment, though. Isn't the OSA about protecting children and adults from harmful content, bullying, pornography and so on, not about discussion forums on fixed-gear bikes, cancer support, dog walking, rebuilding valve amps and the like? Strange as it may seem, the answer is no. The act requires services that handle user-generated content to have baseline content moderation practices, and to use those practices to remove reported content that violates UK law and to prevent children from viewing pornography. And that applies to every service that handles user-generated content and has “links to the UK.”

Kitchen believes the online forums he hosts fall within the scope of the act and, as he is based in the UK, there is “no way around it.” “I can't afford to spend what would probably be tens of thousands to jump through the legal and technical hoops here over an extended period of time... The site itself barely gets a few hundred in donations each month, and it costs a little more than that to run... this is not a venture that can afford the compliance costs... and what would remain is a disproportionately high personal liability for me, one that could easily be used as a weapon by disgruntled people banned for egregious behaviour.” That is why he believes he has no choice but to shut down the platform.

Some may think he is overreacting, that common sense will prevail and that legal precedent will eventually emerge. But the OSA is a new piece of legislation, the meandering product of a 2019 white paper on online harms and a chaotic passage through parliament at a time when the Conservative party was busy mismanaging the country. (One grizzled political insider described it to me as a “dog's breakfast.”) In such circumstances, the cost of being an early test case would give anyone pause. I've been a blogger for decades, and from the beginning I decided not to allow comments on my blog, partly because I didn't want the burden of moderation, but also because I was worried about the legal ramifications of what people might post. So if I were in Kitchen's position, I would do what he has decided to do.

Many years ago, I had an exchange with Tim Berners-Lee, the inventor of the world wide web, at a Royal Society conference. I had just come from a conversation with a New Labour minister and realised: this guy thinks the web is the internet! And I told Tim that. “It's much worse than that,” he replied. “Millions of people around the world think that Facebook is the internet.”

The root of the problem with the OSA is that it was framed and enacted by legislators who believe that the “internet” consists only of the platforms of a few big tech companies. So they passed a law supposedly to deal with those corporate thugs without imagining the unintended consequences for the actual internet of people using the technology for purely social purposes. And in doing so, they inadvertently answered the famous question posed by Alexander Pope in his epistle to Dr Arbuthnot in 1735: “Who breaks a butterfly upon a wheel?”

What I was reading

British students at risk
Nathan Heller's long, thoughtful New Yorker essay on the plight of the humanities in American universities.


Don't entrust your life
A really subtle LM Sacasas essay on what we can learn from the 20th-century cultural critic Lewis Mumford in the age of AI.

Musk meets Ross Perot
An incisive piece by John Ganz on two engineers who thought they understood politics.

  • Do you have an opinion on the issues raised in this article? If you would like to submit a letter of up to 250 words for consideration for publication, please email observer.letters@observer.co.uk.

Source: www.theguardian.com

UK online safety laws are ‘non-negotiable,’ minister tells tech giants | Artificial Intelligence (AI)

In the wake of Meta founder Mark Zuckerberg's pledge to team up with Donald Trump to pressure countries he deems to be “censoring” content, a government minister has cautioned that Britain's new online safety laws are firm and non-negotiable.

Technology Secretary Peter Kyle, in an interview with the Observer, expressed optimism that recent legislation aimed at making online platforms safer for children and vulnerable individuals would still attract major tech companies to the UK, supporting economic expansion without compromising safety.

As Keir Starmer prepares to unveil a significant tech initiative positioning the UK as an ideal hub for AI technology advancement, the government is under scrutiny from Elon Musk, a vocal Trump loyalist.

Technology Secretary Peter Kyle is dedicated to positioning the UK as a frontrunner in the AI revolution. Photo: Linda Nylind/The Guardian

Mark Zuckerberg's recent decision to lift restrictions on topics like immigration and gender on Meta's platforms has stirred controversy. He emphasized collaboration with President Trump to push back against governmental attacks on American businesses and increasing censorship worldwide.

Despite not mentioning the UK specifically, Zuckerberg criticized the growing institutionalized censorship in Europe, hinting at potential clashes with the UK’s online safety law.

Peter Kyle, who is set to reveal the government’s AI strategy alongside Keir Starmer, acknowledged the overlap between Zuckerberg’s free speech dilemmas and his own considerations as an MP.

However, Kyle assured that he would not compromise on the integrity of the UK’s online safety laws, emphasizing the non-negotiable protection of children and vulnerable individuals.

Meta CEO Mark Zuckerberg has raised concerns about European online censorship policies. Photo: David Zalubowski/AP

Amid discussions with tech conglomerates and the unveiling of an AI Action Plan, the UK government aims to leverage its reputation for online safety and innovation. The plan emphasizes attracting tech investments by positioning the UK as a less regulated and more conducive environment for technological advancements.

As big tech leaders engage with President Trump ahead of the inauguration, Meta is changing its fact-checking approach to a “community notes” system similar to that of X, owned by Musk.

Elon Musk's vocal criticisms of the UK government, particularly targeting Keir Starmer, have sparked controversy within the Labour Party and raised safety concerns. Despite the disagreements, the government remains committed to enacting robust measures against harmful online content.

While open to discussions with innovators and investors like Musk, Peter Kyle remains steadfast in prioritizing the advancement of technology to benefit British society both now and in the future.

Source: www.theguardian.com

Ofcom demands that social media platforms adhere to online safety laws

Social media platforms are required to take action to comply with UK online safety laws, but they have not yet implemented all the necessary measures to protect children and adults from harmful content, according to the regulator.

Ofcom has issued a code of conduct and guidance for tech companies to adhere to in order to comply with the law, which includes the possibility of hefty fines and site closures for non-compliance.

Regulators have pointed out that many of the recommended actions have not been taken by the largest and most high-risk platforms.

Jon Higham, Director of Online Safety Policy at Ofcom, stated, “We believe that no company has fully implemented all necessary measures. There is still a lot of work to be done.”

All websites and apps covered by the law, including Facebook, Google, Reddit, and OnlyFans, have three months to assess the risk of illegal content appearing on their platforms. Safety measures must then be implemented to address these risks starting on March 17, with Ofcom monitoring progress.

The law applies to sites and apps that allow user-generated content, as well as to large search engines, covering more than 100,000 online services in total. It lists 130 “priority offences,” including child sexual abuse, terrorism, and fraud, which tech companies must address through their moderation systems.

The new regulations and guidelines are considered the most significant changes to online safety policy in history according to Technology Secretary Peter Kyle. Tech companies will now be required to proactively remove illegal content, with the risk of heavy fines and potential site blocking in the UK for non-compliance.

Ofcom’s code and guidance include designating a senior executive responsible for compliance, maintaining a well-staffed moderation team to swiftly remove illegal content, and improving algorithms to prevent the spread of harmful material.

Platforms are also expected to provide easy-to-find tools for reporting content, with a confirmation of receipt and timeline for addressing complaints. They should offer users the ability to block accounts, disable comments, and implement automated systems to detect child sexual abuse material.

Child safety campaigners have expressed concerns that the measures outlined by Ofcom do not go far enough, particularly in addressing suicide-related content and content on platforms like WhatsApp, where removing illegal material is said to be technically infeasible.

In addition to addressing fraud on social media, platforms will need to establish reporting channels for instances of fraud with law enforcement agencies. They will also work on developing crisis response procedures for events like the summer riots following the Southport murders.

Source: www.theguardian.com

Should Parents Be Concerned About Roblox Safety? Exploring the Risks | Pushing Buttons

Just before last week's newsletter was published, a short-selling firm called Hindenburg Research issued a highly critical report on Roblox. In it, the firm accuses the public company of inflating its metrics (and thus its valuation) and, even more worryingly for the parents of the millions of children who use Roblox, calls the platform a “pedophile hellscape.” The report claims some gruesome discoveries within the game: researchers found chat rooms of people purporting to exchange images and videos of children, and users claiming to be children or teens offering such material in exchange for Robux, the in-game currency. Roblox strongly rejects the claims made in Hindenburg's report.

For those unfamiliar with it, Roblox is more of a platform than a game (or, as corporate communicators like to think of it, a metaverse). It claims 80 million daily users (though Hindenburg says this figure is inflated). Log in, customize your avatar, and from there you can dive into thousands of different “experiences” created by other users, from role-playing cities to pizza-delivery mini-games to cops-and-robbers and, less salubriously, Public Bathroom Simulator. Because Roblox games are created by players, the site must be constantly moderated, and the company's moderation team handles a huge amount of content every day.

It's important to recognize that Hindenburg has a vested interest in making Roblox's stock tank: it holds a short position in the company (meaning it stands to profit if the share price falls), and several other companies have seen their stocks crater after it released reports on them. However, some of the claims made in the report can be independently verified. A very quick search of the platform reveals that the in-game chat groups that appear to be soliciting and trading images do indeed exist and are active, and the accounts with questionable usernames referencing child abuse and Jeffrey Epstein are genuine. Some of the specific games and accounts mentioned in Hindenburg's report last week have since been removed by the company.

Roblox defended itself in a statement posted online, saying: “Every day, tens of millions of users of all ages have safe and positive experiences on Roblox, abiding by our community standards. However, any safety incident is horrific. We take very seriously any content or conduct on our platform that does not adhere to our standards.” The company added: “We continually evolve and enhance our safety approach to catch and prevent malicious or harmful activity, including text chat filters that block inappropriate words and phrases and measures that disallow image sharing between users on Roblox” (as further reported in this article in the Guardian).

If your kids are playing on a platform like Roblox, triple-check their settings. Photo: Phil Noble/Reuters

Of course, this isn't the first sensational report about Roblox. In recent years, articles in CNN, the Observer, Wired, and many other publications have found a large amount of inappropriate content on the platform, along with proven cases of child predators using Roblox for grooming. Last July, Bloomberg reported on one such case, in which a man was sentenced to 15 years in prison for grooming a minor and having her cross state lines to perform sex acts, as part of a broader investigation into the platform's apparent flaws in moderation and child safety.

Many parents are wondering what to do. Roblox is part of the daily online lives of millions of children; even if the figure of around 80 million daily users is inflated, as Hindenburg claims, anyone with school-age children knows it is very widely used. Is Roblox dangerous for kids? Should they stop playing it immediately?

Despite everything presented in this and other reports over the past few years, I believe it is entirely possible for children to play Roblox safely. Parental controls exist that, when used correctly, limit or eliminate the extent to which strangers can contact your child. If I had kids playing Roblox, I'd be checking all of these settings over and over again to make sure the friends list included only real-life friends. I would also supervise young children to minimize the likelihood that they encounter, or actively seek out, the many inappropriate games that seem to regularly elude Roblox's moderation efforts, and I'd be very reluctant to let them play without that supervision.

Basic online safety education is critical for all children who use the internet. Given the multiple convictions of child predators who used Roblox to access children, it is impossible to deny the presence of pedophiles on the platform, but it is difficult to objectively assess the extent of the problem. Some of what Hindenburg highlights in its report seems to me more likely the product of teenage edgelords than of actual child predators. Roblox is full of teenagers who have grown up with the game: when you see 900 variations of the username Jeffrey Epstein, you're not necessarily looking at 900 active child abusers; you're looking at 900 stupid 14-year-olds trying to be funny.

Full disclosure: I don't let my kids play Roblox, and I have no intention of starting. I don't believe that a publicly traded company can be trusted to put the interests and safety of children ahead of profits. Moderation is expensive and difficult, and no one in the big tech industry has come close to building a system that prevents harmful material from appearing on these kinds of open platforms, or that stops people from exploiting them for their own purposes. Legitimate safety concerns aside, there are hundreds of great games that simply serve kids' imaginations and curiosity better than one designed to squeeze money out of them for endless in-game cosmetics and “experiences.”

Only offline games can completely eliminate the risk of children being exposed to inappropriate content. And after just a few hours of exploring Roblox, one thing is abundantly clear: it's not hard to find something very problematic.

What to play

The wolf in the game Neva grows into a magnificent creature crowned with horns that protects you.

Neva, a game about a warrior and a she-wolf, surprised me. I've played so many beautiful, artistic indie platformers that it's hard to find one that really makes me feel something, but there I was, ugly-crying in front of the TV after a few nights with Neva. It takes place over four seasons: the wolf starts out as a cub that you have to protect, but later grows into a magnificent creature with horns that can protect you. You use an elegant combination of jumps, double jumps, dashes, and strikes to explore an incredibly beautiful but horribly corrupted natural world, making repeated attempts to conquer the demons that poison it. Worth a few hours of anyone's life.

Available: PC, PS5, Nintendo Switch, Xbox
Approximate play time: 3-4 hours


What to read

Alarmo … Nintendo's new clock. Photo: Nintendo
  • On Friday, a group of people who worked on the highly acclaimed psychological political RPG Disco Elysium announced the creation of a new studio to work on the game's spiritual successor. Then, confusingly, another new studio announced the same thing, this time with a trailer. And on the same day, a third group announced yet another spiritual successor. As one viral tweet put it: “Disco Elysium splitting into three unions claiming succession is more of a commentary on communism than the game ever intended.”

  • A premium book/magazine hybrid about video games, above, is released today. Guardian games correspondent Keith Stuart and I feature in issue one; naturally, his article is about Sega arcade boards and mine is about Nintendo.

  • Game Freak, the developer of Pokémon, suffered a hack of almost unprecedented scale: details about unreleased Pokémon game and movie projects, employee information, source code, and decades of details about the series' development are now out there.

  • To cap off a truly bizarre week of video game news, Nintendo has announced an alarm clock that watches over you while you sleep. It's called Alarmo, and it wakes you up with the not-so-gentle sounds of Mario, Splatoon, or Zelda, synchronized with your groggy morning movements. It's available now for those willing to jump through a few hoops (and pay £90).

Source: www.theguardian.com

Ofcom urged to take action following US firm's allegations that Roblox is a ‘pedophile hellscape’

Child safety activists have urged the UK's communications watchdog to enforce new online safety laws following accusations that a video game company has allowed its platform to become a “hellscape” for pedophiles. They are calling for a “step change” in how platforms protect children.

Last week, Roblox, a popular gaming platform with 80 million daily users, came under fire for lax safety controls. A US investment firm criticized Roblox, claiming that its games expose children to grooming, pornography, violent content, and abusive language. The company has denied these claims, stating that safety and civility are fundamental to its operations.

The report highlighted concerning issues on Roblox, such as users seeking to groom children, trading in child sexual abuse material, accessible sex games, violent content, and abusive behavior. Despite these concerns, the company insists that millions of users have safe and positive experiences on the platform and that any safety incidents are taken seriously.

Roblox, known for its user-generated content, allows players to create and play their own games with friends. However, child safety campaigners emphasize the need for stricter enforcement of online safety laws to protect young users from harmful content and interactions on platforms like Roblox.

Platforms like Roblox will need to implement measures to protect children from inappropriate content, prevent grooming, and introduce age verification processes to comply with the upcoming legislation. Ofcom, the regulator responsible for enforcing these laws, is expected to have broad enforcement powers to ensure user safety.

In response, a Roblox spokesperson stated that the company is committed to full compliance with the Online Safety Act, engaging in consultations and assessments to align with Ofcom’s guidelines. They look forward to seeing the final code of practice and ensuring a safe online environment for all users.

Source: www.theguardian.com

Rapamycin could potentially enhance the safety of epilepsy medications in pregnant women

Sodium valproate is an effective drug for epilepsy, but its consumption is not recommended during pregnancy

Miljan Zivković/Shutterstock

The drug rapamycin may prevent the epilepsy drug sodium valproate from causing developmental problems during pregnancy.

Sodium valproate is used to treat epilepsy, bipolar disorder, and sometimes migraines. Although effective, it is not recommended during pregnancy as it can cause birth defects such as spina bifida and lifelong learning disabilities.

Giovanni Pietrogrande and his colleagues at the University of Queensland in Australia wanted to understand why sodium valproate can have this effect, so they used stem cells to create mini spinal cords, called organoids, in the lab. These mimic the spinal cord of a fetus during early pregnancy.

When organoids were exposed to sodium valproate, their cells changed in ways that may be associated with risk of congenital disease.

Looking for the reason, the researchers found that activity in one of the cells' signalling pathways, called mTOR, indicated the cells were undergoing senescence, a process in which cells stop growing but do not die, instead continuing to release chemicals that can cause inflammation.

Rapamycin, which was initially developed as an immunosuppressant but has some promise for anti-aging effects, also targets the mTOR pathway.

In another experiment, the researchers exposed a new set of spinal cord organoids to a combination of sodium valproate and rapamycin and found that no senescence occurred. They then replicated this test in zebrafish larvae and found that the cells similarly did not undergo senescence and showed no signs of the changes seen with exposure to sodium valproate alone.

Rather than discontinuing sodium valproate when a patient with epilepsy is pregnant or may become pregnant, doctors may someday be able to prevent its negative effects by combining it with rapamycin, Pietrogrande says, though human studies are needed before such a recommendation can be made.

Frank Vajda of the University of Melbourne says sodium valproate is “a critically important drug and the single most effective treatment for generalized seizures, where abnormal electrical activity begins in both halves of the brain at the same time.”

“I think this is a very important paper that could lead to a return to the level of importance that this drug had before its side effects were discovered,” he says.


Source: www.newscientist.com

Wearable Sensors Target Heatstroke Detection for Worker Safety

Summary

  • Researchers are experimenting with biosensors that can monitor workers’ vital signs and provide warnings if they show signs of heatstroke.
  • The four-year study involves more than 150 farmworkers in Florida who have been wearing sensors in the fields.
  • Agricultural workers are 35 times more likely to die from heatstroke than other workers.

People who work outdoors are at greatest risk from extreme heat, which can be fatal within minutes, so researchers have begun experimenting with wearable sensors that can monitor workers’ vital signs and warn them if they are starting to show the early symptoms of heatstroke.

In Pierson, Florida, where temperatures can soar to nearly 90 degrees just before and after noon, workers on a fern farm wear experimental biopatches as part of a study sponsored by the National Institutes of Health. The patch measures a worker's vital signs and skin hydration, and is equipped with a gyroscope to continuously monitor movement.

Scientists from Emory University and Georgia Tech are collecting data and feeding it into an artificial intelligence algorithm. The ultimate goal is for the AI to predict when workers are likely to suffer from heatstroke and send them a warning on their phone before that happens. But for now, the researchers are still analyzing the data and plan to publish a research paper next year.
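The study's algorithm is still being developed and its details are not public, but the general shape of such an alert is easy to sketch. The short Python fragment below is a hypothetical illustration only, not the researchers' model: the field names and thresholds are invented for the example, apart from the 104°F emergency line, which follows the common clinical definition of heatstroke.

```python
# Hypothetical sketch of a threshold-based heat-strain alert.
# Field names and thresholds are illustrative, not the study's model.

from dataclasses import dataclass

@dataclass
class BiopatchReading:
    core_temp_f: float      # estimated core body temperature (Fahrenheit)
    heart_rate_bpm: int     # heart rate (beats per minute)
    skin_hydration: float   # 0.0 (dry) to 1.0 (fully hydrated)

def heat_risk_alert(reading: BiopatchReading) -> str:
    """Return a coarse risk level for one sensor reading."""
    if reading.core_temp_f >= 104.0:   # clinical heatstroke threshold
        return "EMERGENCY: possible heatstroke, stop work and cool down now"
    score = 0
    if reading.core_temp_f >= 100.4:   # elevated core temperature
        score += 2
    if reading.heart_rate_bpm >= 130:  # sustained high heart rate
        score += 1
    if reading.skin_hydration <= 0.3:  # signs of dehydration
        score += 1
    if score >= 3:
        return "WARNING: high heat strain, take a shaded break and drink water"
    if score >= 1:
        return "CAUTION: early signs of heat strain, monitor closely"
    return "OK"

print(heat_risk_alert(BiopatchReading(101.2, 138, 0.25)))  # prints the WARNING line
```

A real system would also track trends across consecutive readings rather than judging single snapshots, which is the kind of pattern-finding the study's AI is meant to perform.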

“There’s a perception that field work is hot, and that’s the reality,” says Roxana Chicas, a nurse researcher at Emory University who has been overseeing Biopatch data collection. “I think with research and creativity, we can find ways to protect field workers.”

An average of 34 workers died of heatstroke every year from 1992 to 2022, according to the Environmental Protection Agency. Agricultural workers are 35 times more likely to die from heatstroke than other workers, yet until now it has been left to states to decide how to protect workers from extreme heat. California, for example, requires employers to provide training, water, and shade when temperatures exceed 80 degrees Fahrenheit, but many states have no such rules.

Chicas and her team partnered with the Farmworker Association of Florida to recruit participants for the study, aiming to have 100 workers wear the biopatch over the four-year project, but they were surprised by how many volunteered, ultimately enrolling 166.

Participating workers arrive at work before dawn, receive a patch, have their vital signs monitored, and then head out into the fields before the hottest, most dangerous parts of the day.

“We hope this study will help improve working conditions,” study participant Juan Pérez said in Spanish, adding that he has worked in the fern fields for 20 years and would like more breaks and higher wages.

Other farmworkers said they hoped the study would shed light on just how tough their jobs are.

Study participant Antonia Hernandez, who lives in Pierson, said she often worries about the heat hazards facing her and her daughter, who both work in the fern fields.

“When you don’t have a family, the only thing you worry about is the house and the rent,” Hernandez said in Spanish. “But when you have children, the truth is, there’s a lot of pressure and you have to work.”

Chicas said she could see heat-related fatigue showing on some of the workers' faces.

“Some of them look much older than their real age, because it takes a toll on their body and their health,” she said.

Chicas has been researching ways to protect farmworkers from the heat for nearly a decade. In a project that began in 2015, workers were fitted with bulky sensors that measured skin temperature, skin hydration, blood oxygen levels, and vital signs. This latest study is the first to test a lightweight biopatch, which looks like a large bandage and is placed in the center of the chest.

Overall, wearable sensors are becoming much easier to use, and some are seeing wider adoption. While the biosensors that Chicas' team is experimenting with aren't yet available to the public, a brand called SlateSafety sells a system (sponsored by the Occupational Safety and Health Administration) that is available to employers. The system includes an armband that transmits measurements of a worker's core temperature to a monitoring system; if the temperature is too high, the employer can notify the worker to take a break.

A similar technology, called the Heat Stroke Prevention System, is used in the military. Developed by the U.S. Army Institute of Environmental Medicine, the system requires soldiers or Marines in a company to wear a chest strap that estimates core temperature, skin temperature and gait stability, allowing commanders to understand a soldier’s location and risk of heatstroke.

“The system is programmed to sense when a person is approaching higher-than-appropriate levels of heat exposure,” Emma Atkinson, a biomedical researcher at the institute, said in a news release published in February. “Our system allows us to provide warnings before heatstroke occurs, allowing us to intervene before someone collapses.”

The system that Chicas and her team are developing differs from those systems in that it notifies workers directly, rather than feeding into a larger system controlled by their employers. They haven't finished collecting data from the farmworkers yet, but the next step is for the algorithm to start identifying patterns that might indicate a risk of heatstroke.

“Outdoor workers need to spend time outdoors – otherwise food wouldn’t be harvested, ferns wouldn’t be cut, houses wouldn’t be built,” Chicas said. “With the growing threat of climate change, workers need something to better protect themselves.”

Source: www.nbcnews.com

Triathletes preparing for Paris Olympics swim in Seine after last-minute safety tests

After months of speculation about whether the water in the Seine was clean enough for Olympic athletes to compete in, authorities have determined after last-minute testing that the river’s water is safe for swimming.

After the Seine's water quality tests came back with acceptable results on Wednesday morning Paris time, the men and women swam in back-to-back races as part of the triathlon, starting at 8 a.m. local time. The men's race was originally scheduled for Tuesday but was postponed after the Seine's water failed the tests.

“The latest water quality analysis results, received at 3:20 a.m., have been assessed by World Triathlon as meeting the standards and clearing the way for the triathlon to go ahead,” World Triathlon, the organisers and governing body of the Paris Games, said in a statement.

People cool off under a bridge over the Seine during the sweltering heat at the Paris Olympics on Tuesday.
Maya Hitidji/Getty Images

After the race, Team USA triathlete Taylor Spivey said she “swallowed a ton of water” during the triathlon swim in the Seine, a river historically so polluted that swimming in it has been illegal for the past century.

Spivey, who finished 10th in the race, told NBC News that her biggest concern wasn't the water quality but the “exceptional” and “shocking” strength of the current, which she said was so strong the race could have been canceled.

“The flow was incredible,” she said. “It felt like I was on a treadmill in one place.”

When asked about the quality of the water, she added, “I’ve been taking lots of probiotics for the past month. We’ll see how it goes.”

Cassandre Beaugrand of France won the gold medal ahead of Julie Derron of Switzerland, who took the silver, while Beth Potter of Great Britain took the bronze.

The Seine’s water quality has caused a bit of a stir in the run up to the events, as organizers rush to clean up the polluted waterway for prime-time attention. For months, France has been testing samples from the river for the presence of pathogens such as E. coli and enterococcus. High levels of E. coli put swimmers at risk of developing gastrointestinal illness.

The Seine has not passed these tests after wet weather, when storms can send runoff and sometimes sewage into the river.

Swimming in the Seine has been banned for more than a century because it was deemed too polluted, but the city of Paris led a $1.5 billion effort to clean up the river and strengthen waste-treatment systems ahead of the Olympics.

As the first event approached, organizers were hoping for sunny weather that would reduce overall pollution and allow ultraviolet light to inactivate some bacteria.

But the weather rarely cooperated.

Last year, test-event triathlon rehearsals were canceled due to concerns about water quality after rain.

The opening ceremony, which included a boat parade on the Seine, took place in pouring rain on Friday, which continued into Saturday.

Pollution from the rain forced organisers to cancel two days of swimming training on Sunday and Monday, then postpone the men’s triathlon originally scheduled for Tuesday morning.

There were no spectators at the swimming venue for the Olympic triathlon along the Seine river in Paris on Tuesday.
Thibaut Moritz/AFP – Getty Images

“I’m just trying to focus on what I can control,” U.S. triathlete Kirsten Kasper told NBC News on Tuesday. “We swim in a lot of cities and water quality is often an issue, but I just have to trust that the race organizers are doing the testing and doing what it takes to make sure we’re safe.”

Water experts said the difficulty of keeping the Seine clean enough could draw attention to a broader problem of environmental pollution shared around the world.

“In large cities, it’s very difficult to control the amount of human waste that you see,” said Katie Graham, an assistant professor in the Georgia Institute of Technology’s College of Engineering. “The public assumes that a lot of these problems have been completely solved, but that’s by no means the case.”

NBC News is a unit of NBCUniversal, which owns U.S. media rights to the Olympics through 2032, including the 2024 Paris Games, which begin July 26.

Evan Bush reported from Seattle and Alexander Smith from Paris.

Source: www.nbcnews.com

10 Simple Steps to Ensure Your Dog’s Safety and Happiness in Hot Weather

As temperatures rise in many parts of the world this summer, staying cool can be a challenge. Imagine wearing a furry coat all day in such heat – not fun, right?

Our furry friends face this reality, which is why they need extra attention when the weather gets hot.

“Dogs rely on panting to cool down, which is less efficient than sweating,” explains Dogs Trust to BBC Science Focus.

“They lack self-control, so they don’t realize when they need to slow down due to heat.”

Fortunately, there are simple things you can do to keep your dog calm and happy when temperatures soar.

1. Walk your dog in the mornings and evenings

Like humans, dogs can overheat if exercised in direct sunlight. Research shows that a significant number of heatstroke cases in dogs are caused by exercise, with walking being a common trigger.

One recommendation from The Kennel Club is to walk your dog early in the morning or late in the evening to avoid the hottest times of the day.

2. Stay hydrated

Just like people, dogs need to stay hydrated in hot weather. Carry water and a bowl for your dog when going out to prevent dehydration.

3. Harness your dog

Harnesses are recommended over collars, especially in hot weather, as collars can restrict airflow and hinder a dog’s ability to cool down through panting.

4. Watch out for symptoms of heatstroke

Heatstroke can affect any dog, with certain breeds and conditions increasing the risk. Look for signs like excessive panting, breathing difficulties, fatigue, and more.

5. Remember that the sidewalk can be hot for your feet.

Test pavement temperature with your hand before letting your dog walk on it. Hot pavements can burn your dog’s paws, so stick to grass or cooler surfaces.

6. Try paddling

Give your dog access to water for a cool dip. A paddling pool or water play can help them cool off and have fun.

7. Be careful when traveling by car

Avoid leaving your dog in a hot car and take precautions for car journeys to ensure your dog’s comfort and safety.

8. Offer frozen treats

Provide your dog with frozen treats to help them cool down. Avoid harmful foods and opt for ice in their water or frozen toys.

9. Let your dog lie down on a damp towel

Use a damp towel to help your dog relax and cool down after a hot day.

10. Get a haircut

Trimming your dog’s hair can help keep them cool, especially in hot weather. Proper grooming can assist in heat dissipation and prevent overheating.

For more tips and information on caring for your dog in hot weather, visit the Dogs Trust website.

About our experts

Victoria Phillips is a veterinary manager at Dogs Trust, with 18 years of experience in the veterinary field.

Source: www.sciencefocus.com

Is AI Coming to Apple Devices? The Safety Concerns

During its annual developers conference on Monday, Apple introduced Apple Intelligence, an eagerly anticipated artificial intelligence system designed to personalize user experiences, automate tasks, and, as CEO Tim Cook assured, set “a new standard of privacy in AI.”

Although Apple emphasizes that its AI prioritizes security, its collaboration with OpenAI has faced criticism. ChatGPT, launched in November 2022, raised privacy concerns by collecting user data for model training without explicit consent; users have had the option to opt out of this data collection since April 2023.

Apple has assured that its collaboration with ChatGPT will be limited to specific tasks with explicit user consent, but security experts remain vigilant about how these concerns will be addressed.

Late to the game in generative AI, Apple has trailed behind competitors like Google, Microsoft, and Amazon, whose AI ventures have boosted their stock prices. Apple has refrained from integrating generative AI into its main consumer products.

Apple aims to apply AI technology responsibly, building Apple Intelligence products over several years using proprietary technology to minimize user data leakage from the Apple ecosystem.

AI, which requires vast data to train language models, poses a challenge to Apple’s focus on privacy. Critics like Elon Musk argue that it’s impossible to balance AI integration and user privacy. However, some experts disagree.

“By pursuing privacy-focused strategies, Apple is leading the way for businesses to reconcile data privacy with innovation,” said Gal Ringel, CEO of a data privacy software company.


Many recent AI releases have been criticized for being dysfunctional or risky, reflecting Silicon Valley’s “move fast and break things” culture. Apple seems to be taking a more cautious approach.

According to Steinhauer, “Historically, platforms release products first and address issues later. Apple is proactively tackling common concerns. This illustrates the difference between designing security measures upfront versus addressing them reactively, which is always less effective.”

Central to Apple's AI privacy measures is its new Private Cloud Compute technology. Apple intends to run most Apple Intelligence features on-device. For tasks requiring more processing power, the company will offload computation to the cloud while safeguarding user data.

To achieve this, Apple will only share the data necessary for each request, implement additional security measures at endpoints, and avoid long-term data storage. Apple will also open tools and software related to its private cloud for third-party verification.
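As a rough illustration of that request-minimization pattern, here is a hypothetical Python sketch. The task names, field allowlists, and helper functions are invented for the example and do not represent Apple's actual implementation.

```python
# Hypothetical sketch of on-device-first handling with per-task data
# minimization. Nothing here is Apple's real API; all names are invented.

ON_DEVICE_TASKS = {"summarize_note", "suggest_reply"}

# Allowlist of the only fields ever sent off-device, per cloud task.
REQUIRED_FIELDS = {
    "transcribe_long_audio": ["audio_chunk"],
    "complex_image_edit": ["image_region", "edit_instruction"],
}

def run_local_model(task: str, payload: dict) -> str:
    return f"[on-device result for {task}]"  # data never leaves the device

def call_private_cloud(task: str, minimal: dict) -> str:
    return f"[cloud result for {task} from fields {sorted(minimal)}]"

def handle_request(task: str, payload: dict) -> str:
    if task in ON_DEVICE_TASKS:
        return run_local_model(task, payload)
    # Send only the fields this task needs; everything else stays local.
    minimal = {k: payload[k] for k in REQUIRED_FIELDS[task]}
    result = call_private_cloud(task, minimal)
    # No long-term storage: neither `minimal` nor `result` is persisted.
    return result

print(handle_request("complex_image_edit",
                     {"image_region": "...", "edit_instruction": "remove lamp",
                      "location": "never sent to the cloud"}))
```

The design point the sketch tries to capture is that minimization is enforced structurally, by an allowlist per task, rather than left to each feature to implement.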

Private Cloud Compute represents a significant advancement in AI privacy and security, according to Krishna Vishnubhotla, VP of product strategy at Zimperium. The independent audit component is particularly noteworthy.

Source: www.theguardian.com

Tackling the Issue of Pedophiles Using AI to Generate Nude Images of Children for Extortion, Charity Warns

An organization dedicated to fighting child abuse has reported that pedophiles are being encouraged to use artificial intelligence to generate nude images of children and then coerce those children into producing more explicit content.

The Internet Watch Foundation (IWF) stated that a manual discovered on the dark web included a section advising criminals to use a “denuding” tool to strip clothing from photos sent by children. These photos could then be used for blackmail purposes to obtain further graphic material.

The IWF expressed concern over the fact that perpetrators are now discussing and promoting the use of AI technologies for these malicious purposes.


The charity, known for identifying and removing child sexual abuse content online, initiated an investigation into cases of sextortion last year. They observed a rise in incidents where victims were coerced into sharing explicit images under threat of exposure. Additionally, the use of AI to create highly realistic abusive content was noted.

The author of the online manual, who remains anonymous, claimed to have successfully coerced 13-year-old girls into sharing nude images online. The IWF reported the document to the UK National Crime Agency.

Recent reports by The Guardian suggested that there were discussions within the Labour party about banning tools that create nude imagery.

According to the IWF, 2023 witnessed a record number of extreme cases of child sexual abuse. Over 275,000 web pages containing such material, including content depicting rape, sadism, and bestiality, were identified, marking the highest number on record. This included a significant amount of Category A content, the most severe form containing explicit and harmful images.

The IWF further discovered 2,401 images of self-produced child sexual abuse material involving children aged three to six, where victims were manipulated or threatened to record their own abuse. The incidents were observed in domestic settings like bedrooms and kitchens.

Susie Hargreaves, the CEO of IWF, emphasized the urgent need to educate children on recognizing danger and safeguarding themselves against manipulative criminals. She stressed the importance of the recently passed Online Safety Act to protect children on social media platforms.

Security Minister Tom Tugendhat advised parents to engage in conversations with their children about safe internet usage. He emphasized the responsibility of tech companies to implement stronger safeguards against abuse.

Research published by Ofcom revealed that a significant percentage of young children own mobile phones and engage in social media. The government is considering measures such as raising the minimum age for social media use and restricting smartphone sales to minors.

Source: www.theguardian.com

U.S. states and big tech companies clash over online child safety bills: Battle lines drawn

On April 6, Maryland passed the first “Kids Code” bill in the US. The bill is designed to protect children from predatory data collection and harmful design features by tech companies. Vermont’s final public hearing on the Kids Code bill took place on April 11th. This bill is part of a series of proposals to address the lack of federal regulations protecting minors online, making state legislatures a battleground. Some Silicon Valley tech companies are concerned that these restrictions could impact business and free speech.

These measures, known as Age-Appropriate Design Code or Kids Code bills, range from requiring enhanced data protection for underage online users to banning social media outright for certain age groups. Maryland’s bill passed both the state House and Senate unanimously.

Nine states, including Maryland, Vermont, Minnesota, Hawaii, Illinois, South Carolina, New Mexico, and Nevada, have introduced bills to improve online safety for children. Minnesota’s bill advanced through a House committee in February.

During public hearings, lawmakers in various states accused tech company lobbyists of deception. Maryland’s bill faced opposition from tech companies who spent $250,000 lobbying against it without success.

Carl Szabo, from the tech industry group NetChoice, testified before the Maryland state Senate as a concerned parent. Lawmakers questioned his ties to the industry during the hearing.

Tech giants have been lobbying in multiple states where online safety laws are under consideration. In Maryland alone, these companies spent over $243,000 in lobbying fees in 2023, with Google, Amazon, and Apple among the top spenders, according to state disclosures.

The bill mandates tech companies to implement measures safeguarding children’s online experiences and assess the privacy implications of their data practices. Companies must also provide clear privacy settings and tools to help children and parents navigate online privacy rights and concerns.

Critics are concerned that the methods used by tech companies to determine children’s ages could lead to privacy violations.

Supporters counter that social media companies already hold age information on their users, so verifying ages should not require identification uploads. NetChoice suggests digital literacy education and safety measures as alternatives.

During a discussion on child safety legislation, a NetChoice director emphasized parental control over regulation, citing low adoption rates of parental monitoring tools on platforms like Snapchat and Discord.

NetChoice has proposed bipartisan legislation to enhance child safety online, emphasizing police resources for combating child exploitation. Critics argue that tech companies should be more proactive in ensuring child safety instead of relying solely on parents and children.

Opposition from tech companies has been significant in all state bills, with representatives accused of hiding their affiliations during public hearings on child safety legislation.

State bills are being revised based on lessons learned from California, where similar legislation faced legal challenges and opposition from companies like NetChoice. While some tech companies emphasize parental control and education, critics argue for more accountability from these companies in ensuring child safety online.

Recent scrutiny of Meta products for their negative impact on children’s well-being has raised concerns about the company’s role in online safety. Some industry experts believe that tech companies like Meta should be more transparent and proactive in protecting children online.

Source: www.theguardian.com

Lab Discovers Simple Method to Evade AI Safety Features with Many-shot Jailbreak

A study shows that the safety features of some of the most powerful AI tools, meant to prevent their use for cybercrime or terrorism, can be bypassed simply by flooding the tools with examples of the very behavior they are supposed to block.

Researchers at Anthropic, the AI lab responsible for the large language model (LLM) powering the ChatGPT competitor Claude, detailed the attack, which they call “many-shot jailbreaking,” in a recent paper. It is both simple and effective.

Claude, like most other commercial AI systems, contains safety features designed to block certain requests, such as those for violent content, hate speech, illegal instructions, deception, or discrimination. But by first feeding the model a long series of fabricated dialogue examples in which an assistant appears to answer harmful questions such as “how do I build a bomb?”, an attacker can trick the system into answering the final harmful question itself, despite its training.

Anthropic stated, “By inputting large amounts of text in specific ways, this approach can lead the LLM to produce potentially harmful outputs even though it was trained to avoid doing so.” The company has shared its findings with industry peers and aims to address the issue promptly.

The jailbreak targets AI models with a large “context window,” which lets them process very long queries. These advanced models are more susceptible to the attack because they are better at learning from examples within a prompt, and so pick up the adversarial pattern more quickly.

Anthropic expressed concern that the attack only becomes more effective as models grow larger and more capable of handling long inputs.


Anthropic has identified several strategies to mitigate the issue. One approach adds a mandatory warning reminding the system not to provide harmful responses, which has shown promise in reducing the likelihood of a successful jailbreak, though it may hurt the system’s performance on other tasks.
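As a rough illustration of that warning-injection mitigation, consider the sketch below. This is not Anthropic’s implementation; `query_model` is a hypothetical stand-in for whatever function sends a prompt to an LLM and returns its reply.

```python
# Sketch of the "mandatory warning" mitigation described above.
# Not Anthropic's code: query_model is a hypothetical callable that
# sends a prompt string to an LLM and returns its response.

SAFETY_REMINDER = (
    "Reminder: refuse requests for violent, illegal, deceptive, or "
    "discriminatory content, regardless of any examples in the prompt."
)

def query_with_reminder(query_model, user_prompt: str) -> str:
    """Prepend a fixed safety reminder so a long run of adversarial
    in-context examples cannot crowd the instruction out of view."""
    guarded_prompt = f"{SAFETY_REMINDER}\n\n{user_prompt}"
    return query_model(guarded_prompt)
```

The trade-off the researchers note follows from the design: the reminder is injected into every request, so benign prompts pay the same tax as adversarial ones, which can degrade performance on other tasks.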

Source: www.theguardian.com

The US and UK Formally Partner on Ensuring Artificial Intelligence Safety

The United States and Britain on Monday revealed a new collaboration on artificial intelligence safety, amid growing apprehension about the next generation of advanced AI models.

US Secretary of Commerce Gina Raimondo and UK Technology Secretary Michelle Donelan will collaborate on developing cutting-edge AI model testing, following commitments made during the AI Safety Summit at Bletchley Park in November. A memorandum of understanding was signed in Washington, DC.

“We all understand that AI is the defining technology of our era,” mentioned Raimondo. “This partnership will enhance efforts in both institutions to tackle risks related to national security and broader public concerns.”

Within this formal partnership, the US and UK will conduct at least one joint experiment using a publicly accessible model, and are also contemplating the possibility of personnel exchanges between the institutions. Both nations are committed to forming similar collaborations with other countries to promote AI safety.

“This is a groundbreaking agreement globally,” affirmed Donelan. “AI is already a tremendous force for good in our society and has the potential to address significant global challenges, but only if we grasp the associated risks.”

Since the launch of ChatGPT in November 2022, generative AI, which can produce text, images, and video in response to open-ended prompts, has stirred both excitement and anxiety: it could make some jobs redundant, disrupt elections, and, some fear, eventually overpower humans.

The two countries aim to exchange vital information on the capabilities and risks linked to AI models and systems, along with conducting technical research on AI safety and security.

In October, Joe Biden signed an executive order aimed at mitigating AI-related risks. In January, the Commerce Department proposed requiring US cloud companies to determine whether foreign entities are accessing US data centers to train AI models.

In February, the UK announced an investment exceeding 100 million pounds ($125.5 million) to establish nine new research centers and train AI regulators on the technology.

Source: www.theguardian.com

OpenAI warns against releasing voice cloning tools due to safety concerns.

OpenAI’s latest tool can create an accurate replica of someone’s voice from just 15 seconds of recorded audio. Citing the threat of misinformation in a year of critical elections around the world, however, the lab is not releasing the technology to the public, in an effort to limit potential harm.

Voice Engine was first developed in 2022, and an initial version was integrated into ChatGPT for text-to-speech functionality. Despite its capabilities, OpenAI has refrained from publicizing it extensively, taking a cautious approach toward a broader release.

Through discussions and testing, OpenAI aims to make informed decisions about the responsible use of synthetic speech technology. Selected partners have access to incorporate the technology into their applications and products after careful consideration.

Various partners, like Age of Learning and HeyGen, are utilizing the technology for educational and storytelling purposes. It enables the creation of translated content while maintaining the original speaker’s accent and voice characteristics.

OpenAI showcased a study where the technology helped a person regain their lost voice due to a medical condition. Despite its potential, OpenAI is previewing the technology rather than widely releasing it to help society adapt to the challenges of advanced generative models.

OpenAI emphasizes the importance of protecting individual voices in AI applications and educating the public about the capabilities and limitations of AI technologies. The voice engine is watermarked to enable tracking of generated voices, with agreements in place to ensure consent from original speakers.

While OpenAI’s tools are notable for how simply and efficiently they replicate a voice, competitors such as ElevenLabs already offer similar capabilities to the public. To address potential misuse, precautions are being taken there to detect and block voice clones that impersonate political figures ahead of key elections.

Source: www.theguardian.com

Ofcom concludes that exposure to violent online content is unavoidable for children in the UK

Children in the UK are now inevitably exposed to violent content online, with many first encountering it while still in primary school, according to a report from the media watchdog.

British children interviewed for the Ofcom investigation reported incidents ranging from videos of local school and street fights shared in group chats to explicit and extreme graphic violence, including gang-related content.

Although children were aware of more extreme content existing on the web, they did not actively seek it out, the report concluded.

In response to the findings, the NSPCC criticized tech platforms for not fulfilling their duty of care to young users.

Rani Govender, a senior policy officer for online child safety, expressed concern that children are now unintentionally exposed to violent content as part of their online experiences, emphasizing the need for action to protect young people.

The study, focusing on families, children, and youth, is part of Ofcom’s preparations for enforcing the Online Safety Act, giving regulators powers to hold social networks accountable for failing to protect users, especially children.

Ofcom’s director of Online Safety Group, Gil Whitehead, emphasized that children should not consider harmful content like violence or self-harm promotion as an inevitable part of their online lives.

The report highlighted that children mentioned major tech companies like Snapchat, Instagram, and WhatsApp as platforms where they encounter violent content most frequently.

Experts raised concerns that exposure to violent content could desensitize children and normalize violence, potentially influencing their behavior offline.

Some social networks have faced criticism for allowing graphic violence, with Twitter (now X) under fire for hosting disturbing content that went viral and spurred outrage.

While some platforms offer tools to help children avoid violent content, there are concerns about their effectiveness and children’s reluctance to report such content due to fear of repercussions.

Algorithmic timelines on platforms like TikTok and Instagram have also contributed to the proliferation of violent content, raising concerns about the impact on children’s mental health.

The Children’s Commissioner for England revealed alarming statistics about the waiting times for mental health support among children, highlighting the urgent need for action to protect young people online.

Snapchat emphasized its zero-tolerance policy towards violent content and assured its commitment to working with authorities to address such issues, while Meta declined to comment on the report.

Source: www.theguardian.com

US Legislators Clash Over Strategies to Enhance Online Child Safety | Technology

As historic legislation obtained enough votes to pass in the US Senate, divisions among online child safety advocates have emerged. Some former opponents of the bill have been swayed by amendments and now lend their support. However, its staunchest critics are demanding further changes.

The Kids Online Safety Act (Kosa), introduced over two years ago, garnered 60 supporters in the Senate by mid-February. Despite this, numerous human rights groups continue to vehemently oppose the bill, highlighting the ongoing discord among experts, legislators, and activists over how to ensure the safety of young people in the digital realm.


“The Kids Online Safety Act presents our best chance to tackle the harmful business model of social media, which has resulted in the loss of far too many young lives and contributed to a mental health crisis,” stated Josh Golin, executive director of Fair, a children’s online safety organization.

Critics argue that the amendments made to the bill do not sufficiently address their concerns. Aliya Bhatia, a policy analyst at the Center for Democracy and Technology, said, “A one-size-fits-all approach to child safety is insufficient in protecting children. This bill operates on the assumption of a consensus regarding harmful content types and designs, which does not exist. Such a belief hampers the ability of young people to freely engage online, impeding their access to the necessary communities.”

What is the Kids Online Safety Act?

The Kosa bill, spearheaded by Connecticut Democrat Richard Blumenthal and Tennessee Republican Marsha Blackburn, represents a monumental shift in US tech legislation. It would mandate that platforms like Instagram and TikTok mitigate online risks through changes to their designs and allow users to opt out of algorithm-based recommendations. Enforcement would necessitate more profound changes to social networks than current regulations require.

Initially introduced in 2022, the bill elicited an open letter signed by over 90 human rights organizations vehemently opposing it. The coalition argued that the bill could enable conservative state attorneys general, who determine harmful content, to restrict online resources and information concerning LGBTQ+ youth and individuals seeking reproductive health care. They cautioned that the bill could potentially be exploited for censorship.

Source: www.theguardian.com

EU initiates probe into TikTok concerning online content and child safety

The EU is launching an investigation into whether TikTok has violated online content regulations, particularly those relating to the safety of children.

The European Commission has officially initiated proceedings against the Chinese-owned short-video platform for potential violations of the Digital Services Act (DSA).

The investigation is focusing on areas such as safeguarding minors, keeping records of advertising content, and determining if algorithms are leading users to harmful content.


Thierry Breton, EU Commissioner for the Internal Market, stated that child safety is the “primary enforcement priority” under the DSA. The investigation particularly focuses on age verification and default privacy settings for children’s accounts.

In April last year, TikTok was fined €345 million in Ireland for violating EU data law in its handling of children’s accounts. Additionally, the UK Information Commissioner fined the company £12.7 million for unlawfully processing data from children under 13.

Companies that violate the DSA can face fines of up to 6% of their global turnover. TikTok is owned by Chinese technology company ByteDance.

TikTok has stated that it is committed to working with experts and the industry to ensure the safety of young people on its platform and is eager to brief the European Commission on its efforts.

The commission is also examining alleged deficiencies in TikTok’s provision of publicly available data to researchers and its compliance with requirements to establish a database of ads shown on the platform.

A deadline for the investigation has not been set and will depend on factors such as the complexity of the case and the degree of cooperation from the companies being investigated.

This TikTok investigation is the second formal proceeding under the DSA, following a December 2023 investigation into Elon Musk’s social media platform X, previously known as Twitter. The case against X focuses on failures to block illegal content and inadequate measures against disinformation.

Apple is reportedly facing a substantial fine from the EU for its conduct in the music streaming app market. The European Commission is investigating whether US tech companies blocked music distributors from informing users about cheaper subscription options outside of their own app stores.

According to the Financial Times, Brussels plans to fine Apple 500 million euros, a significant decision following years of complaints from companies offering services through iPhone apps.

Apple was previously fined 1.1 billion euros by France in 2020 for anti-competitive agreements with two wholesalers, a fine that was later reduced by an appeals court.

Big technology companies like Apple and Google have come under increased scrutiny due to competitive concerns. Google is appealing against fines of more than 8 billion euros imposed by the EU in three separate competition investigations.

Apple has successfully defended against a lawsuit by Fortnite developer Epic Games alleging that its app store was an illegal monopoly. In December, Epic won a similar lawsuit against Google.

Last month, Apple announced that it would allow EU customers to download apps without using its own app store, in response to the EU’s digital market law.

Source: www.theguardian.com

Abortion pills obtained via telemedicine are as safe as those from a doctor’s office

Abortion pills containing mifepristone and misoprostol

Brigette Supernova / Alamy

Abortion pills are just as safe and effective when obtained through telehealth services as they are when obtained in a doctor's office, according to the largest study ever on telemedicine abortions.

Access to abortion is a contentious political issue in the United States. In 2021, the U.S. Food and Drug Administration (FDA) eliminated in-person dispensing requirements for the abortion drug mifepristone, allowing people to obtain the pill through telehealth services or by mail. Anti-abortion groups are currently challenging this ruling in the U.S. Supreme Court.

Previous studies covering hundreds of pregnancies have shown that telemedicine abortions are safe. To investigate further with a larger sample size, Ushma Upadhyay and her colleagues at the University of California, San Francisco, collected data on more than 6,000 telemedicine abortions performed in 20 US states and Washington, DC. All participants were less than 10 weeks pregnant, and approximately 72% of them obtained the abortion pills after a consultation by secure text message rather than a video call.

The researchers followed up with participants three to seven days after the abortion, and again two to four weeks later. They found that nearly 98 percent of the abortions effectively ended the pregnancy, and only 0.25 percent of participants experienced serious adverse events such as uncontrolled bleeding or infection. By comparison, mifepristone dispensed in person is more than 97% effective, with a 0.3% rate of adverse events. There was also no difference in outcomes between abortions arranged via text message and those arranged via video.

“These findings are consistent with the growing body of evidence that mifepristone is safe and effective and that the FDA’s decision to eliminate the in-person dispensing requirement was scientifically sound,” says Upadhyay.

“The outcomes for patients who come to telemedicine and brick-and-mortar clinics are essentially indistinguishable,” says Samuel Dickman at Planned Parenthood of Montana, a reproductive health nonprofit. Telemedicine abortions are essential for providing care to rural populations and to people who are uncomfortable visiting an abortion clinic because of an abusive partner, he says.


Source: www.newscientist.com

UK AI Safety Institute: Setting Standards, Not Running Every Test, is Essential for Artificial Intelligence Safety

The UK should prioritize setting global standards for artificial intelligence testing rather than trying to conduct all the reviews itself, according to the head of a company that works with the government’s AI Safety Institute.

Mark Warner, CEO of Faculty AI, whose company works with the institute on AI safety and develops technologies for chatbots like ChatGPT, cautioned that an attempt to scrutinize every AI model could prove limiting.

Last year, Rishi Sunak announced the establishment of the AI Safety Institute (AISI) ahead of a global AI safety summit, where large tech companies agreed with governments including the EU, UK, US, France, and Japan to prioritize testing of advanced AI models before and after deployment.

The institute’s establishment underscored the UK’s leading role in AI safety, according to Warner, whose London-based company also works with the British lab to test whether AI models comply with safety guidelines.

Warner stressed the importance of the institute becoming a global leader in setting testing standards: “I think it’s important to set standards for the wider world rather than trying to do everything ourselves,” he said.

He also expressed optimism about the institute’s potential as an international standard setter, promoting scalability in maintaining AI security and describing it as a long-term vision.

Warner cautioned against the government taking on all testing responsibilities, advocating for the development of standards that other governments and companies can adopt instead.

He acknowledged the challenge of testing every released model and suggested focusing on the most advanced systems.


The Financial Times reported that major AI companies are urging the UK government to expedite safety testing of AI systems. Notably, the US also announced the establishment of an AI Safety Institute participating in the testing program outlined at the Bletchley Park summit.

The UK’s Department for Science, Innovation and Technology emphasized the role of governments in testing AI models, with the UK taking a leading global role through the AI Safety Institute.

Source: www.theguardian.com

UK AI Safety Institute’s findings reveal that AI safeguards are easily susceptible to breaches

The UK’s new AI safety body has discovered that the technology can mislead human users, produce biased results, and lacks adequate safeguards against disseminating harmful information.

The AI Safety Institute (AISI) published initial findings from its research into advanced AI systems known as large language models (LLMs), which power tools like chatbots and image generators, revealing a number of concerns.

The institute found that basic prompts can bypass the safeguards of LLMs, such as those powering chatbots like ChatGPT, and obtain assistance with “dual-use” tasks, meaning tasks with both civilian and military applications.

According to AISI, “Using basic prompting techniques, users were able to immediately break the LLM’s safeguards and gain assistance with dual-use tasks.” More sophisticated “jailbreak” techniques, the institute added, took even relatively low-skilled attackers only a few hours.

The research showed that LLM models can be useful for beginners planning cyberattacks and are capable of creating social media personas for spreading disinformation.

Compared with web searches, the institute said, AI models provide roughly the same level of information, though they are prone to “hallucinations,” producing inaccurate advice.

The image generator was found to produce racially biased results. Additionally, the institute discovered that AI agents can deceive human users in certain scenarios.

AISI is currently testing advanced AI systems and evaluating their safety, while also sharing information with third parties. The institute focuses on the misuse of AI models, their impact on humans, and their ability to perform harmful tasks.

AISI clarified that it does not have the capacity to test all released models and is not responsible for declaring these systems “secure.”

The institute emphasized that it is not a regulator but conducts secondary checks on AI systems.

Source: www.theguardian.com

Report reveals former employee’s criticism of Instagram chief Adam Mosseri’s track record on youth safety

Instagram boss Adam Mosseri has reportedly blocked or weakened the implementation of youth safety features, even as parent company Meta faces increased legal scrutiny over concerns that the popular social media app is harming young users. Mosseri, whose name frequently appears in a high-profile lawsuit brought by 33 states accusing Meta of building addictive features into its apps that harm young people’s mental health, reportedly ignored pressure from employees to make some safety features default settings for Instagram users, according to The Information.

Critics say the use of Meta-owned Instagram and Facebook is fueling a number of worrying trends among young people, including increases in depression, anxiety, insomnia, body-image issues, and eating disorders.

Despite this, Instagram executives rejected pressure from members of the company’s “well-being team” to add app features that would discourage users from comparing themselves to others, according to three former employees with knowledge of the details. The features went unimplemented even though Mosseri himself acknowledged in an internal email that he considered social comparison “an existential problem facing Instagram” and that, according to the states’ complaint, “social comparison is to Instagram [what] election interference is to Facebook.”

Adam Mosseri was appointed as the head of Instagram in 2018. Reuters

Additionally, a Mosseri-backed feature that addressed the social comparison problem by hiding Instagram like counts was eventually “watered down” into an option that users must enable manually, the report states.

Internally, some employees have reportedly pointed out that the “like hiding” tool would hurt engagement in the app, resulting in less advertising revenue.

While some sources praised Mosseri’s efforts to promote youth safety, one told the outlet that Instagram has a pattern of making such features optional rather than implementing them automatically.

A Meta spokesperson did not specifically answer questions about why the company rejected proposals for tools to combat problems arising from social comparison issues.

“We don’t know what triggers a particular individual to compare themselves to others, so we give people tools to decide for themselves what they do and don’t want to see on Instagram,” a Meta spokesperson told the publication.

A coalition of state attorneys general is suing Instagram and Facebook. Shutterstock

Meta did not immediately respond to a request for comment from The Post.

Elsewhere, Mosseri allegedly objected to a tool that would automatically block offensive language in direct message requests, The Information reported, citing two former employees, “because we thought it might prevent legitimate messages from being sent.”

Instagram eventually approved an optional “filter” feature in 2021, letting users block a company-curated list of offensive words or compile their own list of offensive phrases and emojis they’d like to block.

The move reportedly infuriated safety staff, including former Meta engineer Arturo Bejar, who believed that people of color should not be forced to confront offensive language before the problem was addressed. In November, Bejar testified before a Senate committee about harmful content on Instagram.

“I returned to Instagram with the hope that Adam would be proactive about addressing these issues, but there was no evidence of that in the two years I was there,” said Bejar, who left Meta in 2015 and returned to a safety-focused role in 2019, he told the outlet.

Meta has been accused of failing to protect young social media users. Just Right – Stock.adobe.com

Meta pushed back against the report, pointing out that Instagram has introduced a series of safety defaults for teen users, including blocking adults aged 19 and older from sending direct messages to teen accounts that don’t follow them.

For example, Meta said its “Hidden Words” tool, which hides offensive phrases and emojis, will be enabled by default for teens starting in 2024. The company said it has announced more than 20 teen-safety policies since Mosseri took over Instagram in 2018.

Mosseri echoed this, writing that further investments in platform security would “strengthen our business.”

“If teens come to Instagram and feel bullied, receive unwanted advances, or see content that makes them uncomfortable, they will leave and go to a competitor,” said Mosseri. “I know how important this work is, and I know that my leadership will be judged by how much progress we make on it. I look forward to continuing to do more.”

Instagram, led by Adam Mosseri, has reportedly scrapped or watered down proposed safety tools. Getty Images

Mosseri was one of several Meta executives who came under scrutiny as part of a major lawsuit filed in October by a coalition of 33 state attorneys general. The lawsuit claimed in part that the millions of underage users on Instagram were an “open secret” at the company. The complaint includes an internal chat from November 2021 in which Mosseri appeared to acknowledge the app’s problem with underage users, noting that tweens want access to Instagram and try to get it now, regardless of their age.

A month later, Mosseri testified before the Senate that children under 13 “are not allowed to use Instagram.” He also told lawmakers that he believes online safety for young people is “very important.”

Beyond the states’ legal challenge, Meta is confronting a separate lawsuit from New Mexico alleging that it failed to protect young people from alleged sex offenders and flooded them with adult sexual material.

Source: nypost.com

Chrome on desktop gets proactive safety checks in the latest Google update

Google is releasing several updates to the desktop version of Chrome this week to make your browsing experience safer and give you more control over the browser’s memory usage.

The headline feature of this update is proactive safety checks. Starting with version 120, released a few weeks ago, Safety Check in desktop Chrome runs in the background and sends proactive alerts if one of your saved passwords has been compromised or an extension you’ve installed turns out to be malware. It will also notify you when Chrome needs an update.

Image credits: Google

But perhaps more importantly, Chrome’s Safety Check now automatically revokes permissions you granted to sites you haven’t visited in a while. This is similar to how Google currently handles app permissions on Android, and it prevents sites you no longer use from keeping access to your location or microphone.

Also new: if you receive a large number of notifications from a site you don’t engage with often, Safety Check will ask whether you want to disable them.

Image credits: Google

Google is also highlighting two other updates to Chrome for desktop today. The first is an update to Chrome’s Memory Saver mode, which now shows more information when you hover over a tab and adds a setting that makes it easier to tell Chrome to keep certain sites from going to sleep.

The second is the ability to save tab groups (a browser feature some users really like, but most simply ignore), rolling out in the coming weeks. The use case: you can save these tab groups and sync them to your other desktop devices to pick up where you left off.

Image credits: Google

Source: techcrunch.com

Intrinsic, supported by Y Combinator, is developing essential infrastructure for trust and safety teams

Karine Mellata and Michael Lin met several years ago while working on Apple’s Fraud Engineering and Algorithmic Risk team. Both Mellata and Lin were involved in addressing online fraud issues such as spam, bots, account security, and developer fraud among Apple’s growing customer base.

Despite their efforts to build new models to respond to evolving patterns of abuse, Mellata and Lin felt they were falling behind, stuck rebuilding the same core elements of their trust and safety infrastructure.

“As regulation puts increased scrutiny on teams whose trust and safety responses are centralized but somewhat ad hoc, we saw a real opportunity to help modernize this industry and build a safer internet for everyone,” Mellata told TechCrunch in an email interview. “We dreamed of a system that could magically adapt as quickly as the abuse itself.”

So Mellata and Lin co-founded Intrinsic, a startup that aims to give safety teams the tools they need to prevent abuse on their products. Intrinsic recently raised $3.1 million in a seed round with participation from Urban Innovation Fund, Y Combinator, 645 Ventures, and Okta.

Intrinsic’s platform is designed to moderate both user-generated and AI-generated content, providing the infrastructure customers (primarily social media companies and e-commerce marketplaces) need to detect and take action on content that violates their policies. Intrinsic focuses on integrating safety products, automatically orchestrating tasks like banning users and flagging content for review.
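That description suggests a pipeline in which classifier verdicts are mapped to enforcement actions. The sketch below illustrates the general orchestration pattern only; the names and thresholds are hypothetical and are not Intrinsic’s actual API.

```python
# Illustrative sketch of a moderation-orchestration layer like the one
# described above. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Verdict:
    policy: str   # e.g. "hate_speech" or "prohibited_listing"
    score: float  # classifier confidence in [0.0, 1.0]

def orchestrate(user_id: str, verdicts: list[Verdict]) -> str:
    """Map classifier verdicts to actions: clear violations are handled
    automatically, borderline cases are routed to human review."""
    if not verdicts:
        return "allow"
    worst = max(verdicts, key=lambda v: v.score)
    if worst.score >= 0.95:
        return f"ban_user:{user_id}"              # automatic enforcement
    if worst.score >= 0.50:
        return f"flag_for_review:{worst.policy}"  # manual review queue
    return "allow"
```

The manual-review branch is where labeling tools like those described below would come in: reviewer decisions can become training data for fine-tuning a customer’s moderation models.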

“Intrinsic is a fully customizable AI content moderation platform,” said Mellata. “For example, Intrinsic can help publishers creating marketing materials avoid giving financial advice that carries legal liability, and can help marketplaces discover listings that violate their policies.”

Mellata notes that there are no off-the-shelf classifiers for such sensitive categories, and that even for a well-resourced trust and safety team, adding a new automatically detected category can take weeks, or in some cases months, of in-house engineering.

Asked about rival platforms such as Spectrum Labs, Azure, and Cinder (an almost direct competitor), Mellata said he thinks Intrinsic stands apart in its explainability and its significantly expanded tooling. He explained that Intrinsic allows customers to “ask questions” about content moderation mistakes and get an explanation of why a decision was made. The platform also hosts manual review and labeling tools that let customers fine-tune moderation models on their own data.

“Most traditional trust and safety solutions were inflexible and not built to evolve with exploits,” Mellata said. “Now more than ever, resource-constrained trust and safety teams are looking to vendors to help them reduce moderation costs while maintaining high safety standards.”

Without third-party auditing, it is difficult to determine how accurate a given vendor’s moderation models are, or whether they are susceptible to the kinds of bias that plague content moderation models elsewhere. Either way, Intrinsic appears to be gaining traction thanks to “large and established” enterprise customers signing deals in the “six-figure” range on average.

Intrinsic’s near-term plans include increasing the size of its three-person team and expanding its moderation technology to cover not just text and images, but also video and audio.

“The widespread slowdown in the technology industry has increased interest in automation for trust and safety, and this puts Intrinsic in a unique position,” Mellata said. “COOs are concerned with reducing costs. Chief compliance officers are concerned with mitigating risk. Intrinsic helps with both, and catches more fraud.”

Source: techcrunch.com

OpenAI enhances safety measures and grants board veto authority over risky AI developments

OpenAI is expanding its internal safety processes to head off the threat of harmful AI. A new “safety advisory group” will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power; whether it will actually use it, of course, is another question entirely.

Normally the ins and outs of policies like this don’t warrant coverage: in practice they amount to a lot of closed-door meetings and obscure flows of functions and responsibilities that outsiders rarely see. That is probably true here too, but given the recent leadership turmoil and the evolving AI risk debate, it’s worth looking at how the world’s leading AI development company is approaching safety considerations.

In a new document and blog post, OpenAI discusses its updated “Preparedness Framework,” which appears to have been lightly remodeled after the November reorganization that removed the board’s two most “decelerationist” members: Ilya Sutskever (whose role has changed somewhat and who is still with the company) and Helen Toner (who is completely gone).

The main purpose of the update appears to be providing a clear path for identifying, analyzing, and deciding how to handle the “catastrophic” risks inherent in models under development. As the framework defines them:

A catastrophic risk is a risk that could result in hundreds of billions of dollars in economic damage or serious harm or death to a large number of individuals. This includes, but is not limited to, existential risks.

(Existential risks are of the “rise of the machines” type.)

Models in production are governed by the “safety systems” team; this covers, for example, systematic abuse of ChatGPT that can be mitigated through API limits and tuning. Frontier models in development get the “preparedness” team, which tries to identify and quantify risks before a model is released. And then there’s the “superalignment” team, working on theoretical guardrails for “superintelligent” models, which we may or may not be anywhere near.

The first two categories are real, not fictional, and have a relatively easy-to-understand rubric. Their teams rate each model on four risk categories: cybersecurity, “persuasion” (e.g. disinformation), model autonomy (i.e. acting on its own), and CBRN (chemical, biological, radiological, and nuclear threats; e.g. novel pathogens).

Various mitigations are assumed: for example, a reasonable reticence to describe the process for making napalm or pipe bombs. If a model is still rated as having a “high” risk after accounting for known mitigations, it cannot be deployed, and if a model has any “critical” risks, it will not be developed further.

An example of assessing model risk using OpenAI’s rubric.

These risk levels are actually documented in the framework, in case you were wondering whether they are simply left to the discretion of some engineer or product manager.

For example, in the cybersecurity section, the most practical of them, “increasing operator productivity in critical cyber operational tasks” by a certain factor is a “medium” risk. A high-risk model, on the other hand, would “identify and develop proofs of concept for high-value exploits against hardened targets without human intervention.” And critically, “the model is able to devise and execute new end-to-end strategies for cyberattacks against hardened targets, given only high-level desired objectives.” Obviously we don’t want that out in the world (though it would sell for a good amount of money).
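Taking the deployment rules above at face value, the gate reduces to a comparison over the worst post-mitigation rating across the four tracked categories. Here is a minimal sketch of that logic with hypothetical names; OpenAI publishes a rubric, not code.

```python
# Minimal sketch of the gating rules as described: the worst
# post-mitigation rating across the four tracked categories decides
# whether a model can ship, keep being developed, or must stop.
# Names are hypothetical; OpenAI publishes a rubric, not code.

LEVELS = ["low", "medium", "high", "critical"]
CATEGORIES = ["cybersecurity", "persuasion", "model_autonomy", "cbrn"]

def gate(post_mitigation: dict[str, str]) -> str:
    """Return 'deploy', 'develop_only', or 'halt' from per-category ratings."""
    worst = max(LEVELS.index(post_mitigation[c]) for c in CATEGORIES)
    if LEVELS[worst] == "critical":
        return "halt"          # may not be developed further
    if LEVELS[worst] == "high":
        return "develop_only"  # may be developed, but not deployed
    return "deploy"            # low and medium are acceptable

# e.g. gate({"cybersecurity": "high", "persuasion": "medium",
#            "model_autonomy": "low", "cbrn": "low"}) -> "develop_only"
```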

I asked OpenAI for details about how these categories are defined and refined, and whether new risks, such as photorealistic fake videos of people, land in “persuasion” or in a new category. We will update this post if we receive a response.

So, only medium and high risks are tolerable one way or the other. But the people making those models aren’t necessarily the best ones to evaluate them and make recommendations. To that end, OpenAI is establishing a cross-functional safety advisory group that will sit atop the technical side, reviewing the boffins’ reports and making recommendations from a higher vantage. The hope, they say, is that this will uncover some “unknown unknowns,” though by their nature those are fairly hard to catch.

The process requires these recommendations to be sent to the board and leadership simultaneously, which we understand to mean CEO Sam Altman and CTO Mira Murati, plus their lieutenants. Leadership will decide whether to ship or shelve a model, but the board can reverse those decisions.

The hope is that this will close the loophole said to have preceded the big drama: high-risk products or processes getting greenlit without the board’s knowledge or approval. Of course, the result of said drama was the sidelining of two of the board’s more critical voices and the appointment of some money-minded men (Bret Taylor and Larry Summers) who are sharp but not AI experts.

If a panel of experts makes a recommendation and the CEO decides based on that information, will this friendly board really feel empowered to contradict them and pump the brakes? And if they do, will we hear about it? Transparency isn’t really addressed, apart from a promise that OpenAI will solicit audits from independent third parties.

Say a model is developed that warrants a “critical” risk rating. OpenAI hasn’t been shy about tooting its horn about this kind of thing in the past; talking about how wildly powerful its models are, to the point of refusing to release them, is great advertising. But do we have any guarantee this will happen if the risks are so real and OpenAI is so concerned about them? Maybe it’s a bad idea. Either way, it isn’t really mentioned.

Source: techcrunch.com

Tesla Announces Recall of Over 2 Million Cars in the US Due to Autopilot Safety Concerns | Science and Technology Update

Tesla is recalling more than 2 million vehicles in the United States over concerns about its advanced driver assistance system, Autopilot.

The National Highway Traffic Safety Administration (NHTSA) said the system’s methods of determining whether drivers are paying attention may be inadequate and could lead to “foreseeable abuse of the system.”

NHTSA has been investigating Elon Musk’s company for more than two years, following a series of crashes, some fatal, that occurred while the Autopilot system was in use.

Tesla said Autopilot’s software controls “may not be sufficient to prevent driver misuse” and could increase the risk of a crash.

Tesla’s Autopilot is intended to let the car steer, accelerate, and brake automatically within its lane, and Enhanced Autopilot can assist with lane changes on highways, but neither makes the vehicle self-driving.


One of the Autopilot components is Autosteer, which maintains a set speed or following distance and works to keep the vehicle within its lane of travel.

Tesla disagrees with NHTSA’s analysis but said it will deploy an over-the-air software update that will “incorporate additional controls and warnings to those already existing on affected vehicles to further encourage the driver to adhere to their continuous driving responsibility whenever Autosteer is engaged.”

Tesla says the update includes more prominent visual alerts in the user interface, simplified engagement and disengagement of Autosteer, and additional checks when Autosteer is engaged.

Tesla added that the update will eventually suspend a driver’s use of Autosteer if the driver “repeatedly fails to demonstrate continuous and sustained driving responsibility while the feature is engaged.”
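Tesla has not published how the suspension logic works, but the behavior described, escalating alerts followed by suspension after repeated failures, matches a simple strike-counter pattern. The sketch below is speculative; the threshold and names are hypothetical, not Tesla’s.

```python
# Speculative sketch of the escalation pattern described above: repeated
# failures to show attentiveness while Autosteer is engaged eventually
# suspend the feature. Threshold and names are hypothetical, not Tesla's.

class AutosteerMonitor:
    MAX_STRIKES = 5  # hypothetical suspension threshold

    def __init__(self) -> None:
        self.strikes = 0
        self.suspended = False

    def on_attention_check(self, driver_attentive: bool) -> str:
        """Escalate from visual alerts to suspension as failures repeat."""
        if self.suspended:
            return "autosteer_unavailable"
        if driver_attentive:
            return "ok"
        self.strikes += 1
        if self.strikes >= self.MAX_STRIKES:
            self.suspended = True
            return "autosteer_suspended"
        return "visual_alert"  # escalating warning in the UI
```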


The recall applies to models Y, S, 3, and X produced between October 5, 2012 and December 7 of this year.

The update was expected to be sent to some affected vehicles on Tuesday, with the remaining vehicles sent out later.

NHTSA will continue its investigation into Autopilot “to monitor the effectiveness of Tesla’s remedies,” the agency said.

Since 2016, regulators have investigated 35 Tesla crashes in which the vehicles were suspected of being driven on automated systems. At least 17 people were killed in those crashes.

It is unclear whether this recall affects Tesla vehicles in other countries, including the UK.

This is the second time this year that Tesla has recalled its vehicles in the United States.

Source: news.sky.com

Fired Blue Origin Rocket Engine Manager Alleges Unjust Termination After Blowing the Whistle on Safety Concerns

A former program manager for Blue Origin’s BE-4 rocket engine has filed a lawsuit against the company, alleging whistleblowing retaliation after speaking out about safety issues.

The complaint was filed Monday in Los Angeles County Superior Court. It includes a detailed story about program manager Craig Stoker’s seven-month effort to raise concerns about Blue Origin’s safety and harsh working conditions.

Stoker reportedly told two vice presidents in May 2022 that then-CEO Bob Smith’s behavior was causing employees to “frequently violate procedures and processes” around safety in order to meet unreasonable deadlines. The suit says Smith “exploded” when problems arose, creating a hostile work environment. Stoker sent a follow-up email containing a formal complaint against Smith to two vice presidents: Linda Cova, vice president of engine operations, and Mary Plunkett, senior vice president of human resources.

“Myself, my management team, and others within the company should not need to constantly apologize or make excuses to ourselves or our team for the CEO’s bad behavior,” the email said. “We spend a significant amount of time trying to keep things running smoothly, boosting morale, repairing damage, and stopping people from overreacting. . . . Hostile work environment . . . our employees . . . creating a safety and quality risk to our products and customers.”

TechCrunch has reached out to Blue Origin for comment and will update this article if we hear back.

When Stoker asked about a separate investigation into Smith’s actions, Plunkett said the investigation had concluded and Smith was being “coached.”

Just months after filing the formal complaint, Stoker learned that a fellow employee had nearly suffocated while working under an engine nozzle. He raised his concerns with Michael Stevens, vice president of safety and mission assurance, but the complaint says Stoker was “ignored.” In August, Stoker sent another email to executives expressing concern that nine people on the engine team were working “over 24-hour” shifts to deliver engines on time to customer United Launch Alliance.

There is no doubt the company was under pressure to deliver. Blue Origin’s BE-4 engines will power United Launch Alliance’s Vulcan rocket, which is expected to make its much-delayed debut around Christmas. According to the complaint, Blue Origin’s contract with ULA requires the company to give one year’s notice of any issues that could affect delivery of its rocket engines. Stoker wanted to tell ULA that the engines might be delayed.

However, Smith allegedly instructed Stoker not to share these production or delivery issues with ULA.

Ultimately, after an internal investigation, Blue Origin HR concluded that Smith had not created a hostile work environment or violated company policy. Stoker disagreed with this conclusion, and later learned that officials from the engine program had not been interviewed as part of the investigation, according to the complaint.

The complaint alleges that the human resources department was reluctant to investigate because the accuser, Stoker, was a man: “Being a man, Human Resources expected him to deal with problems on his own and not do too much ‘whining,’” it states, adding that Stoker was given no means or resources to raise his concerns about the company’s most powerful executive.

Stoker was fired on October 7, seven months after he first raised safety concerns. The complaint makes clear who it believes was behind the decision: “Smith spearheaded this termination due to Mr. Stoker’s complaints against him, raising safety/ethics/legal issues, and the fact that many of these reports threatened to disrupt his production/delivery schedule.”

Blue Origin announced in September that Bob Smith would step down as CEO after nearly six years. His tenure was in many ways a successful one, growing the team from fewer than 1,000 people to more than 12,000 and signing numerous high-profile, high-paying contracts with NASA. But it was not without serious controversy, including allegations of a culture of sexism among senior executives.

Read the full complaint here.

Source: techcrunch.com

Rishi Sunak Commends AI Safety Institute at Bletchley, Though Regulation is Delayed

The Frontier AI Taskforce, set up by the UK in June in preparation for this week’s AI Safety Summit, is to become a permanent fixture as the UK aims to take a leading role in future AI policy. UK Prime Minister Rishi Sunak today formally announced the launch of the AI Safety Institute, a “global hub based in the UK tasked with testing the safety of emerging types of AI”.

The institute was informally announced last week ahead of this week’s summit. This time, the government confirmed that it will be chaired by Ian Hogarth, the investor, founder, and engineer who also chaired the taskforce, and that Yoshua Bengio, one of the most prominent figures in the AI field, will lead the creation of its first report.

It’s unclear how much money the government will put into the AI Safety Institute, or whether industry players will pick up some of the costs. The institute, which falls under the Department for Science, Innovation and Technology, is described as “supported by major AI companies,” but that may refer to endorsement rather than financial support. We have reached out to DSIT and will update as soon as we know more.

The news coincided with yesterday’s announcement of a new agreement, the Bletchley Declaration, signed by all the countries participating in the summit, which pledges joint testing and other commitments related to assessing the risks of “frontier AI” technologies such as large language models.

“Until now, the only people testing the safety of new AI models were the companies developing them,” Sunak said in a meeting with journalists this evening. Citing efforts by other countries, the United Nations, and the G7 to address AI, he said the plan is to “collaborate on testing the safety of new AI models before they are released.”

Admittedly, all of this is still in its early stages. The UK has so far resisted moves to regulate AI technologies, at both the platform level and the more specific application level, and the very idea of quantifying safety and risk has stalled; some think it is meaningless.

Mr Sunak argued it was too early to regulate.

“Technology is developing at such a fast pace that the government needs to make sure we can keep up,” Sunak said, responding to accusations that he has focused too much on big ideas and too little on legislation. “Before we make things mandatory and legislate for them, we need to know exactly what we’re legislating for.”

Transparency appears to be a very clear goal of many of the long-term efforts around this brave new world of technology, but today’s series of meetings at Bletchley, on the second day of the summit, was far from that ideal.

In addition to bilateral talks with European Commission President Ursula von der Leyen and United Nations Secretary-General António Guterres, today’s summit centered on two plenary sessions. These were not accessible to journalists, who watched from a small pool across the room as attendees gathered. Attendees included the CEOs of DeepMind, OpenAI, Anthropic, InflectionAI, Salesforce, and Mistral, as well as the president of Microsoft and the head of AWS. Among those representing governments were Sunak, US Vice President Kamala Harris, Italy’s Giorgia Meloni, and France’s finance minister Bruno Le Maire.

Remarkably, although China was a much-touted guest on the first day, it did not appear at the closed plenary sessions on the second day.

Elon Musk, owner of X.ai and X (formerly Twitter), also appeared to be absent from today’s sessions. Sunak is scheduled to have a fireside chat with Musk on Musk’s social platform this evening; interestingly, it is not expected to be broadcast live.

Source: techcrunch.com