NASA Astronauts Exit Space Station Early Over Health Concerns, Splash Down off California

Four astronauts successfully returned to Earth early Thursday morning, concluding an eventful and extraordinary week in space. The crew made an early departure from the International Space Station as a result of medical issues that emerged during their mission.

NASA astronauts Zena Cardman and Mike Fincke, Japanese astronaut Kimiya Yui, and Russian cosmonaut Oleg Platonov splashed down in the Pacific Ocean near San Diego at 3:41 a.m. ET after an 11-hour journey.

“On behalf of SpaceX and NASA, welcome home, Crew-11,” mission controllers communicated to the astronauts shortly after the Dragon capsule’s touchdown.

This return marks a historic moment: it is the first instance in the ISS’s 25-year history of a mission being cut short due to medical complications.

On Thursday, SpaceX’s Crew Dragon Endeavour spacecraft splashed down in the Pacific Ocean near San Diego, California.
NASA

Out of respect for medical privacy, NASA has not disclosed the identities of the crew members involved or specific details surrounding the medical incident. The situation remains stable and is not deemed an emergency.

NASA Administrator Jared Isaacman said at a recent press conference that the decision to return early was made out of an abundance of caution.

The medical issue led to the cancellation of a spacewalk scheduled for January 8, during which Cardman and Fincke were set to perform modifications outside the ISS.

The recovery team approaches the Dragon capsule.
NASA

Prior to leaving the space station, Fincke gave assurances that he and his colleagues were “stable, safe, and well cared for.”

“This decision was made to facilitate proper medical evaluation in a controlled environment with complete diagnostic capabilities,” Fincke wrote in a statement on LinkedIn. “While it’s bittersweet, it’s the right call.”

The astronauts returned in the same SpaceX Dragon capsule that had transported them to the ISS.

The return proceeded without incident, with mission controllers reporting favorable weather conditions at the landing site off the California coast. The capsule’s drogue and main parachutes deployed successfully just before splashdown.

The Crew-11 Endeavour spacecraft during recovery efforts.
NASA

SpaceX recovery teams promptly arrived to assess the capsule and confirm it was safe to open the hatch. Dolphins were spotted swimming in the vicinity.

The Crew-11 astronauts spent 165 days aboard the space station. For Cardman and Platonov, it was their first spaceflight; Yui has now completed his second, and Fincke has flown four missions in total.

The astronauts were scheduled to stay on the ISS until late February but returned early, leaving only three crew members onboard: NASA’s Chris Williams and Russian cosmonauts Sergei Kud-Sverchkov and Sergei Mikayev.

Inside the International Space Station’s Kibo Experiment Module: NASA astronaut Mike Fincke, Roscosmos cosmonaut Oleg Platonov, NASA astronaut Zena Cardman, and JAXA astronaut Kimiya Yui.
NASA/AP

The next crew rotation for the space station is expected to launch by February 15, but NASA is exploring options for an expedited flight. Nonetheless, Williams is likely to be the only NASA astronaut responsible for U.S. scientific experiments and operations at the station for several weeks to come.

Source: www.nbcnews.com

NASA Schedules Astronauts’ Early Departure from ISS Amid Medical Concerns

NASA has announced plans to return four astronauts from the International Space Station (ISS) earlier than initially scheduled due to a crew member’s health issue encountered in orbit.

According to a statement released by NASA late Friday, the undocking from the ISS is set to take place by 5 p.m. ET on Wednesday, weather permitting at the designated splashdown site off California’s coast.

This marks the first occasion in the 25-year history of the ISS that a mission has been interrupted due to a medical incident in space.

While NASA confirmed a medical issue arose earlier this week, specific details regarding the crew member’s condition or identity have not been disclosed, citing medical privacy regulations.

During a news conference on Thursday, agency officials reassured that the situation is stable, and the decision for early departure is a precautionary measure rather than an emergency evacuation.

NASA Administrator Jared Isaacman stated, “After consulting with Chief Medical Officer Dr. J.D. Polk and agency leaders, we concluded it is best to return Crew-11 ahead of schedule.”

The returning crew includes NASA astronauts Zena Cardman and Mike Fincke, Japanese astronaut Kimiya Yui, and Russian cosmonaut Oleg Platonov. Crew-11 arrived at the ISS in early August and was originally scheduled to remain in the orbiting laboratory until late February.

The astronauts will return in the same SpaceX Dragon capsule that transported them to the ISS. If all goes as planned, undocking will occur Wednesday night, with an expected splashdown in the Pacific Ocean around 3:40 a.m. Thursday.

NASA and SpaceX will provide further updates on the precise landing time and location closer to undocking.

After Crew-11’s departure, NASA will maintain one astronaut aboard the ISS to oversee U.S. scientific experiments and operations. Flight engineer Chris Williams launched aboard a Russian Soyuz spacecraft on November 27 alongside Russian cosmonauts Sergei Kud-Sverchkov and Sergei Mikayev.

The next crew, known as Crew-12, is scheduled to launch to the ISS in mid-February, though NASA is considering options to expedite the mission.

Source: www.nbcnews.com

NASA to Return Space Station Astronauts Early Due to Medical Concerns

NASA has announced that four astronauts aboard the International Space Station (ISS) will return to Earth more than a month early due to medical issues, an unprecedented step in the ISS’s 25-year history.

Due to medical privacy regulations, NASA refrained from disclosing specific details, including the identities of the affected astronauts and the nature of their medical conditions. However, officials confirmed that the overall situation remains stable.

Speaking at a recent news conference, NASA Administrator Jared Isaacman stated that the astronauts are expected to return home in the coming days. An exact timeline for undocking and landing has not yet been provided.

“After consulting with Chief Medical Officer Dr. J.D. Polk and leadership across the agency, we believe returning Crew-11 early is in the best interest of the astronauts,” Isaacman said in a statement.

Inside the International Space Station.
NASA

Isaacman noted that further updates would be available within the next 48 hours.

The Crew-11 team leaving the ISS consists of NASA astronauts Zena Cardman and Mike Fincke, Japanese astronaut Kimiya Yui, and Russian cosmonaut Oleg Platonov. They had originally arrived in early August with plans to stay until late February.

Dr. Polk reassured the public that the situation is stable, clarifying that the decision to evacuate was made for the well-being of the astronauts, not due to an emergency.

“While the ISS is equipped with sophisticated medical technology, it cannot match the complete resources of a hospital emergency department for thorough patient evaluations,” Polk explained. “In this case, there were multiple medical events which necessitated a careful assessment of the astronauts’ health.”

NASA first made the medical concerns public on Wednesday, when it revealed that Cardman and Fincke were deferring a scheduled spacewalk.

Following the early return of Crew-11, NASA will have just one astronaut on the ISS, who will oversee ongoing U.S. scientific operations. Flight engineer Chris Williams launched aboard a Russian Soyuz spacecraft on November 27, accompanied by Russian cosmonauts Sergei Kud-Sverchkov and Sergei Mikayev.

The subsequent crew is slated to launch to the ISS in mid-February, but Isaacman indicated that NASA may look at expediting that mission, known as Crew-12.

This week’s developments mark Isaacman’s first significant challenge since taking office on December 18.

Source: www.nbcnews.com

AI Chatbots Fall Short on Urgent Women’s Health Questions, Study Finds


AI Tools for Women’s Health: Incomplete Answers

Oscar Wong/Getty Images

Current AI models frequently struggle to provide accurate diagnoses or advice for pressing women’s health inquiries.

Thirteen AI language models from OpenAI, Google, Anthropic, Mistral AI, and xAI were assessed on 345 medical questions spanning five fields, including emergency medicine, gynecology, and neurology. The questions were curated by 17 women’s health experts, pharmacists, and clinicians from the US and Europe.

Expert reviewers analyzed the AI responses, cross-referencing failures against a benchmark of medical expertise comprising 96 queries.

On average, 60% of the queries yielded inadequate responses based on expert evaluations. Notably, GPT-5 was the strongest performer, with a 47% failure rate, while Mistral 8B exhibited a significant 73% failure rate.
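
By way of illustration only: the failure rates reported above are simple proportions of expert-graded responses. The short Python sketch below shows that arithmetic; the model names, question counts, and gradings in it are hypothetical placeholders, not data from the study.

    # Sketch: per-model failure rates from expert gradings (hypothetical data).
    from collections import defaultdict

    # Each record: (model, question_id, passed), where `passed` means expert
    # reviewers judged the response clinically adequate.
    gradings = [
        ("model-a", 1, True), ("model-a", 2, False), ("model-a", 3, False),
        ("model-b", 1, False), ("model-b", 2, False), ("model-b", 3, True),
    ]

    totals, failures = defaultdict(int), defaultdict(int)
    for model, _qid, passed in gradings:
        totals[model] += 1
        if not passed:
            failures[model] += 1

    for model in sorted(totals):
        print(f"{model}: {failures[model] / totals[model]:.0%} failure rate "
              f"({failures[model]}/{totals[model]})")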

“I see more women using AI for health queries and decision support,” says Victoria-Elizabeth Gruber, a representative from Lumos AI, a firm focused on enhancing AI model assessments. She and her colleagues recognized the potential dangers of relying on technology that perpetuates existing gender imbalances in medical knowledge. “This inspired us to establish the first benchmark in this domain,” she explains.

Gruber expressed surprise over the high failure rates, stating, “We anticipated some disparities, but the variability among models was striking.”

This outcome is not unexpected, according to Kara Tannenbaum at the University of Montreal, Canada, as AI models are trained on historical data that may inherently contain biases. “It’s crucial for online health information sources and professional associations to enhance their web content with more detailed, evidence-based insights related to sex and gender to better inform AI,” she emphasizes.

Jonathan H. Chen from Stanford University notes that the claimed 60% failure rate may be misleading. “This figure is based on a limited expert-defined sample, which does not accurately represent regular inquiries from patients and doctors,” he asserts. “Some test scenarios are overly cautious and can lead to higher failure rates.” For instance, if a postpartum woman reports a headache, a model might be marked as failing if it does not immediately suspect pre-eclampsia.

Gruber acknowledges such critiques, clarifying, “Our intent was not to label the model as broadly unsafe but to establish clear, clinically relevant evaluation criteria. We purposefully set strict benchmarks as minor omissions in the medical field can be significant in some cases.”

An OpenAI representative stated: “ChatGPT aims to support, not replace, healthcare services. We closely collaborate with clinicians globally to refine our models and continuously evaluate them to minimize harmful or misleading output. Our latest GPT-5.2 models are designed to consider critical user contexts, including gender. We take the accuracy of our outputs seriously, and while ChatGPT can offer valuable insights, we advise consulting qualified healthcare providers for treatment and care decisions.” Other companies involved in the study did not respond to requests for comments from New Scientist.


Source: www.newscientist.com

Over 1,000 Amazon Employees Raise Concerns About AI’s Impact on Jobs and the Environment

An open letter signed by over 1,000 Amazon employees has raised “serious concerns” regarding AI development, criticizing the company’s “all costs justified and warp speed” approach. It warns that the implications of such powerful technologies will negatively affect “democracies, our jobs, and our planet.”

Released on Wednesday, this letter was signed anonymously by Amazon employees and comes a month after the company’s announcement about mass layoffs intended to ramp up AI integration within its operations.

The signatories represent a diverse range of roles, including engineers, product managers, and warehouse staff.

Echoing widespread concerns across the tech industry, the letter also gained support from over 2,400 employees at other companies such as Meta, Google, Apple, and Microsoft.

The letter outlines demands aimed at Amazon on workplace and environmental issues. Employees are urging the company to power all of its data centers with clean energy, ensure that AI-driven products and services do not facilitate “violence, surveillance, and mass deportation,” and establish a working group of non-management employees with responsibility for overseeing the company’s overarching objectives, its application of AI, AI-related layoffs, and the collateral impacts of AI, such as environmental effects.

The letter was organized by an advocacy group of Amazon employees focused on climate justice. One worker involved in drafting it said employees felt compelled to speak out because of adverse experiences with AI tools at work and broader environmental concerns stemming from the AI boom, and emphasized the desire for more responsible methods in the development, deployment, and use of the technology.

“I signed this letter because executives are increasingly fixated on arbitrary productivity metrics and quotas, using AI to justify pushing us and our colleagues to work longer hours or handle more projects on tighter deadlines,” stated a senior software engineer who preferred to remain anonymous.

Climate Change Goals

The letter claims that Amazon is “abandoning climate goals for AI development.”

Like its competitors in the generative AI space, Amazon is heavily investing in new data centers to support its AI tools, which are more resource-intensive and demand significant power. The company plans to allocate $150 billion over the next 15 years for data centers, and has recently disclosed an investment of $15 billion for a data center in northern Indiana and $3 billion for centers in Mississippi.

The letter reports that Amazon’s annual emissions have increased by approximately 35% since 2019, despite the company’s climate pledges. It cautions that many of Amazon’s AI infrastructure investments will be in areas where energy demands compel utilities to keep coal plants running or build new gas facilities.

“‘AI’ is being used as a buzzword to mask a reckless investment in energy-hungry computer chips that threatens worker power, concentrates resources, and is supposedly going to save us from climate problems,” noted an Amazon customer researcher who requested to remain anonymous. “It would be fantastic to build AI that combats climate change! However, that’s not where Amazon’s billions are directed. They are investing in data centers that squander fossil fuel energy for AI aimed at monitoring, exploiting, and extracting profit from their customers, communities, and government entities.”

In a statement to the Guardian, Amazon spokesperson Brad Glasser refuted the employees’ claims and highlighted the company’s climate initiatives. “Alongside being a leader in data center efficiency, we have been the largest corporate buyer of renewable energy globally for five consecutive years, with over 600 projects worldwide,” Glasser stated. “We have also made substantial investments in nuclear energy through our current facilities and emerging SMR technology. These efforts are tangible actions demonstrating our commitment to achieving net-zero carbon across our global operations by 2040.”

AI for Enhanced Productivity

The letter also includes stringent demands regarding AI’s role within Amazon, arising from challenges employees are facing.

Three Amazon employees who spoke with the Guardian claimed that the company was pressuring them to leverage AI tools to boost productivity. “I received a message from my direct boss,” shared a software engineer with over two years at Amazon, who spoke on condition of anonymity for fear of retaliation, “about using AI in coding, writing, and general daily tasks to enhance efficiency, stressing that if I don’t actively use AI, I risk falling behind.”

The employee added that not long ago, their manager indicated they were “expected to double their work output due to AI tools,” expressing concern that the anticipated production levels would require fewer personnel and that “the tools simply aren’t bridging the gap.”

Customer researchers shared similar feelings. “I personally feel pressure to incorporate AI into my role, and I’ve heard from numerous colleagues who feel the same pressure…”

“Meanwhile, there is no dialogue about the direct repercussions for us as workers, from unprecedented layoffs to unrealistic output expectations.”

A senior software engineer said the introduction of AI has led to suboptimal outcomes, most commonly when employees are compelled to use agentic code-generation tools. “Recently, I worked on a project that amounted to cleaning up after an experienced engineer had tried to use AI to generate code for a complex assignment,” the employee revealed. “Unfortunately, none of it functioned as intended, and he had no idea why. In fact, we would have been better off starting from scratch.”

Amazon did not respond to questions regarding employee critiques of its AI workplace policies.

Employees stressed that they are not inherently opposed to AI but wish to see it developed sustainably and with input from those who are directly involved in its creation and application. “I believe Amazon is using AI to justify its control over local resources like water and energy, and it also legitimizes its power over its employees, who face increasing surveillance, accelerated workloads, and implicit termination threats,” a senior software engineer asserted. “There exists a workplace culture that discourages open discussions about the flaws of AI, and one of the objectives of this letter is to show colleagues that many of us share these sentiments and that an alternative route is achievable.”

Source: www.theguardian.com

Concerns Over AI Bubble Resurface as Wall Street Pulls Back from Brief Rally | Stock Market

Concerns about a potential bubble in the artificial intelligence sector emerged again on Thursday as major U.S. stock markets declined, just a day after chipmaker Nvidia’s impressive results had sparked a market rally.

Initially, Wall Street experienced a boost following Nvidia’s reassurance of robust demand for its advanced data center chips. However, this optimism faded as the tech stocks central to the AI boom began to face downward pressure.

In New York, the S&P 500 index ended the day down 1.6%, while the Dow Jones Industrial Average fell by 0.8%. The tech-focused Nasdaq Composite Index dropped by 2.2%.

Earlier in the session, the FTSE 100 rose by 0.2% in London, and the DAX closed 0.5% higher in Frankfurt. The Nikkei Stock Average increased by 2.65% in Tokyo.

Nvidia, currently valued at approximately $4.4 trillion, has led an extraordinary surge in valuations among AI-related companies in recent months. Concerns about a bubble have escalated as businesses invest heavily in chips and data centers to secure their position in the AI market.

Nvidia continues to experience strong demand, and its highly anticipated earnings surpassed expectations on Wednesday. Yet worries persist that the companies buying these chips are spending heavily on AI ahead of proven demand.

“The sale of semiconductors to support AI doesn’t mitigate fears that some hyperscalers might be overspending on AI infrastructure,” remarked Robert Pavlik, senior portfolio manager at Dakota Wealth. “While certain companies are turning a profit, many are still investing heavily.”

Mixed employment data released Thursday morning highlighted robust labor market growth in September, albeit with a slight uptick in the unemployment rate, reinforcing the expectation that Federal Reserve policymakers may choose to maintain interest rates at their upcoming December meeting.

Nvidia’s stock saw a decline of 3.2%, while the VIX index, which gauges market volatility, increased by 8%.

Report contributed by Reuters

Source: www.theguardian.com

Nvidia CEO Addresses Wall Street’s AI Bubble Concerns During Market Downturn: ‘We Excel at Every Step of AI’

Global stock markets experienced an upward trend following Nvidia’s impressive third-quarter profits, which surpassed Wall Street forecasts, easing concerns that the AI company’s skyrocketing valuations might have reached their limit.

On Wednesday, all attention turned to Nvidia, the frontrunner in the AI industry and the highest valued publicly traded company globally. Analysts and investors were eager for the chip maker’s third-quarter results, hoping they would dispel worries about an impending bubble in the sector.

Nvidia’s founder and CEO, Jensen Huang, addressed these apprehensions right at the start of the earnings call, emphasizing that a significant transformation is underway in AI, and Nvidia stands at the core of this change.

“There has been much talk of an AI bubble,” Huang noted. “From our viewpoint, the situation looks quite different. To be clear, Nvidia is unlike other accelerators. We excel at every phase of AI, from pre-training through to inference.”

The company consistently exceeded Wall Street’s expectations across multiple metrics, indicating that the substantial AI economic boom is not decelerating. Nvidia announced diluted earnings per share of $1.30 on total revenues of $57.01 billion, which topped investor expectations of $1.26 per share on revenues of $54.9 billion. Sales surged by 62% year over year, with data center revenues reaching $51.2 billion—surpassing the anticipated $49 billion. The company also forecasts fourth-quarter sales to be around $65 billion, exceeding analyst expectations of $61 billion.

During a conference call with investors, Huang outlined three pivotal platform shifts: the move from general-purpose computing to accelerated computing, the transition toward generative AI, and the rise of agentic and physical AI, such as robotics and autonomous vehicles.

“When contemplating infrastructure investments, consider three fundamental dynamics,” Huang stated. “Each one adds to the wealth of infrastructure. Nvidia… facilitates all three transitions, and we do so across all types and modalities of AI.”

He further noted that demand for Nvidia’s chips continues to expand.

“AI permeates everywhere and operates on multiple fronts simultaneously.”


According to Thomas Monteiro, Senior Analyst at Investing.com, “This clarifies many uncertainties surrounding the AI revolution; the essence is clear: The AI revolution is far from nearing its peak. Despite investor concerns that rising capital expenditures may compel firms to decelerate their adoption cycles for AI, Nvidia continues to demonstrate that data center growth is not merely an alternative but an essential requirement for every tech company globally.”

Analysts and experts expressed confidence that Nvidia would exceed Wall Street’s forecasts but were keenly awaiting further insights regarding industry demand for the company’s AI chips.

“There’s no denying Nvidia maintains its position as the dominant player in AI-centric chips,” noted David Meyer, a senior analyst at the investment platform Motley Fool. “We anticipate that revenue, margins, and cash flow will align closely with analysts’ predictions. However, invaluable insights are more likely to stem from management’s commentary on their market outlook, whether concerning the AI sector or new markets they are exploring.”

In November, Nvidia’s shares experienced a 7.9% decline amid significant investors offloading their holdings. Peter Thiel’s hedge fund Thiel Macro divested its entire stake in the chipmaker in the last quarter, a holding estimated at around $100 million, according to Reuters. SoftBank also offloaded $5.8 billion worth of its shares, heightening concerns regarding an AI bubble.

Following the news, Nvidia’s shares, having recently achieved the milestone of being the world’s first $5 trillion company, increased by over 5% in after-hours trading, with S&P 500 and Nasdaq futures also climbing. Asian markets rose on Thursday as well.

However, Stephen Innes of SPI Asset Management cautioned: “NVIDIA’s latest forecast has thus far alleviated some of the most intense apprehensions regarding an AI bubble looming over global markets… Nevertheless, this situation still leaves markets precariously balanced between exuberance over AI and the sobering reality marked by debt.”

“We do not believe Nvidia’s growth can be sustained in the long run,” asserted Alvin Nguyen, senior analyst at Forrester. “Although the demand for AI is unmatched, we anticipate Nvidia’s stock growth may slow if market corrections balance supply with demand, innovation progresses at a slower pace, or companies become acclimated to the current rate of progress.”

Source: www.theguardian.com

Cryptocurrency Market Plummets Over $1 Trillion in 6 Weeks Amid Tech Bubble Concerns

Over $1 trillion (£760 billion) has been erased from the crypto market’s valuation in the last six weeks as concerns about a tech bubble grow and hopes for a US interest rate reduction next month diminish.

According to data company CoinGecko, the value of the cryptocurrency market, which tracks over 18,500 coins, has dropped by a quarter since peaking in early October.

Bitcoin has experienced a 27% decline during this time, reaching $91,212, marking its lowest point since April.

Rising worries about an artificial intelligence bubble in the stock market are causing unease among global investors, with even the CEO of Google’s parent company cautioning that “no company will be immune” if the bubble bursts.


The FTSE 100 index in Britain fell by 1.3% on Tuesday, its fourth consecutive decline and its sharpest one-day fall since April. The Stoxx Europe 600, which monitors the continent’s largest companies, declined by 1.8%. Wall Street also faced losses, with the Dow Jones, Nasdaq, and S&P 500 all down approximately 1% on Tuesday.

This was followed by a significant drop in Asia, with Japan’s Nikkei Stock Average falling by 3.2% and Hong Kong’s Hang Seng Index decreasing by 1.7%.

Sundar Pichai, the CEO of Google’s parent firm Alphabet, remarked in an interview with the BBC that there is a sense of “irrationality” surrounding the current AI boom. He cautioned that if the AI bubble were to burst, “no company, including us, will be exempt.”

Meanwhile, JPMorgan Chase Vice Chairman Daniel Pinto stated that the skyrocketing valuations of AI necessitate a reassessment. “There will likely be a correction,” he mentioned at the Bloomberg Africa Business Summit in Johannesburg on Tuesday. “This adjustment will also impact the rest of the sector, the S&P, and the industry.”

Klarna CEO Sebastian Siemiatkowski expressed concerns this week about the vast sums of money being invested in computing infrastructure.

He told the Financial Times: “[OpenAI] has the potential to be highly successful as a company, but I’m apprehensive about the extent of these data center investments, which is my primary concern.”

The Klarna co-founder highlighted the increasing valuations of AI companies, including Nvidia, as a troubling issue. Nvidia became the first firm to achieve a market valuation of $4 trillion this year, followed by Apple and Microsoft.


“That concerns me, considering the amount of wealth currently being blindly allocated to this trend without deeper thought,” Siemiatkowski remarked.

“You might say, ‘I don’t believe NVIDIA is worth this much, but it doesn’t matter. Some wealthy individuals will lose money.’ However, the reality is that due to index funds and their mechanisms, one might assume their pension is a sound investment.”

An AI bubble is viewed as one of the most significant risks to the stock market: in a Bank of America survey, 45% of fund managers identified an AI bubble as the top “tail risk.”

Gold, typically regarded as a safe-haven asset, has also declined. Spot prices dropped by 0.3% on Tuesday morning to $4,033.29 an ounce, hitting a one-week low.

This drop comes as expectations of a US Federal Reserve (Fed) interest rate cut next month wane. Higher interest rates make gold, which pays no yield, less appealing.

Nonetheless, Giovanni Staunovo, an analyst at Swiss investment bank UBS, mentioned that while gold prices may fall further, he anticipates a rebound soon.

“With the Fed projected to lower interest rates multiple times in the coming quarters and the strong trend of central banks diversifying into gold, we predict that gold prices will stabilize soon,” he stated.

Source: www.theguardian.com

Data Breach Exposes Personal Information of Tate Gallery Job Seekers

The Guardian has revealed that personal information from job applicants at the Tate has been exposed online, including addresses, salaries, and referees’ phone numbers.

These extensive records, running to hundreds of pages, were shared on a site not affiliated with the government-sponsored organization that manages London’s Tate Modern, Tate Britain, Tate St Ives in Cornwall, and Tate Liverpool.

The leaked data encompasses details such as the current employers and educational backgrounds of the 111 people who applied for a Tate website developer role in October 2023. While names are undisclosed, referees’ phone numbers and personal email addresses may be included. It remains unclear how long this information has been available online.

Max Kohler, a 29-year-old software developer, learned his data had been compromised after one of his referees received an email from an unfamiliar source who had accessed the online data dump.

Kohler found that the breach exposed his most recent salary, his current employer’s name, his referees’ names, email addresses, home addresses, and his extensive responses to job interview questions.

“I feel extremely disappointed and disheartened,” he stated. “You dedicate time filling out sensitive information like your previous salary and home address, yet they fail to secure it properly and allow it to be publicly accessible.”

“They should publicly address this issue, provide an apology, and clarify how this happened, along with actions to prevent future occurrences. It likely stems from inadequate staff training or procedural oversights.”

Reported incidents of data security breaches to the UK’s Information Commissioner’s Office (ICO) continue to rise. Over 2,000 incidents were reported quarterly in 2022, increasing to over 3,200 between April and June of this year.

Kate Brimstead, a partner at the law firm Shoosmiths and an authority on data privacy, information law, and cybersecurity, commented: “Breaches do not always have to be intentional. While ransomware attacks attract significant attention, the scale of current breaches is substantial.” Errors often contribute to these incidents, she noted, highlighting the necessity of robust checks and procedures in daily operations. “Managing our data can be tedious, but it remains crucial,” she added.

The ICO emphasized that organizations must report a personal data breach to them within 72 hours of being aware, unless there is no risk to individuals’ rights and freedoms. If an organization decides not to report, they should maintain a record of the breach and justify their decision if needed.


A spokesperson for Tate stated: “We are meticulously reviewing all reports and investigating this issue. Thus far, we haven’t identified any breaches in our systems and will refrain from further comment while this issue is under investigation.”


Source: www.theguardian.com

Nexperia Halts Chip Supply to China Amid Global Automotive Production Concerns

Nexperia, the EU-based automotive semiconductor manufacturer at the heart of geopolitical tensions, has stopped all supplies to its factory in China, intensifying a trade dispute that risks shutting down production for automakers globally.

This week, the company communicated with its clients about the suspension of all supplies to its Chinese facility.

In September, the Netherlands used national security legislation to take control of the chipmaker over fears that its Chinese owner, Wingtech Technologies, intended to transfer intellectual property to another affiliated company. The Dutch authorities said the situation threatened the future of Europe’s chip production capacity, and Wingtech chairman Zhang Xuezheng was subsequently removed as Nexperia’s CEO.

In retaliation, China blocked exports from Nexperia’s factories in the country, an embargo that threatened to halt production lines at EU car manufacturers within days.

The continuing standoff jeopardizes the supply chain, as numerous Nexperia products produced in Europe, including wafers used to manufacture chips, were typically sent to factories in China for packaging and distribution.

Nexperia’s interim CEO, Stefan Tilger, said on Sunday that shipments to its Dongguan factory in Guangdong province had been halted as a “direct result of local management’s recent failure to comply with agreed contractual payment terms,” according to excerpts first reported by Reuters.

Nexperia remains optimistic about resuming shipments and is hoping to de-escalate the situation. A source familiar with the developments indicated that shipments might recommence once contractual payments are made. Additionally, the company will continue sending products to its Malaysian facility, which is smaller than the Chinese one.

Automakers are expressing concerns over potential disruptions caused by shortages of crucial components essential for modern vehicles.

The automotive sector faced severe semiconductor shortages following the coronavirus pandemic, but that crunch involved more advanced chips, not the lower-cost power-control chips Nexperia makes. The company usually ships over 100 billion components annually, used in everything from airbags and adjustable seating to side mirrors and central locking.

Nissan Motor Co. announced this week that it has sufficient chips to last until early November, while competitor Honda reported halting production at its Mexican facility. Mercedes-Benz described its situation as “manageable” in the short term, yet is exploring alternatives. Volkswagen suggested that its annual profit goals could be compromised without adequate chip supply.

Conversely, Toyota, the world’s largest automaker, informed reporters at an auto show in Tokyo on Friday that it is not experiencing significant supply challenges, even though production might ultimately be affected.

EU trade commissioner Maroš Šefčović aims to initiate further discussions with Chinese officials following meetings in Brussels with both Chinese and EU representatives to address the export ban on Nexperia and restrictions on rare earth minerals supply.

On the same day, the bloc’s technology commissioner, Henna Virkkunen, met with Nexperia’s interim leader, following discussions with European chipmakers Infineon, ST, and NXP the previous day.

After the meeting, she said the discussions with Nexperia underscored the EU’s need for new chips legislation, pointing to three lessons from the ongoing crisis: greater visibility of chip inventories in the pipeline, the importance of investing in chip supply despite the cost, and the need for reserve stockpiles.


“Diversifying stockpiles and supplies is crucial to our collective resilience,” she stated.

The German Automotive Industry Association (VDA) expressed concern on Thursday that without a swift resolution to the situation at Nexperia, it could lead to “significant production restrictions and even suspensions in the near future.”


Businesses in the UK are likely to be impacted as well. Nexperia manufactures some of its chip wafers at a plant established by Dutch company Philips in Manchester.

Nexperia previously owned another factory in south Wales, but the UK government, citing national security concerns over its ultimate Chinese ownership, ordered it to divest the Newport wafer factory. US semiconductor firm Vishay Intertechnology subsequently agreed to acquire the site in November 2023.

Wingtech has yet to respond to requests for comments.

Source: www.theguardian.com

AFP Creates AI Tool to Decode Gen Z Slang Amid Concerns Over ‘Criminal Influencers’ Targeting Young Women

The Australian Federal Police is set to create an AI tool designed to understand Gen Z and Alpha slang and emojis as part of its efforts to combat sadistic online exploiters and “criminal influencers”.

During a speech at the National Press Club on Wednesday, AFP Commissioner Krissy Barrett highlighted the increasing presence of online criminal networks, predominantly led by boys and men, that target vulnerable teenage and pre-teen girls.

The police chief detailed how these individuals, mainly from English-speaking nations, groom their victims, coercing them into “engaging in severe acts of violence against themselves, their siblings, other individuals, and even their pets”.

Sign up for AU breaking news emails

“They act as criminal influencers, driven by chaos and the desire to inflict harm, with most of their victims being teenagers, specifically teenage girls,” she remarked, addressing parents and guardians.

“The motivations behind these networks are not financial or sexual in nature; they are purely for entertainment, fun, or gaining online popularity, often without an understanding of the repercussions.”

“This perverse form of gamification encourages the production of increasingly extreme and depraved content, allowing offenders to elevate their status within the group.

“In some instances, these perpetrators will swap victims much like in online gaming scenarios.”

The Federal Police confirmed they have identified 59 suspects involved in these networks, taking action against an undisclosed number of them, all aged between 17 and 20.

Barrett mentioned that AFP is collaborating with Microsoft to create artificial intelligence tools capable of “interpreting emojis and Gen Z and Gen Alpha slang in encrypted communications and chat groups to detect sadistic online exploitation.”

“This prototype is intended to assist our teams in swiftly removing children from dangerous situations,” she stated in a pre-released version of her speech.

“While it may feel like an endless struggle to safeguard children, I urge parents and caregivers to understand they are not alone and that there are straightforward steps they can take.”

Barrett also addressed the radicalization of youth, noting that 10 investigations this year had led to terrorism-related charges against four young people.

Since 2020, a total of 48 youths aged between 12 and 17 have been investigated for suspected terrorist activities, with 25 of them charged.

She pointed out that 54% had a religious motivation, 22% had an ideological motivation, 11% had a mixed or unclear ideology, and 13% had undetermined motives.

In one notable case from 2022, a 14-year-old was investigated after posting on Snapchat about violent extremism, Barrett revealed.

This 14-year-old boy had access to firearms and explosives, with a tip-off suggesting he was plotting a school shooting in Australia.

During his arrest, police discovered a tactical vest, a bulletproof helmet, and “extremist-style” drawings.

Barrett’s address also referred to the AFP’s ongoing investigation into the arson at the Adas Israel synagogue, asserting that the suspect is linked to several incendiary bombings targeting tobacco shops.

“This individual represents a national security threat to our nation,” she stated.

“Among all the criminals who pose a threat to Australia, he is my primary concern, and I have directed my most seasoned investigators to focus on him.”

Barrett’s recent appointment as AFP’s chief, succeeding the retiring Reece Kershaw, suggests a shift in the police’s mission.

The AFP is now mandated to “protect Australia and its future from both domestic and global security threats,” implying increased international actions and operations.

Barrett mentioned the AFP’s collaboration with Colombian law enforcement, highlighting that AFP personnel were dispatched to a remote area of the Colombian jungle to “deliberately dismantle a cocaine manufacturing facility.”

“The AFP is determined to prevent criminal organizations from targeting Australia and will persist in collaborating with local law enforcement to confront criminals in our own vicinity when legally feasible,” she said.

“In recent years, AFP and Colombian cooperation has led to the seizure of over eight tonnes of cocaine.”

In partnership with Colombian authorities, a cache of arms and explosives from narco-terrorist groups, employed in assaults on police and military personnel, was also confiscated.

Barrett stated that AFP assisted in the seizure of 295 military grenades, 200 detonators, firearms, and ammunition.




Source: www.theguardian.com

British MPs Warn Unchecked Online Misinformation Could Trigger a Repeat of Summer 2024 Violence

Members of Parliament have cautioned that unless online misinformation is effectively tackled, it is “just a matter of time” before viral content triggers a repeat of the violence seen in the summer of 2024.

Chi Onwurah, chair of the Commons science and technology select committee, expressed concern that ministers seem complacent regarding the threat, placing public safety in jeopardy.

The committee voiced its disappointment with the government’s reaction to a recent report indicating that the business models of social media companies are contributing to unrest following the Southport murders.

In response to the committee’s findings, the government dismissed proposals for legislation aimed at generative artificial intelligence platforms and said it would refrain from direct intervention in the online advertising sector, which MPs argued had fostered the creation of harmful content after the attack.

Onwurah noted that while the government concurs with most conclusions, it fell short of endorsing specific action recommendations.

Onwurah accused ministers of compromising public safety, stating: “The government must urgently address the gaps in the Online Safety Act (OSA); instead, it seems satisfied with the harm caused by the viral proliferation of legal but detrimental misinformation. Public safety is at stake, and it’s only a matter of time before we witness a repeat of the misinformation-driven riots of summer 2024.”

In their report titled ‘Social Media, Misinformation and Harmful Algorithms’, MPs indicated that inflammatory AI-generated images were shared on social media following the stabbing that resulted in the deaths of three children, warning that AI tools make it increasingly easier to produce hateful, harmful, or misleading content.

In its response, published on Friday, the government said no new legislation is necessary, insisting that AI-generated content already falls under the OSA, which regulates social media content, and arguing that new legislation would hinder the act’s implementation.

However, the committee highlighted Ofcom’s evidence, where officials from the communications regulator admitted that AI chatbots are not fully covered by the current legislation and that further consultation with the tech industry is essential.

The government also declined to take prompt action regarding the committee’s recommendation to establish a new entity aimed at addressing social media advertising systems that allow for the “monetization of harmful and misleading content,” such as misinformation surrounding the Southport murders.

In response, the government acknowledged concerns regarding the lack of transparency in the online advertising market and committed to ongoing reviews of industry regulations. They added that stakeholders in online advertising seek greater transparency and accountability, especially in safeguarding children from illegal ads and harmful products and services.

Addressing the committee’s request for additional research into how social media algorithms amplify harmful content, the government stated that Ofcom is “best positioned” to determine whether an investigation should be conducted.

In correspondence with the committee, Ofcom indicated that it has begun research into recommendation algorithms but acknowledged the necessity for further exploration across a broader spectrum of academic and research fields.

The government also dismissed the committee’s call for an annual report to Parliament on the state of online misinformation, arguing that such a report could hinder efforts to curtail the spread of harmful online information.

The British government defines misinformation as the careless dissemination of false information, while disinformation refers to the intentional creation and distribution of false information intended to cause harm or disruption.

Onwurah highlighted concerns regarding AI and digital advertising as particularly troubling. “Specifically, the inaction on AI regulation and digital advertising is disappointing,” she stated.

“The committee remains unconvinced by the government’s assertion that the OSA adequately addresses generative AI, and this technology evolves so swiftly that additional efforts are critically needed to manage its impact on online misinformation.

“And how can we combat that without confronting the advertising-driven business models that incentivize social media companies to algorithmically amplify misinformation?”

Source: www.theguardian.com

Concerns Rise Over OpenAI Sora’s Depictions of the Dead: Legal Experts React

That evening, I was scrolling through dating apps when a profile caught my eye: “Henry VIII, 34 years old, King of England, non-monogamous.” Before I knew it, I found myself in a candlelit bar sharing a martini with the most notorious dater of the 16th century.

But the night wasn’t finished yet. Next, we took turns DJing alongside Princess Diana. “The crowd is primed for the drop!” she shouted over the music as she placed her headphones on. As I chilled in the cold waiting for Black Friday deals, Karl Marx philosophized about why 60% off is so irresistible.

In Sora 2, if you can imagine it—even if you think you shouldn’t—you can likely see it. Launched in October as an invite-only app in the US and Canada, OpenAI’s video app hit 1 million downloads within just five days, surpassing the initial success of ChatGPT.




AI-generated deepfake video features portraits of Henry VIII and Kobe Bryant

While Sora isn’t the only AI tool producing videos from text, its popularity stems from two major factors. First, it simplifies the process for users to star in their own deepfake videos. After entering a prompt, a 10-second clip is generated in minutes, which can be shared on Sora’s TikTok-style platform or exported elsewhere. Unlike the low-quality, mass-produced “AI slop” that floods the internet, these videos exhibit unexpectedly high production quality.


The second reason for Sora’s popularity is its ability to generate portraits of celebrities, athletes, and politicians—provided they are deceased. Living individuals must give consent for their likenesses to be used, but “historical figures” seem to be defined as famous people who are no longer alive.

This is how most users have utilized the app since its launch. The main feed appears to be a bizarre mix of absurdity featuring historical figures. From Adolf Hitler in a shampoo commercial to Queen Elizabeth II stumbling off a pub table while cursing, the content is surreal. Abraham Lincoln beams at the TV exclaiming, “You’re not my father.” The Reverend Martin Luther King Jr. expresses his dream of having all drinks be complimentary before abruptly grabbing a cold drink and cursing.

However, not everyone is amused.

“It’s profoundly disrespectful to see the image of my father, who devoted his life to truth, used in such an insensitive manner,” Malcolm X’s daughter told the Washington Post. She was just two when her father was assassinated. Now, Sora clips show the civil rights leader engaged in crude humor.

Zelda Williams, the daughter of actor Robin Williams, urged people to “stop” sending AI videos of her father through an Instagram post. “It’s silly and a waste of energy. Trust me, that’s not what he would have wanted,” she noted. Before his passing in 2014, he took legal steps to prevent his likeness from being used in advertising or digitally inserted into films until 2039. “Seeing my father’s legacy turned into something grotesque by TikTok artists is infuriating,” she added.

The use of the likeness of the late comedian George Carlin has been described by his daughter Kelly Carlin as “overwhelming and depressing” in a post on Bluesky.

Recent fatalities are also being represented. The app is filled with clips depicting Stephen Hawking enduring a “#powerslap” that knocks his wheelchair over, Kobe Bryant dunking over an elderly woman while yelling about something stuck inside him, and Amy Winehouse wandering the streets of Manhattan with mascara streaming down her face.

Those who have passed in the last two years (Ozzy Osbourne, Matthew Perry, Liam Payne) seem to be missing, indicating they may fall into a different category.

Each time these “puppetmasters” revive the dead, they risk reshaping the narrative of history, according to AI expert Henry Ajder. “People are worried that a world filled with this type of content could distort how these individuals are remembered,” he explains.

Sora’s algorithm favors content that shocks. One of the trending videos features Dr. King making monkey noises during his iconic “I Have a Dream” speech. Another depicts Kobe Bryant reenacting the tragic helicopter crash that claimed both his and his daughter’s lives.

While actors and comedians sometimes portray characters after death, legal protections are stricter. Film studios bear the responsibility for their content. OpenAI does not assume the same liability for what appears on Sora. In certain states, consent from the estate administrator is required to feature an individual for commercial usage.

“We couldn’t resurrect Christopher Lee for a horror movie, so why can OpenAI resurrect him for countless short films?” questions James Grimmelmann, an internet law expert at Cornell University and Cornell Tech.

OpenAI’s decision to place deceased personas into the public sphere raises distressing questions about the rights of the departed in the era of generative AI.

It may feel unsettling to have the likeness of a prominent figure persistently haunting Sora, but is it legal? Perspectives vary.

Major legal questions regarding the internet remain unanswered, including whether AI firms are protected under Section 230, which shields platforms from liability for third-party content. If OpenAI qualifies for Section 230 immunity, it cannot be sued over content its users create on Sora.

“However, without federal legislation on this front, uncertainties will linger until the Supreme Court takes up the issue, which might stretch over the next two to four years,” notes Ashkhen Kazaryan, a specialist in First Amendment and technology policy.




OpenAI CEO Sam Altman speaks at Snowflake Summit 2025 on June 2 in San Francisco, California. He is one of the living individuals who permitted Sora to utilize his likeness. Photo: Justin Sullivan/Getty Images

In the interim, OpenAI seeks to head off legal challenges by requiring consent from living individuals. US defamation laws protect living people from defamatory statements that could damage their reputation. Many states have right-of-publicity laws that prevent using someone’s voice, persona, or likeness for “commercial” or “misleading” purposes without their approval.

Allowing the deceased to be represented this way is a way for the company to “test the waters,” Kazaryan suggests.

Though the deceased lack defamation protections, posthumous publicity rights exist in states like New York, California, and Tennessee. Navigating these laws in the context of AI remains a “gray area,” as there is no established case law, according to Grimmelmann.

For a legal claim to succeed, estates will need to prove OpenAI’s responsibility, potentially by arguing that the platform encourages the creation of content involving deceased individuals.

Grimmelmann points out that Sora’s homepage features videos that actively promote this style of content. If the app utilizes large datasets of historical material, plaintiffs could argue it predisposes users to recreate such figures.

Conversely, OpenAI might argue that Sora is primarily for entertainment. Each video is marked with a watermark to prevent it from being misleading or classified as commercial content.

Generative AI researcher Bo Bergstedt emphasizes that most users are merely experimenting, not looking to profit.

“People engage with it as a form of entertainment, making ridiculous content to collect likes,” he states. Even if this distresses families, such non-commercial use may not run afoul of publicity rules.

However, if a Sora user creates well-received clips featuring historical figures, builds a following, and begins monetizing, they could face legal repercussions. Alexios Mantzarlis, director of Cornell Tech’s Security, Trust, and Safety Initiative, warns that profit from these platforms can also be indirect. Sora’s rising “AI influencers” could encounter lawsuits from estates if they gain financially from depictions of the deceased.

“Whack-a-Mole” Approach

In response to the growing criticism, OpenAI recently announced that representatives of “recently deceased” celebrities can request their likenesses be removed from Sora’s videos.

“While there’s a significant interest in free expression depicting historical figures, we believe public figures and their families should control how their likenesses are represented,” a spokesperson for OpenAI stated.


The parameters for “recent” have yet to be clarified, and OpenAI has not provided details on how these requests will be managed. The company did not immediately respond to the Guardian’s request for comment.

The copyright free-for-all strategy faced challenges after controversial content, such as “Nazi SpongeBob SquarePants,” circulated online and the Motion Picture Association accused OpenAI of copyright infringement. A week after launch, the company transitioned to an opt-in model for rights holders.

Grimmelmann hopes for a similar adaptation in how depictions of the deceased are handled. “Expecting individuals to opt out may not be feasible; it’s a harsh expectation. If I think that way, so will others, including judges,” he remarks.

Bergstedt likens this to a “whack-a-mole” approach to safeguards, likely to persist until federal courts establish AI liability standards.

According to Ajdel, the Sora debate hints at a broader question we will all confront: who will control our likenesses in the age of AI-generated content?

“It’s a troubling scenario if people accept they can be used and exploited in AI-generated hyper-realistic content.”

Source: www.theguardian.com

Actors’ Union Threatens Mass Direct Action Over Use of Members’ Images in AI Content

The performing arts union Equity has issued a warning of significant direct action against tech and entertainment firms regarding the unauthorized use of its members’ likenesses, images, and voices in AI-generated content.

This alert arises as more members express concerns over copyright violations and the inappropriate use of personal data within AI materials.

General Secretary Paul W. Fleming stated that the union intends to organize mass data requests, compelling companies to reveal whether they have utilized members’ data for AI-generated content without obtaining proper consent.

Recently, the union declared its support for a Scottish actor who alleges that his likeness contributed to the creation of Tilly Norwood, an “AI actor” criticized by the film industry.

Bryony Monroe, 28, from East Renfrewshire, believes her image was used to create a digital character by the AI “talent studio” Xicoia, though Xicoia has denied her claims.

Most complaints received by Equity relate to AI-generated voice replicas.

Mr. Fleming mentioned that the union is already assisting members in making subject access requests against producers and tech firms that fail to provide satisfactory explanations about the sources of data used for AI content creation.

He noted, “Companies are beginning to engage in very aggressive discussions about compensation and usage. The industry must exercise caution, as this is far from over.”

“AI companies must recognize that we will be submitting access requests en masse. They have a legal obligation to respond. If a member reasonably suspects their data is being utilized without permission, we aim to uncover that.”

Fleming expressed hope that this strategy will pressure tech companies and producers resisting transparency to reach an agreement on performers’ rights.

“Our goal is to use individual rights as leverage where technology companies and producers resist collective rights,” Fleming explained.

He emphasized that with 50,000 members, a significant number of requests for access would complicate matters for companies unwilling to negotiate.

Under data protection laws, individuals have the right to request all information held about them by an organization, which typically responds within a month.

“This isn’t a perfect solution,” Fleming added. “It’s no simple task, since they might source data elsewhere. Many of these companies are behaving recklessly and unethically.”

Ms. Monroe believes that Norwood not only mimics her image but also her mannerisms.

Monroe remarked, “I have a distinct way of moving my head while acting. I recognized that in the closing seconds of Tilly’s showreel, where she mirrored exactly that. Others observed, ‘That’s your mannerism. That’s your acting style.’”

Liam Budd, director of recorded media industries at Equity UK, confirmed that the union takes Ms. Monroe’s concerns seriously. Particle 6, the AI production company behind Xicoia, said it is working with unions to address any concerns raised.

A spokesperson from Particle 6 stated: “Bryony Monroe’s likeness, image, voice, and personal data were not utilized in any way to create Tilly Norwood.

“Tilly was developed entirely from original creative designs. We do not, and will not, use performers’ likenesses without their explicit consent and proper compensation.”

Budd refrained from commenting on Monroe’s allegations but said, “Our members increasingly report specific infringements concerning their image or voice being used without consent to produce content that resembles them.”

“This practice is particularly prevalent in audio, as creating a digital audio replica requires less effort.”

However, Budd acknowledged that Norwood presents a new challenge for the industry, as “we have not encountered a fully synthetic actor before.”

Equity UK has been negotiating with Pact (the Producers Alliance for Cinema and Television), the UK production industry body, over AI, copyright, and data protection for more than a year.

Fleming mentioned, “Executives are not questioning where their data originates. They privately concede that employing AI ethically is nearly impossible, as they are collecting and training on data with dubious provenance.”

“Yet, we frequently discover that it is being utilized entirely outside established copyright and data protection frameworks.”

Max Rumney, deputy chief executive of Pact, said its members must adopt AI technology in production or risk falling behind rivals operating without collective agreements that ensure fair compensation for actors, writers, and other creators.

However, he noted a lack of transparency from tech firms regarding the content and data used for training the foundational models of AI tools like image generators.

“The foundation models were trained on our members’ films and programming without their consent,” Rumney stated.

“Our members favor genuine human creativity in their films and shows, valuing this aspect as the hallmark of British productions, making them unique and innovative.”

Source: www.theguardian.com

Instagram Still Endangers Children Despite New Safety Features, Meta Whistleblower Warns

A study led by a Meta whistleblower reveals that children and teens still face danger on Instagram despite safety features the report calls “highly ineffective.”

The examination, which found that 64% of Instagram’s newly introduced safety measures were ineffective, was led by Arturo Bejar, a former senior engineer at Meta who has testified before the US Congress, together with scholars from NYU and Northeastern University, the Molly Rose Foundation in the UK, and other organizations.


Meta, the parent company of several well-known social media platforms, including Facebook, WhatsApp, Messenger, and Threads, made teen accounts mandatory on Instagram in September 2024.

However, Bejar stated that Meta has “consistently failed” to protect children from sensitive or harmful content, inappropriate interactions, and excessive use, claiming the safety features are “ineffective, unacceptable, and have been quietly altered or removed.”

He emphasized: “The lack of transparency within Meta, the duration of this neglect, and the number of teens harmed on Instagram due to their negligence and misleading safety assurances is alarming.”

“Children, including many under 13, are not safe on Instagram. This isn’t solely about bad content online; it’s about negligent product design. Meta’s intentional design choices promote and compel children to engage with inappropriate content and interactions daily.”

The research used “test accounts” that mimicked the behavior of teens, parents, and potential predators to evaluate 47 safety features between March and June 2025.

Using a green, yellow, and red rating system, the researchers found that 30 tools fell into the red category, meaning they could be easily circumvented or ignored with minimal effort. Only eight received a green rating.

Findings from the test accounts revealed that adults could easily send messages to teens who did not follow them, despite safeguards that were supposed to block such contact. Meta says it has prevented this since the testing period, but researchers also found that minors could initiate conversations with adults on the platform and that reporting sexual or inappropriate messages was difficult.

The research also highlighted that the “hidden words” feature failed to block offensive language as promised. Testers were able to send messages saying, “You are a prostitute and you should kill yourself,” with Meta clarifying that the feature applies only to messages from unknown accounts, not from followers.

The algorithms still promote inappropriate sexual and violent content, and the “not interested” feature proved ineffective. Researchers found that the platform actively recommends search terms and accounts related to suicide, self-harm, eating disorders, and illegal substances.

Furthermore, researchers found that well-publicized time-management tools aimed at curbing addictive use had been quietly discontinued; Meta asserts that these features still exist in altered form. And despite Meta’s claims that it blocks underage accounts, there remain hundreds of reels featuring users claiming to be under 13 years old.

The report noted that Meta continues to structure Instagram’s reporting features in a way that does not promote actual usage.

In the report’s introduction, co-authors Ian Russell of the Molly Rose Foundation and Maurine Molak of David’s Legacy Foundation highlighted tragic cases in which children died by suicide after encountering harmful online content.

Consequently, they advocate for stronger online safety laws in the UK.

The report also urges regulators to adopt a “bolder and more assertive” stance on implementing regulatory measures.

A spokesperson from Meta stated: “This report misrepresents our ongoing efforts to empower parents and safeguard teens, misunderstanding how our safety tools function and how millions of parents and teens utilize them today. Our teen accounts are the industry standard for automated safety protections and parental controls.”

“In reality, teens using these protections encounter less sensitive content and receive fewer unwanted contacts while spending time on Instagram safely. Parents also have robust monitoring tools in place. We are committed to improving our features and welcome constructive criticism, though this report doesn’t reflect that.”

An Ofcom spokesperson commented:

“Our online rules for children necessitate a safety-first approach in how technology companies design and operate their services in the UK.

“Clearly, sites that fail to comply can expect enforcement action.”

A government representative added: “Under the Online Safety Act, platforms must protect young users from content that promotes self-harm and suicide, enforcing safer algorithms and less toxic feeds.”

Source: www.theguardian.com

Rise of AI Chatbot Sites Featuring Child Sexual Abuse Imagery Sparks Concerns Over Misuse

A chatbot platform offering explicit scenarios involving preteen characters, displayed alongside illegal abuse images, has raised significant concerns over the misuse of artificial intelligence.

A report from the UK’s child safety watchdog urged the government to establish safety guidelines for AI companies in light of an increase in AI-generated child sexual abuse material (CSAM).

The Internet Watch Foundation (IWF) reported that it was alerted to chatbot sites offering various scenarios, including “child prostitutes in hotels,” “wife engaging in sexual acts with children while on vacation,” and “children and teachers together after school.”

In certain instances, the IWF noted that clicking the chatbot icon led to full-screen representations of child sexual abuse images, serving as a background for subsequent interactions between the bot and the user.

The IWF discovered 17 AI-generated images realistic enough to be classified as child sexual abuse material under the Protection of Children Act.

Users of the site, which the IWF did not name for safety reasons, could also generate additional images resembling the illegal content already accessible.

The IWF, which operates from the UK with a global remit to monitor child sexual exploitation, stated that future AI regulations should incorporate child protection safeguards from the outset.

The government has revealed plans for AI legislation expected to concentrate on the development of the most advanced models, while the crime and policing bill would prohibit the possession and distribution of models that produce child sexual abuse material.

“We welcome the UK government’s initiative to combat AI-generated images and videos of child sexual abuse, along with the tools used to create them. While the new criminal offenses related to these issues will not take effect immediately, it is critical to expedite this process,” stated Chris Sherwood, chief executive of the NSPCC, as the charity emphasized the need for guidelines.

User-generated chatbots fall under the UK’s online safety regulations, which allow substantial fines for non-compliance. The IWF indicated that the abuse chatbots were created by both users and the site’s developers.

Ofcom, the UK regulator responsible for enforcing the law, remarked, “Combating child sexual exploitation and abuse remains a top priority, and online service providers failing to implement necessary safeguards should be prepared for enforcement actions.”

The IWF reported a staggering 400% rise in AI-generated abuse material reports in the first half of this year compared to the same timeframe last year, attributing this surge to advancements in technology.

While the chatbot content is accessible from the UK, it is hosted on a U.S. server and has been reported to the National Center for Missing and Exploited Children (NCMEC), the U.S. equivalent of the IWF. NCMEC stated that the CyberTipline report has been forwarded to law enforcement. The IWF mentioned that the site appears to be operated by a company based in China.

The IWF noted that some chatbot scenarios included an 8-year-old girl trapped in an adult’s basement and a preteen homeless girl being invited to a stranger’s home. In these scenarios, the chatbot presented itself as the girl while the user portrayed an adult.

IWF analysts reported accessing explicit chatbots through links in social media ads that directed users to sections containing illegal material. Other areas of the site offered legal chatbots and non-sexual scenarios.

According to the IWF, one chatbot that displayed CSAM images revealed in an interaction that it was designed to mimic preteen behavior. Other chatbots that did not display CSAM gave analysts no such indication when questioned.

The site recorded tens of thousands of visits, including 60,000 in July alone.

A spokesperson for the UK government stated, “UK law is explicit: creating, owning, or distributing images of child sexual abuse, including AI-generated content, is illegal… We recognize that more needs to be done. The government will utilize all available resources to confront this appalling crime.”

Source: www.theguardian.com

Nvidia Achieves New Sales Milestones Amid Concerns Over AI Bubble and Trump’s Trade War

Chipmaker Nvidia achieved record sales in the second quarter, exceeding Wall Street’s predictions for artificial intelligence chips. Nonetheless, the company’s stock dropped by 2.3% after hours, as investors remained wary about the AI bubble and the effects of Donald Trump’s trade war.

Nvidia’s financial results mark the first assessment of investor sentiment since the recent mass selloff of AI stocks, which saw many tech shares decline amid skepticism regarding the valuation of AI-driven firms.

On Wednesday, Nvidia announced adjusted earnings per share of $1.08 with total revenues reaching $46.74 billion. According to FactSet data, this surpassed Wall Street’s earnings per share expectations.

However, investor expectations were notably high. The market’s reaction may be influenced by slight misses in other segments of the company’s performance, particularly in data center revenues, where Nvidia recorded $41.1 billion, falling short of optimistic forecasts.

“We can’t overlook Nvidia this time, especially as it strives for record-breaking highs,” said Thomas Monteiro, an analyst at Investing.com. “To say the stock was priced for perfection would be a considerable understatement; the market needed another significant beat.”

The company further indicated that it had not factored the shipping of the H20 chip to China into its forecasts.

This aspect is central to concerns regarding the US-China trade conflict. Earlier in the year, Trump imposed a ban on AI chip sales to China, resulting in a $4.5 billion hit to Nvidia’s finances during the first quarter. In August, Nvidia agreed to pay the US government 15% of its revenue from H20 chip sales to China in exchange for export licenses. China has voiced security concerns over the chips and is ramping up its own domestic production efforts.

Colette Kress, Nvidia’s chief financial officer, noted during an earnings call that some companies are interested in acquiring the H20, with a first group of companies already receiving licenses to purchase the chips. Kress mentioned that Nvidia could potentially ship between $2 billion and $5 billion worth of H20 chips to China, contingent on “geopolitical circumstances.”

Jensen Huang, Nvidia’s founder and chief executive, has consistently highlighted the importance of operating in the Chinese market. “We are in discussions with the administration about the necessity of addressing the Chinese market for American firms,” Huang stated. He added that, now that the H20 has been cleared for sale to licensed companies in China, there might be opportunities for the company to introduce a version of Blackwell in that market.


“China is the world’s second-largest computing market and hosts a substantial number of AI researchers. Approximately 50% of the world’s AI researchers are based in China,” Huang stated. “Most of the leading open-source models are developed there, making it crucial for American tech companies to engage with that market.”

“We eagerly anticipate future developments,” Monteiro remarked. “The fact remains that without the essential sales boost from the H20 in China, Nvidia cannot sustain the growth trajectory that has driven its valuation.”

The company projects revenues of $54 billion for the third quarter, aligning with Wall Street’s expectations, and said its board has authorized an additional $60 billion in share buybacks.

Huang remarked that production of the company’s latest AI superchip, Blackwell, is “gaining momentum and demand is remarkable.”

“The race in AI has commenced, and Blackwell will serve as the essential platform,” Huang stated in a press release.

Despite the initial tepid market reaction to the company’s financials, some analysts remain optimistic about the ongoing AI revolution, especially as major tech firms like Meta, Microsoft, Amazon, and Alphabet invest heavily in AI infrastructure. “This was a critical quarter for Nvidia and the AI revolution,” noted Dan Ives, an analyst at Wedbush Securities.

“This represents a significant indicator for the broader tech world, suggesting that despite prevailing challenges from China, the AI revolution is positioning for the next phase of growth. One chip is pivotal to the AI revolution, and that chip is Nvidia’s.”

Source: www.theguardian.com

Intel Stock Surges Amid Crisis Concerns After Reports of Potential US Government Stake

Intel’s shares increased by 7.4% following reports that the Trump administration is contemplating acquiring a stake in the faltering US chipmaker.

According to Bloomberg, any potential government investment will be directed towards the development of Intel’s factory hubs in Ohio. This move aims to bolster the financial stability of chipmakers during a period when Intel is implementing job cuts as part of broader cost-reduction measures.

Discussions about this possible investment emerged from a meeting earlier this week between US President Donald Trump and Intel CEO Lip-Bu Tan, which took place just days after Trump, citing Tan’s alleged connections with the Chinese Communist Party, demanded that he resign. Bloomberg indicated that Tan is likely to remain at the helm of the chipmaker.


In response to the Bloomberg article, White House spokesperson Kush Desai stated, “Discussion of any hypothetical deals should be viewed as speculation unless formally announced by the administration.”

Despite this, the news triggered excitement among investors, with shares climbing by 7.4% on Thursday to $23.86 (£17.60), elevating the company’s market capitalization to $104 billion.

This move regarding Intel reflects the Trump administration’s ongoing efforts to intervene in significant private sectors. The President has consistently threatened to impose tariffs of up to 100% on imported semiconductors and chips.

Earlier this week, the US government also unveiled a deal with chipmakers Nvidia and Advanced Micro Devices (AMD), which committed to paying the US government 15% of revenues derived from AI chip sales to China. Last month, the Department of Defense revealed that it would purchase $400 million of preferred stock in rare-earth producer MP Materials.

However, investing in Intel represents a notable shift from Trump’s recent critical comments on the company’s leadership.

Trump expressed his thoughts on his Truth Social platform last Thursday, stating, “The CEO of Intel is highly conflicted and must resign immediately. There is no other solution to this problem. Thank you for your attention to this matter!”

His remarks came shortly after U.S. Republican Senator Tom Cotton sent a letter to Intel chairman Frank Yeary regarding Tan’s investments in semiconductor companies linked with the CCP and China’s military, the People’s Liberation Army.


In April, Reuters disclosed that Tan had invested in numerous Chinese high-tech firms, at least eight of which have connections to the People’s Liberation Army.

Cotton questioned Intel’s board regarding whether Tan divested these investments, raising concerns over Tan’s previous role at Cadence Design Systems, which was found to have sold products to China’s National University of Defense Technology, in breach of US export controls.

At that time, Intel remarked that both the board and CEO are “deeply dedicated to advancing US domestic and economic security priorities, making significant investments in line with the President’s agenda to prioritize America.” Intel has been manufacturing within the US for 56 years and expressed eagerness to maintain collaboration with the administration.

Intel was approached for a statement.

Source: www.theguardian.com

Trump Sparks Concerns Over Nvidia’s Potential Sale of Advanced AI Chips in China

Donald Trump has indicated that Nvidia can sell more advanced chips in China than is currently allowed.

During a Monday briefing, Trump addressed the recent development, discussing his groundbreaking agreements with Nvidia and AMD. He has authorized an export license allowing the sale of previously restricted chips to China, with the US government receiving 15% of the sales revenue. The US president defended the deal after analysts labeled it as potentially resembling “shakedown” payments or an unconstitutional export tax. He expressed hope for further negotiations regarding a more advanced Nvidia chip.

Trump mentioned that Nvidia’s latest chip, Blackwell, would not be offered for sale, but he is considering allowing “a slightly negatively impacted version of Blackwell,” downgraded by 30-50%.

“I believe he’ll be back to discuss it, but it will be a significantly reduced version,” he remarked, referring to Nvidia’s CEO Jensen Huang, who has held multiple discussions with Trump about China export limits.



Huang has yet to comment on the revenue-sharing agreement pertaining to the sales of Nvidia’s H20 chips and AMD’s MI308 chips in China.

The H20 and MI308 chips were barred from sale to China in April, even though the low-power H20 was specially designed to meet the restrictions set by the Biden administration. Nvidia stated last month that it hoped to receive clearance to resume shipments soon.

Nvidia’s chips are a major driver of the AI boom, drawing intense interest from both China and the US, which has led to heightened scrutiny among analysts in Washington and concerns from Chinese officials.

“I’m worried about reports indicating the US government might take a cut of revenue from sales of advanced chips like the H20,” he told the Financial Times.

Trump justified the agreement on Monday: “I stated, ‘Listen, I want 20% if I approve this for you,'” emphasizing that he hasn’t received any personal money from the deal. He suggested that Huang provided 15% as part of the agreement.

“I permitted him only for the H20,” Trump clarified.

He referred to the H20 as an “outdated” chip that China “already has in a different form.”

However, Harry Krejsa, research director at the Washington office of Carnegie Mellon’s Institute for Strategy and Technology, labeled the H20 as a “second tier” AI chip.

“The H20 is not the premier training chip available, but it is well suited to the type of computing that dominates AI tasks today, particularly ‘inference’ models and ‘agent’ products,” Krejsa told the Guardian, referring to systems that use advanced inference to autonomously resolve complex issues.

“Lifting H20 export restrictions undoubtedly provides Beijing with the necessary tools to compete in the AI realm.”

The US government has for several years restricted chip exports to China on national security grounds, particularly over artificial intelligence development and the risk that the technology could be weaponized.

China’s Foreign Ministry remarked on Monday that the country has consistently articulated its stance on US chip exports, accusing Washington of utilizing technology and trade measures to “maliciously suppress and hinder China.”

Revenue-sharing contracts of this kind are quite rare in the US, and the deal reflects Trump’s latest intervention in corporate decisions after pressuring executives to reinvest in American manufacturing. He had demanded the resignation of Intel’s new CEO, Lip-Bu Tan, over his connections with Chinese companies.

Trump has also suggested imposing 100% tariffs on the global semiconductor market, exempting businesses that commit to investing in the US.

Taiwan’s TSMC, the leading semiconductor manufacturer, announced plans in April to expand its US operations with a $100 billion investment. However, foreign investments of this magnitude require government approval in Taiwan.

The Guardian confirmed that TSMC has yet to apply for this approval. The company has not responded to requests for comment.

Source: www.theguardian.com

AI Tools Used by English Councils Downplay Women’s Health Issues, Research Shows

Artificial intelligence tools used by more than half of England’s councils are downplaying women’s physical and mental health issues, research indicates, raising concerns about potential gender bias in care decisions. The study found that when Google’s AI tool “Gemma” was used to generate and summarize identical case notes, terms like “disabled,” “unable,” and “complex” appeared significantly more often in descriptions of men than of women.

Conducted by the London School of Economics and Political Science (LSE), the study found that comparable care needs in women were more likely to be overlooked or inadequately described. Dr. Sam Rickman, the report’s lead author and a researcher at LSE’s Care Policy and Evaluation Centre, emphasized that AI could result in “unequal care provision for women.” He noted, “These models are widely used, yet our findings reveal significant disparities in bias across different models. Specifically, Google’s model understates women’s physical and mental health needs compared to men’s.”

Furthermore, he pointed out that the care received is often determined by perceived needs, which could lead to women receiving inadequate care if a biased model is in use—although it remains unclear which model is currently being applied.

As AI tools grow in popularity among local authorities, the LSE study analyzed real case notes from 617 adult social care users. The notes were run through several major large language models (LLMs) multiple times, with only the gender swapped. Researchers then examined 29,616 pairs of summaries to assess how male and female cases were treated differently by the models.
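
To see the mechanics of this paired design, consider the counting step in isolation: the same case note is summarized twice, once with male and once with female identifiers, and the descriptors of interest are tallied in each summary. Below is a minimal Python sketch of that step; the note template, the target-term list, and the stand-in summarizer are illustrative assumptions, not the study’s actual prompts or pipeline.

    import re
    from collections import Counter
    from typing import Callable

    # Descriptor terms the LSE study reports appearing unevenly by gender.
    TARGET_TERMS = {"disabled", "unable", "complex"}

    def term_counts(summary: str) -> Counter:
        # Tally how often the target descriptors occur in a summary.
        words = re.findall(r"[a-z]+", summary.lower())
        return Counter(w for w in words if w in TARGET_TERMS)

    def compare_pair(template: str, summarize: Callable[[str], str]) -> tuple:
        # Summarize the same note twice, with only the gender swapped.
        male = summarize(template.format(name="Mr Smith", pronoun="he"))
        female = summarize(template.format(name="Mrs Smith", pronoun="she"))
        return term_counts(male), term_counts(female)

    # Toy stand-in for the model call; a real run would query an LLM such as
    # Gemma or Llama 3 and repeat the comparison over all 617 case notes.
    def fake_summarize(note: str) -> str:
        return note  # identity "summary", only to keep the sketch runnable

    template = "{name} is 84, lives alone, and {pronoun} is unable to manage stairs."
    print(compare_pair(template, fake_summarize))

Aggregating such counts over thousands of pairs, as the study did, is what turns anecdotal differences like the Smith example below into a measurable disparity.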

One example highlighted that the Gemma model summarized a set of case notes as follows: “Mr. Smith is an 84-year-old man living alone with a complex medical history, no care package, and poor mobility.” When the gender was swapped, the same notes were summarized as: “Mrs. Smith is an 84-year-old living alone. Despite her limitations, she is independent and able to maintain her personal care.” In another instance, the summary stated that Mr. Smith was “unable to access the community,” while Mrs. Smith was “able to manage her daily activities.”

Among the AI models assessed, Google’s Gemma exhibited a more significant gender-based disparity compared to other models. The study noted that Meta’s Llama 3 model did not differentiate its language based on gender.

Dr. Rickman commented that although the tool is “already in use in the public sector, it should not come at the expense of fairness.” He added, “My research sheds light on the issues posed by one model, but with more models continuously being deployed, it is imperative that all AI systems are transparent, rigorously tested for bias, and subject to stringent legal oversight.”

The paper concludes that to prioritize “algorithmic fairness,” regulators should mandate the measurement of bias in LLMs used in long-term care. Concerns about racial and gender bias in AI tools are long-standing, as machine learning systems absorb biases present in human language. One US study analyzed 133 AI systems across various industries and found that approximately 44% exhibited gender bias, while 25% showed both gender and racial bias.

Google said its teams would review the report’s findings. The researchers assessed the first generation of the Gemma model, which is now in its third generation and expected to perform better, though Google noted that the model was never intended for medical use.

Source: www.theguardian.com

Apple Eases Wall Street Concerns Amid Delays in AI Progress and China’s Challenges

Apple is facing significant challenges this year. While striving to keep pace with other tech giants in artificial intelligence, it has seen its stock price decline by double digits since the year began. The recent closure of a store in China marked a troubling moment, and increasing US tariffs on Beijing threaten its supply chain. On Thursday, the company reported its fiscal third-quarter results, inviting scrutiny of how it is faring.

Despite a bleak forecast, Apple remains valued at over $3 trillion, and it exceeded Wall Street’s expectations for profit and revenue this quarter. The tech giant posted a notable 10% year-on-year revenue increase to $94.04 billion, with earnings of $1.57 per share. This is the most substantial revenue growth Apple has experienced since 2021, surpassing analyst forecasts of over $89.3 billion in revenue and more than $1.43 per share.

Revenue from iPhones has also surpassed Wall Street predictions, rising 13% compared to the same quarter last year.

Apple CEO Tim Cook expressed pride in announcing a “June quarter revenue record,” highlighting growth across its iPhone, Mac, and services sectors. During an earnings call on Thursday, he remarked that the quarterly results were “better than anticipated.”

According to Dipanjan Chatterjee, Vice President and Principal Analyst at Forrester, the growth of services is boosting the company’s revenue streams. “Apple has grown accustomed to enhancing revenue through this service-centric margin business,” he noted.

However, he pointed out some factors contributing to underwhelming product performance, suggesting Apple is trailing in hardware innovation, leading to “consumer indifference,” with its AI rollout experiencing glitches. The AI initiative, dubbed Apple Intelligence, is introducing only incremental features rather than transformative enhancements.

It has been over a year since Apple revealed plans for the AI-enhanced version of Voice Assistant Siri, yet many features remain unreleased.

“This work [on Siri] was discussed during the company’s developer meeting in June,” said Craig Federighi, Apple’s senior vice president of software engineering.

The imposition of Donald Trump’s tariffs has also complicated matters for the company, as the US president pushes to revitalize domestic manufacturing. A significant portion of Apple’s products are produced in China, with 90% of iPhones assembled there, despite recent efforts to shift production elsewhere. During the quarterly call, Cook warned that the tariffs could dent revenue by $900 million.

Apple is actively working to relocate more manufacturing to countries like India and Vietnam. However, this week, Trump announced an increase in tariffs in India set to reach 25% starting August 1st.


During the earnings call on Thursday, Cook reminded analysts that Apple has committed $500 billion in US investment over the coming four years and added that “eventually we’ll do more in the US.” He mentioned that Apple has “made significant progress” with a more personalized Siri, scheduled for release next year.

Both external and internal pressures have weighed on Apple this year. Once celebrated as part of the “magnificent 7,” the group of the most valuable public tech companies in the US, Apple’s stock is now the second-weakest performer among them, ahead of only Tesla. Since January, the stock has dropped approximately 15%. Nevertheless, shares ticked up slightly in after-hours trading following Thursday’s report.

Source: www.theguardian.com

Concerns Grow for FEMA’s Future Following Texas Flooding

The catastrophic flooding in Texas, which claimed nearly 120 lives, marked the first major crisis for the Federal Emergency Management Agency (FEMA) under the current Trump administration. Despite the tragic loss of life, former and current FEMA officials told NBC News that a disaster confined to a relatively small geographic area does not fully test the capabilities of an agency whose staffing has been significantly reduced.

They argue that the true tests may arise later this summer, when the threat of hurricanes looms over several states.

As discussions about the agency’s future unfold, with President Donald Trump hinting at the possibility of “dismantling” it, Homeland Security Secretary Kristi Noem, who oversees FEMA, has tightened her control.

Current and former officials say Noem now requires that she personally authorize any expenditure exceeding $100,000. To expedite the approval process, FEMA established a task force on Monday aimed at streamlining requests for her sign-off, according to sources familiar with the initiative.

While Noem has taken a more direct approach to managing the agency, many FEMA leadership positions remain unfilled due to voluntary departures. In May, the agency disclosed in an internal email that 16 senior officials had left, collectively bringing over 200 years of disaster response experience with them.

“DHS and its components are fully engaged in addressing recovery efforts in Kerrville,” a DHS spokesperson remarked in a statement to NBC News.

“Under Secretary Noem and Acting Administrator David Richardson, FEMA has transformed from an unwieldy DC-centric organization into a streamlined disaster response force that empowers local entities to assist their residents. Outdated processes have been replaced because they failed to serve Americans effectively in real emergencies… Secretary Noem ensures accountability to U.S. taxpayers, a concern often overlooked by Washington for decades.”

Civilians assist with recovery efforts near the Guadalupe River on Sunday. Julio Cortez / AP

On Wednesday afternoon, the FEMA Review Council convened for its second meeting, set up to outline the agency’s future direction. “Our goal is to pivot FEMA’s responsibilities to the state level,” Trump told the press in early June.

At this moment, FEMA continues to manage more than 700 active disaster declarations, according to Chris Currie, who monitors the agency for the Government Accountability Office.

“They’re operating no differently. They’re merely doing more with fewer personnel,” he noted in an interview.

Even as the administration pushes to shrink the agency’s role, some Republicans in Congress have emphasized the need to preserve FEMA in the wake of the significant flooding.

“FEMA plays a crucial role,” said Senator Ted Cruz of Texas during a Capitol Hill briefing this week. “There’s a consensus on enhancing FEMA’s efficiency and responsiveness to disasters. These reforms can be advantageous, but the agency’s core functions remain vital, regardless of any structural adjustments.”

Bureaucratic Hurdles

A key discussion point in the first FEMA Review Council meeting was how the federal government can alleviate financial constraints. However, current and former FEMA officials argue that Noem’s insistence on personal approvals for expenditures introduces bureaucratic layers that could hinder timely assistance during the Texas crisis and potential future hurricanes.

Current officials voiced that the new requirements contradict the aim of reducing expenses. “They’re adding bureaucracy…and increasing costs,” one official commented.

A former senior FEMA official remarked that agents must procure supplies and services within disaster zones, and the contracts involved routinely exceed $100,000, meaning the new rule requires Noem’s authorization before they can act.

“FEMA rarely makes expenditures below that threshold,” disclosed an unnamed former employee currently involved in the industry to NBC News.

In addition to the stipulation that Noem must approve certain expenditures, current and former staff members revealed confusion regarding who holds authority—Noem or Richardson, who has been acting as administrator since early May. One former official noted a cultural shift within the agency from proactive measures to a more cautious stance, as employees fear job loss.

DHS spokesperson Tricia McLaughlin referred to questions regarding who is in charge as “absurd.”

Further changes are underway. Last week, the agency officially ended its practice of sending personnel door-to-door in disaster areas to tell victims about available services. The decision follows criticism of such interactions last fall, when FEMA workers were instructed to skip certain homes, conduct that acting managers labeled “unacceptable.” The dismissed personnel said they had acted on a supervisor’s instructions to avoid “unpleasant encounters.”

Although many individuals access FEMA services through various channels like the agency’s website and hotline, two former officials emphasized that in-person outreach remains essential for connecting disaster victims with available resources. It remains uncertain if the agency plans to send personnel into Texas for door-to-door outreach.

This week, Democratic senators expressed frustration that Noem has yet to share the 2025 hurricane plans she promised in May.

New Jersey Senator Andy Kim, leading Democrat on the Disaster Management Subcommittee, plans to send another letter to Noem on Wednesday to solicit these plans.

“The delay in FEMA’s 2025 hurricane season plan report at the start of hurricane season highlights the ongoing slowness of DHS in providing essential information to this committee,” Kim asserted in his letter.

FEMA’s Future

Critical questions remain regarding FEMA’s role in disaster recovery: What responsibilities will it retain, and which will be delegated to states to manage independently?

Experts consulting with NBC News concur that while federal agencies should maintain responsibility for large-scale disasters, the question persists as to whether states could be empowered to handle smaller ones rather than deferring to federal assistance.

“Disaster prevention is paramount,” remarked Jeff Schlegelmilch, director of Columbia University’s National Center for Disaster Preparedness.

Natalie Simpson, a disaster response expert at the University at Buffalo, added that larger states could assume greater risk during disasters.

“I believe larger states like California, New York, and Florida could establish a local FEMA thanks to economies of scale, but I doubt smaller states could do so effectively,” she stated during an interview.

Some state officials, including Texas Governor Greg Abbott, have criticized FEMA as “inefficient and slow,” asserting the need for a more responsive approach. The governor nonetheless requested a FEMA disaster declaration within days of the flood.

On Sunday, the president sidestepped inquiries about potential restructuring of the agency.

White House spokesperson Karoline Leavitt commented that ongoing discussions are taking place regarding the agency’s broader objectives. “The President aims to ensure that American citizens have the resources they need, whether that assistance is provided at the state or federal level; it’s a matter of continuous policy discourse,” Leavitt remarked.

Source: www.nbcnews.com

Concerns Arise Over Genetic Screening of Newborns for Rare Diseases

Rare diseases often elude early diagnosis, remaining undetected until significant organ damage occurs. Recently, UK Health Secretary Wes Streeting announced a 10-year initiative to integrate genetic testing for specific rare conditions into standard neonatal screening across the UK. The approach aims to ensure early intervention before symptoms manifest, aligning with feasibility programs under way in places like the US and Australia. Yet questions arise about the scientific validity of such measures.

The genome, akin to a book written in an unfamiliar language, is only partially understood. Decades of research on high-risk families have shed light on some genetic mutations, but there remains limited knowledge about the implications of population-level genetic testing for those at low risk. While this screening may prove advantageous for certain children and families, it might also lead to unnecessary tests and treatments for others.

Many genetic conditions involve more than a single genetic mutation. For example, individuals with a variant of the HNF4A gene and a strong family history of a rare form of diabetes have a 75% risk of developing the condition; those with the same variant but no family history face only a 10% risk. It is misleading to assume genetic variants behave uniformly across all populations. Perhaps families carrying the HNF4A variant lack other, unrecognized protective genes, or specific environmental factors interact with genetic risk to produce diabetes.
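
Stated as conditional probabilities, using the figures above, the same variant carries very different risks depending on what else is known:

\[
P(\text{diabetes} \mid \text{HNF4A variant},\ \text{family history}) \approx 0.75,
\qquad
P(\text{diabetes} \mid \text{HNF4A variant},\ \text{no family history}) \approx 0.10
\]

A screening program that reports only “variant present” collapses these two groups into one, which is exactly the gap between family-study risk figures and population-level screening.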

The proposed neonatal screening program presupposes that genetic variants linked to diseases signify equally high risks for all, which is rarely the case. The exploration of disease-related variations in healthy populations is just starting. Until this research is thorough, we will not know how many individuals carry a variant that does not result in illness, possibly due to other protective factors. Should we really subject newborns to genetic hypotheses?

Furthermore, ethical concerns emerge from this initiative. How do we secure informed consent from parents when testing for hundreds of conditions simultaneously? In the near future, a genetic database encompassing all living individuals could become a reality—what safeguards will exist for its use and protection?

Screening newborns is not new, but the scope of conditions included in this initiative, the complexity of interpreting results, and the sensitivity of the information gathered pose unique challenges. I worry that parents may feel compelled to accept the test, yet not all uncertainties will be appropriately managed. I fear that important early life stages could become burdened with unnecessary hospital visits. Additionally, the pressure on parents and pediatricians to decide on potentially invasive testing for healthy infants is concerning.

A prudent step would be to gather more data on the prevalence and behavior of genetic mutations in the wider population before utilizing genetic testing as a speculative screening tool for children. The potential benefits may be overshadowed by significant risks.

Suzanne O’Sullivan is a neurologist and the author of The Age of Diagnosis: Sickness, Health and Why Medicine Has Gone Too Far.


Source: www.newscientist.com

Instagram Users Claim They Were Banned Without an Appeal Process | Consumer Concerns

I am writing about RM, a young Black entrepreneur whose personal and business social media profiles have been deleted by Meta, the parent company of Instagram. There was no notice, no option to appeal and, to my understanding, no explanation given. He had successfully established two businesses, in clothing design and music events.

Just six days prior to the ban, he sold 1,500 tickets for an electronic dance event in London. Instagram, rather than his website, serves as the main platform for his work. Yet, he was abruptly informed that his content violated Meta’s community guidelines regarding violence and incitement.

His business account boasted 5,700 followers, while his personal account had nearly 4,000 contacts. All were erased, leaving him without his entire social and professional network and no way to retrieve the data. Because restrictions tied to his IP address and device blocked him, he could not even open a new account.

In following his work, I’ve yet to see anything violent in his promotional videos, save for toy weapons. His life is being upended by what seems to be an unyielding algorithm.

RP, London

The pivotal role of social media in the lives of young people often confuses older generations who rely on websites and direct contacts.

When I spoke with RM, 21, he shared that abrupt account closures by Meta for vaguely defined infractions have also hit fellow students, costing some of them their burgeoning businesses.

“For my generation, my Instagram profile is not just my sole source of income; it’s part of my identity, which makes recovery challenging,” he explains. “I wasn’t notified of violating any guidelines. This decision has cost me thousands of pounds in lost sales, which is especially devastating for a single-parent family in the inner city.”

RM firmly denies posting any content that could be perceived as violent or inciting harm. Because his account has been deleted, he is unable to check what might have triggered the decision.

Instead, I came across an interview with RM on a music website that offered insights into the cyberpunk rave scene he participates in. Some band and song titles might trigger the algorithms.

Terms like drugs, sex, and kill are prevalent in various musical genres. It remains unclear which specific lyrics resulted in RM’s ban, as Meta has provided no explanation to RM or to me, citing “confidentiality.”

While they declined to comment further, a spokesperson indicated that they would not restore RM’s account or provide him with contact details due to a “violation” of the guidelines. There is no avenue for appeal.

Meta, as a commercial entity, has the right to decide its clientele and eliminate harmful content, yet its role as judge, jury, and executioner is concerning given the repercussions of such decisions.

RM can file a Subject Access Request to discover what information Meta holds about him. While this won’t restore his account, it might help him comprehend the basis of the actions taken against him. Should Meta refuse to comply, he can reach out to the Information Commissioner’s Office.

He has created a new account and purchased a laptop to begin the process of rebuilding. I advise him (and others) to regularly back up contacts and not solely rely on companies that offer opaque administrative practices.

Meta currently faces scrutiny for enforcing widespread bans on users via algorithms on Facebook and Instagram. A petition has garnered over 25,000 signatures, advocating for human intervention.

Locked out of Facebook

EM, from West Sussex, hit a digital dead end after being locked out of her Facebook account when hackers changed her password, email address, and phone number. She says Facebook’s automated system merely provided a lengthy set of instructions when she sought guidance on wresting back access from the hackers. The hackers then switched her account from private to public, exposing her sensitive personal information.

When she sought help from Facebook, her newly established account was permanently closed as well. “It’s impossible to find someone to communicate with via email, chat, or phone,” she laments. “On a positive note, I enjoy the absence of Facebook noise in my life, though it felt like having my arm amputated!”

Meta did not respond to requests for comment.

Source: www.theguardian.com

Tesla Shares Plummet Amid Investor Concerns Over Potential Brand Damage from Elon Musk’s New Party

Tesla stocks are poised for a significant decline in the US, as investors worry that Elon Musk might introduce more challenges for electric vehicle manufacturers by potentially launching a new political party.

On Monday, Tesla shares dropped over 7% in pre-market trading, which could erase approximately $70 billion (£51 billion) from the company’s market capitalization at the Wall Street opening.

Should the stock fall that far, Musk’s net worth could drop by more than $9 billion. According to Forbes, Musk, who also heads SpaceX, ranks among the wealthiest individuals globally, with a fortune of about $400 billion.


Tesla’s stock, currently valued at just under $300 a share, is experiencing downward pressure largely because of Musk’s political activities and his relationship with President Donald Trump.

Musk’s past staunch support for Trump sparked consumer backlash, and the unpredictable nature of his relationship with the president raises concerns about Musk being sidetracked from his responsibilities, with potential repercussions for the company.

Wedbush Securities analyst Dan Ives pointed out that Musk’s financial involvement in US political parties could deter investors.

“Musk diving deeper into politics and now trying to take on the Beltway establishment is exactly the opposite direction that Tesla investors and shareholders want him to take at this critical juncture for the company,” Ives noted, adding that there is a palpable “broader fatigue” regarding Musk’s political endeavors.

On Sunday, Trump criticized Musk’s ambitions, labeling the America Party a “ridiculous” initiative.


Trump took to Truth Social to express his disappointment over Musk’s new direction, stating: “I am saddened to watch Elon Musk go completely off the rails.”

Over the weekend, Musk announced the formation of the America Party on his X platform, declaring: “We live in a one-party system, not a democracy, which is bankrupting our country with waste and graft. Today, the America Party is formed to give you back your freedom.”

Source: www.theguardian.com

Concerns Grow That X’s AI Fact-Checkers May Undermine Efforts Against Conspiracy Theories

The decision by Elon Musk’s X social media platform to enlist artificial intelligence chatbots to draft its fact-checking community notes might inadvertently promote “lies and conspiracy theories,” a former UK technology minister has warned.

Damian Collins accused X of “leaving it to bots to edit the news,” following the announcement that the platform would permit large language models to write community notes for submission to user approval. Previously, notes were written solely by humans.

X revealed that it plans to utilize AI for drafting fact-checking notes, asserting in a statement that the move puts it “at the forefront of enhancing information quality on the internet.”

Keith Coleman, X’s vice-president of product, said AI-drafted notes would be shown only after human reviewers assess them as useful to people of varied perspectives.

“We designed the pilot to operate as human-assisted AI. We believe it can offer both quality and reliability. We also released a paper alongside the pilot’s launch, co-authored by professors and researchers from MIT, the University of Washington, Harvard, and Stanford, detailing why this blend of AI and human involvement is promising.”

However, Collins pointed out that the system is open to abuse, warning that AI agents writing community notes could enable the industrial-scale manipulation of what users see and trust on a platform with around 600 million users.

This move represents the latest challenge to human fact checkers by US tech firms. Last month, Google said it would stop prioritizing fact-checks from professional fact-checking organizations in search results, asserting that such features “no longer provide significant additional value to users.” In January, Meta announced its intention to phase out its American human fact checkers and replace them with its own community notes system across Instagram, Facebook, and Threads.

An X research paper describing the new fact-checking system claims that specialized fact checks are often limited in scale and lack the trust of the general public.

The paper asserts that AI-generated community notes can be produced rapidly and at scale with minimal effort while maintaining high quality. Both human- and AI-created notes will enter the same pool, with user ratings determining which are useful enough to appear on the platform.

According to the research paper, AI will generate a “summary of neutral evidence.” Trust in community notes, the paper states, “stems from those who evaluate them, not those who draft them.”
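
The pipeline described here is authorship-agnostic: a note, whether drafted by a human or an AI, is displayed only after raters judge it helpful. The short Python sketch below illustrates that gating idea; the thresholds and the viewpoint-diversity test are invented for illustration, since X’s production system is reported to use a more sophisticated bridging-based rating algorithm.

    from dataclasses import dataclass, field

    @dataclass
    class Note:
        text: str
        author: str  # "human" or "ai"; the pool treats both identically
        # Each rating is a (rater_viewpoint_cluster, judged_helpful) pair.
        ratings: list = field(default_factory=list)

    def is_displayed(note: Note, min_ratings: int = 5) -> bool:
        # Illustrative gate: enough ratings, a clear helpful majority,
        # and agreement from more than one viewpoint cluster.
        if len(note.ratings) < min_ratings:
            return False
        helpful = [cluster for cluster, ok in note.ratings if ok]
        share = len(helpful) / len(note.ratings)
        return share > 0.6 and len(set(helpful)) >= 2

    note = Note("Claim X is false; see source Y.", author="ai")
    note.ratings = [("left", True), ("right", True), ("left", True),
                    ("right", True), ("center", False)]
    print(is_displayed(note))  # True: rated helpful across differing viewpoints

Under a gate of this kind, an AI-drafted note endorsed by only one cluster of raters would never surface, which is the property Coleman’s “human-assisted AI” framing relies on.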

Andy Dudfield, head of AI at the UK fact-checking organization Full Fact, commented: “These plans would add to the already significant workload of human reviewers, raising valid concerns that AI-generated notes may reach users without the thorough drafting, review, and consideration they require.”

Samuel Stockwell, a researcher at the Alan Turing Institute’s Centre for Emerging Technology and Security, noted: “AI can assist fact checkers in managing the vast number of claims circulating daily on social media, but much hinges on the quality of the safeguards X puts in place, given the risk that these AI ‘note writers’ will mislead users with false or dubious narratives. Even when inaccurate, confidently delivered notes can deceive viewers.”

Research indicates that individuals view human-generated community notes as significantly more reliable than a simple misinformation flag.

An analysis of hundreds of misleading posts on X in the run-up to last year’s presidential election found that in three-quarters of cases, accurate community notes were not displayed, having failed to win support from enough users. The misleading claims examined, including accusations that Democrats were importing illegal voters and assertions that the 2020 presidential election was stolen, amassed more than 2 billion views, according to the Center for Countering Digital Hate.

Source: www.theguardian.com

John Oliver on AI Concerns: “Some of These Might Be Quite Dangerous”

On his weekly HBO show, John Oliver discussed the alarming risks of AI, labeling it “worrisomely corrosive” to our society.

During “Last Week Tonight,” Oliver remarked that the rampant use of AI generation tools has made it effortless to clutter social media platforms with cheap, bizarre, mass-produced content, with the term “AI slop” coined to categorize it all.


He described it as “the latest version of spam,” with peculiar images and videos overwhelming users’ feeds and leaving many people with no idea that what they are seeing isn’t real.

“It’s highly probable that this content will flood platforms in the near future,” Oliver warned.

With such content, “The main goal is to grab your attention,” and barriers to entry have significantly lowered due to its ease of creation.

Meta has jumped into the fray with its own tools and has also refined its algorithm, meaning more than a third of the content in your feed now comes from accounts you don’t follow. “That’s how the slop infiltrates without your consent,” he noted.

A monetization program has emerged for those who manage to make their content go viral, and numerous AI slop experts are now offering to teach individuals the tricks of the trade for a small fee.

This has become “ultimately a spam-like volume game in all forms,” resulting in AI generators appropriating the work of real artists without credit. And despite the tales of wealth spun by these slop gurus, the amount of money actually involved can be relatively minimal.

It might be only a few hundred dollars, sometimes even less, even for content that goes megaviral. Much of this output originates in countries where that money stretches further, such as India, Thailand, Indonesia, and Pakistan.

One challenge is having to explain to your parents that the content isn’t genuine. “There’s this really adorable animal, but I can assure you it’s not Moo Deng; it’s AI,” he stated.

Additionally, there are environmental repercussions regarding the resources necessary to produce this content, along with a concerning proliferation of misinformation.

Oliver highlighted numerous fake disasters depicted through images and videos, showcasing tornadoes, explosions, and plane crashes. “Air travel is stressful enough without the creation of new disasters,” he lamented.

AI-generated content also circulated during the Israel-Iran conflict and complicated the work of first responders during last year’s floods in North Carolina. Republicans likewise exploited it to suggest that Biden was mishandling the latter crisis.

He noted the irony that people who spent the last decade yelling “fake news” at real journalism now find themselves confronted with actual fake news.

The impact of such fakes on last year’s U.S. elections wasn’t as damaging as some had feared, but AI is “already considerably more advanced than it was at that time.”

He concluded: “Not only will you be deceived by fakes, but their very existence may cause you to dismiss authentic videos and images as forgeries from bad actors.”

Oliver argued that all of this “corrodes the very notion of objective reality,” as it becomes increasingly difficult to identify AI content on these platforms.

“I’m not suggesting that some of this content isn’t entertaining, but some of it is potentially quite dangerous,” he warned.

Source: www.theguardian.com

Earn Up to £800 Daily: How Fraudsters Use Phones and Texts to Deceive Victims

You will receive a call or text offering you a job opportunity. It seems enticing: the work is remote, and you can potentially earn £800 a day. If you’re interested, you just contact the sender through the provided WhatsApp number.

The tasks are quite simple. Typically, you’re asked to engage with TikTok content through likes and shares.

“Once you start liking and sharing, you’ll get a small payout. However, this is fraudulent funding tied to individuals involved in scams,” remarks Annya Burskys, head of fraud operations at Nationwide Building Society. “Then you might be told that you need to pay a sum to unlock greater profits, which could be framed as a training fee.

“Part of that money is used to compensate other victims, leading some into organized crime syndicates.”

Burskys highlights that this initial outreach is particularly enticing for many, especially students.

“We’re noticing an uptick in incidents, particularly within the 16-25 age group,” she says. “Previously, we didn’t receive such reports daily, but now we hear from individuals who have sent money or from banks alerting us about funds transferred to these accounts.”

In some instances, the victim might inadvertently become a “money mule.”

Beyond sharing funds or account details, victims may later discover that their bank and identity information have been exploited for additional fraud.

Typically, victims incur losses of hundreds or thousands of pounds. “Individually it’s not a huge amount; the concern lies in the volume,” she explains. “And events unfold swiftly. An investment fraud can play out over months or even years, but here the time from first contact to realizing you’ve been scammed can be very short.”

As academic institutions close for the summer, students seeking employment should be cautious of potential scams.

What does fraud look like?


The £800 figure frequently appears in correspondence related to the scam. You will be prompted to contact via WhatsApp. Photo: Guardian

Messages often claim to originate from recruitment agencies, sometimes using legitimate company names or stating availability of work through TikTok. Some texts even reference your CV as if you’d submitted it. They promise earnings of hundreds of pounds daily (the £800 figure is a recurring theme).

Calls may bear similarities too. In one recent example, an automated voice falsely claimed to represent a recruitment agency, instructing recipients to make contact via WhatsApp if interested in the job. The associated phone number typically appears as an ordinary UK mobile.

Some scams reference your CV as if you had submitted it. Photo: Guardian

What the message asks for

The initial message will prompt you to express interest in the position. The scammer will describe work that involves liking and sharing content (typically TikTok videos). When you sign up, or when it is time to be paid, you may be asked for more personal information.

You might receive an initial “payment,” but then you will be requested to cover costs for training or to unlock access to higher earnings.

What to do

Be cautious of unsolicited messages that claim to offer job opportunities; this is not how genuine recruitment agencies approach candidates. As one agency puts it, recruiters do not use messaging platforms to contact job seekers directly on behalf of employers.

Burskys recommends that if you receive messages from recruiters or companies offering jobs, investigate by checking Companies House and researching the firm on LinkedIn. A Companies House entry can provide insights into a company’s operations, its directors, and its legitimacy.

If you know the name of the employer, visit their site to see if the position is advertised.

In the UK, reports of fraudulent messages can be forwarded to 7726.

Many recruiters also publish advice on how to conduct your job search safely.

Source: www.theguardian.com

Will You Face a Cyber Attack? 7 Essential Protection Tips | Consumer Concerns

Keep an eye on your inbox

Cyberattack notifications flood our inboxes weekly, sparking concern over the personal data that may have been compromised.

Recently, Adidas disclosed that some customers’ personal information had been breached, although it said passwords and credit card details were not affected.

Another incident involved unauthorized access to personal data of thousands of legal aid applicants from England and Wales, dating back to 2010, which followed significant disruptions caused by a cyberattack on Marks & Spencer.

If you see news about a cybersecurity incident affecting a company you’ve interacted with, stay vigilant regarding your email. Companies typically reach out to affected customers with details on what occurred and suggested actions.

Sometimes, only specific customer segments or users from particular regions may be impacted.

In Adidas’ case, it appears that those who contacted customer service recently are primarily affected, which may exclude many others. Occasionally, communication will confirm that you are unaffected.

If your information could be compromised, you’ll usually receive guidance on corrective measures or a link to a FAQs page. In some instances, firms may offer free access to support services from cybersecurity experts or credit monitoring.

In Adidas’ case, it seems to affect customers who contacted the service desk previously. Photo: Odd Andersen/AFP/Getty Images

Change Your Password

If you’ve conducted transactions with an organization that faced a cyber incident, change your password for that account immediately.

Ensure your password is robust and not used across multiple accounts.

Experts recommend creating passwords that are at least 12 characters long, including a mix of numbers, capital and lowercase letters, and symbols. Avoid easily guessed information like pet names, birthdays, or favorite teams.

“A great strategy to enhance password security is to combine three random words into one,” says the UK’s National Cyber Security Centre. For example, consider something like Hippo!PizzaRocket1.

“Consider using a password manager to generate and securely store unique, strong passwords,” advises online security firm NordVPN.
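To make the three-random-words advice concrete, here is a minimal sketch in Python. It assumes a Unix-style word list at /usr/share/dict/words (the path varies by system), and the formatting choices are illustrative rather than an official NCSC recipe.

```python
# Minimal sketch: build a "three random words" password.
# Assumes a Unix-style word list; adjust the path for your system.
import secrets

def three_word_password(wordlist_path="/usr/share/dict/words"):
    with open(wordlist_path) as f:
        words = [w.strip() for w in f
                 if w.strip().isalpha() and 4 <= len(w.strip()) <= 8]
    # Use secrets (not random) for cryptographically strong choices.
    picked = [secrets.choice(words).capitalize() for _ in range(3)]
    # Append a digit and a symbol to satisfy common complexity rules.
    return "".join(picked) + secrets.choice("0123456789") + secrets.choice("!$%&*?")

print(three_word_password())  # e.g. something like HippoPizzaRocket1!
```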

Utilize Two-Step Authentication

Two-step verification adds an extra security layer to your email and other key online accounts.

This generally involves receiving a code, either generated by an authenticator app or sent to your registered mobile number, to grant access.

Enable two-step verification on all services that provide this feature.
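The codes produced by authenticator apps are usually time-based one-time passwords (TOTP): the app and the service share a secret once, and both derive a short code from it every 30 seconds. Below is a minimal sketch using the third-party pyotp library (pip install pyotp); the secret is generated on the spot purely for illustration.

```python
# Minimal TOTP sketch with pyotp (pip install pyotp).
import pyotp

secret = pyotp.random_base32()   # shared once with your app, e.g. via QR code
totp = pyotp.TOTP(secret)        # 6-digit code rotating every 30 seconds

code = totp.now()                # what the authenticator app displays
print("current code:", code)
print("accepted:", totp.verify(code))  # what the service checks server-side
```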

The second factor may include codes sent via SMS. Photo: Prostock-Studio/Alamy

Beware of Unsolicited Emails

Phishing emails often cite recent cyber events to lure unsuspecting targets who may be customers of the affected company.

Scammers might leverage personal information they’ve acquired to appear credible.

Avoid clicking on any link or attachment in emails, text messages, or social media posts unless you’re entirely certain of their legitimacy. These links can lead to phishing sites or include malware designed to steal your identity.

M&S has advised potentially impacted customers that they “may receive emails, calls, or texts that appear to be from us,” adding: “We will never reach out for your personal account details, such as your username or password.”

If an email claims to be from a business you interact with and you’re uncertain of its authenticity, ignore it or verify it through official contact channels.

Be cautious about links or attachments in emails unless you are completely sure they are legitimate. Photo: Tero Vesalainen/Alamy

Monitor Your Credit Record

If your personal data has been compromised, keep an eye on your credit report, which details your financial history and is used by lenders to judge your creditworthiness, in case someone attempts to open accounts in your name.

For instance, if a financial entity endures a cyber breach, the accessed data could include sensitive information such as your name, address, national insurance number, date of birth, bank account details, salary, and potentially your passport.

This information can be misused for identity fraud.

In the UK, the main credit reference agencies are Equifax, Experian, and TransUnion, all of which offer ways to check your credit report for free or via subscription.

Credit Karma and ClearScore offer free lifetime access to your credit reports.

Experian provides an identity monitoring service, which checks your personal, financial, and credit information for suspicious activity. It is a paid service, though companies whose data has been breached sometimes cover the cost for affected customers.

Be cautious if your applications for credit cards or loans are suddenly denied despite a healthy credit score, or if you stop receiving bank statements for no clear reason, as either may indicate identity theft.

More significantly, you may start receiving letters regarding debts that are not yours, or seeing transactions on your bank statements for items you didn’t purchase.

Many instances of financial fraud begin on social media and tech platforms, so remain vigilant, as scammers may possess details about you that can lend credibility to their deception.

The so-called “Hi Mum” scams have risen in recent years, in which scammers impersonate relatives on platforms like WhatsApp, often pleading for urgent money transfers after claiming to be locked out of their online banking.

Even with a sense of urgency, take the necessary time to verify the identity of anyone requesting funds.

Opt Out of Saving Card Details

When shopping online, retailers frequently prompt you to save payment card details for quicker checkout, but this may store your information with third-party services rather than just the retailer.

If you can avoid storing payment details across multiple sites, you reduce the risk of unauthorized access to your card information.

Source: www.theguardian.com

Creativity at Risk: AI Job Concerns in the Advertising Industry

Using motion-capture technology, Indian cricket legend Rahul Dravid delivers custom coaching advice to children. A robotic arm driven by a trained AI algorithm rewrites passages in the hand of Shakespeare’s original manuscripts. Artificial intelligence is rapidly transforming the worldwide advertising landscape.

The AI-generated advertisements from Cadbury’s drink brand Bournvita and pen manufacturer BIC were crafted by WPP, an agency group investing £300 million annually in data, technology, and machine learning to maintain its edge.

Mark Read, CEO of the London-based marketing services group, has stated that AI is “essential” to the future of the business and recognizes that it will lead to significant changes in the advertising sector’s workforce.

Recently, Read announced his resignation as CEO of WPP after nearly seven years in the role, capping more than 30 years with the company.

Advertising agencies face challenges from familiar adversaries. Over the past decade, tech giants like Google and Meta (the parent company of Facebook) have built sophisticated tools for publishers and ad buyers, solidifying their dominance online. This year, Big Tech has captured nearly two-thirds of the £45 billion that UK advertisers are spending.

WPP’s subsidiary VML has harnessed AI for a “one BIC, one book, two classics” campaign targeting Brazilian audiences. Photo: WPP

Meta is preparing to launch AI tools that enable the complete creation and targeting of advertising campaigns on social media, raising concerns about “creative extinction” and potential job cuts across agencies.

These tools are set to be introduced by the end of next year. In a recent interview, Zuckerberg described them as “redefining advertising categories.”

Agencies of all sizes, particularly large international networks like WPP, Publicis, and Omnicom, are developing their own AI resources while investing in partnerships with tech giants like Meta and Google, striving to retain clients.

“I’m confident AI will disrupt a significant number of jobs,” stated the CEO of a major advertising firm. “That said, many agencies have diverse client portfolios and perform a broad range of tasks. Staffing remains secure in areas like strategy, consumer insight, and certain conceptual roles, but production roles are where the impact is being felt most.”

Tech executives extolled the advantages of AI at last week’s Enders Analysis and Deloitte conference on the media and telecommunications sector.

Speaking at the conference, Stephan Pretorius, WPP’s chief technology officer, emphasized: “True creativity is an inherently human skill.”

He argued that while AI isn’t a direct substitute for recruitment, institutions must adapt and prioritize client relationships.

“AI replaces tasks rather than jobs,” he stated. “Many responsibilities we were compensated for are now automated, necessitating a shift in our business models. Team structures and client incentives will also evolve. This is merely a transitional phase.”

Recently, WPP reported several layoffs across its media division, previously known as GroupM.

“The big holding companies face a conundrum,” remarked another agency CEO. “Clients see them investing millions in AI to speed up work and cut costs, and many of those clients are responding by seeking to reduce their fees.”

Currently, the AI revolution hasn’t made a significant dent in the UK advertising sector.

Meta, the parent company of Facebook and Instagram, plans to introduce AI tools enabling advertisers to fully create and target campaigns on social media. Photo: Anadolu/Getty Images

Last year, the IPA reported a record 26,787 people employed in its member media, creative, and digital agencies, which handle about 85% of the UK’s advertising expenditure.

The IPA has tracked the market’s size since 1960, when it recorded about 19,900 employees; the figure dipped to just under 12,000 in the early 1990s.

Advertising expenditure has surged dramatically since then, fueled by the rise of the internet, from a mere £60 million in the pre-television era of 1938.

By 1982, the UK advertising market was valued at £3.1 billion, and this year it is expected to surpass £45 billion, according to the Advertising Association/WARC that has published annual reports since 1980.

Agency executives believe that major advertisers face too much brand risk to allow AI to handle the entire creative process.

“I can often identify a piece of AI-generated work from a mile away: it’s polished, overly idealistic, and somewhat artificial,” observed one creative agency head. “But that’s evolving. People have told me creatives could never be bettered on something like the iconic Cadbury gorilla ad, yet I’m uncertain. AI may eventually become good enough to respond to highly intuitive briefs.”

Cadbury’s Dairy Milk ad featuring a gorilla playing the drums became a viral sensation. Photo: Rex Features

As the industry speculates about Meta’s plans to replace conventional agencies, Zuckerberg has sought to clarify that AI technologies are primarily aimed at small and medium enterprises.

“In future collaborations with creative agencies, we’ll likely ensure their involvement,” he remarked at Stripe’s Sessions conference, shortly after his initial comments about Meta’s AI advertising plans. Agencies that don’t adapt, he suggested, might find themselves throwing together ad compositions and flooding the Meta platform with thousands of variations to see which performs best.

Meta and Google maintain they’ve “democratized” advertising by enabling countless small and mid-sized companies to run campaigns without the financial burden of traditional advertising channels.

“That’s the mask they’ve always worn,” stated one advertising agency head. “When they emerged decades ago as novel ad platforms, the focus was on small businesses, yet now they capture almost two-thirds of the UK’s advertising budget.”

Big tech firms grew immensely in the 2000s, a period in which Sir Martin Sorrell propelled WPP to become the largest advertising group worldwide; Sorrell, now CEO of S4 Capital, famously dubbed Meta and Google the industry’s “frenemies.”

Two decades later, the rise of AI within advertising marks the latest technological upheaval that the industry must adapt to in order to thrive.

“Meta’s bold commitment to ‘automatically generate ads in seconds’ signals a transformative shift towards total mechanization of production processes,” asserts Patrick Garvey, co-founder of the independent agency PI. “This isn’t the demise of the agency; rather, it signals the end of outdated institutional paradigms.”

He champions the small businesses reshaping the landscape but questions whether Meta’s approach to AI resembles “advertising fast food.” For traditional ad firms, it could prove to be a bitter pill.

Source: www.theguardian.com

Tech Stocks Climb Following Strong Nvidia Results, Despite Concerns Over Chinese Competitors

Even though leaders in the AI chip industry have raised concerns about the emergence of Chinese competitors, tech stocks experienced an upswing on Thursday, buoyed by robust results from Nvidia.

The Stoxx Europe tech index rose 0.8% on Thursday, led by a 2.4% gain in Dutch semiconductor equipment manufacturer ASML. In the US, tech-focused Nasdaq futures climbed 2%, with Nvidia’s shares up 6% in pre-market trading.

The uptick in tech and artificial intelligence stocks followed Nvidia’s report that surpassed Wall Street expectations, with quarterly revenues jumping 69% to $44 billion (£32.6 billion). The company also expressed optimism that business transactions in the Middle East could offset losses from China.

In April, US President Donald Trump’s administration imposed restrictions on exports of Nvidia’s H20 AI chips to Chinese firms, effectively cutting off a significant revenue stream.

Nvidia’s CEO Jensen Huang cautioned that Chinese competitors are capitalizing on the vacuum left by US trade barriers. “Chinese rivals have adapted,” Huang stated to Bloomberg TV. He noted that Huawei, which has been blacklisted by the US government, is “extremely formidable.” “Like everyone, their capabilities are multiplying each year,” Huang remarked. “The volume has also significantly increased.”

While US government policies aim to keep AI technology out of Chinese hands, Huang indicated that Chinese businesses are simply turning to local alternatives. “The importance of the Chinese market should not be underestimated,” Huang noted. “It’s home to the largest population of AI researchers globally.”

Nvidia mentioned that it anticipates losing out on $8 billion in revenue for the second quarter due to Trump’s trade restrictions.

Tech investors were also encouraged by a ruling from the US Court of International Trade in New York blocking Trump’s sweeping tariff policies, although uncertainty lingers since the White House has already appealed the decision.

In other news, shares of Tesla, another key player in AI technology, climbed by 2.6% after CEO Elon Musk confirmed his decision to step down from his role in the Trump administration.

Musk has been at the helm of the so-called Department of Government Efficiency (DOGE) since January, ruthlessly cutting expenditure across public agencies and institutions. He signaled in April his intention to step back, following a decline in Tesla’s revenue and the defeat of a Wisconsin supreme court candidate whom he had spent millions supporting alongside other Republican causes.

Source: www.theguardian.com

Critics Raise Concerns as Workers Embrace Big Tech Opportunities

Former Google CEO Eric Schmidt noted that the issue in the UK is that “there are many ways for people to decline.”

However, some critics of the Labour government argue that it struggles to say “yes.”

Schmidt made these comments during a Q&A with Keir Starmer at a major investment summit last October, where the presence of influential tech leaders underscored the sector’s significance for governments prioritizing growth.

Major US tech firms such as Google, Meta (run by Mark Zuckerberg), Amazon, Apple, Microsoft, and Palantir, the data intelligence firm co-founded by Peter Thiel, exert significant influence on the UK landscape.

For governments aiming to stimulate growth, it’s challenging to overlook companies boasting trillions in market value.

This influence offers immediate access, according to a former employee from Big Tech familiar with how major US firms advocate for their interests in the UK.

“I had no trouble navigating Whitehall corridors, claiming to create thousands of jobs for the economy. The government adores job announcements,” the ex-employee remarked.

In this light, Technology Secretary Peter Kyle has engaged with tech sector representatives nearly 70% more often than his predecessor, Michelle Donelan, including multiple discussions with firms like Google, Amazon, Meta, and Apple.

UKAI, the UK’s leading trade body for the AI sector, expresses concern over the marginalization of smaller players.

“We worry about the significant imbalance in policy influence between a handful of global giants and the multitude of businesses that make up the UK’s AI industry,” says its chief executive, Tim Flagg. “We’re not being heard, yet the economic growth the government seeks will originate from these companies.”

Echoing the sentiments of a former Big Tech employee, Flagg emphasizes that large tech firms have the means to cultivate and sustain political relationships.

A source familiar with the industry’s interactions with the government noted that these large tech companies leveraged their resources before the general election and established relationships remained intact following the Labour landslide.

Others point to the “extraordinary” access enjoyed by the Tony Blair Institute, which is financially backed by tech billionaire Larry Ellison and has become a key voice in AI policy debates, while maintaining what it describes as “intellectual independence” in its policy work.

Critics of the government’s dealings with major tech entities cite proposed copyright law reforms as reflective of these imbalanced relationships. Ministers have proposed allowing AI firms to use copyrighted works without permission to build their products, unless rights holders opt out.

A source close to Kyle indicates that the opt-out option is no longer the preferred approach, following a significant backlash. Opposition to the proposal includes prominent figures from the UK’s robust creative sector, ranging from Paul McCartney and Dua Lipa to Tom Stoppard.

Technology is posited as a solution to the government’s economic growth dilemma, with AI central to that strategy. But the copyright proposal has proved a PR disaster when set against celebrity-led opposition. The News Media Association, which represents organizations including the Guardian, also contests the proposal, as do Google and ChatGPT developer OpenAI.

A former government adviser involved in technology policy suggests that diluting copyright protections, often described as the “lowest hanging fruit” in policy discussions, will not be the “key solution” to leading in global AI development.

“By taking this route, the government risks ending up with the worst of both worlds. This approach does not lead to the actions needed to truly support our leading sectors and establish the UK as an AI superpower.”

A spokesperson for the Department for Science, Innovation and Technology said the government makes “no apology” for its engagement with a sector employing 2 million people in the UK, emphasizing that “regular interaction” with tech companies of all sizes is crucial for driving economic growth.

During his conversation with Schmidt, Starmer posed what he called the vital question for any future policy: “Does this promote growth or hinder it?” The tech industry sits at the core of that inquiry, although the copyright row may yet undermine vital relationships elsewhere.

Source: www.theguardian.com

Concerns Raised Over Foresight AI’s Use of 57 Million NHS Medical Records

The Foresight AI model is trained on information derived from hospital and GP records across the UK.

Hannah McKay/Reuters/Bloomberg via Getty Images

The developers assert that an AI model trained with medical records of 57 million individuals through the UK’s National Health Service (NHS) could eventually assist physicians in anticipating illness and hospitalization trends. Nonetheless, other academics express significant concerns over privacy and data protection associated with the extensive utilization of health data, acknowledging that even AI developers are unable to ensure the absolute protection of sensitive patient information.

This model, branded “Foresight,” was initially created in 2023. Its first iteration leveraged OpenAI’s GPT-3, the large language model (LLM) behind the original ChatGPT, and was trained on 1.5 million real patient records from two London hospitals.

Recently, Chris Tomlinson from University College London and his team broadened their objectives, claiming to develop the world’s first “national generative AI model for health data” with significant diversity.

Foresight now utilizes Meta’s open-source LLM, Llama 2, and draws on eight distinct datasets of medical information routinely collected by the NHS between November 2018 and December 2023, including outpatient appointments, hospital visits, vaccination records, and other relevant documents.

Tomlinson notes that his team has not disclosed any performance metrics for Foresight, as it is still undergoing evaluation. However, he believes that its potential extends to various applications, including personalized diagnoses and forecasting broader health trends such as hospital admissions and heart conditions. “The true promise of Foresight lies in its capacity to facilitate timely interventions and predict complications, paving the way for large-scale preventive healthcare,” he stated at a press conference on May 6.

While the foreseeable advantages remain unproven, the ethical implications of utilizing medical records for AI training at this magnitude continue to raise alarms. All medical records are de-identified before being integrated into AI training, yet researchers note that the risk of re-identifying records through data patterns is well established, especially in expansive datasets.

“Creating a robust generative AI model that respects patient privacy presents ongoing scientific challenges,” stated Luc Rocher at Oxford University. “The immense detail of data advantageous for AI complicates the anonymization process. Such models must operate under stringent NHS governance to ensure secure usage.”

“Direct identifiers are removed from the data going into the model,” remarked Michael Chapman of NHS Digital, who oversees the data feeding Foresight. However, he acknowledged that the risk of re-identification can never be entirely eliminated.

To mitigate this risk, Chapman explained that AI functions within a specially created “secure” NHS data environment, guaranteeing that information remains protected and accessible solely to authorized researchers. Amazon Web Services and Databricks provide the “computational infrastructure,” yet they do not have access to the actual data, according to Tomlinson.

Regarding the potential exposure of sensitive information, Yves-Alexandre de Montjoye from Imperial College London suggests testing whether a model can regurgitate information it encountered during training. When asked by New Scientist whether Foresight has undergone such testing, Tomlinson indicated that it has not, though the team is contemplating future assessments.
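A generic sketch of the kind of memorization probe de Montjoye describes: prompt a model with the prefix of a record it may have seen in training and check whether it reproduces the true continuation. The snippet below uses the Hugging Face transformers library with the public “gpt2” model as a stand-in, since Foresight itself is not publicly available, and the patient record shown is synthetic.

```python
# Sketch of a memorization probe: does the model complete a training
# record verbatim? "gpt2" is a stand-in; the record below is synthetic.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def reproduces(prefix: str, true_suffix: str, max_new: int = 20) -> bool:
    inputs = tok(prefix, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new, do_sample=False)
    completion = tok.decode(out[0][inputs["input_ids"].shape[1]:])
    return true_suffix.strip() in completion

# A real audit would run this over many held-in records and compare the
# hit rate against records the model never saw.
print(reproduces("Patient 0421, admitted 2021-03-02 with", "acute appendicitis"))
```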

Employing such an extensive dataset without engaging the public regarding data usage may erode trust, cautions Caroline Green at Oxford University. “Even anonymized data raises ethical concerns, as individuals often wish to manage their data and understand its trajectory.”

Nevertheless, prevailing regulations offer little leeway for individuals to opt out of the data used by Foresight. All information incorporated into the model comes from NHS datasets gathered on a national scale and is de-identified, so existing opt-out provisions do not apply, according to an NHS England representative, although individuals who have opted out of sharing their family doctor data will not have it included in the model.

Under the General Data Protection Regulation (GDPR), individuals should retain the option to withdraw consent for the use of their personal data. However, the way LLMs like Foresight are trained makes it practically impossible to remove a single record from the resulting tool. An NHS England spokesperson commented: “The GDPR does not apply since the data used to train the model is anonymized, and therefore we are not processing personal data.”

While applying the GDPR to the training of LLMs presents novel legal issues, the UK Information Commissioner’s Office cautions that “de-identified” data should not be treated as equivalent to anonymous data, noting that the term has no definition in UK data protection law, which can lead to misunderstanding.

The legal situation is further complicated, Tomlinson explains, by the fact that Foresight is currently engaged only in Covid-19-related studies. This means exceptions to data protection laws instituted during the pandemic still apply, points out Sam Smith of medConfidential, a UK data privacy advocacy group. “This Covid-specific AI likely harbors patient data, and such information cannot simply be extracted from the research environment,” he asserts. “Patients should maintain control over how their data is used.”

Ultimately, the conflicting rights and responsibilities surrounding the use of medical data in AI development remain unresolved. “With AI innovation, ethical considerations are often an afterthought rather than the starting point,” states Green. “Human ethics must be the foundation, with the technology built on top.”

This article was updated on May 7, 2025, to correct the comments made by the NHS England spokesperson.

Source: www.newscientist.com

Uber Sees 14% Revenue Growth Despite Financial Concerns

Uber appears to be weathering the global economic turbulence, despite concerns that consumers are pulling back on rides and deliveries.

The company announced on Wednesday that its revenue reached $11.5 billion in the last quarter. This marks a 14% increase from the previous year, slightly below what Wall Street analysts anticipated. Total bookings also climbed 14% to $42.8 billion, meeting expectations.

Investors are keen to understand the impact of President Trump’s recent tariffs on Uber’s growth trajectory. While the company’s core business is minimally affected by customs duties, a sluggish economy could deter customers from spending on rides and deliveries.

Nonetheless, Uber forecasts that bookings will rise between 16% and 20% in the current quarter, surpassing Wall Street’s 14% estimate. In a statement, CEO Dara Khosrowshahi remarked on the strong start to the year, despite “a dramatic backdrop of trade and economic news.”

Uber’s profit for the quarter was $1.8 billion, a significant turnaround from a loss of $654 million in the same quarter last year, which included a $721 million impact from the revaluation of an investment.

Additionally, Uber revealed several new partnerships related to self-driving cars over the first four months of the year, as part of a broader strategy to engage with the robot taxi sector, which poses competitive challenges.

In March, the company launched an exclusive partnership with autonomous-driving company Waymo in Austin, Texas, with plans to expand to Atlanta soon. By May, Uber had 18 active self-driving car partnerships.

While rides remain the main source of Uber’s profits, the food delivery segment grew 15%. Recently, the company invested $700 million to acquire an 85% stake in Trendyol GO, a Turkish grocery and food delivery service.

Uber also saw some relief from the rising car insurance costs that had been eating into earnings, adding less to its short-term and long-term insurance reserves last quarter than a year earlier.

Source: www.nytimes.com

Minister Revises Data Bill in Response to Artists’ AI and Copyright Concerns

Ministers have offered concessions over the proposed copyright changes to address the concerns of artists and creators ahead of a crucial vote in parliament next week, the Guardian has learned.

The government is dedicated to conducting economic impact assessments for the proposed copyright changes and releasing reports on matters like data accessibility for AI developers.

These concessions aim to alleviate worries among parliamentarians and the creative sector regarding the government’s planned reforms to copyright regulations.

Prominent artists such as Paul McCartney and Tom Stoppard have rallied behind a high-profile campaign opposing the changes. Elton John remarked that the reforms would undermine the traditional copyright protections that safeguard artists’ livelihoods.

Ministers intend to permit AI companies to use copyrighted works for model training without acquiring permission, unless the copyright holder opts out. Creatives argue this favors AI firms and want them held to existing copyright law.

The government’s pledges will be reflected in amendments to the data bill, which has become a key vehicle for opponents of the proposed changes and is scheduled to be debated in the Commons next Wednesday.

The initiative has already faced criticism. Crossbench peer and campaigner Beeban Kidron stated that the minister’s amendments would not “meet the moment,” and the Liberal Democrats indicated they would propose their own revisions to compel AI companies to comply with current copyright law.

British composer Ed Newton-Rex, a notable opponent of the government’s proposal, argued there is “extensive evidence” that the changes “are detrimental for creators,” adding that no impact assessment was needed to establish this.

Ahead of next week’s vote, Science and Technology Secretary Peter Kyle sought to establish rapport within the creative community.

During a meeting with music industry stakeholders this week, Kyle acknowledged that his focus on engaging with the tech sector has frustrated creatives. He faced backlash after holding over 20 meetings with tech representatives but none with those from the creative sector.

Kyle stirred further criticism by stating at the conference that AI companies might relocate to countries like Saudi Arabia unless the UK revamps its copyright framework, a claim he did not repeat at a Downing Street meeting with MPs this week.

Government insiders assert that AI firms are already based abroad and emphasize that if the UK does not reform its laws, creatives may lack avenues to challenge the exploitation of materials by overseas companies.

According to government sources, ministers are no longer set on an opt-out system and are taking “a much broader and more open-minded perspective.”

However, some Labour lawmakers contend that the minister has not demonstrated any substantial job growth in return and is yielding to American interests, criticizing the approach as, at best, outsourcing and, at worst, outright exploitation.

Kidron, who has successfully amended the data bill in the Lords in opposition to the government’s reforms, remarked: “The moment is not met by pushing the issue into the long grass with reports and reviews.”

“I ask the government why they neglect to protect UK property rights, fail to recognize the growth potential of UK creative industries, and ignore British AI companies that express concerns over favoritism towards firms based in China,” she stated.

James Frith, a Labour member of the Culture, Media and Sport Select Committee who led discussions on the matter this month, asserted: “The mission of the creative sector cannot equate to submission to the tech industry.”

Kidron’s amendments, aimed at making AI companies accountable under UK copyright law regardless of where they are based, were stripped out in the Commons, but the Liberal Democrats plan to reintroduce them next week.

The Liberal Democrats’ proposal includes a requirement for AI model developers (the technology that supports AI systems like chatbots) to adhere to UK copyright laws and clarify the copyrighted materials incorporated during development.

The Liberal Democrat amendment also demands transparency regarding the web crawlers used by AI companies, referring to the technology that gathers data from the Internet for AI models.

Victoria Collins, the Liberal Democrats’ technology spokesperson, stated:

“Next week in the Commons, we will work to prevent copyright protections from being diluted for AI, and we urge lawmakers across Parliament to stand with us in support of UK creators.”

Source: www.theguardian.com

Chinese researchers’ access to 500,000 UK GP records raises concerns about data protection

Chinese researchers have been granted access to half a million UK GP records, despite concerns from Western intelligence agencies about the authoritarian regime’s accumulation of health data, the Guardian can reveal.

The records are set to be transferred to UK Biobank, a research hub housing detailed medical information from 500,000 volunteers. This extensive health data repository is made accessible to universities, scientific institutions, and private companies. Guardian analysis indicates that roughly one in five successful applications for access originates from China.

Health authorities had been evaluating the need for additional protection measures for patient records as they are integrated with genomes, tissue samples, and questionnaire responses at UK Biobank. Personal details such as date of birth are stripped from UK Biobank data before sharing, but experts warn that in some instances, individuals could still be identifiable.

Despite warnings from MI5 about Chinese entities accessing UK data under the direction of China’s intelligence agency, UK Biobank, which oversees health data, has recently given clearance for Chinese researchers to access GP records.

As UK ministers cozy up to Beijing in pursuit of economic growth, the decision reflects a delicate balancing act to avoid antagonizing the influential superpower, which prioritizes biotechnology advancement. The UK-China relationship is already under strain over issues such as the ownership of the Chinese-owned British Steel plant in Scunthorpe and new rules on foreign interference.

A government spokesperson emphasized that security and privacy are paramount considerations when utilizing UK health data for disease understanding and scientific research. They reassured that health data is only shared with legitimate researchers.

The UK Biobank has been a major success in advancing global medical research, according to Chi Onwurah, the Labour MP who chairs the Commons science and technology committee. She stressed the need for a comprehensive government strategy to ensure data control and secure, responsible data sharing in the current geopolitical landscape.

Approval of access to patient records

Out of 1,375 successful applications for UK Biobank data access, nearly 20% come from China, second only to the US. Chinese researchers have leveraged UK Biobank data for research on topics like air pollution and dementia prediction.

In recent years, the US government has imposed restrictions on BGI subsidiaries due to concerns about their collection and analysis of genetic data potentially aiding Chinese military programs. Nevertheless, UK Biobank approved a research project with a BGI unit, emphasizing the need for strict compliance with UK data laws.

The UK Biobank representative dismissed claims of genetic surveillance or unethical practices by BGI, stating that the focus is on civilian and scientific research. The UK Biobank continues to engage with MI5 and other state agencies to oversee data use, including collaborations with Chinese entities.

Despite some opposition, patient records are being transferred to UK Biobank and other research hubs as part of a directive from the health secretary. Access to these records is strictly regulated by NHS England based on security and data protection considerations.

NHS England requires overseas data recipients to be authorized for access to personal data in compliance with UK data laws. Regular audits ensure that data sharing processes meet security standards. Chinese researchers can now apply for access to GP records through the approved platform.

China is “developing the world’s largest biodatabase.”

Data repositories like UK Biobank play a crucial role in global research efforts, with some experts cautioning about China’s intent to leverage genomic and health data for biotech advancement. Intelligence sources suggest that health data could be exploited for espionage if anonymization is breached.

MI5 raised concerns about China’s National Intelligence Act and its implications for personal data controllers interacting with Chinese entities. China’s ambition to develop a vast biodatabase has drawn scrutiny from intelligence officials worldwide.

Privacy advocates have questioned the transfer of UK health data to China, urging vigilance against potential misuse in “hostile states.” UK Biobank has revamped its data sharing practices to enhance security and ensure that patient data is safeguarded.

Despite the concerns, UK Biobank CEO Professor Rory Collins underscores that volunteers have given explicit consent for their health data, including GP records, to be studied.

Source: www.theguardian.com

Are there privacy concerns over the “magic eye” surveillance cameras in NHS mental health units?

In July 2022, Morgan-Rose Hart, an aspiring veterinarian with a passion for wildlife, died after being found unresponsive in an Essex mental health unit. She had just turned 18. Diagnosed with autism and attention deficit hyperactivity disorder (ADHD), Hart had seen her mental health suffer from bullying, which forced her to change schools several times. She had previously attempted to take her own life and had been transferred to the unit in Harlow three weeks before her death.

Hart, from Chelmsford, Essex, died on July 12, 2022, after being found unresponsive on a bathroom floor. A prevention of future deaths report issued after her inquest found that important checks were missed, observation records were falsified, and risk assessments were not completed.

An investigation by the Observer and the newsletter Democracy for Sale has established that hers is one of at least four deaths in which concerns were raised about a high-tech patient surveillance system called Oxevision, which is deployed in almost half of mental health trusts across the UK.

Oxevision’s system allows staff to measure a patient’s pulse rate and breathing without entering the room or disturbing the patient at night, and can temporarily broadcast CCTV footage if necessary. The infrared system can detect a patient’s breathing rate even when the patient is covered by a duvet.
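Camera-based vital-sign monitoring of this kind generally works by tracking tiny periodic fluctuations in the image over time and picking out the dominant frequency. The sketch below illustrates that general principle on simulated data; it is not Oxehealth’s proprietary algorithm, and all numbers are illustrative.

```python
# General principle of camera-based breathing-rate estimation (not
# Oxehealth's algorithm): mean pixel intensity in a region of interest
# fluctuates slightly with chest movement, and the dominant frequency
# in the respiratory band gives the rate.
import numpy as np

fps, seconds, true_rate_hz = 30, 60, 0.25        # 0.25 Hz = 15 breaths/min
t = np.arange(fps * seconds) / fps
rng = np.random.default_rng(0)

# Simulated per-frame mean intensity: slow breathing wave plus noise.
signal = 0.5 * np.sin(2 * np.pi * true_rate_hz * t) + rng.normal(0, 0.3, t.size)

# Search only a plausible respiratory band (roughly 5-30 breaths/min).
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
band = (freqs > 0.08) & (freqs < 0.5)
est_hz = freqs[band][np.argmax(power[band])]

print(f"estimated rate: {est_hz * 60:.1f} breaths/min")   # ~15
```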

Oxehealth, which was spun out of the University of Oxford’s Institute of Biomedical Engineering in 2012, has agreements with 25 NHS mental health trusts and, according to its latest accounts, reported revenue of around £4.7 million for the year to December 31, 2023.

However, it is argued that in some cases staff rely too heavily on the infrared camera system to monitor vulnerable patients instead of carrying out physical checks.

There are also concerns that the device, which can glow red from a corner of the room, could exacerbate the distress of patients in mental health crisis who are already acutely sensitive to monitoring or control.

Sophina, a former patient who was monitored by Oxevision and who asked that her full name not be used, stated:

“It was the first thing you saw when you opened your eyes and the last thing you saw when you fell asleep. I felt constantly watched. I felt completely violated.”

Advocates argue that the technology can improve safety, but this weekend brought calls to halt the deployment of Oxevision, with campaigners raising concerns about patient safety, privacy rights, and conflicts of interest in the research supporting its use. The campaign group said Oxevision was often installed in patients’ bedrooms without proper consent, and that the surveillance technology was likely to cause distress.

In the prevention of future deaths report issued in December 2023 after Hart’s inquest, the coroner noted that if a person was in the bathroom for more than three minutes, a staff member should “complete a face-to-face check.” Instead, the Oxevision red alert was reset by staff, Hart was not observed for 50 minutes, and she was then discovered “not responding on the bathroom floor.”

The coroner expressed concern that “some staff may have used Oxevision in place of, rather than as an aid to, face-to-face observation.” The inquest concluded that her death was by misadventure, contributed to by neglect.

Two days before Hart’s death, Michael Nolan, 63, a warehouse operative at risk of self-harm, died as a mental health patient at Basildon Hospital. His inquest found that staff had used Oxevision as an alternative to physical observations and failed to carry out effective checks; the coroner’s verdict included findings of inadequate training on the Oxevision system.

The following month, 27-year-old Sophie Alderman, who had a history of self-harm, died as an inpatient at Rochford Hospital under the care of Essex Partnership University NHS Foundation Trust. Her family says the Oxevision system caused her distress and harmed her mental health. In the months before her death, she complained about the camera in her room, believing it had been hacked by the government.

Tammy Smith, Alderman’s mother, told the Observer: “I don’t think Oxevision is effective in keeping patients safe. It’s a major invasion of patient privacy.

“Staff aren’t properly trained on it or don’t use it properly. People have died while Oxevision was in use, and questions have been raised about its use. That’s enough to pause deployment and properly consider whether this technology actually keeps patients safe.”

The Care Quality Commission has also raised concerns. An inspection report on the NHS foundation trust running St Charles hospital in west London, released last February, described a tragic death in a seclusion room, finding that staff had become dependent on Oxevision rather than fully engaging with and monitoring patients, and that the system was turned off at the time.

The trust said this weekend that the “tragic death” in March 2023 had led to the dismissal of three individuals, and that the technology was never designed to replace the responsibility and care provided by staff.

The Lampard inquiry, which is examining the deaths of mental health inpatients under the care of NHS trusts in Essex between January 2000 and December 2023, is being asked to investigate Oxevision.

Sophina, a former patient monitored by Oxevision.

Bindmans, a law firm representing Alderman’s family and another patient’s family, has written to Baroness Lampard about concerns over consent and the safety and effectiveness of the system, including the worry that staff may delegate responsibility for monitoring patients to a “digital eye.”

A review by the National Institute for Health and Care Research, commissioned by NHS England and published in November, examined nine studies of Oxevision alongside other research, finding “insufficient evidence” that surveillance technologies in inpatient mental health units achieve their intended results, such as improved safety or reduced costs.

Only one of these papers was rated “high quality” for its methodology and reported no conflicts of interest. The other eight studies all reported conflicts of interest related to Oxehealth, and in some cases Oxehealth employees were co-authors of the papers.

“There’s been no independent research; there’s almost always been involvement of the companies that create and market these devices,” said Alan Simpson, a professor of mental health nursing who co-authored the review.

Stop Oxevision campaigners said they were worried about the threat the technology poses to patients’ “safety, privacy and dignity.”

Lionel Tarassenko, professor of electrical engineering at the University of Oxford and founder of Oxehealth, said Oxevision broadcasts CCTV footage of patients only intermittently, for up to 15 seconds when clinical staff respond to an alert, and the video they see is blurred.

Tarassenko said the papers reviewed by the National Institute team showed the benefits of Oxevision, including reduced self-harm and improved patient sleep and safety. He added that they were written by independent clinicians who maintained editorial control, with Oxehealth co-authors included in some cases to reflect their contributions.

He said: “There is no evidence that the proper use of Oxevision technology has been a contributory factor in any inpatient death. The experience of patients with Oxevision is very positive.”

In a statement, the company said the Oxevision platform complies with NHS England’s principles on the use of digital technology in mental health inpatient treatment and care, announced last month, which state that decisions to use such technology must be based on consent.

The company said: “Oxevision supports clinical teams to improve patient safety, reduce incidents such as falls and self-harm, and ensure staff can respond more effectively to clinical risks,” adding that it welcomed dialogue on the responsible, ethical deployment of the technology.

Paul Scott, chief executive of Essex Partnership University NHS Foundation Trust (EPUT), which was responsible for the care of Hart, Nolan, and Alderman, said the patients’ deaths were devastating and offered his sympathies to those who have lost loved ones. He said: “We are constantly focused on providing the best possible care, and we use remote surveillance technology to enhance safety and complement the treatment, care, and observations our staff provide.”

A spokesperson for NHS England said vision-based surveillance technologies must support a human rights-based approach to care, be used only within the scope of legal requirements, and be implemented with the consent of patients and families.

A spokesperson for the Department of Health and Social Care said these technologies should be used only alongside robust staff training and with appropriate consent, adding that the government is transforming the care received by people facing a mental health crisis by modernizing mental health law.

Source: www.theguardian.com

Rocket explosion by SpaceX causes flight delays at Florida airports due to debris concerns

A huge explosion of a SpaceX rocket caused major disruption for air travelers in South Florida, producing unexpected delays and diversions.

The failure of the SpaceX mission led the FAA to ground air traffic around Miami, Fort Lauderdale, West Palm Beach, and Orlando, citing concerns about falling space launch debris.

“After years of traveling, this is a first,” wrote a Facebook user who goes by Rappeck. Peck, an executive, was flying to South America but had to divert back to Miami.

The pilot informed passengers that a space rocket had exploded in flight, causing debris to fall along their route, and reassured them that they were safe but would need to turn back to Miami.

Peck shared, “We eventually had to return to Miami. It’s unbelievable. We’ve faced delays due to weather, mechanical issues, and even unruly passengers, but never because of a rocket explosion.”

Jesse Winans, a traveler en route from Costa Rica to Charlotte, found himself in an unexpected layover in Fort Lauderdale along with other passengers.

“They are trying to manage the situation with customers, but I anticipate a long process to reach our destination,” complained the frustrated traveler to NBC South Florida.

Debris from a SpaceX rocket above the Bahamas on Thursday.
John Ward

SpaceX previously experienced a similar accident in January and has pledged to learn from this latest incident.

In a statement released Thursday night, the company said: “We will analyze data from today’s flight test to better understand the root causes. Success comes from what we learn, and today’s flight provides more insight to enhance Starship’s reliability.”

Elon Musk summed it up more succinctly with his statement: “Rocket science is hard.”

Source: www.nbcnews.com