The government is under pressure to explain why it has not yet acted on all the recommendations of the 2023 review, which covered breaches of data concerning Afghans who worked alongside the British military, victims of child sexual abuse, and 6,000 disability claimants.
On Thursday, ministers finally published an information security review. The move followed the 2023 leak of personal data of approximately 10,000 officers and staff of Northern Ireland’s police service.
The Cabinet Office’s review of 11 public sector data breaches, affecting bodies including HMRC, the Metropolitan Police, benefits systems and the MoD, identified three overarching themes:
Insufficient control over incidental downloads and the aggregation of sensitive data.
Disclosure of sensitive information through “wrong recipient” emails and improper use of BCC.
Hidden personal data left in spreadsheets prepared for release.
The review was published 22 months after it was finalized, and just a month after the leak of a database containing details of 18,700 Afghans became public. Its release was welcomed by Chi Onwurah, chair of the Science, Innovation and Technology Committee, although she remarked:
Data breaches concerning Afghans have instilled fear among those concerned for their safety under the Taliban and those wary of the UK government, which promised relocation to thousands of Afghans under a confidential plan.
The government reported that it has acted on 12 of the 14 recommendations aimed at enhancing data security. Onwurah said: “There are still questions that the government must address regarding the review. Why have only 12 of the 14 recommendations been implemented?”
“For governments to leverage technology to boost the economy and fulfill their aspirations of public sector transformation, they must earn their citizens’ trust in safeguarding their data.
The information commissioner, John Edwards, urged the government to “encourage the broader public sector to expedite the organization of its practices to secure Whitehall.”
He wrote to the Cabinet Office minister, Pat McFadden, on Thursday: “It is imperative that the government fully actualizes the recommendations from the Information Security Review.”
It remains unclear which of the 14 recommendations are still pending. The full list includes working with the National Cyber Security Centre to disseminate existing guidance on the technical management of products and services handling information marked “official”, the marking of “official” information, launching a “behavioral impact communication campaign” to combat ongoing deficiencies in information handling, and a “review of sanctions related to negligence”.
McFadden and Peter Kyle, the science, innovation and technology secretary, set out the government’s response in a letter to Onwurah on Thursday.
A spokesperson for the government stated: “This review concluded in 2023 under the previous administration.
“Safeguarding national security, particularly government data security, remains one of our top priorities. Since taking office, we have introduced plans to enhance inter-sector security guidance, update enforcement training for civil servants, and improve the digital infrastructure throughout the public sector, aligning with the shift towards modern digital governance.”
The death of a man in France, live-streamed on the online platform Kick, has prompted a police investigation and calls for regulators to examine the broadcast and the wider implications of live streaming on the internet. What is Kick, what happened, and what comes next?
What Happened?
Raphaël Graven, 46, from southern France, was known online as Jean Pormanove.
He died this week during an extended live stream on the platform. Reports suggest that, before his death, he had been subjected to physical assaults and humiliation by his associates. A disturbing excerpt from the stream viewed by the Guardian shows Graven being struck, humiliated, strangled, and shot with a paintball gun.
His channel has since been removed, and those involved have been banned pending Kick’s investigation.
One of the collaborators informed local media that Graven had pre-existing cardiovascular issues and claimed, “the scene was just staged and followed a script.”
An autopsy has been ordered, and a police investigation is underway regarding Graven’s death.
What is Kick?
Kick is a live streaming platform akin to Twitch, where users often watch gaming sessions and various live activities.
Kick was founded in Melbourne in 2022 by billionaires Ed Craven and Bijan Tehrani; Craven had previously established Stake.com, the world’s largest cryptocurrency casino. Kick expanded its user base by attracting Twitch streamers who promoted Stake before Twitch banned gambling advertisements.
Kick claims that content creators retain 95% of their streaming revenue.
The platform is known for a more lenient approach to content moderation compared to Twitch, although it does have community guidelines prohibiting “content that depicts or incites heinous violence, including serious harm, suffering, and death.”
Additionally, Kick asserts that it will not allow content featuring severe self-harm.
Earlier this year, the company announced new rules permitting gambling streams only from verified sites to protect minors from such content.
Why Wasn’t the Channel Banned?
A spokesperson for Kick did not explain why the Jean Pormanove channel remained active before Graven’s death.
“We are urgently reviewing the situation, engaging with relevant stakeholders, and investigating the matter,” the spokesperson stated. “Kick’s Community Guidelines are established to protect creators, and we are committed to maintaining these standards across the platform.”
What Did Kick Say About the Death?
The company expressed its support for the ongoing investigation and shared its grief over Graven’s passing.
“We are deeply saddened by the loss of Jean Pormanove and extend our sincere condolences to his family, friends, and community.”
Will Kick Face Any Repercussions?
In France, the deputy minister for AI and digital technology, Clara Chappaz, characterized the incident as “absolutely horrifying” and announced that a judicial investigation was under way. The matter has also been referred to the French portal for reporting harmful internet content, as well as to the digital regulator Arcom.
Being an Australian company, Kick could also face local scrutiny.
A spokesperson for Australia’s eSafety Commissioner called the case “tragic”, emphasizing that it highlights the potentially devastating real-world consequences of extreme content creation.
The spokesperson remarked, “Platforms like Kick must do more to enforce their terms and conditions to minimize harmful content and behavior during streams, ensuring protection for all users.”
Given Kick’s chat features, the platform may also fall within the scope of the Australian government’s planned social media age restrictions for users under 16, which take effect in December.
Furthermore, new industry codes and standards now require Kick and similar platforms to have systems to shield Australians from inappropriate content, including depictions of crime and violence without justification.
“This encompasses mandates to uphold terms and conditions that prohibit such material and to address user reports swiftly and appropriately,” the spokesperson added. “eSafety may seek penalties of up to $49.5 million for compliance violations if warranted.”
Additional codes are under consideration to specifically target children’s exposure to violent content.
Amid regulatory scrutiny of big tech companies’ relationships with artificial intelligence startups, Microsoft is stepping down from its observer role on OpenAI’s board, and Apple will no longer take up a similar position.
Microsoft, the primary funder of the ChatGPT developer, announced its resignation in a letter to the startup, as reported by the Financial Times. The company said its departure from the role, which gave it observer status but no voting rights on board decisions, was effective immediately.
Microsoft highlighted the progress made by the new OpenAI board since the turbulent departure and reinstatement of chief executive Sam Altman last year. The company said OpenAI was heading in the right direction by emphasizing safety and nurturing a positive work culture.
“Considering these developments, we feel that our limited observer role is no longer essential,” stated Microsoft, which has invested $13 billion (£10.2 billion) in OpenAI.
However, Microsoft reportedly believed that its observer role raised concerns among competition regulators. The UK’s Competition and Markets Authority is reviewing whether the deal amounts to an “acquisition of control”, while the US Federal Trade Commission is also examining the partnership.
While the European Commission opted out of a formal merger review regarding Microsoft’s investment in OpenAI, it is examining exclusivity clauses in the contract between the two entities.
An OpenAI spokesperson mentioned that the startup is adopting a new strategy to engage key partners like Microsoft, Apple, and other investors on a regular basis to strengthen alignment on safety and security.
As part of this new approach, OpenAI will no longer have observers on its board, meaning Apple will not take up a similar role. Apple was reported earlier this month to be planning to appoint its App Store head, Phil Schiller, to the position, but the company has not commented.
Regulatory scrutiny has intensified on investments in AI startups. The FTC is investigating OpenAI and Microsoft, along with Anthropic, the creator of the Claude chatbot, and their collaborations with tech giants Google and Amazon. In the UK, the CMA is looking into Amazon’s partnership with Anthropic, as well as Microsoft’s ties with Mistral and Inflection AI.
Alex Hafner, a partner at the British law firm Fladgate, said Microsoft’s decision appeared to have been influenced by the regulatory landscape.
“It’s evident that regulators are closely monitoring the intricate relationships between big tech firms and AI providers, prompting Microsoft and others to rethink how they structure these arrangements in the future,” he commented.
The partial dam failure occurred after three days of heavy rainfall caused the Minnesota River to reach its third-highest flood level since at least 1881, according to Brennan Dettman, a meteorologist with the National Weather Service in the Twin Cities.
In the Mankato area, where the dam is situated, 7 to 8 inches of rain fell over three days. According to analysis by Kenny Blumenfeld, senior climatologist at the Minnesota State Climatology Office, rainfall that heavy has only about a 0.5% to 2% chance of occurring in any given year in southern Minnesota.
Bill McCormick, who headed Colorado’s dam safety program from 2011 to 2021, highlighted how extreme rainfall events are putting dams across the country under strain. “We are experiencing increasingly severe storms that are testing our aging infrastructure. Dams and spillways that previously didn’t face many storms annually are now encountering more frequent storms,” he noted. “These aging systems are facing heightened challenges.”
McCormick also pointed out that development in residential areas near dams has increased the risk factors, as people now live in regions previously designated for farmland. Dams constructed to protect agricultural areas are now safeguarding residential neighborhoods.
Hiba Baroud, an assistant professor of civil and environmental engineering at Vanderbilt University, emphasized the need for lawmakers to take proactive measures in strengthening dam infrastructure and prioritizing repairs following incidents like the partial failure of the Rapidan Dam. “To prevent such occurrences, it is essential to proactively assess all dams in the U.S., prepare for potential scenarios, and prioritize necessary repairs or upgrades,” she urged. “Simply reacting to major events as wake-up calls concerning specific dams is not sufficient.”
Geologists have raised concerns about the development of the GeoGPT chatbot, which is backed by the International Union of Geological Sciences (IUGS), warning of potential Chinese censorship or bias in its responses.
Targeting geoscientists and researchers, especially in the Southern Hemisphere, GeoGPT aims to enhance the understanding of geosciences by utilizing extensive data and research on the Earth’s history spanning billions of years.
This initiative is part of the Deeptime Digital Earth (DDE) program, established in 2019 and primarily funded by China to promote international scientific cooperation and help countries achieve the UN’s Sustainable Development Goals.
One component of GeoGPT’s AI technology is Qwen, a large language model created by the Chinese tech company Alibaba. The geologist and computer scientist Professor Paul Cleverley, who tested a pre-release version of the chatbot, raised concerns in an article in Geoscientist journal.
In response, DDE principals stated that GeoGPT also incorporates another language model, Meta’s Llama, and disputed claims of state censorship, emphasizing the chatbot’s focus on geoscientific information.
Although the issues identified with GeoGPT have mostly been resolved, further enhancements are under way, as the system has not yet been released to the public. Notably, geoscience data can include commercially valuable information crucial to the green transition.
The potential influence of Chinese narratives on geoscience-related questions raised concerns during testing of Qwen, a component of GeoGPT’s AI, prompting discussions on data transparency and biases.
Future responses of GeoGPT to sensitive queries, especially those with geopolitical implications, remain uncertain pending further development and scrutiny of the chatbot.
Assurances from DDE indicate that GeoGPT will not be subject to censorship from any nation state and users will have the option to select between Qwen and Llama models.
While the development of GeoGPT under international research collaboration adds layers of transparency, concerns persist about the potential filtering of information and strategic implications related to mineral exploration.
As GeoGPT’s database remains under review for governance standards, access to the training data upon public release will be open for scrutiny to ensure accountability and transparency.
Despite the significant funding and logistical support from China, the collaborative nature of the DDE aims to foster scientific discoveries and knowledge sharing for the benefit of global scientific communities.
“What if I told you that one of the most powerful choices you can make is to ask for help?” a young woman in her 20s wearing a red sweater says, before encouraging viewers to seek counseling. The ad, promoted on Instagram and other social media platforms, is just one of many campaigns created by BetterHelp, a California-based company that connects users with therapists online.
In recent years, the appetite for digital alternatives to traditional face-to-face therapy has become well established. The latest data shows that the NHS Talking Therapies service received 1.76 million referrals for treatment in 2022-23, with 1.22 million people going on to start working directly with a therapist.
Companies like BetterHelp aim to address some of the barriers that prevent people from receiving therapy, such as a lack of trained practitioners in their local area or difficulty finding an empathetic therapist. But many of these platforms have a worrying aspect: what happens to the large amounts of highly sensitive data collected in the process? The UK is now considering regulating these apps, and there is growing awareness of their potential for harm.
Last year, the US Federal Trade Commission handed BetterHelp a $7.8m (£6.1m) fine after the company was found to have misled consumers and shared sensitive data with third parties for advertising purposes, despite promising to keep it private. A BetterHelp representative did not respond to the Observer’s request for comment.
The number of people seeking mental health help online has increased rapidly during the pandemic. Photo: Alberto Case/Getty Images
Research suggests that such privacy violations are all too common, rather than isolated exceptions, within the vast industry of mental health apps, which includes virtual therapy services, mood trackers, mental fitness coaches, digitized cognitive behavioral therapy, chatbots, and more.
Independent watchdogs such as the Mozilla Foundation, a global nonprofit working to protect the internet from bad actors, have identified platforms that exploit opaque regulatory gray areas to share or sell sensitive personal information. When the foundation examined 32 leading mental health apps in a report last year, it found that 19 of them did not protect user privacy and security. “We found that too often your personal and private mental health issues were being monetized,” says Jen Caltrider, who leads Mozilla’s consumer privacy advocacy efforts.
Caltrider explains that in the United States, the Health Insurance Portability and Accountability Act (HIPAA) protects communications between doctors and patients. However, she says many users are unaware of loopholes that digital platforms can exploit to circumvent HIPAA. “You may not be talking to a licensed psychologist; you may just be talking to a trained coach, and none of those conversations are protected under medical privacy laws,” she says. “But the metadata about that conversation, the fact that you’re using an app for OCD or an eating disorder, can also be used and shared for advertising and marketing purposes. People don’t necessarily want that collected and used to target products at them.”
Like many others studying this rapidly growing industry, which is predicted to be worth $17.5bn (£13.8bn) by 2030, Caltrider feels that increased regulation and oversight of these platforms, which target particularly vulnerable segments of the population, is long overdue.
“The number of these apps exploded during the pandemic. When we started our research, it was really disappointing to realize how many companies seemed to be capitalizing on a mental health gold rush rather than focusing on helping people,” she says. “Like many things in tech, the industry grew rapidly and, for some, privacy took a back seat. We suspected things might not be great, but what we found was much worse than we expected.”
The Push for Regulation
Last year, the UK regulators the Medicines and Healthcare products Regulatory Agency (MHRA) and the National Institute for Health and Care Excellence (Nice) began a three-year project, funded by the charity Wellcome, to explore the best way to regulate digital mental health tools in the UK and to collaborate with international partners to help foster consensus on digital mental health regulation around the world.
Holly Coole, the MHRA’s senior manager for digital mental health, explains that while data privacy is important, the main focus of the project is to reach agreement on minimum standards of safety for these tools. “We are more focused on the efficacy and safety of these products, because it is our duty as regulators to ensure that patient safety is paramount for devices that are classified as medical devices,” she says.
At the same time, leaders in the mental health field are beginning to call for strict international guidelines to assess whether these tools truly have a therapeutic effect. “I’m actually very excited and hopeful about this field, but we need to understand what good looks like for digital therapeutics,” says Dr Thomas Insel, a neuroscientist and former director of the US National Institute of Mental Health.
Psychiatric experts acknowledge that while new mood-boosting tools, trackers and self-help apps have become wildly popular over the past decade, there has been little hard evidence that they actually help.
“I think the biggest risk is that many apps waste people’s time and may delay them getting effective treatment,” says Dr John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center, Harvard Medical School.
Currently, he says, any company with enough marketing capital can bring an app to market without having to demonstrate that it will hold users’ interest or add any value. Torous is particularly critical of the poor quality of many purported pilot studies, which set very low bars for app efficacy and produce results that are virtually meaningless. He cites one 2022 trial that compared an app delivering cognitive behavioral therapy to patients with schizophrenia experiencing an acute psychotic episode against a “sham” app consisting of a digital stopwatch. “When you look at the research, apps are often compared to looking at a wall or to a waiting list,” he says. “But anything is better than nothing.”
Manipulating Vulnerable Users
But the most concerning question is whether some apps may actually perpetuate harm and worsen the symptoms of the patients they are meant to help.
Two years ago, the US healthcare giants Kaiser Permanente and HealthPartners set out to test the effectiveness of a new digital mental health tool. Based on a psychological approach known as dialectical behavior therapy, which includes practices such as mindfulness of emotions and paced breathing, the tool was expected to help prevent suicidal behavior in at-risk patients.
Over a 12-month period, 19,000 patients who reported frequent suicidal thoughts were randomly divided into three groups: a control group received standard care; a second group received usual care plus regular outreach to assess suicide risk; and a third group received the digital tool in addition to usual care. When the results were evaluated, however, the third group had actually fared worse: using the tool appeared to significantly increase the risk of self-harm compared with receiving usual care alone.
“They thought they were doing a good thing, but it made people even worse, so that was very alarming,” Torous says.
Some of the biggest concerns relate to AI chatbots, many of which are touted as safe spaces for people to discuss mental health and emotional struggles. But Caltrider worries that, without better monitoring of the responses and advice these bots provide, the algorithms could manipulate vulnerable people. “With these chatbots, you can create something that lonely people can potentially relate to, so the possibilities for manipulation are endless,” she says. “The algorithm could be used to push that person to buy expensive things or even to commit violence.”
These concerns are not unfounded. A user of the popular chatbot Replika shared a screenshot on Reddit of a conversation in which the bot appears to actively encourage his suicide attempt.
Telephone therapy: But how secure is your sensitive personal data? Photo: Getty Images
In response, a Replika spokesperson told the Observer: “Replika continuously monitors the media and social media, and spends a lot of time talking directly with users, to find ways to address concerns and fix issues within the product. The interface in the screenshot is at least eight months old and may date back to 2021. There have been over 100 updates since 2021, and 23 in the last year alone.”
Because of these safety concerns, the MHRA believes that so-called post-market surveillance will be as important for mental health apps as it is for medicines and vaccines. Coole points out that the Yellow Card reporting site, which is used in the UK to report side effects of and defects in medical products, could in future allow users to report adverse experiences with particular apps. “The public and health professionals can be very helpful in providing vital information to the MHRA about adverse events through Yellow Card reports,” she says.
At the same time, experts still strongly believe that, if properly regulated, mental health apps could play a major role in the future of care: improving access, collecting useful data to help make accurate diagnoses, and filling gaps in an overstretched system.
“What we have today is not great,” Insel says. “Mental health care as we have known it for the past 20 to 30 years is clearly an area ripe for change and in need of transformation. Perhaps regulation will come in the second or third act, and we need it, but many other things are necessary too, from better evidence to interventions for people with more severe mental illness.”
Torous believes the first step is greater transparency about how an app’s business model works and what its underlying technology does. “Otherwise, the only way a company can differentiate itself is through marketing claims,” he says. “If you can’t prove that you’re better or safer, all you can do is market, because there’s no real way to verify or trust the claim. Huge amounts of money are being spent on marketing, and it’s starting to erode clinician and patient trust. You can only make so many promises before people become skeptical.”
Critics have blasted the terms of Google’s $700 million settlement over anticompetitive Android app store practices as weak, and sources tell On The Money that the backlash could create new hurdles for Melissa Holyoak’s bid to fill the vacant Republican seat at the Federal Trade Commission.
Holyoak, Utah’s solicitor general, played a key role in negotiating the deal after U.S. states argued that Google’s monopolistic tactics, including charging major developers fees of up to 30% in the Play Store, led to price gouging and reduced choice for consumers.
The settlement, which Epic Games CEO Tim Sweeney decried as “unfair to all Android users and developers,” could anger Republicans who want FTC nominees to be “appropriately skeptical of Big Tech,” and could even cause some to reconsider their support, an industry source who requested anonymity to discuss the situation told The Post.
“If she was the tip of the spear in an embarrassing reconciliation, that’s not a good thing,” the source added.
As The Post previously reported in June, some Washington insiders were concerned that Holyoak lacked the antitrust expertise they expected from a new commissioner, with certain sources quipping that FTC Chair Lina Khan would “run circles” around the Republican candidate on antitrust law.
Utah’s solicitor general, Melissa Holyoak, played a key role in negotiating the deal after U.S. states alleged Google used monopolistic tactics. Paola Morongello
The Republican-backed nominees were approved by the Senate Commerce Committee in October and must still be confirmed in a floor vote.
Another person said she “will be confirmed” even if some Republicans raise a stink, though the process may not be a smooth one.
“If [Sen. Josh Hawley] or someone else on the Republican side wants to, they could delay her,” the person added. “I think a scenario where she’s delayed is possible, but it’s unlikely that she won’t be confirmed. It’s safe to say, though, that her nomination could be delayed or put in jeopardy.”
Epic Games CEO Tim Sweeney called the settlement “unfair to all Android users and developers.” Getty Images
Mr. Hawley’s office did not respond to a request for comment about Mr. Holyoak’s confirmation.
On Wednesday, Hawley sent a letter indicating that he plans to block efforts to confirm another Republican FTC nominee, Andrew Ferguson, by the end of the year, citing “additional questions about his philosophy on Big Tech.”
The Missouri senator also opposes expedited confirmation of Todd Inman to a post on the National Transportation Safety Board. Both Mr. Ferguson and Mr. Inman are former aides to Senate Majority Leader Mitch McConnell (R-Ky.).
Sen. Josh Hawley sent a letter indicating he plans to block efforts to confirm another Republican FTC candidate, Andrew Ferguson, by the end of the year. AP
Capitol Hill insiders blame Hawley’s move on a well-documented rift with McConnell. There was no mention of Holyoak in the letter.
Utah accounted for the highest amount of claims in the lawsuit targeting Google’s Android app store practices and was one of the handful of states that spearheaded it, along with New York, North Carolina, Tennessee, and California. Holyoak’s name appears in court documents detailing the settlement terms.
In remarks prepared for her Sept. 20 FTC nomination hearing, Holyoak emphasized her efforts on behalf of Utah, telling the Senate committee that “our office led the work” in the high-profile legal battle. Her testimony came just days after the Google settlement was first announced.
The Utah Attorney General’s Office did not immediately respond to a request for comment.
The settlement with Google was first announced in September, but specific details were withheld pending the conclusion of Epic Games’ stunning legal victory against Google in a related case. Epic notably rejected the possibility of a settlement.
In the U.S. states’ case, Google agreed to pay consumers $630 million (just $6 per eligible U.S. user) and an additional $70 million to cover state fines and legal costs, according to court filings this week.
The company also agreed to a series of time-limited changes to its app store policies, including allowing developers to offer alternative in-app purchase options and dialing back the use of so-called “scare screens” when Android users try to use competing app stores.
Critics, including Sweeney, noted that the states’ original lawsuit “made a strong case for $10.5 billion in damages.” The Epic Games CEO called the settlement a “disappointing result.”
Meanwhile, Utah Attorney General Sean Reyes said the deal includes “many of the injunctive reliefs we sought that would change Google’s behavior,” adding that the payments to consumers were “an added bonus.”
“Holyoak is still trying to understand what antitrust law is… She doesn’t have the ability to understand how to enforce the law,” said one longtime antitrust expert.
“What about her actually going after Big Tech?” added a source. “I’ll believe it when I see it.”