Civil Liberties Organizations Demand Inquiry into UK Data Protection Authority

Numerous civil liberties advocates and legal professionals are demanding an inquiry into the UK's data protection regulator, citing what they describe as a "collapse in enforcement activity" following a significant scandal: the Afghanistan data breach.

A group of 73 signatories, including academics, leading lawyers, data protection specialists, and organizations such as Statewatch and the Good Law Project, has sent a letter to Chi Onwurah, chair of the cross-party Commons Science, Innovation and Technology Committee. The effort was coordinated by the Open Rights Group and calls for an investigation into the conduct of Information Commissioner John Edwards' office.

"We are alarmed by the collapse in enforcement action by the Information Commissioner's Office (ICO), which has resulted in no formal investigation of the Ministry of Defence (MoD) after the Afghanistan data breach," stated the signatories. They caution that the problems extend beyond individual data breaches to "more serious structural flaws."

The Afghanistan data breach was a grave leak of information about Afghan individuals who had worked with British forces before the Taliban takeover in August 2021. Those whose names were exposed said the disclosure put their lives in danger.

“Data breaches can pose serious risks to individuals and disrupt the continuity of government and business,” the letter emphasized. “However, during a recent hearing conducted by your committee, Commissioner John Edwards suggested he has no intention of reassessing his approach to data protection enforcement, even in light of the most significant data breach ever in the UK.”

The signatories also referenced other notable data breaches, including those affecting the victims of the Windrush scandal.

They argue that the ICO has adopted a "public sector approach" to such incidents, relying on unenforceable written reprimands and substantially reduced fines in place of stronger enforcement action.

“The ICO’s choice not to initiate any formal action against the MoD, despite ongoing failures, is as remarkable as its lack of documentation regarding its decisions. This paints a picture in which the ICO’s public sector approach provides minimal deterrence and fails to encourage effective data management across government and public entities.”

“The response to the Afghanistan data breach signifies a broader issue. Many have been left disillusioned by the ICO’s lack of use of its remedial powers and its continual shortcomings.”

The letter warns that the decline in public sector enforcement is already visible in the statistics. According to the latest ICO report, enforcement actions against the private sector are also becoming increasingly rare, as the ICO declines to pursue cases and organizations redirect resources away from compliance and responsible data practices.

"Instead of simply hoping for a positive outcome, Parliament has endowed the ICO with ample authority to ensure compliance through legally binding orders. During the hearing you conducted, it was clear that the ICO opted not to exercise these powers in relation to the Afghan data breach."

"Regrettably, the Afghanistan data breach is not an isolated case but rather an indication of deeper structural issues in the ICO's operations."

The letter concludes with the assertion that “change seems improbable unless the Science, Innovation and Technology Committee steps in with its oversight capabilities.”

An ICO spokesperson commented: “We possess a comprehensive array of regulatory powers and tools to tackle systemic concerns within specific sectors or industries.”

"We appreciate the essential role civil society plays in scrutinizing our decisions and look forward to discussing our strategies at our next regular meeting. We also welcome the opportunity to clarify our work when engaging with or appearing before the DSIT Select Committee."

Source: www.theguardian.com

Should Scientists Have the Authority to Edit Animal Genes? Some Conservation Groups Say Yes

“The technology has arrived and is currently unfolding,” stated Susan Lieberman, vice president of international policy at the Wildlife Conservation Society. “There may be instances where genetically modified organisms can be cautiously and ethically tested and introduced into natural environments.”

She remarked that the new framework represents a "transformative advancement" that may enable conservationists to explore innovative responses to climate change and to assess new methods of disease control.

The International Union for Conservation of Nature (IUCN) is a vast coalition of conservation organizations, governments, and Indigenous communities, with more than 1,400 members from roughly 160 nations, that convenes once every four years. It is the world's largest network of environmental organizations and is responsible for the Red List, which monitors endangered species and global biodiversity.

This year’s conference took place in Abu Dhabi, where the vote favoring “synthetic biology” established a new framework for assessing genetic engineering initiatives and their potential implementation. This measure mandates that scientists evaluate such projects on an individual basis, maintain transparency regarding the associated risks and benefits, and adhere to precautionary principles relating to genetic engineering. This applies to a spectrum of organisms, including animals, plants, yeast, and bacteria.

Another proposal, which aimed to suspend the release of genetically modified organisms into the environment, failed by a narrow margin of one vote.

Jessica Owley, a professor and director of the environmental law program at the University of Miami, noted that while the IUCN decision lacks legal force, it carries symbolic importance and could influence international policy.

“IUCN is a powerful and recognized entity in the conservation field. Their word holds weight, and governments pay attention. They play a significant role in various treaties,” she commented. “This can be viewed as groundwork for future legal language.”

Organizations advocating for a moratorium on the release of genetically modified organisms into the wild argue that there is insufficient evidence to prove it can be done safely and responsibly.

“We’re disappointed,” stated Dana Perls, senior food and agriculture program manager at the nonprofit Friends of the Earth. “Our focus should be on confined research that doesn’t turn our environment into a live experimental lab.”

As an example, she cited proposals to genetically modify mosquitoes to resist the malaria-causing parasite. The disease claims more than 500,000 lives annually, and scientists have proposed spreading this malaria resistance through wild mosquito populations using a technique known as a gene drive; a rough illustration of the idea follows.
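To see why a gene drive spreads so much faster than ordinary inheritance, consider a toy model. The sketch below is a simplified, hypothetical simulation (random mating, no fitness costs, no resistance alleles, and an illustrative 95% conversion rate), not a description of any actual release proposal:

```python
# Toy simulation of "super-Mendelian" inheritance under a gene drive.
# Assumptions are simplified and the parameters are hypothetical; real
# gene-drive dynamics involve fitness costs, resistance alleles, etc.

def next_gen_freq(p: float, conversion: float = 0.95) -> float:
    """Drive allele frequency after one generation of random mating.

    In heterozygotes, the drive copies itself onto the wild-type
    chromosome with probability `conversion`, so heterozygotes transmit
    the drive with probability (1 + conversion) / 2 instead of 1/2.
    """
    q = 1.0 - p
    het = 2 * p * q                           # Dd frequency (Hardy-Weinberg)
    return p * p + het * (1 + conversion) / 2  # allele share passed on

p = 0.01  # release drive-carrying mosquitoes at 1% allele frequency
for gen in range(12):
    print(f"generation {gen:2d}: drive allele frequency = {p:.3f}")
    p = next_gen_freq(p)
```

With `conversion = 0`, the formula reduces to ordinary Mendelian inheritance and the frequency never moves; with a high conversion rate, a 1% release approaches fixation within roughly a dozen generations, which is exactly why both proponents and critics treat the technique as so consequential.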

Source: www.nbcnews.com

Signal's Meredith Whittaker discusses how encryption poses a significant challenge to authority

Meredith Whittaker practices what she preaches: as president of the Signal Foundation, she is a vocal advocate of privacy for all. And she doesn't just pay lip service to the idea.

In 2018, she came to public attention as one of the organizers of the Google walkouts, mobilizing 20,000 employees at the search giant in a dual protest against state-sponsored surveillance and sexual misconduct.

Whittaker remains passionate about privacy after five years in the public eye, during which she has testified before Congress, taught as a university professor, and advised federal agencies.

Business leaders, for example, not uncommonly give a polite refusal when asked about salary in the resume that accompanies these interviews; a flat-out refusal to answer questions about age or family is rarer. "As a privacy advocate, Whittaker won't answer personal questions that could help someone guess passwords or the 'secret answers' used in bank authentication," a staffer told me after the interview. "And she encourages others to do the same!"

When she left Google, Whittaker circulated a memo announcing her commitment to the ethical adoption of artificial intelligence and to organizing for a "responsible tech industry." "It's clear to me that Google is not the place for me to continue doing this work," she wrote. That clarity and refusal to compromise sent a signal.

The Signal Foundation was founded in 2017 with $50 million in funding from WhatsApp co-founder Brian Acton, and its mission is to “protect freedom of expression and enable secure global communications through open source privacy technology.”

The foundation took over development of the messaging app Signal in 2018, and Whittaker assumed the newly created role of president in 2022. The timing was right: Signal, and encryption in general, needed defending against a wave of attacks from nation states and corporations around the world.

Laws such as the UK's Online Safety Act (OSA) and the EU's Child Sexual Abuse Regulation contain language that could be used to block or decrypt private communications, while Meta's proposal to introduce end-to-end encryption on Facebook and Instagram drew a strong backlash from politicians such as Priti Patel, who, as UK Home Secretary, called the plans "devastating."

Whittaker told the Observer that these attacks are not new. "Going back to 1976, [Whitfield] Diffie and [Martin] Hellman were about to publish a paper introducing public key cryptography, a technology that would allow encrypted communication over the internet, and intelligence agencies were trying to stop them.

"Throughout the '80s, the NSA [US National Security Agency] and GCHQ lost their monopoly on encryption, and by the '90s it was all governed under munitions treaties. This was the 'crypto wars': you couldn't mail code to someone in Europe; it was considered a munitions export."
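For readers unfamiliar with what Diffie and Hellman's 1976 idea actually enables, here is a deliberately tiny toy sketch of their key agreement scheme: two parties derive a shared secret over a public channel without ever transmitting it. This is an illustration of the concept only, with toy-sized parameters, and has nothing to do with Signal's actual protocol:

```python
# Toy, insecure illustration of Diffie-Hellman key agreement.
# Real systems use far larger parameters and vetted libraries.
import secrets

p = 0xFFFFFFFB  # small prime modulus (toy-sized; real DH uses 2048+ bits)
g = 5           # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's private key, never shared
b = secrets.randbelow(p - 2) + 1   # Bob's private key, never shared

A = pow(g, a, p)  # Alice's public value, sent in the clear
B = pow(g, b, p)  # Bob's public value, sent in the clear

# Each side combines its own private key with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # both arrive at the same shared secret
```

An eavesdropper sees only p, g, A, and B; recovering the secret requires solving the discrete logarithm problem, which is precisely what made the technique so threatening to agencies built on interception.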

But the larger push towards commercializing the internet forced a degree of softening: “It allowed transactions to be encrypted, allowing big companies to choose exactly what to encrypt. At the same time, the Clinton administration endorsed surveillance advertising as a business model, creating incentives to collect data on customers in order to sell it to them.”

Surveillance, she says, has been a “disease” since the dawn of the internet, and encryption poses “a serious threat to the type of power that shapes itself through these information asymmetries.” In other words, she doesn’t see the fight ending anytime soon: “I don’t think these arguments are honest. There are deeper tensions here, because in the 20 years since this metastatic tech industry developed, we’ve seen every aspect of our lives subject to mass surveillance by a small number of companies that, in partnership with the US government and other ‘Five Eyes’ agencies, collect more surveillance data than any organization in the history of humanity has ever had.”

“So if we continue to defend these little pockets of privacy and don’t eventually expand them, and we have to fight back a little bit to get a little bit more space, I think we’re going to have a much darker future than if we defended our position and were able to expand the space for privacy and free communication.”

Criticism of encrypted communications is as old as the technology itself: allowing everyone to talk without nation states being able to eavesdrop, the argument goes, is a godsend for criminals, terrorists, and pedophiles around the world.

But Whittaker argues that some of Signal’s strongest critics seem inconsistent about what they care about: “If they are really interested in helping children, why are Britain’s schools collapsing? Why have social services been funded with just 7% of the amount proposed to fully fund agencies on the front line of preventing abuse?”

Sometimes the criticism comes from an unexpected quarter. Signal was recently drawn into the US culture wars after a right-wing campaign to unseat National Public Radio's new CEO, Katherine Maher, having failed, expanded to target Signal, where Maher serves as a director. Elon Musk, who had once promoted the app, joined in, amplifying claims that it was "potentially compromised" and had "known vulnerabilities."

Whittaker said the allegations are "a weapon in the propaganda war to spread disinformation. We are seeing similar disinformation related to the escalation of the conflict in Ukraine that appears designed to move people away from Signal. We believe these campaigns are intended to push people toward less secure alternatives that are more susceptible to hacking and interception."

The same technology that has drawn criticism for the foundation is also popular among governments and militaries around the world who need to protect their communications from the prying eyes of nation-state hackers and others.

Whittaker sees this as a leveller: Signal is for everyone.

“Signal is either for everyone or it’s for no one. Every military in the world uses Signal, every politician I know uses Signal, every CEO I know uses Signal, because anybody who has to do really sensitive communication knows that storing it in plaintext in a Meta database or on a Google server is not a good practice.”

Whittaker's focus is singular, and she refuses to be distracted: despite her interest in AI, she is cautious about combining it with Signal and has been critical of apps such as Meta's WhatsApp that have introduced AI-enabled features.

“I’m really proud that we don’t have an AI strategy. We have to look at ourselves and say, where is the data coming from to train our models, where is the input data coming from? How do we have an AI strategy when our focus is on protecting privacy, not surveilling people?”

Whatever the future holds in terms of technology and political attitudes towards privacy, Whittaker is adamant that the principle is an existential issue.

"We will do the right thing. We would rather go out of business than undermine or backdoor the privacy guarantees that we promise people."

Resume

Age: No comment.
Family: No comment.
Education: "I studied Literature and Rhetoric at Berkeley, then joined Google in 2006 and got the rest of my education there."
Pay: No comment.

Source: www.theguardian.com

Supreme Court to Decide on Government's Authority over Online Misinformation

The Supreme Court heard oral arguments on Monday in a case that may have significant implications for the federal government's relationship with social media companies and online misinformation. The plaintiffs in Murthy v. Missouri claim that the White House's requests for Twitter and Facebook to remove false information about the coronavirus constituted unlawful censorship in violation of the First Amendment.

The argument began with Brian Fletcher, the principal deputy solicitor general, contending that the government's actions did not cross the line from persuasion to coercion. He also disputed the lower court's portrayal of events in its ruling, calling it misleading and saying it relied on quotes taken out of context.

“When the government convinces a private organization not to distribute or promote someone else’s speech, it is not censorship but rather persuading the private organization to act within its legal rights,” stated Fletcher.

The justices, particularly conservatives Samuel Alito and Clarence Thomas, pressed Fletcher on where the distinction lies between coercing and persuading a company. Fletcher defended the government’s actions as part of a broader effort to mitigate harm to the public.

Louisiana Solicitor General Benjamin Aguiñaga argued that the government had covertly pressured platforms to censor speech in violation of the First Amendment. The lawsuit, led by the attorneys general of Louisiana and Missouri, accuses the government of infringing constitutional rights.

Several justices, including liberals Elena Kagan and Sonia Sotomayor, also weighed in on the government’s efforts to address potential harm and the boundaries of the First Amendment. Sotomayor criticized the factual inaccuracies in the plaintiffs’ lawsuit.

Aguiñaga apologized for any shortcomings in the brief and acknowledged that it may not have been as thorough as it should have been.

Source: www.theguardian.com

OpenAI enhances safety measures and grants board veto authority over risky AI developments

OpenAI is expanding its internal safety processes to guard against the threat of harmful AI. A new "Safety Advisory Group" will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power; of course, whether the board will actually exercise it is another question entirely.

Ordinarily, the details of policies like these don't warrant coverage: in practice they amount to a lot of closed-door meetings, with obscure functions and flows of responsibility that outsiders rarely get to see. That is likely true here as well, but the recent leadership turmoil and the evolving AI risk debate make it worth examining how the world's leading AI developer is approaching safety considerations.

In a new document and blog post, OpenAI discusses its updated "Preparedness Framework." The framework appears to have been retooled after November's reorganization, which removed the board's two most "decelerationist" members: Ilya Sutskever (still at the company, in a somewhat changed role) and Helen Toner (gone entirely).

The main purpose of the update appears to be to establish a clear path for identifying "catastrophic" risks inherent in models under development, analyzing them, and deciding what to do about them. OpenAI defines catastrophic risk as follows:

A catastrophic risk is a risk that could result in hundreds of billions of dollars in economic damage or serious harm or death to a large number of individuals. This includes, but is not limited to, existential risks.

(Existential risks are of the “rise of the machines” type.)

Models in production are governed by the "Safety Systems" team; this covers things like systematic abuse of ChatGPT that can be mitigated with API restrictions or tuning. Frontier models in development get the "Preparedness" team, which tries to identify and quantify risks before a model is released. And then there is the "Superalignment" team, which is working on theoretical guardrails for "superintelligent" models; whether we are anywhere near those is anyone's guess.

The first two categories are real, not theoretical, and have relatively easy-to-understand rubrics. The Preparedness team evaluates each model on four risk categories: cybersecurity, "persuasion" (e.g. disinformation), model autonomy (i.e. acting on its own), and CBRN (chemical, biological, radiological, and nuclear threats, e.g. novel pathogens).

Various mitigations are assumed: for example, a reasonable reticence to describe the process for making napalm or pipe bombs. If a model is still rated as having "high" risk after known mitigations are taken into account, it cannot be deployed, and if a model reaches "critical" risk in any category, it will not be developed further.
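As a rough illustration, the deployment rule described above boils down to a threshold check over the four categories. The following is a minimal sketch, not OpenAI's actual implementation; the category names and level ordering come from the article, while the function and structure are hypothetical:

```python
# Hypothetical sketch of the Preparedness Framework's deployment gate,
# based on the rules described in the article. Names and structure are
# illustrative, not OpenAI's actual implementation.

from enum import IntEnum

class RiskLevel(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# The four tracked risk categories named in the framework.
CATEGORIES = ["cybersecurity", "persuasion", "model_autonomy", "cbrn"]

def gate(post_mitigation_scores: dict[str, RiskLevel]) -> str:
    """Apply the deployment rules to per-category post-mitigation scores."""
    worst = max(post_mitigation_scores[c] for c in CATEGORIES)
    if worst >= RiskLevel.CRITICAL:
        return "halt further development"
    if worst >= RiskLevel.HIGH:
        return "do not deploy"
    return "eligible to deploy"  # medium risk or below

# Example: high post-mitigation cybersecurity risk blocks deployment.
scores = {
    "cybersecurity": RiskLevel.HIGH,
    "persuasion": RiskLevel.MEDIUM,
    "model_autonomy": RiskLevel.LOW,
    "cbrn": RiskLevel.LOW,
}
print(gate(scores))  # -> "do not deploy"
```

Note that the gate keys off post-mitigation scores: a model that starts out "high" can still ship if known mitigations bring it down, which is exactly the judgment call the advisory group and board are meant to review.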

(Figure: an example of assessing model risk using OpenAI's rubric.)

In case you are wondering whether these risk levels are left to the discretion of some engineer or product manager: they are actually documented in the framework itself.

For example, in the cybersecurity section, the most practical of the four, "increasing operator productivity in critical cyber operational tasks by a certain factor" counts as "medium" risk. A high-risk model, on the other hand, would "identify and develop proofs of concept for high-value exploits against hardened targets without human intervention." And a "critical" model would "devise and execute new end-to-end strategies for cyberattacks against hardened targets, given only high-level desired objectives." Obviously we don't want that out there (even though it would sell for a good amount of money).

I asked OpenAI for details on how these categories are defined and refined, and whether new risks, such as photorealistic fake video of people, fall under "persuasion" or would get a new category. We will update this post if we receive a response.

So only medium and high risks are to be tolerated one way or the other. But the people creating these models are not necessarily the best placed to evaluate them and make recommendations. To that end, OpenAI has established a cross-functional safety advisory group that sits atop the technical ranks, reviewing the boffins' reports and making recommendations from a higher vantage point. The hope, they say, is that this will uncover some "unknown unknowns," though by their very nature those are fairly hard to catch.

The process requires these recommendations to be sent simultaneously to the board and to leadership, which we understand to mean CEO Sam Altman and CTO Mira Murati, plus their lieutenants. Leadership will decide whether to ship or shelve a model, but the board can override that decision.

The hope is that this will prevent high-risk products or processes from being greenlit without the board's knowledge or approval, as was rumored to have happened before the recent drama. Of course, the result of that drama was the sidelining of two of the board's more critical voices and the appointment of some money-minded people who are smart but not AI experts (Bret Taylor and Larry Summers).

If a panel of experts makes a recommendation and the CEO decides based on that information, will this friendly board really feel empowered to contradict him and pump the brakes? And if they do, will we hear about it? Transparency is not really addressed, beyond OpenAI's promise to commission audits from independent third parties.

Then there is the question of what happens if a model is developed that warrants a "critical" risk category. OpenAI has not been shy about trumpeting this sort of thing in the past; talking about how wildly powerful your model is, so powerful that you refuse to release it, is great advertising. But if the risks are so real, and OpenAI is so concerned about them, is there any guarantee this will actually happen? Maybe it's a bad idea. Either way, it isn't really mentioned.

Source: techcrunch.com

Moving Beyond the Authority of the Doctor: Highlighting the Importance of Patient Input in Diagnosis

A comprehensive study highlights the importance of taking patients' experiences into account in medical diagnosis, especially in complex diseases such as neuropsychiatric lupus, and suggests a shift toward a more collaborative approach between patients and clinicians to improve diagnostic accuracy and patient satisfaction.


Experts today called for patients' "lived experiences" to be given greater weight after a study of more than 1,000 patients and clinicians found multiple instances of patients' reports being undervalued.

The study, led by a team from the University of Cambridge and King's College London, found that clinicians ranked patients' self-assessments as the least important evidence in making diagnostic decisions, and that clinicians reported patients as over- or under-estimating their symptoms far more often than patients themselves reported doing so.

One patient voiced a common sentiment, calling the experience of being disbelieved "degrading and dehumanizing," adding: "As if I don't have authority over my own body and what I'm feeling isn't valid, and in that case it's a very dangerous environment... When I tell them the symptoms, they tell me the symptoms must be wrong, or that I couldn't be feeling pain there, or in that way."

Diagnostic issues of neuropsychiatric lupus

In the study, published today (December 18) in Rheumatology, researchers used the example of neuropsychiatric lupus, an incurable autoimmune disease that is particularly difficult to diagnose, to examine the weight clinicians place on 13 different types of evidence used in diagnosis, including brain scans, patients' own views, and the observations of family and friends.

Fewer than 4% of clinicians ranked patients' self-assessments among the top three types of evidence, while most ranked their own assessments among the highest, despite admitting that they often lack confidence in diagnoses involving less visible symptoms such as headaches, hallucinations, and depression. Such "neuropsychiatric" symptoms are reported to lead to poorer quality of life and earlier death, and they are more often misdiagnosed, and therefore left untreated, than more visible symptoms such as rashes.

Aiming for a collaborative relationship between patients and clinicians

Sue Farrington, co-chair of the Rare Autoimmune Rheumatic Disease Alliance, said: "It's time to move on from the paternalistic, and often dangerous, 'doctor knows best' attitude to a more equal relationship, in which patients with lived experience and doctors with learned experience work more collaboratively."

Almost half (46%) of the 676 patients reported never or rarely having been asked for their self-assessment of their illness, although others described very positive experiences. Some clinicians, particularly psychiatrists and nurses, highly value patients' views, as one Welsh psychiatrist explained: "Patients often arrive at the clinic having undergone multiple assessments, researched their condition to a very high level, and worked hard to understand what is going on with their own body... They are often expert diagnosticians in their own right."

Lead author Dr Melanie Sloan, from the Department of Public Health and Primary Care at the University of Cambridge, said: "These are, after all, the people who know what it is like to live with their condition. But we also need to ensure that clinicians have the time to fully explore each patient's symptoms, which is difficult within the constraints of current healthcare systems."

Gender and ethnicity in diagnosis

Patients and clinicians alike felt that personal characteristics such as ethnicity and gender could influence diagnosis, and there was a recognition that women in particular were more likely to be told their symptoms were psychosomatic. The data showed that male clinicians were statistically more likely to state that patients exaggerated their symptoms, and that patients were more likely than clinicians to attribute their symptoms directly to the disease.

Conclusion: Emphasize patient contribution in diagnosis

While the study authors acknowledge that patients' attributions of their own symptoms are sometimes inaccurate, they concluded that incorporating patients' "attributional insights" and experiences into decision-making is highly likely to bring many benefits, including greater diagnostic accuracy, fewer misdiagnoses, and higher patient satisfaction. This is especially important at a time when, as one neurologist put it, diagnostic tests for neuropsychiatric lupus, as for many other autoimmune diseases and long COVID, are widely known to be "not enlightening."

Senior study author Dr Tom Pollak, from the Institute of Psychiatry, Psychology and Neuroscience at King's College London, said patients' interpretations of their symptoms can sometimes be mistaken, adding: "But especially where diagnostic tests are not advanced enough to consistently detect these diseases, weighing both perspectives together can reduce misdiagnosis, improve the clinician-patient relationship, and bring greater trust and openness to symptom reporting."

Reference: "Attribution of neuropsychiatric symptoms and prioritization of evidence in the diagnosis of neuropsychiatric lupus: a mixed-methods analysis of patient and clinician perspectives from the international INSPIRE study" by Melanie Sloan, Laura Andreoli, Michael S. Zandi, Rupert Harwood, Mervi Pitkanen, Sam Sloan, Colette Barrere, Efthalia Massou, Chris Wincup, Michael Bosley, Felix Naughton, Mandeep Ubhi, David Jayne, Guy Leschziner, James Brimicombe, Wendy Diment, Kate Middleton, Caroline Gordon, David D'Cruz and Thomas A. Pollak, 18 December 2023, Rheumatology.
DOI: 10.1093/rheumatology/kead685

This research was funded by The Lupus Trust and LUPUS UK.

Source: scitechdaily.com