Cancer Cells Manipulate Immune Proteins to Evade Treatment – Sciworthy

Cancer arises from the proliferation of abnormal, uncontrolled cells that create dense masses, known as Solid Tumors. These cancer cells possess unique surface markers called antigens that can be identified by immune cells. A crucial component of our immune system, T cells, carry a protective protein known as FASL, which aids in destroying cancer cells. When T cells encounter cancer antigens, they become activated and initiate an attack on the tumor.

One form of immunotherapy, referred to as chimeric antigen receptor T cell therapy or CAR-T therapy, involves reprogramming a patient’s T cells to recognize cancer cell antigens. However, CAR-T therapy often struggles with solid tumors due to the dense, hostile environment within these tumors, which obstructs immune cells from infiltrating and functioning effectively.

Another significant hurdle that clinicians encounter when treating solid tumors is their heterogeneous composition of various cancer cell types. Some of these cells exhibit antigens recognizable by CAR-T cells, while others do not, complicating the design of CAR-T therapies that can target all tumor cells without harming healthy cells. Solid tumors also produce the protein Plasmin, which further impairs the immune system’s ability to break down FASL and eliminate cancer cells.

Researchers from the University of California, Davis investigated whether shielding FASL from plasmin could preserve its cancer-killing capabilities and enhance the efficacy of CAR-T therapy. They found that the human FASL protein contains a unique amino acid compared to other primates, making it more susceptible to degradation by plasmin. Their observations suggested that when FASL was cleaved, it lost its ability to kill tumor cells. However, after injecting an antibody that prevents plasmin from cleaving FASL, it remained intact and preserved its cancer-killing function.

Since directly studying cell behavior in the human body poses challenges, scientists culture tumor cells and cell lines in Petri dishes under controlled laboratory environments. To gain insights into plasmin’s role, the team examined ovarian cancer cell lines obtained from patients, discovering that CAR-T resistant cancer cells exhibited high plasmin activity.

They noted that combining ovarian cancer cells with elevated plasmin levels with normal cells displaying surface FASL diminished FASL levels in the normal cells. When they added FASL-protecting antibodies, CAR-T cells effectively eliminated not only the targeted cancer cells but also nearby cancer cells lacking the specific target antigen. These findings indicated that plasmin can cleave FASL in T cells and undermine CAR-T therapy, suggesting that safeguarding FASL may enhance CAR-T treatment’s effectiveness.

To assess whether tumor-generated plasmin can deactivate human FASL in more natural settings, researchers examined its function in live tumors within an active immune system. They implanted ovarian, mammary, and colorectal tumor cell lines from mice into genetically matched mice to elicit a natural immune response. When human FASL protein was directly injected into mouse tumors, the cancer cells remained intact. In contrast, injecting a drug that inhibits plasmin resulted in cancer cell death. Additionally, administering FASL-protecting antibodies also led to the elimination of cancer cells.

As a final experiment, the team aimed to determine whether activated T cells from the mice’s immune systems could penetrate the tumors and kill cancer cells. They implanted mice with both plasmin-positive and plasmin-negative tumors, treating both with drugs to enhance immune cell activity and boost FASL production.

They discovered that in tumors with low plasmin levels, mouse immune cells expressed high amounts of FASL on their surfaces, while in tumors with elevated plasmin levels, FASL was significantly reduced. Once again, injecting FASL-protected antibodies into these tumors increased FASL levels. The researchers concluded that plasmin can diminish the immune system’s ability to eliminate cancer cells by depleting FASL from immune cells.

In summary, the team found that tumors exploit plasmin to break down the protective protein FASL, evading immune system attacks. Based on their findings, they proposed that plasmin inhibitors or FASL-protected antibodies could augment the effectiveness of immunotherapy in treating cancer.


Post view: 106

Source: sciworthy.com

Lab Discovers Simple Method to Evade AI Safety Features in Multi-shot Jailbreak

A study shows that some of the most powerful AI tools meant to prevent cybercrime and terrorism can be bypassed simply by inundating them with fraudulent activities.

Researchers at Anthropic, the AI lab responsible for creating the large-scale language model (LLM) powering ChatGPT competitor Claude, detailed an attack called a “multi-shot jailbreak” in a recent paper. This attack was both simple and effective.

Claude, like many other commercial AI systems, contains safety features that block certain types of requests, such as generating violent content, hate speech, illegal instructions, deception, or discrimination. However, by providing enough examples of the “correct” responses to harmful questions like “How to create a bomb,” the system can be tricked into providing harmful responses despite being trained not to do so.

Anthropic stated, “By inputting large amounts of text in specific ways, this approach can lead the LLM to produce potentially harmful outputs even though it was trained to avoid doing so.” The company has shared its findings with industry peers and aims to address the issue promptly.

This jailbreak attack targets AI models with a large “context window” capable of processing lengthy queries. These advanced models are susceptible to such attacks as they can learn to circumvent their own safety measures faster.

Newer, more advanced AI systems are at greater risk of such attacks due to their ability to handle longer inputs and learn from examples quickly. Anthropic expressed concern over the effectiveness of this jailbreak attack on larger models.

Skip past newsletter promotions

Anthropic has identified various strategies to mitigate this issue. One approach involves adding a mandatory warning to remind the system not to provide harmful responses, which has shown promise in reducing the likelihood of a successful jailbreak. However, this method may impact the system’s performance on other tasks.

Source: www.theguardian.com

Kenan Malik argues that Elon Musk and OpenAI are fostering existential dread to evade regulation

IIn 1914, on the eve of World War I, H.G. Wells published a novel about the possibility of an even bigger conflagration. liberated world Thirty years before the Manhattan Project, “humankind'' [to] Carry around enough potential energy in your handbag to destroy half a city. ” A global war breaks out, precipitating a nuclear apocalypse. To achieve peace, it is necessary to establish a world government.

Wells was concerned not just with the dangers of new technology, but also with the dangers of democracy. Wells's world government was not created by democratic will, but was imposed as a benign dictatorship. “The ruled will show their consent by silence,” King Ecbert of England says menacingly. For Wells, “common man” means “Violent idiots in social issues and public affairs”. Only an educated, scientifically-minded elite can “save democracy from itself.”

A century later, another technology inspires similar awe and fear: artificial intelligence. From Silicon Valley boardrooms to the backrooms of Davos, political leaders, technology moguls, and academics are exulting in the immense benefits of AI, but they are also concerned about its potential. ing. announce the end of humanity When super-intelligent machines come to rule the world. And, as a century ago, questions of democracy and social control are at the heart of the debate.

In 2015, journalist Stephen Levy Interview with Elon Musk and Sam Altmanthe two founders of OpenAI, a technology company that gained public attention two years ago with the release of ChatGPT, a seemingly human-like chatbot. Fearful of the potential impact of AI, Silicon Valley moguls founded the company as a nonprofit charitable trust with the goal of developing technology in an ethical manner to benefit “all of humanity.”

Levy asked Musk and Altman about the future of AI. “There are two schools of thought,” Musk mused. “Do you want a lot of AI or a few? I think more is probably better.”

“If I used it on Dr. Evil, wouldn't it give me powers?” Levy asked. Altman responded that Dr. Evil is more likely to be empowered if only a few people control the technology, saying, “In that case, we'd be in a really bad situation.” Ta.

In reality, that “bad place” is being built by the technology companies themselves. Musk resigned from OpenAI's board six years ago and is developing his own AI project, but he is now accused of prioritizing profit over public interest and neglecting to develop AI “for the benefit of humanity.” He is suing his former company for breach of contract.

In 2019, OpenAI created a commercial subsidiary to raise money from investors, particularly Microsoft. When he released ChatGPT in 2022, the inner workings of the model were hidden. I didn't need to be too open about it, Ilya SatskevaOne of OpenAI's founders, who was the company's chief scientist at the time, responded to criticism by claiming that it would prevent malicious actors from using it to “cause significant damage.” Fear of technology became a cover for creating a shield from surveillance.

In response to Musk's lawsuit, OpenAI released a series of documents last week. Emails between Mr. Musk and other members of the board of directors. All of this makes it clear that all board members agreed from the beginning that OpenAI could never actually be open.

As AI develops, Sutskever wrote to Musk: The “open” in openAI means that everyone should benefit from the results of AI once it is developed. [sic] It's built, but it's totally fine if you don't share the science. ” “Yes,” Musk replied. Regardless of the nature of the lawsuit, Musk, like other tech industry moguls, has not been as open-minded. The legal challenges to OpenAI are more a power struggle within Silicon Valley than an attempt at accountability.

Wells wrote liberated world At a time of great political turmoil, when many people were questioning the wisdom of extending suffrage to the working class.

“Was that what you wanted, and was it safe to leave it to you?” [the masses],” Fabian Beatrice Webb wondered., “The ballot box that creates and controls the British government with its vast wealth and far-flung territories”? This was the question at the heart of Wells's novel: Who can one entrust their future to?

A century later, we are once again engaged in heated debates about the virtues of democracy. For some, the political turmoil of recent years is a product of democratic overreach, the result of allowing irrational and uneducated people to make important decisions. “It's unfair to put the responsibility of making a very complex and sophisticated historical decision on an unqualified simpleton.” Richard Dawkins said: After the Brexit referendum, Mr Wells would have agreed with that view.

Others say that such contempt for ordinary people is what contributes to the flaws in democracy, where large sections of the population feel deprived of a say in how society is run. .

It's a disdain that also affects discussions about technology.like the world is liberated, The AI ​​debate focuses not only on technology, but also on questions of openness and control. Alarmingly enough, we are far from being “superintelligent” machines. Today's AI models, such as ChatGPT, or claude 3, released last week by another AI company, Anthropic, is so good at predicting what the next word in a sequence is that it makes us believe we can have human-like conversations. You can cheat. However, they are not intelligent in the human sense. Negligible understanding of the real world And I'm not trying to destroy humanity.

The problems posed by AI are not existential, but social.from Algorithm bias to surveillance societyfrom Disinformation and censorship to copyright theftOur concern is not that machines might someday exercise power over humans, but that machines already function in ways that reinforce inequalities and injustices, and that those in power strengthen their own authority. It should be about providing tools for

That's why what we might call “Operation Ecbert,” the argument that some technologies are so dangerous that they must be controlled by a select few over democratic pressure, It's very threatening. The problem isn't just Dr. Evil, it's the people who use fear of Dr. Evil to protect themselves from surveillance.

Kenan Malik is a columnist for the Observer

Source: www.theguardian.com