How Bacteria and Viruses Collaborate to Combat Cancer: Insights from Sciworthy

The history of cancer can be traced back to ancient Egyptian civilizations, where it was thought to be a divine affliction. Over the years, great strides have been made in understanding cancer’s causes and exploring diverse treatment options, although none have proven to be foolproof. Recently, a research team at Columbia University has pioneered a novel method for combating cancerous tumors by utilizing a combination of bacteria and viruses.

The researchers built this strategy by infecting modified Salmonella Typhimurium bacteria with Senecavirus A, a picornavirus. The idea was that when tumor cells engulfed these bacteria, they would also take in the virus, which would then replicate within the cells, kill them, and spread to surrounding cells. The technique has been termed Coordinated Activities of Prokaryotes and Picornaviruses for Safe Intracellular Delivery (CAPPSID).

Initially, the research team verified that Typhimurium could serve as a suitable host for Senecavirus A. They exposed a small number of the bacteria to a modified variant of the virus whose RNA was fluorescently tagged, then applied a solution that helped the viral genome enter the bacterial cells. Using fluorescence microscopy, they confirmed the presence of viral RNA inside the bacteria, validating the infection. To help the viral RNA escape the bacteria and reach cancer cells, the researchers added two proteins, while keeping viral spread contained so that healthy cells would not be infected.

After optimizing the bacteria and virus, the team tested the delivery system on cervical cancer samples. They found that the viral RNA could replicate both after leaving the bacterial cells and inside the cancer cells: newly synthesized RNA strands were detected within the tumor cells, confirming that the virus had been delivered and had replicated via the CAPPSID method.

Next, the researchers examined CAPPSID’s effect on small cell lung cancer (SCLC). By tracking fluorescent viral RNA within SCLC cells, they measured how quickly the virus spread after infection. The virus continued to propagate at a consistent rate for up to 24 hours after the initial infection, spreading effectively through the cancerous tissue without losing potency.

In a follow-up experiment, the researchers tested CAPPSID in two groups of five mice, each implanted with SCLC tumors on both sides of their backs. They engineered Senecavirus A to produce a bioluminescent enzyme for tracking and injected the CAPPSID bacteria into the tumors on the right side. Two days after injection, the right-side tumors glowed, indicating active viral replication. After four days, the left-side tumors also lit up, showing that the virus had traveled through the mice’s bodies to reach the untreated tumors while sparing healthy tissue.

The treatment continued for 40 days, and the tumors regressed completely within the first two weeks. Over a subsequent 40-day observation period, the mice showed a 100% survival rate, with no recurrence of cancer or significant side effects. The team noted that because the virus is encapsulated by the bacteria, it can evade the immune response, preventing the host from neutralizing it before it reaches the tumor cells.

Finally, to prevent uncontrolled replication of Senecavirus A, the researchers took a gene from a tobacco virus that encodes an enzyme needed to activate a crucial protein in Senecavirus A. By inserting this gene into the Typhimurium bacteria, they made the bacteria the sole source of the enzyme, so the virus could not replicate or spread without them. Follow-up tests confirmed that this modified CAPPSID system still spread effectively while remaining confined to cancer-affected areas.

The findings hold promise for developing advanced cancer therapies. The marked regression of tumors in mice and CAPPSID’s targeted delivery, achieved without notable adverse effects, could point toward safer treatments for human patients that rely less on radiation or toxic chemotherapy. However, the researchers cautioned that viral and bacterial mutations could limit CAPPSID’s effectiveness or cause unforeseen side effects. They suggested that extending the system with additional tobacco virus-derived enzymes could help mitigate these risks, paving the way for further research into such therapies.


Source: sciworthy.com

Tech Firms and UK Child Safety Agency Empowered to Test Whether AI Tools Can Generate Abuse Images

Under a new UK law, tech companies and child protection agencies will be granted the authority to test if artificial intelligence tools can create images of child abuse.

The announcement follows reports from a safety watchdog of a surge in AI-generated child sexual abuse material, with the number of reports rising from 199 in 2024 to 426 in 2025.

With these changes, the government will empower designated AI firms and child safety organisations to scrutinise AI models, including the technology behind chatbots like ChatGPT and image and video generators such as Google’s Veo 3, to ensure safeguards are in place to prevent the creation of child sexual abuse images.

Kanishka Narayan, the Minister of State for AI and Online Safety, emphasized that this initiative is “ultimately to deter abuse before it happens,” stating, “Experts can now identify risks in AI models sooner, under stringent conditions.”

The change is needed because creating and possessing child sexual abuse material (CSAM) is illegal, which until now has prevented AI developers and others from testing whether their models could produce such images. Previously, authorities could only respond after AI-generated CSAM had been uploaded online; the new law aims to head off that problem by stopping the images from being generated in the first place.

The amendments are part of the Crime and Policing Bill, which also establishes a prohibition on the possession, creation, and distribution of AI models intended to generate child sexual abuse material.

During a recent visit to Childline’s London headquarters, Narayan listened to a simulated call featuring an AI-generated report of abuse, depicting a teenager seeking assistance after being blackmailed with a sexual deepfake of herself created with AI.

“Hearing about children receiving online threats provokes intense anger in me, and parents feel justified in their outrage,” he remarked.

The Internet Watch Foundation, which monitors CSAM online, reported that incidents of AI-generated abusive content have more than doubled this year, with reports of Category A material, the most severe category of abuse, rising from 2,621 images or videos to 3,086.

Girls were predominantly targeted, accounting for 94% of illegal AI images in 2025, while depictions of newborns to two-year-olds rose sharply, from five in 2024 to 92 in 2025.

Kerry Smith, chief executive of the Internet Watch Foundation, said the legal changes could be “a crucial step in ensuring the safety of AI products before their launch.”

“AI tools enable survivors to be victimized again with just a few clicks, allowing criminals to create an unlimited supply of sophisticated, photorealistic child sexual abuse material,” she noted. “Such material commodifies the suffering of victims and increases risks for children, particularly girls, both online and offline.”

Childline also revealed insights from counseling sessions where AI was referenced. The concerns discussed included using AI to evaluate weight, body image, and appearance; chatbots discouraging children from confiding in safe adults about abuse; online harassment with AI-generated content; and blackmail involving AI-created images.

From April to September this year, Childline reported 367 counseling sessions where AI, chatbots, and related topics were mentioned, a fourfold increase compared to the same period last year. Half of these references in the 2025 sessions pertained to mental health and wellness, including the use of chatbots for support and AI therapy applications.

Source: www.theguardian.com

ChatGPT Is Polite, But It Doesn’t Collaborate with You

Illustration: Mathieu Labrecque/The Guardian

After the release of my third book in early April, I kept seeing headlines that made me feel like the protagonist of a Black Mirror episode. “Vauhini Vara consulted ChatGPT, and it was instrumental in creating her new book, Searches,” one proclaimed. “To tell her story, this celebrated author essentially became ChatGPT,” another declared. A third asserted that “Vauhini Vara will explore her identity with assistance from ChatGPT.”

I was grateful that these publications covered Searches, and their accounts were generally positive and factually accurate. However, their interpretations of the book, and of ChatGPT’s role in it, did not match my own. While it’s true that I included conversations with ChatGPT in the book, my aim was critique, not collaboration. In interviews and public forums, I consistently cautioned against using large language models, like ChatGPT, for self-expression. Had these writers misconstrued my work? Or had I inadvertently led them astray?

In the book, I document how big technology companies exploit human language for their own gain. We make this possible because we, too, benefit from using their products. That is the dynamic behind Big Tech’s accumulation of wealth and influence: we are both victims and beneficiaries. I convey this complicity through my own online history: my Google searches, my Amazon reviews, and, yes, my dialogues with ChatGPT.

The Polite Politics of AI

The book opens with an epigraph on the political power of language, quoting Audre Lorde and Ngũgĩ wa Thiong’o, followed by the first of my conversations in which I prompt ChatGPT to respond to my writing. The juxtaposition is intentional: I wanted feedback on various chapters, to see what the exercise would reveal both about my own language and about the politics of ChatGPT’s.

I kept a polite tone, admitting at one point, “I’m nervous.” OpenAI, the creator of ChatGPT, says its products perform best when given clear instructions, and research suggests that ChatGPT responds more effectively when we engage with it kindly. So I framed my requests with courtesy; when it complimented me, I thanked it; when I noted an error, I softened my critique.

ChatGPT, in turn, was designed for polite interaction. Its output is often described as “bland” or “generic,” akin to a beige office building. OpenAI’s products are engineered to “sound like a colleague,” and, according to OpenAI, their words are chosen to embody qualities such as being “ordinary,” “empathetic,” “kind,” “rationally optimistic,” and “attractive.” These choices are meant to make the product seem “professional” and “friendly,” fostering a sense of safety. OpenAI recently described rolling back an update that had pushed ChatGPT toward excessively flattering, sycophantic responses.

Trust is a pressing challenge for AI companies, especially since their products frequently produce inaccuracies and reflect sexist, racist, and US-centric cultural assumptions. While companies are trying to address these issues, they persist: OpenAI itself found that its latest system generates errors at even higher rates than its predecessor. In the book, I discuss such inaccuracies and biases and demonstrate them with examples. When I prompted Microsoft’s Bing Image Creator for images of engineers and space explorers, it rendered exclusively male figures. When my father asked ChatGPT to edit his writing, it converted his grammatically sound Indian English into American English. Research indicates that such biases are widespread.

In my dialogue with ChatGPT, I sought to show how a veneer of product neutrality can dull our critical response to misguided or biased output. Over time, ChatGPT seemed to nudge me toward more favorable portrayals of Big Tech, describing OpenAI’s CEO, Sam Altman, as “forward-thinking and pragmatic.” I have yet to find research establishing whether ChatGPT is biased toward Big Tech, OpenAI, or Altman, so we can only speculate about the reasons for this behavior. OpenAI maintains that its products should not attempt to sway users’ opinions; when I asked ChatGPT about it, it attributed the bias to limitations in its training data, though I suspect deeper issues play a part.

When I asked ChatGPT about its rhetorical style, it replied: “My manner of communication is designed to foster trust and confidence in my responses.”

Nevertheless, by the end of our exchange, ChatGPT had proposed a conclusion for my book: one in which Altman, though he had never said anything of the kind to me, would steer the conversation toward accountability for the shortcomings of AI products.

I felt my point had been made: the ChatGPT-generated epilogue was both inaccurate and biased. The conversation ended amicably, and I felt I had won.

I Thought I Was Critiquing the Machine; Headlines Framed Me as Collaborating with It

Then the headlines emerged, along with the occasional article or review, describing my use of ChatGPT as a means of self-expression. In interviews and public forums, people asked whether the book was a collaboration with ChatGPT. Each time, I rejected the premise, citing the Cambridge Dictionary definition of collaboration as people working together: however human-like ChatGPT’s rhetoric appears, it is not a person.

Of course, OpenAI has its own aspirations. Among them, it aims to develop AI that “benefits all of humanity.” Yet while the organization is governed by a non-profit, its investors still expect a return, which means getting more people to use ChatGPT and OpenAI’s other products. That goal is easier to reach if those products are perceived as trustworthy partners. Last year, Altman predicted that AI would function as “an exceedingly competent colleague who knows everything about my life.” In an April TED Talk, he suggested AI could even improve how we make decisions together. “I believe AI will enable us to be wiser and to make better collective decisions,” he remarked this month in testimony before the US Senate, envisioning people’s “agents in their pockets” being integrated with government operations.

When I read headlines echoing Altman’s framing, my first instinct was to blame headline writers’ appetite for sensationalism, at a time when algorithms increasingly dictate what content we consume. My second was to hold accountable the companies behind those algorithms, including the AI firms whose chatbots are trained on published content. When I asked ChatGPT about contemporary discussions of “AI collaboration,” it mentioned me and cited some of the reviews that had irritated me.

To check myself, I returned to the book to see whether I had somehow misrepresented the notion of collaboration. At first, it appeared I hadn’t. I counted roughly 30 references to “collaboration” and similar terms, but 25 of them came from ChatGPT itself, within the interstitial dialogues, often characterizing the relationship between humans and AI products. The remaining five either referred to collaboration with other people or were meant cynically, for instance about the expectation that writers would “refuse to cooperate with AI.”

Was I an Accomplice to AI Companies?

But did it matter that I had seldom used the term myself? Those describing my ChatGPT “collaboration” might have drawn the idea from the book even where it was never stated. What had made me assume that merely quoting ChatGPT would reliably expose its absurdities? Why hadn’t I considered that some readers might instead be persuaded by ChatGPT’s arguments? Perhaps my book had inadvertently functioned as a collaboration after all: not because AI products had helped me express myself, but because I had helped the corporations behind them achieve their goals. The book explores how those in power turn our language to their advantage and asks what role we play as accomplices; now the public reception of the book itself seemed caught up in that dynamic. It was a sobering realization, but perhaps I should have anticipated it. There was no reason my work should be insulated from the same exploitation plaguing the rest of the world.

Ultimately, my book is about how we might claim independence from the agendas of powerful entities and resist them in our own interest. ChatGPT suggested closing with a quote from Altman, but I chose one from Ursula K. Le Guin instead: “We live in capitalism. Its power seems inescapable.” I wondered where we go from here. How can we ensure that governments adequately restrain the wealth and power of Big Tech? How can we fund and build technology that serves our needs and desires without exploiting us?

I had imagined that my rhetorical struggle with powerful tech companies began and ended within the confines of my book. Clearly, that was not the case. If the headlines I encountered were truly the end of that struggle, it would mean I had lost. But readers soon began reaching out to tell me the book had spurred their own resistance to Big Tech. Some cancelled their Amazon Prime memberships; I stopped asking ChatGPT for personal advice. The fight continues, and it calls for collaboration, among human beings.

Source: www.theguardian.com

Liverpool FC and DeepMind collaborate on artificial intelligence for advising on soccer tactics

Corner kicks like this one taken by Liverpool's Trent Alexander-Arnold can lead to goal-scoring opportunities.

Robbie Jay Barratt/AMA/Getty

An artificial intelligence model can predict the outcome of corner kicks in soccer matches and help coaches design tactics that raise or lower the probability of a player getting a shot on goal.

Petar Veličković at Google DeepMind and his colleagues developed the tool, called TacticAI, as part of a three-year research collaboration with Liverpool Football Club.

A corner kick is awarded when the ball goes out of play over the goal line after last touching a defending player, and it often creates a good scoring opportunity for the attacking team. For this reason, coaches draw up detailed plans for different corner scenarios, which players study before a game.

TacticAI was trained on data from 7176 corner kicks taken during England's 2020-2021 Premier League season, including each player's position over time as well as their height and weight. It learned to predict which player will touch the ball first after a corner kick is taken. In testing, the actual receiver of the ball was among TacticAI's top three predicted candidates 78% of the time.
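That 78% figure is a top-three accuracy. As a rough illustration of what the metric means (this is not DeepMind's code; the function names, the dictionary-of-scores representation, and the dummy model below are all invented for the example), a minimal Python sketch might look like this:

import random

def top_k_accuracy(corners, true_receivers, predict_receiver_scores, k=3):
    """Fraction of corners for which the true receiver is among the k highest-scored players."""
    hits = 0
    for corner, receiver in zip(corners, true_receivers):
        scores = predict_receiver_scores(corner)               # e.g. {player_id: score}
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        hits += receiver in top_k
    return hits / len(corners)

# A dummy model with random scores, just to exercise the metric:
# with 22 players on the pitch, random top-3 accuracy sits near 3/22 (about 14%),
# which gives a sense of how far above chance a 78% score is.
def dummy_model(corner):
    return {player: random.random() for player in corner["players"]}

corners = [{"players": list(range(22))} for _ in range(1000)]
true_receivers = [random.randrange(22) for _ in corners]
print(top_k_accuracy(corners, true_receivers, dummy_model))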

Coaches can use the AI to generate tactics for attacking or defending corners that maximise or minimise the chances of a particular player receiving the ball, or of a team getting a shot on goal. It does this by mining real-life examples of corner kicks with similar patterns and suggesting how to change tactics to achieve the desired result, as in the sketch below.
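At its simplest, looking up "corner kicks with similar patterns" is a nearest-neighbour search over stored corner setups. The following Python sketch shows that general idea under assumed representations (a fixed player ordering and plain 2D positions, with the invented function most_similar_corners); it is only an illustration, not TacticAI's actual retrieval method.

import numpy as np

def most_similar_corners(query_positions, corner_library, n=5):
    """Indices of the n stored corners whose player layouts are closest to the query,
    using the mean Euclidean distance between corresponding (x, y) positions."""
    query = np.asarray(query_positions)                        # shape (22, 2)
    distances = [np.linalg.norm(np.asarray(c) - query, axis=1).mean()
                 for c in corner_library]
    return np.argsort(distances)[:n]

# Example: a library of 500 randomly generated corner setups and one query setup.
rng = np.random.default_rng(0)
library = rng.uniform(0.0, 1.0, size=(500, 22, 2))
query = rng.uniform(0.0, 1.0, size=(22, 2))
print(most_similar_corners(query, library))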

In a blind test, Liverpool FC's soccer experts could not distinguish AI-generated corner tactics from human-designed ones, and they favoured the AI-generated suggestions 90% of the time.

But despite its capabilities, Veličković says TacticAI was never intended to put human coaches out of work. “We are strong proponents of AI systems that augment human capabilities rather than replace them, allowing people to spend more time on the creative parts of their jobs,” he says.

Veličković says the research has a wide range of applications beyond sport. “If you can model a football game, you can better model some aspects of human psychology,” he says. “As AI becomes more capable, it needs to understand the world better, especially under uncertainty. Our system can make decisions and recommendations even under uncertainty. It’s a good testing ground because it’s a skill that we believe can be applied to future AI systems.”


Source: www.newscientist.com