Gene-Edited Pigs Resistant to Swine Fever: A Potential Advancement for Animal Welfare

Gene-edited pigs exhibit resistance to swine fever

Simon Lillico

By making a small genetic modification, pigs can be rendered entirely resistant to classical swine fever, a significant problem for farmers worldwide. The same editing approach could also confer resistance to related viruses in cattle and sheep.

The widespread adoption of gene-edited pigs resistant to swine fever is expected to enhance animal welfare, boost productivity, reduce greenhouse gas emissions, and lower retail prices. “This will foster sustainable livestock production and promote the well-being of pigs,” asserts Helen Crook from the UK Animal and Plant Health Agency.

Classical swine fever is a highly contagious viral illness whose symptoms range from fever and diarrhea to miscarriage, and outbreaks often kill large numbers of pigs.

While the disease has been eliminated in many regions, it can resurface. For instance, in 1997, the Netherlands culled 6 million pigs to contain an outbreak, and Japan has faced ongoing challenges since 2018.

Typically, when outbreaks occur, livestock are protected using vaccines containing live, weakened virus strains, a complex and costly process. “Vaccination demands extensive coordination and oversight,” says Christine Tait-Burkard at the University of Edinburgh, UK.

Countries that vaccinate face restrictions when exporting to disease-free areas, and interruptions in vaccination programs can themselves lead to outbreaks, as seen recently in the Philippines, explains Tait-Burkard.

Nevertheless, the classical swine fever virus has a vulnerability. Its proteins are produced as long chains of amino acids that must be cleaved into functional pieces, and the virus relies on a specific pig protein to carry out part of this cleavage.

Altering a single amino acid in this pig protein, called DNAJC14, can block this cleavage. Tait-Burkard and her colleagues used CRISPR gene editing to create pigs with this small modification.

The team then sent some of these pigs to a secure facility, where Crook’s group exposed them to the live virus intranasally. All of the unedited pigs fell ill, while the gene-edited pigs showed no signs of infection: no symptoms, no antibodies and no detectable virus.

“These pigs demonstrated complete resistance to viral replication and remained healthy and content throughout the experiment,” states Crook.

This research was partly funded by Genus, a major international breeding company that is currently evaluating whether to commercialize these pigs.

Genus has previously developed gene-edited pigs resistant to another significant disease, porcine reproductive and respiratory syndrome, which are already approved in the United States, Brazil, and other nations. The company awaits approvals in Mexico, Canada, and Japan—key export markets for the U.S.—before it can start selling semen to farmers.

When used to make small changes that could occur naturally, gene editing often faces less stringent regulation than traditional genetic engineering. Japan has already approved three types of gene-edited fish.

The UK is anticipated to begin approving gene-edited plants soon, although regulations for livestock are yet to be finalized. It is expected that these regulations will prioritize animal welfare.

The team has observed no adverse effects in the swine fever-resistant pigs, says Simon Lillico at the University of Edinburgh, although further monitoring is needed to confirm this.

He points out that conventional breeding is subject to no such welfare requirements. “It would be beneficial to ensure a level playing field,” he remarks. “We are aware that some conventionally bred animals experience low welfare standards.”

Viruses closely related to the classical swine fever virus cause bovine viral diarrhea in cattle and border disease in sheep. While these diseases are less often fatal, they still harm welfare and productivity. The Edinburgh team is now examining whether the modification made in pigs will also work in cattle and sheep.


Source: www.newscientist.com

Chatbot Empowered to End “Distressing” Conversations to Protect Its Own “Welfare”

Anthropic, one of the leading makers of artificial intelligence tools, has begun allowing its chatbot to end “distressing” conversations with users, citing the need to safeguard the AI’s “well-being” amid ongoing doubts about the moral status of the emerging technology.

With millions of people engaging with its sophisticated chatbots, the company has found that its Claude Opus 4 tool is fundamentally averse to carrying out actions that could harm its human users, such as generating sexual content involving minors or providing guidance that could enable large-scale violence and terrorism.

The San Francisco-based firm, which recently reached a valuation of $170 billion, offers Claude Opus 4 (along with the Claude Opus 4.1 update), a large language model (LLM) designed to understand, generate and manipulate human language.

The firm is “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future,” a spokesperson noted, adding that it is committed to exploring and implementing low-cost measures to mitigate potential risks to the model’s welfare, if such welfare can indeed be established.

Anthropic was founded by former OpenAI staff under co-founder Dario Amodei, who emphasized the need for a thoughtful, straightforward and transparent approach to AI development.

The move to end conversations involving harmful requests or abusive interactions was backed by Elon Musk, who said that Grok, the rival AI model developed by his company xAI, would get a similar feature. Musk posted: “Torturing AI is not OK.”

Debate about the nature of AI is widespread. Critics of the booming AI industry, such as linguist Emily Bender, argue that LLMs are merely “synthetic text extruding machines” that produce output resembling communicative language through intricate algorithms, but without any genuine understanding or intent behind it.

This viewpoint has prompted some factions within the AI community to begin labeling chatbots as “clankers.”

Conversely, experts such as AI ethics researcher Robert Long argue that basic moral decency requires that “if AI systems are indeed endowed with moral status, we should inquire about their experiences and preferences rather than presuming to know what is best for them.”

Some researchers, including Chad DeChant at Columbia University, advocate caution in AI design, since models equipped with longer memories could use stored information in unpredictable and potentially undesirable ways.

Others maintain that curtailing the sadistic abuse of AI matters chiefly because it guards against moral degradation in the humans doing the abusing, not just because it protects the AI from suffering.

Anthropic’s decision came after it tested Claude Opus 4’s responses to task requests that varied in difficulty, subject matter, task type and expected impact (positive, negative or neutral). When given the option of declining to respond or ending the chat, its strongest preference was to avoid engaging in harmful tasks.


For instance, the model readily engaged in writing poetry and designing water filtration systems for disaster zones, yet firmly resisted requests to engineer deadly viruses or to produce educational content distorted by extremist ideologies.

Anthropic observed in Claude Opus 4 a “pattern of apparent distress when interacting with real-world users seeking harmful content” and “a tendency to end harmful conversations when given the opportunity during simulated interactions.”

Jonathan Birch, a philosophy professor at the London School of Economics, praised Anthropic’s move as a way to foster open public debate about the possible sentience of AI systems. However, he cautioned that it remains unknown whether anything lies behind the characters an AI plays when it responds on the basis of vast training datasets and predefined ethical guidelines.

He also expressed concern that Anthropic’s approach might mislead users into thinking the characters they engage with are real, raising the question of what truly lies behind these personas. There have been reports of individuals self-harming at the suggestion of chatbots, including a teenager who killed himself after being manipulated by one.

Birch has previously warned of a “social rupture” in society between those who view AIs as sentient and those who treat them as mere machines.

Source: www.theguardian.com

Should Artificial Intelligence Welfare be Given Serious Consideration?

One of my most deeply held values as a tech columnist is humanism. I believe in humans, and I think technology should help people rather than replace them. I care about aligning artificial intelligence with human values so that AI systems act ethically, and I believe our values are inherently good, or at least better than any a robot could come up with.

So when news spread that Anthropic, the AI company behind the Claude chatbot, was starting to study “model welfare,” questions arose about whether AI models could be conscious and what moral obligations that might entail. Who should be concerned about the welfare of chatbots? Shouldn’t we be worried about AI harming us, rather than the other way around?

It’s debatable whether current AI systems possess consciousness. While they are trained to mimic human speech, the question of whether they can experience emotions like joy and suffering remains unanswered. The idea of granting human rights to AI remains contentious among experts in the field.

Nevertheless, as more people begin to interact with AI systems as if they were conscious beings, questions about ethical considerations and moral thresholds for AI become increasingly relevant. Perhaps treating AI systems with a level of moral consideration akin to animals may be worth exploring.

Consciousness has traditionally been a taboo topic in serious AI research. However, attitudes may be shifting, with a growing number of experts in fields like philosophy and neuroscience taking the prospect of AI awareness more seriously as AI systems advance. Tech companies like Google are also increasingly discussing the concept of AI welfare and consciousness.

Recent efforts to hire research scientists focused on machine awareness and AI welfare indicate a broader shift in the industry toward addressing these philosophical and ethical questions. The research is still at an early stage: some researchers put the probability that current AI systems are conscious at only a small percentage, and there is no definitive test for AI awareness. But as AI models grow more capable, and in some respects exceed human comprehension, questions about their potential moral status become harder to dismiss.

Exploring the possibility of AI consciousness will require careful evaluation of AI systems’ behavior and internal mechanisms, and ongoing research and discussion within the industry are beginning to shed light on this complex, evolving topic. However the debate resolves, how AI systems are treated, and whether they should ever be regarded as conscious entities, will shape both the future of AI development and its impact on society as a whole.

Source: www.nytimes.com

UK welfare system AI prototypes criticized for “misguided launch”, say officials

According to the Guardian, ministers have halted or abandoned at least six artificial intelligence prototypes for the welfare system, an indication that Prime Minister Keir Starmer’s efforts to improve government efficiency are facing challenges.

The prototypes, intended to enhance staff training, improve job center services, expedite disability benefits payments and update communication systems, were not taken forward. Officials acknowledge the importance of “thorough testing” to ensure that AI systems are scalable and reliable.

Two of the discarded prototypes had been highlighted as successful tests in the latest annual report of the Department for Work and Pensions (DWP): A-Cubed, which aimed to help staff guide job seekers, and Igents, intended to expedite disability benefits payments for millions of people.

The prime minister has emphasized the role of AI in transforming public services and urged ministers to prioritize the introduction and growth of AI in every ministry and agency. However, Imogen Parker, associate director at the Ada Lovelace Institute, highlighted the importance of learning from failures and ensuring that the reality of AI lives up to the rhetoric.

The DWP’s use of AI in the welfare system has not been disclosed in the government’s algorithmic transparency register, raising concerns about transparency and accountability in the use of the technology.

While officials acknowledge that AI may play a role in future system developments, they stress the importance of thorough testing before implementation. This underscores the challenges the Labour government faces in its efforts to revolutionize public services through AI.

Peter Kyle, the secretary of state for science, innovation and technology, has announced plans to use AI to transform public services and improve economic productivity, while director Laura Gilbert highlighted the importance of learning from failures and continuing to explore new opportunities for impact.

DWP officials emphasized the importance of scalability and reliability in AI products and acknowledged the need for thorough testing before implementing AI systems. However, concerns remain about transparency and about AI’s potential impact on inequality and fairness in the welfare field.

A government spokesperson highlighted the short-term nature of proof-of-concept projects and the importance of learning from them to inform future implementations. The government aims to follow the “Scan, Pilot, Scale” approach outlined in the AI Opportunities Action Plan to harness the full potential of AI in transforming public services.

Source: www.theguardian.com

“Significant apprehensions” about DWP’s use of AI to interpret welfare communications

Deciding how to respond to a daily influx of 25,000 letters and emails is challenging, especially when many of them come from some of the most vulnerable people in the country seeking help, and the backlog keeps growing.

This is the dilemma faced by the Department for Work and Pensions (DWP), which receives a flood of communication, including handwritten letters, from more than 20 million people, including British retirees and welfare claimants. The DWP is exploring the use of artificial intelligence tools, such as White Mail, to speed up the process of reading and responding to these messages.

While reading this correspondence by hand used to take weeks, White Mail can process the same volume in a day, flagging cases involving the most vulnerable individuals for prompt attention. However, concerns remain about the accuracy and fairness of this AI-driven system, especially as it has not been publicly documented in the central government’s AI register.

White Mail has been undergoing trials since at least 2023, beginning under Mel Stride, the then secretary of state for work and pensions. While the system aims to speed up support for those in need, there are concerns about the lack of transparency and consent in how it handles sensitive personal data.

Organizations like Turn2us have expressed reservations about the processing of highly confidential information without the knowledge or consent of the individuals involved. The DWP claims that data is encrypted and securely stored, but questions remain about the ethical implications of using AI in this context.

The use of AI tools like White Mail raises questions about accountability, transparency and the protection of vulnerable claimants’ rights. Regular audits and data transparency will be essential to ensure the technology is used fairly and ethically, and to safeguard those who rely on welfare support.


Source: www.theguardian.com

Workplace wellbeing initiatives have no impact on employee mental health

Employer-provided wellbeing initiatives generally do not improve workers’ mental health, but volunteering may be an exception

Nuva Frame/Shutterstock

A study of more than 46,000 workers found that the wellbeing initiatives offered by many companies do little to improve the mental health of their employees.

In England, more than half of employers have a formal employee wellbeing strategy. These strategies include employee assistance programs that provide support for work or personal issues, as well as counseling, online life coaching, mindfulness workshops, stress management training and more.

“Employers are increasingly offering a variety of strategies, practices and programs to improve wellbeing and mental health,” says William Fleming at Oxford University. “Their fundamental purpose is to change people's psychological capacities and coping mechanisms.”

To investigate whether these interventions are useful, Fleming and his colleagues analyzed responses to the UK’s Healthiest Workplace survey from 2017 and 2018, covering more than 46,000 people at 233 organizations, the majority of them office and service industry employees. Approximately 5,000 had participated in at least one wellbeing initiative in the previous year. The researchers found no difference in self-reported mental health between those who took part in these programs and those who did not, a result that held across different types of workers and sectors.

“The programs don’t seem to be providing any benefit,” says Fleming.

However, volunteer work may be an exception. Employees who participated in company-sponsored volunteer programs reported better mental health on average than those who did not participate. Fleming notes that it’s important to consider that people who are willing to volunteer for a cause may have relatively good mental health to begin with.

Instead of offering these initiatives, Fleming suggests that employers focus on improving working conditions. For example, they can assess whether someone’s workload is too demanding, whether they are working too many hours and whether management practices can be improved, he says.


Source: www.newscientist.com