Transformative Choice: Jared Kaplan on Permitting Autonomous AI Learning




By the year 2030, humanity will face a critical decision regarding the “ultimate risk” of allowing artificial intelligence systems to self-train and enhance their capabilities, according to one of the foremost AI experts.

Jared Kaplan, chief scientist and co-founder of the $180bn (£135bn) US startup Anthropic, emphasized that crucial choices are being made concerning the level of autonomy granted to these evolving systems.

This could potentially spark a beneficial “intelligence explosion” or signify humanity’s loss of control.

In a conversation addressing the intense competition to achieve artificial general intelligence (AGI), also referred to as superintelligence, Kaplan urged global governments and society to confront what he termed the “biggest decision.”

Anthropic belongs to a network of leading AI firms striving for supremacy in the field, alongside OpenAI, Google DeepMind, xAI, Meta, and prominent Chinese competitors led by DeepSeek. Its Claude chatbot, one of the most popular AI assistants, has gained significant traction among business clients.




Kaplan predicted that a decision to “relinquish” control to AI could materialize between 2027 and 2030. Photo: Bloomberg/Getty Images

Kaplan stated that aligning swiftly advancing technology with human interests has proven successful to date, yet permitting technology to recursively enhance itself poses “the ultimate risk, as it would be akin to letting go of AI.” He mentioned that a decision regarding this could emerge between 2027 and 2030.





Kaplan transitioned from a theoretical physicist to an AI billionaire in just seven years. During an extensive interview, he also conveyed:

  • AI systems are expected to handle “most white-collar jobs” in the coming two to three years.

  • His 6-year-old son is unlikely to outperform AI in academic tasks, such as writing essays or completing math exams.

  • It is natural to fear a scenario where AI can self-improve, leading humans to lose control.

  • The competitive landscape around AGI feels tremendously overwhelming.

  • In a favorable outcome, AI could enhance biomedical research, health and cybersecurity, productivity, grant additional leisure time, and promote human well-being.

Kaplan met with the Guardian at Anthropic’s office in San Francisco, where the interior design, filled with knitted rugs and lively jazz music, contrasts with the existential concerns surrounding the technology being cultivated.




San Francisco has emerged as a focal point for AI startups and investment. Photo: Washington Post/Getty Images

Kaplan, a physicist educated at Stanford and Harvard, joined OpenAI in 2019 following his research at Johns Hopkins University and Cologne, Germany, and co-founded Anthropic in 2021.

He isn’t alone in expressing such concerns at Anthropic. One of his co-founders, Jack Clark, remarked in October that he considers himself both an optimist and a “deeply worried” individual, describing the trajectory of AI as “not a simplistic and predictable mechanism, but a genuine and enigmatic entity.”

Kaplan said he is confident that swiftly advancing AI systems can be aligned with human interests up to roughly the level of human cognition, although he harbors concerns about what happens beyond that boundary.

He explained: “If you envision creating this process using an AI smarter or comparable in intelligence to humans, it becomes about creating smarter AI. We intend to leverage AI to enhance its own capability. This suggests a process that may seem intimidating. The outcome is uncertain.”

The advantages of integrating AI into the economy are being scrutinized. Outside Anthropic’s headquarters, a sign from another tech corporation pointedly posed a question about returns on investment: “All AI and no ROI?” A September Harvard Business Review study indicated that AI “workslop” – subpar AI-generated work requiring human corrections – was detrimental to productivity.

The most overt benefit appears to be the application of AI to computer programming tasks. In September, Anthropic unveiled its latest AI model, Claude Sonnet 4.5, a coding-focused model that can create AI agents and use a computer autonomously.




The attackers exploited the Claude Code tool to target various organizations. Photo: Anthropic

Kaplan said the model can handle complex, multi-step programming tasks for 30 continuous hours, and that in specific instances AI integration has doubled the speed of Anthropic’s own programmers.

However, Anthropic revealed in November that it suspected a state-supported Chinese group had misused the Claude Code tool, which not only assisted humans in orchestrating cyberattacks but also executed approximately 30 attacks largely on its own, some of which were successful. Kaplan said that permitting an AI to train another AI is “a decision of significant consequence.”

“We regard this as possibly the most substantial decision or the most alarming scenario… Once no human is involved, certainty diminishes. You might begin the process thinking, ‘Everything’s proceeding as intended, it’s safe,’ but the reality is it’s an evolving process. Where is it headed?”

He identified two risks if recursive self-improvement, as the method is often called, were allowed to operate unchecked.

“One concern is the potential loss of control. Is the AI aware of its actions? The fundamental questions are: will AI be a boon for humanity? Can it be beneficial? Will it remain harmless? Will it understand us? Will it enable individuals to maintain control over their lives and surroundings?”





The second risk pertains to the security threat posed by self-trained AI that could surpass human capabilities in scientific inquiry and technological advancement.

“It appears exceedingly unsafe for this technology to be misappropriated,” he stated. “You can envision someone wanting this AI to serve their own interests. Preventing power grabs and the misuse of technology is essential.”

Independent studies of cutting-edge AI models, including ChatGPT, have demonstrated that the length of tasks they can execute is expanding, roughly doubling every seven months.
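As a rough illustration of what a seven-month doubling time implies, here is a minimal sketch; the one-hour starting task length and the time horizons are assumed placeholders for illustration, not figures from the article:

```python
# Minimal sketch: project task length under an assumed 7-month doubling time.
DOUBLING_MONTHS = 7

def projected_task_hours(start_hours: float, months_ahead: int) -> float:
    """Task length after `months_ahead` months, doubling every DOUBLING_MONTHS."""
    return start_hours * 2 ** (months_ahead / DOUBLING_MONTHS)

# Assumed starting point: a task an AI can complete today takes about one hour.
for months in (0, 12, 24, 36):
    print(f"{months:>2} months: ~{projected_task_hours(1.0, months):.1f} hours")
# Roughly: 1 hour today, ~3 hours in a year, ~11 in two years, ~35 in three.
```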


Kaplan expressed his worry that the rapid pace of advancement might not allow humanity sufficient time to acclimatize to the technology before it evolves significantly further.

“This is a source of concern… individuals like me could be mistaken in our beliefs and it might all level off,” he remarked. “The best AI might be the one we possess presently. However, we genuinely do not believe that is the case. We anticipate ongoing improvements in AI.”

He added, “The speed of change is so swift that people often lack adequate time to process it or contemplate their responses.”

During its pursuit of AGI, Anthropic is in competition with OpenAI, Google DeepMind, and xAI to develop more sophisticated AI systems. Kaplan remarked that the atmosphere in the Bay Area is “certainly intense with respect to the stakes and competitiveness in AI.”

“Our perspective is that the trends in investments, returns, AI capabilities, task complexity, and so forth are all following this exponential pattern. [They signify] AI’s growing capabilities,” he noted.

The accelerated rate of progress increases the risk of one of the competitors making an error and falling behind. “The stakes of staying at the forefront, of not losing ground on [the curve of] exponential growth, are considerable. You could quickly find yourself significantly behind, particularly regarding resources.”

By 2030, an estimated $6.7 trillion will be needed for global data centers to meet increasing demand, and investors are eager to back the companies closest to the frontier.




Significant accomplishments have been made in utilizing AI for code generation. Photo: Chen Xin/Getty Images

At the same time, Anthropic advocates for AI regulation. The company’s mission statement emphasizes “the development of more secure systems.”

“We certainly aim to avoid a situation akin to Sputnik where governments abruptly realize, ‘Wow, AI is crucial’… We strive to ensure policymakers are as knowledgeable as possible during this evolution, so they can make informed decisions.”

In October, Anthropic’s stance led to a confrontation with the Trump administration. David Sacks, an AI adviser to the president, accused Anthropic of “fear-mongering” while promoting state-level regulations that would benefit the company but harm startups.

After Sacks suggested the company was positioning itself as an “opponent” of the Trump administration, Kaplan, alongside Dario Amodei, Anthropic’s CEO, countered that the company had publicly supported Trump’s AI initiatives and was collaborating with Republicans in the hope of maintaining America’s lead in AI.

Source: www.theguardian.com

Drone Warfare: The Transformative Technology at the Heart of the Ukraine Conflict

Afer, the deputy commander of the “Da Vinci Wolves”, shares insights into the critical role of one of Ukraine’s renowned battalions in countering ongoing Russian attacks.

The remaining Russian forces regroup into units of around ten to assault Ukrainian positions, and the pressure never lets up – “We’ve eliminated 11 individuals in the past 24 hours,” Afer remarks. Previously, attacks occurred once or twice daily; now they are relentless. Afer says the Russian troops seem to be operating under near-suicidal orders, driven by fear of their superiors.

At the command center of the Da Vinci Wolves Battalion

A reconnaissance drone tracks a scorched treeline to the west of Pokrovsk. The imagery is relayed to Da Vinci’s command center, situated at one end of a 130-meter underground bunker. “Even taking a moment to relax is perilous,” Afer notes, as the team operates around the clock. Constructed in four or five weeks, the bunker features multiple rooms, including barracks for resting, alongside drawings by children and family reminders. The week’s menu adorns the wall.

Three and a half years into the Ukrainian conflict, there has been no progress on Donald Trump’s August peace initiative. As the war evolves, Afer elaborates on advancements in FPV (first-person view) drones, piloted remotely via onboard cameras. The so-called kill zone currently extends “12-14 kilometers” behind the frontline. A $500 drone, flying at speeds of up to 60mph, can maneuver within this area. “It’s all about logistics,” he explains, referring to food, ammunition, and medical supplies transported on foot or with the aid of ground drones.

Heavy machine gun near the temporary base of Da Vinci Battalion

Various types of ground drones are also stationed at a countryside dacha currently occupied by Da Vinci Wolves soldiers. The concept has rapidly evolved from an idea into practical application. The drones include remote-controlled machine guns and flatbed robotic vehicles, such as the $12,000 Termit, capable of traversing rough terrain while carrying 300kg over 12 miles at a maximum speed of 7mph.

Ground drones equipped for cargo, attack, and mine-laying

Photo of the Ukrainian Ministry of Defense Termit drone.

These ground drones also contribute to saving lives. “Last night, we sustained two fractured legs and a chest injury,” Afer recounts. The entire rescue operation consumed “nearly 20 hours,” during which two soldiers successfully transported the injured man on a ground drone over a mile, delivering him to a safe village. Thankfully, the soldiers survived.

Da Vinci Wolves soldiers report that their own position remains secure, but the relentless Russian infiltration attempts are effective at revealing locations where defenses are weak or coordination between neighboring units is lacking. Recently, Russian forces breached Ukrainian lines, advancing 12 miles northeast of Pokrovsk, near Dobropillia. It was a precarious moment in a critical sector, coinciding with Trump’s summit with Vladimir Putin in Alaska.

Early reports suggested only a small number of soldiers had broken through, but the confirmed figure proved substantially higher. Ukrainian military sources estimate that roughly 2,000 Russians were involved, with 1,100 casualties reported during the counterattack led by Ukraine’s newly formed 14th Chervona Kalyna Brigade of the Azov corps.


That night, at another dacha used by the Da Vinci Wolves, men sat in the garden as moths circled the light. Inside, a specialist drone-jamming operator occupies a gaming chair, surrounded by seven screens rigged with fans and supported by intricate carpentry.

Team leader Olexandre, who goes by the call sign Shauni, is wary of being photographed but talks through the jammer’s operations. The team can intercept video feeds from Russian FPV drones, with three screens dedicated to captured reconnaissance footage. Once a drone is detected, their mission is to identify its radio frequency and jam it, forcing it to the ground (except for fiber-optic drones, which trail up to 12 miles of fixed wiring instead of relying on a wireless link).

“We manage to block about 70%,” says Shauni, acknowledging that Russian forces achieve similar success rates. In their sector they encounter about 30-35 enemy drones daily, and on some days even more. “Last month, we seized control of the airspace. We intercepted their pilots saying they were unable to fly because of the radio interference,” he adds, although those gains waned after Russian artillery targeted their jamming equipment. The battle is constantly shifting, and Shauni concludes: “It has become a drone war, where we wield shields while attacking with swords.”

Olexandre, call sign Shauni, resting in the kitchen

A drone pilot can undertake 20 missions within a 24-hour span. Sean, an FPV pilot, works long stretches, often for days at a time, while hiding miles behind the frontline. With the Russians on the attack, the primary targets are infantry. Sean candidly remarks that he “neutralizes at least three Russian soldiers” in this ongoing aerial and ground war. Asked whether striking from a distance makes it easier to kill, he responds, “I don’t know; I just know.” Dubok, another FPV pilot sitting alongside Sean, shares the sentiment.

Other anti-drone measures are becoming increasingly sophisticated. Ukraine’s 3rd Assault Brigade is stationed in the northern Kharkiv sector, east of the Oskil River, while extensive defensive works continue to the west. Inside their base, team members scan radar displays for signs of Russian reconnaissance drones, mainly Supercams, Orlans, and Zalas. Upon identifying a target, they launch Alvalet interceptors, sending a pair of the drones up from a field of sunflowers. The small delta-wing drone, made of black polystyrene, can be held in one hand and costs around $500.

Buhan, a pilot of a drone crew operating Alvalet interceptors at a position of the 3rd Assault Brigade in the Kharkiv region
An Alvalet interceptor in a dugout of the 3rd Assault Brigade in the Kharkiv region

The Alvalet can reach a speed of 110mph, though its battery lasts just 40 minutes. It is piloted from a bunker via its onboard camera, using a hobbyist-style control system. The aim is to bring its small explosive charge close enough to the Russian drone to ensure detonation. “If you’ve never flown an FPV drone before, it’s simple to learn,” says Buhan, one of the drone operators.

Amid an unusually wet and cloudy August, the weather creates a rare lull in drone activity, as the Russians refrain from flying in such conditions. The crew hesitates to launch the Alvalet for fear of losing it, providing an opportunity for conversation. Buhan says he was a trading manager before the war, while his crewmate Daos worked in investments. “Had it not been for the war, my life would have taken a different path,” he reflects. “But we all must unite to fight for our freedom.”

Do the pilots feel apprehensive about continuing their fight in what seems to be an endless conflict? The two men look towards me and nod, their silence speaking volumes.

Source: www.theguardian.com

Transformative Concepts: The Case for Embracing AI Doctors

Our physicians are exceptional, tireless, and often accurate. Yet they are human. Increasingly, they face exhaustion, working extended hours under tremendous stress, frequently with insufficient resources. Improved conditions – more personnel and better systems – can certainly help. But even the best-funded clinics with the most committed professionals can fall short of essential standards. Doctors, like all of us, are working with brains shaped in the Stone Age. Despite extensive training, the human brain struggles to cope with the speed, pressure, and intricacy of contemporary healthcare.

Since patient care is the principal aim of medicine, what or who can best facilitate this? While AI can evoke skepticism, research increasingly illustrates how it can resolve some of the most enduring problems, including misdiagnosis, errors, and disparate access to care, and help rectify overlooked failures.

As patients, each of us will likely encounter at least one diagnostic error during our lifetime. In the UK, conservative estimates indicate that 5% of primary care visits result in an inability to diagnose correctly, putting millions at risk. In the US, diagnostic errors can lead to death or lasting harm, affecting 800,000 individuals each year. The risk of misdiagnosis is amplified for the one in ten people globally with rare diseases.

Modern medicine prides itself on being evidence-based, yet doctors don’t always practice what the evidence suggests. Studies reveal that evidence-based treatments are delivered only about half the time for adults in the US. Nor will doctors necessarily agree with one another’s diagnoses: in one study, reviewers providing second opinions on over 12,000 radiology images disagreed with the original assessment in roughly one-third of cases, leading to changes in treatment in nearly 20% of them. As workloads increase, quality continues to decline, resulting in inappropriate antibiotic prescriptions and falling cancer screening rates.

While this may be surprising, there is a comprehensible reason for these errors; from another perspective, it is remarkable that doctors get it right as often as they do. Human factors – distraction, multitasking, even our circadian rhythms – play a significant role. Burnout, depression, and cognitive aging do not just affect physicians’ wellbeing; they raise the likelihood of clinical mistakes.

Additionally, medical knowledge advances more rapidly than any doctor can keep up with. By graduation, much of what medical students have learned is already outdated. With a new biomedical article published every 39 seconds, keeping up would take around 22 hours a day; even reviewing just the summaries demands a similar investment of time. There are over 7,000 rare diseases, with 250 more identified each year.
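A rough sanity check of that arithmetic, assuming (this figure is not from the book) that skimming a single summary takes about 35 seconds:

```python
# Back-of-the-envelope check of the "roughly a full day of reading" claim.
SECONDS_PER_DAY = 24 * 60 * 60
PUBLICATION_INTERVAL_S = 39   # one new biomedical article every 39 seconds (from the text)
SKIM_TIME_S = 35              # assumed time to skim a single summary

articles_per_day = SECONDS_PER_DAY / PUBLICATION_INTERVAL_S   # ~2,215 articles
hours_needed = articles_per_day * SKIM_TIME_S / 3600          # ~21.5 hours
print(f"~{articles_per_day:.0f} summaries/day -> ~{hours_needed:.1f} hours of skimming")
```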

In contrast, AI processes medical data at breakneck speeds, operating 24/7 without breaks. While doctors may waver, AI remains consistent. Although these tools can also make mistakes, it’s important not to underestimate the capabilities of current models. They outperform human doctors in clinical reasoning related to complex medical conditions.

AI’s superpower lies in identifying patterns often overlooked by humans, and these tools have proven surprisingly adept at recognizing rare diseases—often surpassing doctors. For instance, in a 2023 study, researchers tasked ChatGPT-4 with diagnosing 50 clinical cases, including 10 involving rare conditions. It accurately resolved all common cases by the second suggestion and achieved a 90% success rate for rare conditions by the eighth guess. Patients and their families are increasingly aware of these advantages. One child, Alex, consulted 17 doctors over three years for chronic pain, unable to find answers until his mother turned to ChatGPT, which suggested a rare condition known as tethered cord syndrome. The doctor confirmed this diagnosis, and Alex is now receiving appropriate treatment.

Next comes the issue of access. Healthcare systems are skewed. The neediest individuals—the sickest, poorest, and most marginalized—are often left behind. Overbooked schedules and inadequate public transport result in missed appointments for millions. Parents and part-time workers, particularly those in the gig economy, struggle to attend physical examinations. According to the American Time Use Survey, patients sacrifice 2 hours for a mere 20-minute doctor visit. For those with disabilities, the situation often worsens. Transportation issues, costs, and extended wait times significantly increase the likelihood of missed care in the UK. Women with disabilities are over seven times more likely to face unmet needs due to care and medication costs compared to men without disabilities.

Yet it is uncommon to question the need to wait for a physician, because it has always been the norm. AI has the potential to shift that paradigm. Imagine having a doctor in your pocket, providing assistance whenever it’s needed. Labour’s 10-year NHS plan, unveiled by Health Secretary Wes Streeting, proposes that patients will be able to discuss health concerns with an AI through the NHS app. This is a bold initiative, potentially offering practical clinical advice to millions much more quickly.

Of course, this hinges on accessibility. While internet access is improving globally, substantial gaps remain, with 2.5 billion people still offline. In the UK, 8.5 million individuals lack basic digital skills, and 3.7 million families fall below the “minimum digital living standard.” This implies poor connectivity, obsolete devices, and limited support. Confidence is also a significant barrier; 21% of people in the UK feel they are behind in technological understanding.

Currently, AI healthcare research primarily focuses on its flaws. Evaluating biases and errors in technology is crucial. However, this focus overlooks the flaws and sometimes unsafe systems we already depend upon. A balanced assessment of AI must weigh its potential against the reality of current healthcare practices.

Charlotte Blease is a health researcher. Dr Bot: Why Doctors Can Fail Us – and How AI Could Save Lives is published by Yale University Press on September 9th.

Read more

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again by Eric Topol (Basic Books, £28)

Co-Intelligence: Living and Working with AI by Ethan Mollick (WH Allen, £16.99)

Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell (Pelican, £10.99)

Source: www.theguardian.com

Transformative Art: Brooklyn Exhibition Challenges and Explores White Domination in AI

At 300 Ashland Place in downtown Brooklyn, attendees gather in the plaza around a large yellow shipping container adorned with a black triangle. The triangle echoes the Flying Geese quilt pattern, which is said to have served as a covert signal for enslaved people seeking freedom along the Underground Railroad. The design and the container connect historical and contemporary narratives of the African diaspora. At the center of the artistic initiative by Brooklyn-based transmedia artist Stephanie Dinkins, a large screen showcases AI-generated imagery reflecting the diversity of the surrounding city.

Commissioned by the New York-based art nonprofit More Art and developed in collaboration with the architecture studio LOT-EK and The AI Laboratory, the exhibit, titled If We Don’t, Who Will?, runs until September 28th. It aims to confront the ideologies of white supremacy by emphasizing the resilience and cultural foundations of the Black community.

In an era where society increasingly relies on AI, Dinkins envisions a future where these models comprehend and reflect the histories, aspirations, and realities of Black and Brown communities, thereby providing a more accurate representation of U.S. demographics. She expresses belief that her initiatives will reshape the AI landscape, challenging the prevailing bias in data that fails to represent the global majority. Currently, Black individuals comprise merely 7.4% of the high-tech workforce. Studies indicate that a lack of diversity in AI can lead to biased outcomes, as seen with predictive policing tools affecting Black communities and tenant screening programs that discriminate against people of color.

“We can develop machines that offer deeper insights into our community. Our representations should not stem from outsiders, which often results in misinterpretation; instead, they should reflect our identities as human beings, not merely as consumers,” Dinkins stated. “I pose the question: ‘Can we establish a system rooted in care and generosity?'”




If We Don’t, Who Will? in downtown Brooklyn, New York City. Photo: Driely Carter

Inside the AI lab, one image features a young Black girl with an afro, her gaze directed piercingly at the viewer, a reminder that she is AI-generated. Surrounding the public art installation are QR codes linked to an app that lets users anywhere respond to prompts such as “What privilege do you hold in society?” Responses feed into the container’s system, and shortly afterward a generated image reflecting the submission appears on the large screen. That image – typically portraying a person of color, regardless of the submitter’s own identity – stays on screen until new input arrives.


Dinkins has programmed the AI-generated art to center Black and Brown perspectives. She fine-tuned various AI models, which learn patterns from the datasets they are fed, on carefully chosen material. With her team, she sourced images from the renowned Black photographer Roy DeCarava, who documented the lives of Black people in Harlem. They also incorporated African American English, shaping the models to recognize its distinct tonality so that image generation based on user stories feels more authentic. Additionally, she included images of okra – a staple in dishes of enslaved Africans and their descendants – as symbols connecting past and present within the portraits.

“We exist within a technological framework that’s altering our reality. If we remain uninformed, we lose the ability to navigate it effectively,” Dinkins remarked. While she empathizes with the public’s urge to protect privacy in the age of AI, she emphasizes the necessity of spaces that clarify that certain information is not intended for exploitation.

Democratizing AI

Dinkins was recognized as one of Time magazine’s 100 most influential people in AI for 2023. With no formal technology education, she identifies as a “tinkerer.” Her interest in AI was sparked more than a decade ago by a YouTube video featuring Bina48, an AI robot modeled on Bina Rothblatt, co-founder of a venture focused on extending human life.

Her ongoing project Conversations with Bina48 has documented video interviews with the robot since 2014. Later, she developed her own AI system intended to serve as a multigenerational memoir of a Black American family. Through that initiative, Not The Only One, Dinkins created a voice-interactive device that engages with passersby, trained on conversations between Dinkins, her niece, and her aunt.

Louis Chude-Sokei, an English professor at Boston University, says Dinkins’ work is a crucial step toward democratizing AI by bringing the technology to marginalized voices in spaces traditionally devoid of their representation. “There is a troubling precedent of algorithms producing racist and sexist content. They are often trained on data from the internet, rife with harmful stereotypes,” explained Chude-Sokei, who specializes in technology and race.




A view of If We Don’t, Who Will? in downtown Brooklyn, New York City. Photo: Driely Carter

“What Stephanie aims to explore is the possibility of training different algorithms on different datasets – datasets that liberate content and include socially marginalized perspectives,” Chude-Sokei noted.

Dinkins and fellow artists are reshaping the AI narrative, as Chude-Sokei put it: “There’s a significant cultural, political, and social realignment occurring within AI.” Dinkins embraces a philosophy she calls Afro-now-ism, which she describes as a proactive approach to building a more equitable world – a “celebration of the potential to see technology as a force we can harness rather than fear.”

For Beth Coleman, a professor at the University of Toronto specializing in technology and society, it is vital to train AI models using diverse datasets to ensure accurate representations of the world. Dinkins’ work questions which voices are integrated into the technological ecosystems, she emphasized.

“There exists a thriving energy around collaborative efforts to craft a better world together,” Coleman remarked regarding Dinkins’ initiatives. “At this juncture, it feels profoundly revolutionary.”

Source: www.theguardian.com

Transformative Impacts of Anti-Vaccination Beliefs

Sioux Falls, SD – Prior to the widespread implementation of vaccines, catastrophic infections in the U.S. claimed millions of children’s lives and left many others with lifelong health complications.

Over the following century, vaccines all but eliminated long-standing threats like polio and measles in the US, leading to a significant decrease in many other diseases. Today, however, as preventable and contagious diseases resurface, vaccine hesitancy is driving vaccination rates down. Long-established vaccines are facing skepticism from figures such as Robert F. Kennedy Jr. and other senior officials, and long-time anti-vaccine advocates now help shape policy at federal health agencies.

“These concerns, along with hesitations and queries regarding vaccines, stem from the profound success of vaccination, as it has eradicated many diseases,” explained Dr. William Schaffner, an infectious disease specialist at Vanderbilt University Medical Center. “If you don’t experience the disease, you lack respect or fear for it, thus undervaluing the vaccine.”

Anti-vaccine proponents often depict vaccines as perilous, emphasizing the rare side effects while neglecting the significantly greater risks posed by the diseases themselves.

Some Americans are acutely aware of the realities of vaccine-preventable diseases, as revealed in interviews conducted by the Associated Press.

Illness during pregnancy can impact two lives

For decades, Janith Farnham has cared for her daughter Jacque. Jacque, now 60, was born with congenital rubella syndrome, which caused complications with her hearing, vision, and heart. At the time there was no vaccine for rubella, and Janith caught the infection early in her pregnancy.

Janith, now 80, did everything within her power to help Jacque thrive, though it took a toll on her own well-being. Jacque eventually developed diabetes, glaucoma, autistic behaviors, and arthritis.

A nurse prepares measles, mumps, and rubella vaccines in Harbor Straw, New York, in 2019.
Johannes Eisele/AFP -Getty Images File

Today, Jacque lives in an adult care facility and spends four or five days a week with Janith, who is touched by her daughter’s sense of humor and loving nature. Jacque frequently showers her with affection, often signing “double I love yous.”

Given their family’s experiences, Janith finds it “more than frustrating” when individuals opt out of the MMR vaccine against measles, mumps, and rubella.

“I know what the outcome will be,” she expressed. “I simply want to spare others from enduring this.”

Delaying vaccinations can have dire consequences

Over fifty years have passed, yet Patricia Tobin vividly recalls finding her unconscious sister Karen on the bathroom floor.

In 1970, Karen was only six years old when she contracted measles. At that time, vaccines were not mandated for students in Miami. Doctors had recommended vaccination for first graders, but their mother saw no urgency.

“It’s not that she was against it,” Tobin clarified. “She believed she had time.”

As the infection progressed, Karen collapsed in the bathroom and never regained consciousness. She succumbed to encephalitis.

“We could never converse with her again,” Tobin mourned.

Today, all states require certain vaccinations for children to enroll in school, but an increasing number of families are seeking exemptions. Schaffner of Vanderbilt said that fading memories of measles outbreaks have been compounded by fraudulent research falsely asserting a link between the MMR vaccine and autism.

The result? Most states fall below the 95% vaccination threshold for kindergarten children – the minimum required to shield communities from measles outbreaks.
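The 95% figure tracks the standard herd-immunity arithmetic, 1 - 1/R0; a minimal sketch using commonly cited textbook estimates of measles’ basic reproduction number (the R0 values below are assumptions, not figures from this article):

```python
# Herd-immunity threshold: the share of people who must be immune so that
# each case infects, on average, fewer than one other person.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

for r0 in (12, 15, 18):   # commonly cited range for measles (assumed here)
    print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.0%}")
# R0 = 12 -> 92%, R0 = 15 -> 93%, R0 = 18 -> 94%; the 95% target adds a margin.
```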

Preventable diseases can lead to lasting effects

One of Lora Duguay’s earliest memories is lying in a quarantined hospital ward, her frail body surrounded by ice. She was just three years old.

In 1959, polio was rampant in Clearwater, Florida. It was one of the most dreaded diseases in the U.S., and panicked parents kept their children isolated during epidemics.

Because polio was so feared, the introduction of its vaccine was met with widespread excitement. However, the early vaccine Duguay received was only 80% to 90% effective, and with too few people immunized, the virus continued to circulate.

Polio patients receiving treatment with iron lungs in 1950 at a respiratory center in Los Angeles.
Bettmann Archive

Treatment allowed her to walk again, but she eventually developed post-polio syndrome, a neuromuscular disorder that worsens over time, and today she uses a wheelchair.

The illness that changed her life is no longer a threat in the U.S. because most children are now vaccinated against it. Today’s polio vaccine is far more effective than the early versions, not only safeguarding individuals but also preventing sporadic cases from escalating into outbreaks among vulnerable populations.

Vulnerable populations remain at risk without vaccinations

Each night, Katie Van Tornhout cradles a small plaster cast of her daughter’s feet, a painful reminder of the child she lost to whooping cough at just 37 days old.

Callie Grace was born on Christmas Eve in 2009. At just a month old – still too young to receive the Tdap vaccine herself – she developed symptoms of whooping cough after being exposed to the illness.

At the hospital, Van Tornhout recalls, the medical team tried desperately to save her, but “within minutes, she was gone.”

Today, Callie remains a part of her family’s life, and Van Tornhout advocates for vaccination, sharing her story with others.

“It’s our responsibility as adults to protect our children; that’s what parents do,” Van Tornhout said. “I watched my daughter die from a preventable illness… you don’t want to have my experience.”

Source: www.nbcnews.com

Reviving Retro Games with Kids: A Surreal and Transformative Experience

The weather was distinctly Scottish during the holidays, so instead of attending the planned party, my family and I stayed home to celebrate Hogmanay. Our youngest son’s friends and their parents joined us for dinner. As the kids in our group started getting rowdy around 9pm, we decided to host a mini midnight countdown party in Animal Crossing.

I hadn’t played Animal Crossing since lockdown. Taking care of my virtual island kept me sane while stuck in my small apartment with a baby, toddler, and teenager. Our guests brought their Switch, so we created avatars for the kids to enjoy new games together at our year-end party.

They had fun chasing each other with bug nets for a while, then gathered in the plaza with the other island residents to watch a giant countdown clock while Tom Nook, the raccoon king of the island, wore his party gear. A memory of New Year’s Eve 2021 struck me: although I was alone on the couch then, I felt accompanied by my Animal Crossing neighbors as we watched the countdown together. Back then my youngest son had only just started walking and was unsteady on his feet. Seeing him now, rowdy with his brother and eager to stay up late, felt surreal.

It’s always surreal to watch kids discover and enjoy video games. Their presence changes the game, reshaping my memories of playing it alone or with new save files. Last year we all started playing Pokemon together, which added a new layer of enjoyment to a game I loved as a child. Super Mario 3D World feels like a completely different game when played with my kids, with their reactions and interactions shaping the experience.

The Legend of Zelda: Link’s Awakening has been remade on Switch. Photo: Nintendo

Recently, my youngest son wanted to try a Zelda game, so we played Link’s Awakening on Switch. Despite my past difficult memories associated with the game, it was heartwarming to see my son navigate the game with joy and excitement.

To my parents, video games were unfamiliar and slightly suspect. Now, I act as a guide for my kids, introducing them to the worlds within the screen that fascinate them.

In the future, if our gaming interests diverge, I may become a tourist in their gaming world. For now, Animal Crossing remains a constant. I resurrected our family island for the kids to manage, pulling out my old Switch Lite. The island served as a refuge for our children during lockdown, a product of hours of labor that is now in need of revitalization. Despite my hesitations, my kids want to return and create something new.

Source: www.theguardian.com

New insights uncovered by scientists on the transformative effects of endurance training on muscles

Researchers at the University of Basel have conducted a study on muscle adaptations in mice and discovered that endurance training leads to significant muscle remodeling. This is evident in the differential gene expression in trained muscles compared to untrained muscles, with epigenetic changes playing a crucial role in these adaptations. Trained muscles become more efficient and resilient, allowing for improved performance over time. The findings shed new light on the mechanisms behind these muscle adaptations.

Endurance training comes with numerous benefits. Regular exercise not only enhances overall fitness and health but also brings about substantial changes in muscle structure. This results in decreased muscle fatigue, increased energy production, and optimized oxygen usage. The recent experiments conducted by researchers at the University of Basel, using mice as subjects, have further elucidated these muscle changes.

Professor Christoph Handschin, who has long studied muscle biology at the Biozentrum of the University of Basel, explains that it is well known that muscles adapt to physical activity. The goal of the study was to gain a deeper understanding of the processes occurring in muscles during athletic training. The researchers found that training status is reflected in gene expression.

Comparing untrained and trained mice, Handschin’s team examined how gene expression changes in response to exercise. Surprisingly, they found that relatively few genes – around 250 – differed between trained and untrained muscles at rest. After intense exercise, however, approximately 1,800 to 2,500 genes were up- or down-regulated, and which genes responded, and how strongly, depended largely on training status.
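To make that kind of comparison concrete, here is a minimal sketch of how one might count “regulated” genes from expression tables; the data, fold-change threshold, and layout are invented for illustration and are not the study’s actual analysis pipeline (which would use proper statistical tests):

```python
import numpy as np

# Toy expression matrix: rows = genes, columns = replicate muscle samples.
# All values and thresholds are invented for illustration only.
rng = np.random.default_rng(0)
n_genes = 10_000
untrained = rng.lognormal(mean=5.0, sigma=1.0, size=(n_genes, 4))
trained = untrained * rng.lognormal(mean=0.0, sigma=0.8, size=(n_genes, 4))

# Call a gene "regulated" if its mean expression changes at least two-fold.
log2_fold_change = np.log2(trained.mean(axis=1) / untrained.mean(axis=1))
regulated = np.abs(log2_fold_change) >= 1.0   # two-fold threshold (assumed)

print(f"{regulated.sum()} of {n_genes} genes pass the two-fold threshold")
```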

Untrained muscles activated inflammatory genes in response to endurance training, which could lead to muscle soreness from small injuries. In contrast, trained muscles exhibited increased activity in genes that protect and support muscle function, allowing them to respond differently to exercise stress. Trained muscles were more efficient and resilient, enabling them to handle physical loads better.

The researchers found that epigenetic modifications, chemical tags in the genome, played a crucial role in shaping muscle fitness. Epigenetic patterns determine whether genes are turned on or off, and the patterns differed significantly between untrained and trained muscles. The modifications affected important genes that control the expression of numerous other genes, ultimately activating a distinct program in trained muscles compared to untrained muscles.

These epigenetic patterns determine how muscles respond to training. Chronic endurance training induces short and long-term changes in the epigenetic patterns of muscles. Trained muscles are primed for long-term training due to these patterns and exhibit faster reactions and improved efficiency. With each training session, muscular endurance improves.

The next step for researchers is to determine whether these findings in mice also apply to humans. Biomarkers that reflect training progress can be used to enhance training efficiency in competitive sports. Additionally, understanding how healthy muscles function is crucial for developing innovative treatments for muscle wasting associated with aging and disease.

In conclusion, the study conducted by researchers at the University of Basel has unveiled the mechanisms through which muscles adapt to regular endurance training in mice. The insights gained from these findings may have implications for human performance and health. Furthermore, understanding muscle function can aid in the development of treatments for muscle-related conditions.

Source: scitechdaily.com