Examining Resilience to Alzheimer’s Disease: Why Some Individuals Remain Symptom-Free
Recent studies reveal that some individuals exhibit brain changes tied to Alzheimer’s disease yet show no symptoms like memory loss. Though the reasons remain unclear, innovative research is uncovering protective factors that may prevent cognitive decline.
Alzheimer’s disease is marked by amyloid plaques and tau tangles accumulating in the brain, which are widely believed to contribute to cognitive decline. However, some individuals, described as resilient, defy this pattern. In 2022, Henne Holstege and her team at the University Medical Center in Amsterdam discovered that certain centenarians retain good cognitive function despite these pathological changes.
Expanding on this research, the team conducted a new study of 190 deceased individuals. Among them, 88 had been diagnosed with Alzheimer’s and 53 showed no signs of the disease at death. Their ages ranged from 50 to 99, and 49 were centenarians without dementia, though 18 of them had previously shown some cognitive impairment.
The focus was on the middle temporal gyrus—an early site of amyloid plaques and tau tangles in Alzheimer’s. Interestingly, centenarians with elevated amyloid levels had tau levels akin to those without Alzheimer’s, suggesting that limiting tau accumulation is critical for resilience, according to Holstege.
While amyloid plaques are linked to cognitive decline, Holstege posits that tau accumulation may activate a cascade of symptoms. Notably, amyloid plaques alone may not cause significant tau tangling. “Without amyloid, tau can’t spread,” she explains.
Further analysis of approximately 3,500 brain proteins revealed only five were significantly associated with high amyloid plaques, while nearly 670 correlated with tau tangles. Many of these proteins are involved in crucial metabolic processes like cell growth and waste clearance. Holstege emphasizes, “With amyloid, everything changes; with tau, it’s a different story.”
In the cohort of 18 centenarians with high amyloid levels, 13 showed significant tau spread throughout the middle temporal gyrus, a pattern similar to Alzheimer’s, but the overall tau presence remained low.
This distinction is vital: diagnosis hinges on how far tau has spread, yet these findings suggest it is the amount of tau that accumulates, not merely its spread, that triggers cognitive decline. “We must understand that proliferation doesn’t mean abundance,” Holstege clarifies.
In a second study, Katherine Prater and her team at the University of Washington examined 33 deceased individuals—10 diagnosed with Alzheimer’s, 10 showing no signs, and 13 deemed resilient. Most subjects were over 80 and underwent cognitive assessments within a year before death.
In line with previous findings, the research indicated that tau was present but not accumulated in resilient brains. Though the mechanisms remain elusive, Prater theorizes that microglia—immune cells regulating brain inflammation—might play a crucial role in maintaining cognitive function in resilience.
The team also conducted genetic studies on microglia from the dorsolateral prefrontal cortex, essential for managing complex tasks. They discovered that resilient individuals’ microglia exhibited heightened activity in messenger RNA transport genes compared to those with Alzheimer’s. This suggests effective gene transport, vital for protein synthesis, is preserved in resilient brains.
“Disruptions in this process can severely impact cell function,” Prater said at the Society for Neuroscience meeting in San Diego. However, its direct relationship to Alzheimer’s resilience remains to be elucidated.
Moreover, resilient microglia demonstrated reduced activity in metabolic energy genes compared to those in Alzheimer’s patients, mirroring patterns in healthy individuals. This suggests heightened energy expenditure in Alzheimer’s due to inflammatory states that disrupt neuronal connections and lead to cell death.
“Both studies indicate that the human brain possesses mechanisms to mitigate tau burdens,” Prater concludes. Insights gained from this research could pave the way for new interventions to delay or even prevent Alzheimer’s disease. “While we aren’t close to a cure, the biology offers hope,” she stated.
Reducing emissions and capturing carbon is essential to limit warming
Hundreds of billions of tonnes of carbon dioxide must be removed from the atmosphere to keep global temperature rise under 1.5°C this century. Even the less ambitious 2°C target looks increasingly unattainable without substantial carbon capture and carbon dioxide removal (CDR) technologies alongside urgent emission reductions.
The contentious role of carbon management technologies in meeting climate objectives has been debated for some time. According to the Intergovernmental Panel on Climate Change, a degree of carbon management is “inevitable” for reaching zero emissions required to stabilize global temperatures. However, it stresses that the necessary technologies have yet to be validated at the needed scale and emphasizes the risk of providing justifications for continued emissions.
“There’s an ongoing debate among scientists about whether CDR is essential or fundamentally unfeasible,” says Candelaria Bergero from the University of California, Irvine. “Some argue that CDR is unavoidable,” she adds.
To assess what is at stake, Bergero and her research team simulated the potential for global temperature increases to stay below 2°C while analyzing CO2 management across various emission scenarios aligned with the Paris Agreement targets. These scenarios incorporated both technological CDR methods like direct air capture and nature-based solutions such as tree planting, alongside varying carbon capture applications for emissions from power plants and industrial sources.
They determined that failing to capture or remove any CO2 could lead to an additional 0.5°C rise in global average temperature by century’s end. Even halving the carbon management assumed in the scenarios would add about 0.28°C of warming, making it nearly impossible to restrict temperature rise to 1.5°C, even within frameworks that allow a temporary overshoot of that threshold.
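As a rough back-of-envelope illustration (not a figure from the study), warming scales approximately linearly with cumulative CO2 emissions via the transient climate response to cumulative emissions (TCRE); the IPCC’s best estimate is roughly 0.45°C per 1000 gigatonnes of CO2. A minimal sketch under that assumption shows why “hundreds of billions of tonnes” and “about 0.5°C” are the same order of magnitude:

```python
# Assumption: TCRE best estimate of ~0.45 C per 1000 GtCO2 (IPCC AR6),
# treating warming as linear in cumulative emissions.
TCRE_C_PER_GTCO2 = 0.45 / 1000

def warming_from_co2(gtco2: float) -> float:
    """Approximate warming (deg C) from cumulative CO2 emissions (GtCO2)."""
    return gtco2 * TCRE_C_PER_GTCO2

def co2_for_warming(deg_c: float) -> float:
    """Cumulative CO2 (GtCO2) corresponding to a given amount of warming."""
    return deg_c / TCRE_C_PER_GTCO2

# 0.5 C of avoided warming corresponds to on the order of 1000 GtCO2
print(round(co2_for_warming(0.5)))  # → 1111
```

The exact figure depends on which TCRE value is used, but any value within the IPCC’s likely range puts 0.5°C in the hundreds-of-gigatonnes regime.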
Achieving the 2°C target without any carbon management might still be feasible, but the researchers found it would have required emissions to fall by about 16% a year from 2015 onwards. Such a rapid decrease appears unlikely given that global emissions have risen over the past decade, according to Bergero.
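The scale of a sustained 16% annual cut is easy to underestimate because the reductions compound. A short illustrative calculation (the 16% figure comes from the study above; the rest is plain arithmetic):

```python
# Compounding effect of a constant annual percentage cut in emissions.
def remaining_fraction(rate: float, years: int) -> float:
    """Fraction of initial emissions left after `years` of compounding cuts."""
    return (1 - rate) ** years

# A 16%/yr decline halves emissions in about 4 years and leaves
# under a fifth of the starting level after a decade.
for years in (4, 10):
    print(years, round(remaining_fraction(0.16, years), 3))  # 4 → 0.498, 10 → 0.175
```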
Furthermore, initiatives for scaling up carbon management aren’t progressing swiftly enough. According to Steve Smith at the University of Oxford, only around 40 million tonnes of CO2 are currently captured and stored globally each year, and only about 1 million tonnes are removed directly from the air.
“Like with other emissions reductions, countries frequently discuss ambitious long-term goals, yet lack immediate measures to implement the billions of tons of reductions necessary for these pathways to succeed,” he states.
For those rare individuals who dream of conversing with Keir Starmer, a new AI model has arrived.
The platform, called Nostrada, was developed by a former chief of staff to a Conservative cabinet minister and is designed to let users engage with AI representations of all 650 members of the UK Parliament.
Its founder, Leon Emirali, who previously worked for Steve Barclay, built Nostrada so that users can converse with a “digital twin” of each MP, replicating their political views and mannerisms.
This service targets diplomats, lobbyists, and the general public, helping users explore each MP’s position on various matters and find relevant colleagues.
“Politicians are never short of opinions, which provide us with ample data sources,” Emirali stated. “They have a viewpoint on everything, and the quality of an AI product relies heavily on the data it is built upon.”
The politicians themselves may be the first to question the chatbots’ reliability.
When the Guardian challenged the digital avatars of cabinet members, most declined to respond, while health secretary Wes Streeting’s avatar voted for himself.
These models draw on the vast range of written and spoken material politicians have made available online. No matter how hard you try to sway them, their stances won’t change: the models can’t learn from new input, so every interaction remains static. It is this behaviour the Guardian set out to examine.
The AI is already in use among various political circles, including accounts registered to Cabinet Office email addresses and two separate accounts linked to foreign embassy emails that have been used to research the prime minister and his cabinet. Emirali said that several notable lobbying and marketing firms have also used the technology in recent months.
Despite Nostrada’s many applications, Emirali concedes that AI could be a “shortcoming” for future voters who might rely on it entirely to shape their understanding of politics.
He remarked, “Political nuances are too intricate. AI may not be adequately comprehensive for voters to depend on fully. The hope is that for those already familiar with politics, this tool proves to be incredibly beneficial.”
At first, the coverage encouraged me. The portrayals of the book were generally favourable and factual. However, their interpretations of ChatGPT’s involvement did not align with my own understanding. While it’s true that I included conversations with ChatGPT in the book, my aim was critique, not collaboration. In interviews and public forums, I consistently cautioned against using large language models, like ChatGPT, for self-expression. Did these writers misconstrue my work? Or did I inadvertently lead them astray?
In my work, I document how major tech entities exploit human language for their own gain. We’ve made this possible, as we benefit from utilizing their products. It embodies the dynamics of Big Tech’s scheme to amass wealth and influence. We find ourselves both victims and beneficiaries. I’ll convey this complicity through my own online history: my Google searches, Amazon reviews, and yes, my dialogues with ChatGPT.
The Polite Politics of AI
The book opens with an epigraph highlighting the political potency of language, quoted from Audre Lorde and Ngũgĩ wa Thiong’o, followed by an initial conversation where I prompt ChatGPT to respond to my writing. This juxtaposition is intentional. I wanted feedback on various chapters to see how these exercises reflect both my language choices and the political implications of ChatGPT.
I maintained a polite tone, stating, “I’m nervous.” OpenAI, the creator of ChatGPT, claims its products excel when given clear instructions. Research indicates that when we engage kindly, ChatGPT responds more effectively. I framed my requests with courtesy; when it complimented me, I expressed my gratitude; when noting an error, I softened my critique.
ChatGPT, in turn, was designed for polite interaction. Its output is often described as “bland” or “generic”, akin to a beige office building. OpenAI’s products are engineered to “sound like a colleague”, with words chosen to embody qualities such as “ordinary”, “empathetic”, “kind”, “rationally optimistic”, and “attractive”. These strategies aim to make the product appear “professional” and “friendly”, fostering a sense of safety. OpenAI recently discussed rolling back an update that had pushed ChatGPT toward sycophantic responses.
Trust is a pressing challenge for AI companies, especially since their products frequently produce inaccuracies and reflect sexist, racist, and US-centric cultural assumptions. While companies strive to address these issues, they persist; OpenAI found that its latest system generates errors at even higher rates than its predecessor. In the book, I discussed inaccuracies and bias, demonstrating them with examples. When I prompted Microsoft’s Bing Image Creator for visuals of engineers and space explorers, it rendered a cast of exclusively male figures. When my father asked ChatGPT to edit his writing, it converted his accurate Indian English into American English. Research indicates that such biases are widespread.
Within my dialogue with ChatGPT, I sought to illustrate how a veneer of product neutrality could dull our critical responses to misguided or biased output. Over time, ChatGPT seemed to encourage me towards more favorable portrayals of Big Tech, describing OpenAI’s CEO Sam Altman as “forward-thinking and pragmatic.” I have yet to find research confirming whether ChatGPT has a bias towards Big Tech entities, including OpenAI or Altman. We can only speculate about the reasons for this behavior in our interactions. OpenAI maintains that its products should not attempt to sway user opinions, but when I queried ChatGPT on the matter, it attributed the bias to limitations in training data, even as I believe deeper issues play a part.
When I asked ChatGPT about its rhetorical style, it replied: “My manner of communication is designed to foster trust and confidence in my responses.”
Nevertheless, by the end of our exchange, ChatGPT had suggested a conclusion for my book: an epilogue in which Altman guided the discussion towards accountability for AI products’ deficiencies, though he had never said any such thing to me.
I felt my argument had been made: the ChatGPT-generated epilogue was both inaccurate and biased. The conversation concluded amicably, and I felt triumphant.
I Thought I Was Critiquing the Machine; Headlines Framed Me as Collaborating with It
Then, headlines emerged (and occasionally articles or reviews) referring to my use of ChatGPT as a means of self-expression. In interviews and publications, many asked if my work was a collaboration with ChatGPT. Each time, I rejected the premise by citing the Cambridge Dictionary definition of collaboration. Regardless of how human-like ChatGPT’s rhetoric appears, it is not a person.
Of course, OpenAI has its aspirations. Among them, it aims to develop AI that “benefits all of humanity.” Yet, while the organization is governed by non-profit principles, its investors still seek returns, an environment that incentivizes getting users of ChatGPT to adopt additional products. Such objectives are more easily attained if those products are perceived as trustworthy partners. Last year, Altman predicted that AI would function as “an exceedingly competent colleague who knows everything about my life.” In an April TED Talk, he suggested that AI could even influence social dynamics positively. “I believe AI will enable us to surpass intelligence and enhance collective decision-making,” he remarked this month during testimony before the US Senate, referencing the potential integration of “agents in their pockets” with government operations.
Upon reading headlines echoing Altman’s sentiments, my initial instinct was to blame the headline writers’ appetite for sensationalism, a tactic rewarded by the algorithms that increasingly dictate the content we consume. My second instinct was to hold accountable the companies behind those algorithms, including the AI firms whose chatbots are trained on published content. When I asked ChatGPT about contemporary discussions of “AI collaborations”, it mentioned me and cited some of the reviews that had irritated me.
To clarify, I returned to my book to determine whether I had somehow misrepresented the notion of collaboration. Initially, it appeared that I hadn’t. I identified approximately 30 references to “collaboration” and similar terms. However, 25 of these originated from ChatGPT within interstitial dialogues, often elucidating the relationship between humans and AI products. None of the remaining five pertained to AI “collaboration” unless they referenced another author or were presented cynically: for instance, regarding the expectations of writers “refusing to cooperate with AI.”
Was I an Accomplice to AI Companies?
But was it significant that I seldom used the term? I speculated that those discussing my ChatGPT “collaboration” might have drawn interpretations from my book, even if not explicitly stated. What led them to believe that merely quoting ChatGPT would consistently unveil its absurdities? Why didn’t they consider the possibility that some readers would be persuaded by ChatGPT’s arguments? Perhaps my book inadvertently functioned as collaboration—not because AI products facilitated my expression, but because I had aided the corporations behind them in achieving their goals. My book explores how those in power leverage our language to their advantage, questioning what roles we play as accomplices. Now, it seemed that the very public reception of my book was intertwined in this dynamic. It was a sobering realization, but perhaps I should have anticipated it. There was no reason my work should be insulated from the same exploitation plaguing the world.
Ultimately, my book focused on how we can assert independence from the agendas of powerful entities and actively resist them, serving our own interests. ChatGPT suggested closing with a quote from Altman, but I opted for one from Ursula K. Le Guin: “We live in capitalism. Its power seems inescapable.” I pondered where we are headed. How can we ensure that governments sufficiently restrain the wealth and power of big technology? How can we fund and develop technology that aligns with our needs and desires, devoid of exploitation?
I imagined that my rhetorical struggle against powerful tech began and concluded within the confines of my book. Clearly, that was not the case. If the headlines I encountered truly reflect the end of that struggle, it indicates I was losing. Yet, readers soon reached out to me, stating that my book catalyzed their resistance against Big Tech. Some even cancelled their Amazon Prime memberships. I ceased to seek personal advice from ChatGPT. The fight continues, and collaboration among humans is essential.
Experts often suggest that it takes 10,000 hours of practice to excel in any field. However, not everyone possesses the talent required to become an Olympian or Paralympian. While practice can enhance performance, genetic factors affecting both physical capacity and mental aptitude likely make the difference between “good” and “great” athletes.
Athletic success is also influenced by external factors beyond an individual’s control, such as birthdate. For instance, in the 2010-11 UEFA youth football tournaments, 43% of players were born between January and March, early in the selection cycle, while only 9% were born between October and December.
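A quick illustrative calculation shows how skewed that distribution is. The 43% and 9% figures come from the tournament data above; the baseline assumption, a uniform spread of birthdays at roughly 25% per quarter, is mine:

```python
# Relative age effect: compare observed birth-quarter shares against a
# uniform baseline of ~25% of players per quarter.
observed = {"Jan-Mar": 0.43, "Oct-Dec": 0.09}  # shares reported above
expected = 0.25  # assumption: birthdays spread evenly across quarters

for quarter, share in observed.items():
    print(f"{quarter}: {share / expected:.2f}x the expected share")
# Jan-Mar players are over-represented ~1.7x; Oct-Dec under-represented ~0.36x,
# so early-quarter births outnumber late-quarter births nearly 5 to 1.
```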
According to many sports psychologists, older children starting school may have an advantage in sports due to factors like size, strength, and confidence. However, the birth month advantage may also be influenced by social factors such as teachers’ perceptions of a child’s abilities.
Therefore, the ideal approach to becoming an Olympian may involve exploring various interests as a child and then focusing on activities where natural talent and, most importantly, enjoyment are found.