“The pain was like being struck by lightning and being hit by a freight train at the same time,” says Victoria Gray. Speaking to New Scientist, she reflects: “Everything has changed for me now.”
Gray once endured the debilitating symptoms of sickle cell disease, but in 2019 she found hope through CRISPR gene editing, a pioneering technology enabling precise modification of DNA. In 2023, the treatment she received became the first approved CRISPR therapy.
Hundreds of clinical trials are now exploring CRISPR-based therapies, and they represent just the beginning of the technology’s potential. This revolutionary tool is poised to treat a wide range of diseases beyond genetic disorders. A single CRISPR dose, for example, may drastically lower cholesterol levels, significantly reducing the risk of heart attack and stroke.
Safety questions remain at this early stage, but there is optimism that CRISPR could eventually be routinely employed to modify children’s genomes, potentially reducing their risk of common diseases.
Additionally, CRISPR is set to revolutionize agriculture, facilitating the creation of crops and livestock that resist diseases, thrive in warmer climates, and are optimized for human consumption.
Given its transformative capabilities, CRISPR is arguably one of the most groundbreaking innovations of the 21st century. Its strength lies in correcting genetic “misspellings.” This involves precisely positioning the gene-editing tool within the genome, akin to placing a cursor in a lengthy document, before making modifications.
Microbes evolved this editing machinery as a defense against viruses. Before 2012, researchers had identified various natural gene-editing proteins, each limited to targeting a single location in the genome. Altering the target sequence required redesigning the protein’s DNA-binding domain, a time-consuming process.
However, scientists discovered that bacteria have developed a diverse range of gene-editing proteins that bind to RNA—a close relative of DNA—allowing faster sequence matching. Producing RNA takes mere days instead of years.
In 2012, Jennifer Doudna and her team at the University of California, Berkeley, along with Emmanuelle Charpentier of the Max Planck Institute for Infection Biology, revealed the mechanics of one such gene-editing protein, CRISPR-Cas9. By simply supplying a “guide RNA” in a specific format, they could target any desired sequence.
Today, thousands of CRISPR variants are in use for diverse applications, all relying on guide-RNA targeting. This paradigm-shifting work earned Doudna and Charpentier the 2020 Nobel Prize in Chemistry.
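To make the targeting idea concrete, here is a minimal, illustrative Python sketch, not any real bioinformatics pipeline: it scans a DNA string for sites that match a 20-nucleotide guide sequence and are followed by the NGG “PAM” motif that Cas9 requires. The genome and guide below are invented toy sequences.

```python
# Illustrative sketch of how guide-RNA targeting narrows Cas9 to one site.
# The genome and guide are made-up toy strings, not real biological data.

def find_cas9_sites(genome: str, guide: str) -> list[int]:
    """Return positions where `guide` matches and is followed by an NGG PAM."""
    sites = []
    glen = len(guide)
    for i in range(len(genome) - glen - 2):
        target = genome[i:i + glen]
        pam = genome[i + glen:i + glen + 3]
        # Cas9 cuts only where the protospacer matches the guide
        # AND the next three bases fit the N-G-G pattern.
        if target == guide and pam[1:] == "GG":
            sites.append(i)
    return sites

genome = "ATGCGTACGTTAGCCTGACCGATTACGGATTGGCAATG"
guide = "TAGCCTGACCGATTACGGAT"  # 20-nt guide, toy example
print(find_cas9_sites(genome, guide))  # -> [10]
```

Retargeting means swapping one 20-letter string, which is the whole point: the protein stays the same while the RNA does the addressing.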
“The gut microbiome has transformed our understanding of human health,” says Tim Spector of King’s College London, co-founder of the nutrition app Zoe. “We now recognize that microbes play a crucial role in metabolism, immunity, and mental health.”
Although significant advancements in microbiome research have surged in the past 25 years, humans have a long history of utilizing microorganisms to enhance health. The Romans, for instance, employed bacterial-based treatments to “guard the stomach” without comprehending their biological mechanisms.
In the 17th century, pioneering microscopist Antony van Leeuwenhoek observed the parasite Giardia in his own stool. It took scientists another two centuries to confirm his discoveries, and not until the 21st century did the profound impact of gut and skin microbes on health become evident.
By the 1970s, researchers had determined that gut bacteria could influence the breakdown of medications, potentially modifying their efficacy. Fecal transplant studies hinted at how microbial communities could restore health. But it was the rapid advances in gene sequencing and computing in the 2000s that truly revolutionized the field. Early sequencing revealed that every individual possesses a distinct microbial “fingerprint” of bacteria, viruses, fungi, and archaea.
In the early 2000s, groundbreaking studies showed that the microbiome and immune system engage in direct communication. This recast the microbiome as a dynamic participant in our health, influencing systems from the pancreas to the brain.
Exciting findings continue to emerge; fecal transplants are proving effective against Clostridium difficile infections, while microorganisms from obese mice can induce weight gain in lean mice. Some bacterial communities have shown potential to reverse autism-like symptoms in mice. Recently, researchers have even suggested that microbial imbalances could trigger diabetes and Parkinson’s disease. “Recent insights into the human microbiome indicate its influence extends far beyond the gut,” states Lindsay Hall from the University of Birmingham, UK.
Researchers are gaining a clearer understanding of how microbial diversity is essential for health and how fostering it may aid in treating conditions like irritable bowel syndrome, depression, and even certain cancers. Studies are also investigating strategies to cultivate a healthy microbiome from early life, which Hall believes can have “profound and lasting effects on health.”
In just a few decades, the microbiome has evolved from an obscure concept to a pivotal consideration in every medical field. We are now entering an era that demands rigorous testing to differentiate effective interventions from overhyped products, all while shaping our approach to diagnosing, preventing, and treating diseases.
In today’s digital landscape, hostility often overshadows collaboration. Remarkably, Wikipedia, a publicly editable encyclopedia, has emerged as a leading knowledge resource worldwide. “While it may seem improbable in theory, it remarkably works in practice,” says Anusha Alikhan of the Wikimedia Foundation, the nonprofit behind Wikipedia.
Founded by Jimmy Wales in 2001, Wikipedia continues to thrive, although co-founder Larry Sanger left the project the following year and has since expressed ongoing criticism, claiming it is “overrun by ideologues.”
Nonetheless, Sanger’s opinions are not widely echoed. Wikipedia boasts over 64 million articles in more than 300 languages, drawing an astonishing 15 billion visits monthly, and currently ranks as the 9th most visited website globally. “No one could have anticipated it would become such a trusted online resource, yet here we are,” Alikhan says.
Building trust on a massive scale is no small achievement. Although the Internet has democratized access to human knowledge, it often presents fragmented and unreliable information. Wikipedia disrupts this trend by allowing anyone to contribute, supported by approximately 260,000 volunteers worldwide, making an impressive 342 edits per minute. A sophisticated system grants broader editing rights to responsible contributors, fostering trust that encourages collaboration even among strangers.
Wikipedia also actively invites special interest groups to create and edit content. For instance, the Women in Red project tackles gender disparities, while other initiatives focus on climate change and the history of Africa. All articles uphold strict accuracy standards, despite critics like Sanger alleging bias.
As an anomaly in the technology sector, Wikipedia operates without advertising, shareholders, or profit motives. It has maintained this unique position for over two decades with great success.
However, the rise of artificial intelligence poses new challenges. AI can generate misleading content, strain Wikipedia’s servers as companies scrape it for training data, and cut into traffic and donations as AI-driven search summaries answer readers’ questions directly.
“Every so often, a groundbreaking product emerges that reshapes our reality,” said Steve Jobs during Apple’s 2007 iPhone presentation. Tech executives routinely hype their innovations, but this proclamation was substantiated. The iPhone not only popularized apps but also put compact, powerful computers into our daily lives.
However, this transformation has drawbacks. Much like a snail retreating into its shell, we can withdraw into our devices at any moment, breeding social anxiety. Citing such concerns along with safety issues, numerous countries have restricted mobile phone use in schools, and Australia has implemented a total ban on social media for users under 16 as of December 2025. Reliance on a constantly connected device can also erode our sense of privacy, says technology historian Mar Hicks of the University of Virginia: “This technology is acclimating users to significantly less privacy, not only in public spaces but also within their own homes.”
Smartphones have become far more than tools, as anthropologist Daniel Miller of University College London notes. “They’ve expanded our personal space,” he says. These handheld digital environments give seamless access to the virtual worlds of our friends and family, leaving us continuously navigating between our physical and digital lives.
The global influence of smartphones is undeniable. According to GSMA, the mobile operators’ industry association, over 70% of the global population now owns a smartphone. In many low-income countries, people increasingly bypass traditional desktop computers altogether. Smartphone-driven fintech platforms facilitate transactions for 70 million users across 170 countries, removing the necessity for conventional banks. Furthermore, farmers utilize smartphone applications for crop monitoring, and doctors employ them in hospitals to reduce reliance on costly machinery.
Moreover, the ramifications of smartphones extend far beyond their immediate use. The rapid miniaturization of electrical components like cameras, transistors, and motion sensors has enhanced processing power and introduced new potentials. This technological evolution has spurred numerous 21st-century innovations, including versatile drones, smart wearables, virtual reality headsets, and miniature medical implants.
Batteries and solar energy technologies have been evolving for centuries, but they reached a pivotal moment in 2016, when the first Gigafactory opened in Nevada to produce cutting-edge batteries, electric motors, and solar cells at enormous scale. The name ‘Gigafactory’ reflects those vast production capabilities.
The renewable energy potential—including solar, wind, and hydropower—is staggering. In merely a few days, the sun provides more energy to Earth than we can harvest from all fossil fuel reserves combined.
Efficiently harnessing this power remains a challenge. The photovoltaic effect, discovered by Edmond Becquerel in 1839, allows light to generate an electric current. Although the first practical solar panels emerged in the 1950s, only in the 2010s did solar technology advance enough to rival fossil fuels. Meanwhile, lithium-ion batteries, invented in the 1980s, have provided reliable energy storage.
The Gigafactory has been instrumental in advancing these solar and battery technologies, not through new inventions but by integrating every component of electric vehicle production. The approach echoes Henry Ford’s assembly lines, populating the world with Teslas instead of fossil fuel-burning vehicles. “Batteries have made it possible to utilize solar power efficiently, and electric vehicles are now a reality,” says Dave Jones of Ember, a British energy think tank.
The economies of scale introduced by gigafactories have extended their impact beyond electric vehicles. “These batteries will enable a host of innovations: smartphones, laptops, and the capacity to transport energy efficiently at lower costs,” remarks Sarah Hastings-Simon from the University of Calgary, Canada.
Due to recent advancements, the costs associated with these technologies have plummeted. Many experts believe that the electrification of energy systems is now inevitable. In states like California and countries such as Australia, the abundance of solar energy has led grid operators to offer electricity at no cost. Battery technology is rapidly improving, enabling the development of solar-powered planes, ships, and long-haul trucks, effectively breaking our reliance on fossil fuels that have dominated energy systems for centuries.
In the past 25 years, the study of human evolution has grown remarkably, as a surge of discoveries attests. Archaeologists have unearthed more fossils, species, and artifacts from diverse locations, from the diminutive “hobbits” of the Indonesian island of Flores to the enigmatic Homo naledi, known solely from a single deep cave system in South Africa. Simultaneously, advanced analytical techniques have enhanced our understanding of these finds, revealing a treasure trove of information about our origins and extinct relatives.
This whirlwind of discoveries has yielded two major lessons. First, since 2000, the human fossil record has been extended further back in time. Previously, the oldest known hominin fossil was the 4.4-million-year-old Ardipithecus ramidus, but discoveries in 2000 and 2001 unearthed even older species: Ardipithecus kadabba from around 5.8 million years ago, Orrorin tugenensis from 6 million years ago, and Sahelanthropus tchadensis from 7 million years ago. In 2022, a further fossil was tentatively assigned to the Orrorin lineage, suggesting it persisted slightly more recently than O. tugenensis.
According to Clément Zanolli of the University of Bordeaux, the discovery of these early hominin fossils represents “one of the great revolutions” in our understanding of evolution.
The second major lesson has enriched the narrative of how our species emerged from earlier hominins. By 2000, genetic evidence had established that all non-Africans descend from ancestors who lived in Africa around 60,000 years ago. This indicated that modern humans evolved in Africa and subsequently migrated, replacing other hominin species.
However, the sequencing of the first Neanderthal genome, published in 2010, opened a new chapter, along with DNA analyses of several other ancient humans. These studies revealed that our species interbred with Neanderthals, Denisovans, and possibly other groups, creating a complex tapestry of human ancestry.
Skeletal research had long hinted at interbreeding, since many fossils exhibit traits that defy clear species categorization, as noted by Sheela Athreya at Texas A&M University. In 2003, Eric Trinkaus and colleagues described a jawbone excavated from Peștera cu Oase, Romania, as a human-Neanderthal hybrid based on its morphology. Genetic testing in 2015 confirmed that the Oase individual had a Neanderthal ancestor just four to six generations back.
This evidence highlights that our species did not merely expand from Africa; rather, our population absorbed genetic contributions from Neanderthals and Denisovans along the way. Genetically, we are a mosaic, a fusion of countless years of diverse human lineages.
You’ve likely encountered the parable of the blind men and the elephant, where each individual’s perspective is limited to one part, leading to a distorted understanding of the whole. This concept resonates deeply in neuroscience, which has historically treated the brain as a collection of specialized regions, each fulfilling unique functions.
For decades, our insights into brain function came from serendipitous events, such as the case of Phineas Gage, a 19th-century railroad worker whose personality dramatically changed after a severe brain injury. More recent brain-stimulation studies have linked the amygdala with emotion and the occipital lobe with visual processing, yet this provides only a fragmented picture.
Brain regions demonstrate specialization, but this does not encapsulate the entire picture. The advent of imaging technologies, particularly functional MRI and PET scans in the late 1990s and early 2000s, revolutionized our comprehension of the brain’s interconnectedness. Researchers discovered that complex behaviors stem from synchronized activity across overlapping neural networks.
“Mapping brain networks is playing a crucial role in transforming our understanding in neuroscience,” says Luiz Pessoa of the University of Maryland.
This transformative journey commenced in 2001 when Marcus Raichle, now at Washington University in St. Louis, characterized the Default Mode Network (DMN). This interconnected network activates during moments of rest, reflecting intrinsic cognitive processes.
In 2003, Kristen McKiernan, then at the Medical College of Wisconsin, and her team showed that DMN activity rises during familiar, undemanding tasks, when the mind is free to wander into daydreaming and introspection, providing a “resting state” baseline against which task-driven brain activity can be measured. Researchers went on to correlate DMN activity with sophisticated behaviors, including emotional intelligence and theory of mind.
As discoveries proliferated across other networks governing attention, language, emotion, memory, and planning, our understanding of mental health and neurodiversity evolved. Differences in these networks are now thought to be linked with conditions including Parkinson’s disease, PTSD, depression, anxiety, and ADHD.
Network science has become a pivotal field, deepening our understanding of disorders from autism, characterized by atypical activity in social salience networks (those that detect and prioritize salient social cues), to Alzheimer’s disease, where new research indicates abnormal proteins spread along network pathways. Brain networks have also inspired the artificial neural networks underpinning AI systems like ChatGPT.
Neural networks have not only reshaped our understanding of brain functionalities but also the methodologies for diagnosing and treating neurological disorders. While we might not yet perceive the entirety of the elephant, our view is undeniably clarifying as science progresses.
In 2005, physicists David Frame and Myles Allen were headed to a scientific conference in Exeter, England. According to Frame, they were “playing around” with climate models in preparation for their presentation.
At that time, most research centered on stabilizing the concentration of greenhouse gases in the atmosphere to avert severe climate change. However, scientists faced challenges in predicting how much the planet would warm if these concentrations reached specific levels.
Frame and Allen approached the issue from a different angle. Instead of focusing on atmospheric concentrations, they examined emissions, asking what would happen if humanity simply stopped emitting carbon dioxide. Running a climate model on the train, they found that global temperatures settled at a new stable level. In other words, global warming would halt if humanity achieved “net-zero” carbon dioxide emissions. “It was pretty cool to sit on the train and see these numbers for the first time and think, ‘Wow, this is a big deal,’” Frame recalled.
This groundbreaking presentation and the subsequent Nature paper published in 2009 reshaped the thinking within the climate science community. Prior to the net-zero concept, it was generally accepted that humans could emit around 2.5 gigatons annually (approximately 6% of current global emissions) while still stabilizing global temperatures. However, it became clear that to stabilize the climate, emissions must reach net zero, balanced by equivalent removals from the atmosphere.
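The underlying logic can be captured in a back-of-the-envelope model: warming tracks cumulative CO2 emissions (the "transient climate response to cumulative emissions", or TCRE), so once annual emissions hit net zero, the cumulative total, and with it the temperature, stops rising. Here is a toy Python sketch under that assumption, with illustrative numbers rather than Frame and Allen's actual model:

```python
# Toy cumulative-emissions model: warming is roughly proportional to the
# running total of CO2 emitted (the TCRE relationship). All numbers are
# illustrative only, not taken from Frame and Allen's model.

TCRE = 0.00045  # deg C of warming per GtCO2 of cumulative emissions (approx.)

emissions = [40.0] * 20 + [0.0] * 20  # 20 years at ~40 GtCO2/yr, then net zero
cumulative = 0.0
for year, e in enumerate(emissions, start=1):
    cumulative += e
    warming = TCRE * cumulative
    if year in (1, 20, 21, 40):
        print(f"year {year:2d}: cumulative {cumulative:6.0f} GtCO2, "
              f"warming {warming:.2f} C above the model's baseline")

# Once emissions hit net zero, cumulative emissions - and hence warming -
# stop rising: temperatures stabilise rather than continuing upward.
```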
The global consensus surrounding the need to achieve net zero CO2 emissions rapidly gained traction, culminating in a landmark conclusion in the 2014 Intergovernmental Panel on Climate Change (IPCC) report. The subsequent question was about timing: when must we reach net zero? At the 2015 Paris Agreement, nations committed to limiting temperature increases as close to 1.5°C as feasible, aiming for net-zero emissions by around mid-century.
Almost immediately, governments worldwide faced immense pressure to establish net-zero targets, and hundreds of companies joined the movement, recognizing the economic opportunities of the transition to clean energy. This “net-zero fever” has produced some dubious commitments that rely excessively on global forests and wetlands to absorb human pollution. Nevertheless, the shift has altered the course of this century: approximately 75% of global emissions are now covered by net-zero pledges, and projections of warming over this century have fallen from around 3.7–4.8°C to 2.4–2.6°C under existing climate commitments.
When I lived with my parents, my mother claimed she could always sense when my period was imminent. I vividly recall the chaos that ensued when she mistakenly bought chicken breast instead of thighs on an evening I was tasked with cooking.
Such dramatic reactions are typical of premenstrual syndrome (PMS), the central topic of The Period Brain, the new book by Sarah Hill, who has previously examined the impact of birth control on the brain. In it, she outlines methods for managing PMS symptoms with a focus on lifestyle adjustments.
Women’s health has been largely overlooked in the scientific arena for years. Hill, who possesses a PhD in evolutionary psychology and leads a health and relationship lab at Texas Christian University, is in a good position to address these gaps. Unfortunately, her arguments can sometimes feel superficial.
At one point, she links PMS to the standard guidance that women should eat around 2,000 calories a day, a figure that ignores the extra 140 or so calories needed during the luteal phase of the menstrual cycle. Hill posits that rigid adherence to the guideline fuels cravings and a fraught relationship with food, which can exacerbate symptoms.
Yet any woman paying such close attention to her caloric intake seems unlikely to be derailed by a 140-calorie shortfall. To me, Hill’s reasoning oversimplifies the onset of PMS.
Although she references plenty of scientific studies, Hill seldom shares details such as participant numbers or the duration of interventions, details that matter because small studies often cannot account for variation between participants, including genetic differences.
The potential genetic influence on PMS is another topic Hill only lightly touches on. While no specific genes linked to PMS have been identified, the condition is reported to occur more frequently in identical twins than in fraternal ones, a classic signature of heritability. Given this, it would not be surprising if genetic factors also shaped other aspects of the menstrual cycle.
Hill frequently suggests symptom relief through inadequately tested supplements, increased sun exposure, and varying exercise routines throughout the menstrual cycle (though the last point may hold some merit). However, acknowledging that severe symptoms could stem from genetic factors rather than merely lifestyle choices would be beneficial.
One thing I concur with Hill about is the need for more research across the menstrual cycle’s stages, to understand how these phases affect responses to treatment, from psychological therapies to drug metabolism. I also agree that mood swings may be easier to cope with when recognized as natural responses to hormonal changes, which might even ease my anxiety about chicken.
I didn’t finish The Period Brain with any groundbreaking insights on reducing PMS. Nevertheless, every book on women’s health contributes to destigmatizing issues like PMS and could encourage more extensive research.
Actor Orlando Bloom recently made headlines when it was reported that he paid a staggering £10,000 ($13,600) to have his blood removed, separated, and filtered.
This dramatic treatment underscores escalating concern about a disquieting reality: these minuscule particles are now almost impossible to evade.
Research indicates that microplastics are ubiquitous, found everywhere from the heights of Mount Everest to the depths of our brains. Their omnipresence, in the media as much as the environment, raises pressing questions about the safety of having microscopic plastic flakes adrift in our bodies.
Once thought harmless, microplastics are now being linked to various illnesses. But given the lack of scientific consensus at this nascent stage, should we be getting tested and worrying about their impact on our bodies? And are we really justified in lining up to “clean” our blood?
Plastic Proof
The term “microplastic” refers to plastic particles or fibers smaller than 5mm (0.19 inches). These particles are often minuscule, necessitating a microscope for proper observation.
Scientists also use the term “nanoplastic” for particles smaller than 0.001mm (39.4 microinches), which are difficult to detect even with advanced microscopy. Evidence suggests they can be released from plastic materials and disseminate into their environments.
My research group focuses on quantifying plastic and other particles in the air we breathe, both indoors and outdoors. In London, we have observed that airborne microplastics can penetrate deep into our lungs.
To determine the presence of microplastics in the body, samples of tissue or blood are processed and filtered to concentrate any microplastic content. Analysis then relies either on chemical techniques that quantify the mass of plastic in a sample, or on physical and chemical methods that count individual plastic particles and record their size and shape.
Each method has its merits, but all share similar drawbacks: modern laboratories are rife with microplastic pollution, from plastic consumables to the clothing of the personnel who handle them.
This means that the very process of extracting and testing microplastic samples can lead to contamination. Consequently, samples often reveal microplastic particles that were previously considered too large to be absorbed and distributed throughout the body.
Some reports indicate that humans might consume an equivalent of one teaspoon of plastic daily.
Generally, particles smaller than 0.001mm (39.4 microinches) can traverse the lungs and enter the bloodstream. This occurs through the thin alveolar tissue in the lungs that separates the air-filled alveolar sacs from the small surrounding capillary blood vessels.
In the intestines, minute particles can enter the lymphatic system, the body’s waste-removal network. From there, the tiniest particles may pass into the bloodstream, while larger aggregates become trapped in the intestinal lining.
Thus, lab contamination may account for the larger plastics detected within the body.
Another complication is that some biological components within samples emit signals resembling those of plastic. Fat, in particular, can distort the signals used to identify common polymers such as polyethylene. If samples are not meticulously processed, this can lead to exaggerated estimates of the plastic present.
Taking all of this into account, the assumed high levels of microplastics in our bodies may be overstated. Variations in estimates range from nanograms to milligrams, influenced by factors like study methodology, location, tissue type, and analysis techniques.
Recent, more stringent research suggests around 0.15µg (0.00000015g) of plastic per millilitre of blood, which across the whole body amounts to less than the weight of a single human hair.
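As a rough sanity check on that figure, assuming a typical adult blood volume of about five litres (my assumption, not the study's):

```python
# Back-of-the-envelope check on the blood-plastic estimate.
# A blood volume of ~5 litres is an assumed typical adult value.
plastic_per_ml_ug = 0.15      # micrograms of plastic per millilitre
blood_volume_ml = 5 * 1000    # ~5 litres of blood in an adult

total_ug = plastic_per_ml_ug * blood_volume_ml
print(f"total plastic in blood: ~{total_ug:.0f} micrograms "
      f"({total_ug / 1000:.2f} mg)")   # ~750 ug, i.e. ~0.75 mg

# A single strand of human hair weighs on the order of a milligram,
# so the whole-body total is indeed less than one hair's weight.
```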
Moreover, this study predominantly focuses on polystyrene, the easiest microplastic to analyze.
Plastic People
Considering these levels, it may be more critical to focus on where microplastics accumulate in our bodies rather than their sheer quantity.
Nonetheless, accurately measuring microplastic accumulation in different body parts is challenging. A recent study posits that the brain is a notable accumulation point, containing on average an amount of plastic roughly equal in weight to 4.5 bottle caps.
Not only are these levels considerably high, but the detected plastics largely consist of polyethylene, which poses complications in measurement due to its interaction with fat.
Hundreds of millions of tons of plastic are produced annually – Pexels
Polyethylene is the most widely produced plastic globally, with approximately 120 million tons manufactured each year, representing 25% of all plastics, so it is logical to find more of it in our bodies. However, the brain is an extremely fat-rich organ, making false positives a real concern.
Furthermore, the research suggests that plastic levels in the brain surpass those in the liver, an organ responsible for cleansing the blood. You would reasonably expect the body’s filtration organ, not the brain, to carry the higher concentration.
Most studies investigating microplastics in human tissues focus on broad tissue-wide samples. This results in a lack of critical context regarding whether microplastics are embedded within cells or merely passing through.
Plastic Pure
Regardless of the exact measurements, public anxiety about microplastics remains high. Around two-thirds of 30,000 survey respondents from 31 countries express concern about microplastics in their bodies.
If you aim to minimize exposure to microplastic contamination, a few lifestyle changes can help. Opt for natural-fiber textiles in your home and clothing, avoid plastic packaging whenever feasible (especially when heat is involved), and run along quiet streets to dodge the tire-wear particles thrown up by traffic.
However, projections indicate that microplastic releases may rise 1.5-2.5 times by 2040. It’s likely that technology will soon emerge, claiming to eradicate microplastic invaders from our bodies.
Therapeutic apheresis — a medical process that separates blood and selectively removes harmful substances before returning the cleaned blood to the patient — has recently been commercialized for the removal of microplastics from the bloodstream.
However, there is scant public documentation on this microplastic removal method. A German study indicated that “microplastic-like” particles were detected in a patient’s plasma following the procedure. Without adequate lab controls and details regarding detected particle sizes, interpreting the significance of these findings is challenging.
Additionally, our understanding of the specific behavior of microplastics within the body remains limited. We lack clarity on whether they circulate freely in our plasma, adhere to red blood cells, or are contained within immune cells in the bloodstream.
In the absence of concrete evidence on the types of microplastics in our bodies, their pathways, or their interactions within the body, evaluating the health implications of these “blood-cleaning” efforts becomes nearly impossible.
Moreover, the treatment itself may introduce new concerns: one study documented 558 microplastic particles released from the cannula alone over a 72-hour period.
With all this taken into account, I intend to steer clear of Hollywood-style blood-washing services until further studies clarify the impact of microplastics on our bodies and reveal where they end up and what they do.
The conveniences of modern life are incredible. Right now, my phone is wirelessly playing some of the greatest hits of the 1700s (Bach, mostly) through a portable speaker. With the same device you can hail a ride, order food to your doorstep, or strike up a conversation on a dating app. As Arthur C. Clarke’s third law has it, for modern humans this technology is indistinguishable from magic.
It’s understandable that our culture seeks out and celebrates these shortcuts. They eliminate boredom, enhance fun, and save time and effort. However, it’s evident that convenience also has a downside.
Before discussing that, it’s crucial to understand why convenience is so attractive. We often resist doing what’s necessary for progress, whether it’s taxes, a pending report, or training. There’s a sense of inertia behind every well-meaning plan. Why is this resistance and the desire for comfort ingrained in us?
Insights from evolutionary psychology, specifically the concept of “evolutionary mismatch,” can provide clarity. Evolutionary mismatch suggests that we evolved for a hunter-gatherer lifestyle while our environment drastically changed, leaving our instincts out of sync with our surroundings.
Viewing the issue through this evolutionary lens makes sense of our tendency towards lethargy and seeking shortcuts. For early humans, food and energy were scarce and unreliable. Survival meant conserving energy wisely to tackle the challenges they faced.
In today’s world, technology has altered our environment to cater somewhat to our energy-conservation instinct. However, adopting trends that prioritize comfort and convenience may come at a cost. While innovations like washing machines and phones have enriched our lives, excessive convenience may pose challenges rather than easing them.
For instance, the increase in depression and anxiety linked to smartphones and social media is worrying. Also, metabolic issues from sedentary lifestyles and reliance on convenient but low-nutrient foods are on the rise. Loneliness levels have prompted the UK to appoint a ‘Minister for Loneliness’ in 2018, partly due to the technologies fostering such isolation.
Over-reliance on coping mechanisms can exacerbate problems they were meant to solve. Choosing comfort excessively can hinder our ability to face life’s challenges. Some discomfort is vital for our growth and survival, as evidenced by our ancestors’ ability to balance safety and risk intelligently.
Super-convenience has its allure, but it might also deplete us unknowingly, making it harder to achieve true success. Human flourishing hinges not just on survival but on growth, problem-solving, and unity in adversity.
Embracing life’s challenges is essential for personal development. While technology offers convenience, it’s crucial to recognize that overcoming obstacles and discomfort is part of our evolutionary heritage. This lesson is critical for the younger generation.
Dr. Alex Carmi is a psychiatrist, psychotherapist, speaker, and host of The Thinking Mind podcast.
Neuroscience seems an unlikely place to find fundamental truths that might apply to everything in the universe. The brain is a special object that does things few other objects are expected to do: brains perceive, they act, they read magazine articles. They are usually the exception, not the rule.
Perhaps this is why the free energy principle (FEP) has attracted so much attention. In the early 2000s, what began as a tool to explain cognitive processes such as perception and behavior came to be presented as a “unified brain theory.” The FEP was then put forward as a definition of life beyond the brain and, inevitably, as the basis for a new kind of artificial intelligence capable of reasoning. Today, some proponents argue that the FEP encapsulates what it means for anything to exist in the universe. “The free energy principle can be read as the physics of self-organization,” says its originator, Karl Friston of University College London. “It’s a description of what lasts.”
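For readers who want the formal object behind the slogan, one standard textbook formulation (a generic variational-inference identity rather than Friston's exact notation) defines the free energy of a system holding an internal model q over hidden states s, given sensory observations o:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(s, o)\right]
  = D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right] - \ln p(o)
```

Because the Kullback-Leibler term can never be negative, F is an upper bound on “surprise,” −ln p(o), so a system that keeps its free energy low behaves as if it is resisting surprising exchanges with its environment. That bound is the sense in which the principle gets pitched as a description of what persists.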
But some researchers, frustrated by the shifting scope, are skeptical that the FEP can deliver on its loftiest promises. “It was a moving target,” says Matteo Colombo, a philosopher and cognitive scientist at Tilburg University in the Netherlands.
All of this makes FEP a source of both fascination and frustration. While notoriously difficult to understand, its dizzying breadth is key to its enduring appeal. Therefore, given the claim that it can be used to explain…
In a surprising move, Nintendo announced on March 10th, the day of the Oscars ceremony, that it is collaborating with Illumination Studios on another Mario movie. The timing was unexpected given how little attention the news could attract that day. The previous Mario movie was a massive success, grossing $1 billion and ending a cursed era for video game adaptations. The new film is scheduled for April 2026, with co-directors Aaron Horvath and Michael Jelenic and screenwriter Matthew Fogel returning. Even so, it may not be a direct sequel: neither company has confirmed a title that frames it as one. Nintendo’s Shigeru Miyamoto expressed excitement for “a new animated film based on the world of Super Mario Bros.,” promising a bright and fun narrative.
The previous Mario movie drew criticism from film critics, with mixed reviews of performances such as Jack Black’s Bowser and Seth Rogen’s Donkey Kong. Regardless, the upcoming film presents an opportunity to sharpen the storytelling and expand the world of Mario beyond nostalgic references.
While the first movie focused primarily on Mario, there are hopes for the sequel to explore other characters and locations within the Mario universe. The lack of a complex storyline in the games provides a blank canvas for the filmmakers to develop a compelling narrative that goes beyond just fan service.
Detective Pikachu. Photo: Warner Bros. Pictures/AP
The next Mario movie could potentially expand beyond the Mushroom Kingdom, drawing inspiration from games like Mario Galaxy to create a visually spectacular experience. However, there is still room to explore within Mario’s familiar world, offering an opportunity to push creative boundaries and deliver a more ambitious sequel.
What to play
Expeditions: A MudRunner Game. Photo: Steam
Keza recommends diving into Expeditions: A MudRunner Game, a unique off-road simulation that offers a challenging and rewarding experience. Unlike traditional racing games, it demands strategic decision-making as you pick your way through rugged terrain, a refreshing and more cerebral alternative to typical racing titles.
Available on: PC, PS4/5, Xbox, Switch. Estimated play time: 30 hours or more
What to read
The Legend of Zelda: Tears of the Kingdom. Photo: Nintendo
– The Wall Street Journal reports that former Activision boss Bobby Kotick is considering a bid for TikTok amid regulatory pressure on the app, raising questions about corporate motivations and industry dynamics.
– Toyota engineers have built a real-life, rideable robot version of a Pokémon, highlighting the fusion of technology and pop culture.
– British Academy Games Award nominations include popular titles such as Baldur’s Gate 3, Alan Wake 2, and The Legend of Zelda: Tears of the Kingdom, showcasing the industry’s diverse offerings.
What to click
Question Block
Customers collecting a PS5 on its release date. Photo: Hollandse Hoogte/REX/Shutterstock
Reader John poses a thought-provoking question about the dynamics between Sony and Microsoft within the gaming industry, exploring notions of corporate strategy and market dominance.
“Why is Microsoft perceived differently from Sony in terms of market dominance despite both companies aiming for profitability through different strategies?”
The response highlights the scale and influence of companies like Microsoft within the industry, shaping perceptions and dynamics based on their financial capabilities and strategic positioning.
Have a question for the question block or feedback on the newsletter? Feel free to reach out to pushbuttons@theguardian.com.