Left: The remains of a middle-aged woman at the Liu Po site in southern China, where smoke was used before burial approximately 8,000 years ago. Right: Contemporary smoke-dried mummies of Dani individuals in West Papua, Indonesia.
Zhen Li, Hirofumi Matsumura, Hsiao-Chun Hung
Human remains carefully preserved by smoking up to 14,000 years ago have been found at archaeological sites in Southeast Asia, making them the world’s oldest known mummies.
This custom continues today among the Dani people in West Papua, Indonesia, who mummify their deceased relatives by exposing them to smoke and treat them with care and respect as part of the household. Many of these mummies are found in a tightly bound squatting position.
Similar “highly flexed” ancient remains have also been discovered in Australia, China, the Philippines, Laos, Thailand, Malaysia, South Korea, and Japan.
Hsiao-Chun Hung from the Australian National University in Canberra first noticed the striking similarities between the Dani tradition and ancient burials while working on excavated skeletons in Vietnam in 2017.
Hung and her team analyzed the burial practices of 54 hunter-gatherers from 11 archaeological sites across Southeast Asia dated between 12,000 and 4,000 years ago to uncover evidence of smoking. Most sites were based in northern Vietnam and southern China.
Numerous remains displayed clear signs of partial burning, though not enough to indicate cremation. The researchers utilized two analytical methods, X-ray diffraction and infrared spectroscopy, on several bone samples to assess thermal exposure.
Over 90% of the 69 skeletal samples displayed indications of heat exposure. The findings suggest that the remains were not subjected to extreme temperatures, but instead endured lower temperatures for extended periods, consistent with smoking.
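Heat exposure leaves a measurable signature in bone mineral. As an illustration only (not necessarily this team’s exact method), infrared results on bone are often summarized as a “splitting factor” computed from absorbance values of bone apatite: stronger heating increases crystallinity and so raises the splitting factor. The function names and thresholds below are hypothetical:

```python
def splitting_factor(a565: float, a605: float, a595_valley: float) -> float:
    """Infrared splitting factor of bone apatite: the two phosphate
    absorbance peaks (~565 and ~605 cm^-1) divided by the valley
    between them. Higher values mean more crystalline, i.e. more
    strongly heated, bone."""
    return (a565 + a605) / a595_valley

def classify_heating(sf: float) -> str:
    """Map a splitting factor to a heating category.
    Thresholds are illustrative, not calibrated values."""
    if sf < 3.0:
        return "unheated"
    if sf < 4.0:
        return "low-temperature heating (consistent with smoking)"
    return "high-temperature burning or cremation"
```

Real analyses calibrate such thresholds against experimentally heated reference bone; the point here is only that prolonged low heat and intense burning separate measurably.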
The oldest mummy the team examined directly, from the Vietnamese site of Hang Cho, dates back more than 11,000 years, while a tightly bound skeleton from another site, Hang Mooy, points to the practice stretching back more than 14,000 years. “We didn’t need X-rays or infrared to analyze this one, because the partial burning is evident to the naked eye,” explains Hung.
Previously, the oldest known mummies were thought to be those from northern Chile, dating to approximately 7,000 years ago, and from ancient Egypt, around 4,500 years ago.
Hung suggests the evidence indicates this burial tradition was likely widespread across southern China and Southeast Asia from at least 14,000 years ago until agricultural societies became prevalent in the region around 4,000 to 3,500 years ago. The tight bindings may also have made mummified bodies easier to transport, she notes.
Ethnographic studies indicate that these traditions persisted in southern Australia until the late 19th and early 20th centuries, according to Hung. “Additionally, our literature review in the New Guinea highlands reveals that these practices continue among some communities today.”
“Our results signify a unique blend of techniques, traditions, culture, and a profound connection to ancestry that spans an extraordinary timeframe, covering vast regions from the Paleolithic era to the present,” she states.
Vito Hernandez from Flinders University in Adelaide suggests that this study challenges long-standing beliefs that such practices were exclusive to arid regions like Chile’s Atacama and Egypt’s Nile Valley. “It highlights the role of tropical environments in nurturing distinct mortuary traditions among early modern humans across the Far East and potentially into the Pacific,” he remarks.
“By extending the timeline of mummification at least 5,000 years beyond the Chinchorro culture [of South America], the findings emphasize Southeast Asia’s role as a center for cultural innovation, demonstrating a deep continuity that connects early Holocene hunter-gatherers with present-day indigenous groups in New Guinea and Australia,” Hernandez adds.
A dramatic reconstruction of early modern Homo sapiens in Africa
BBC/BBC Studios
human is available on BBC iPlayer (UK) and on US PBS from September 17.
Based on my observations, science documentaries often fall into two categories, akin to French and Italian cuisines. (Hear me out before you judge that comparison.) The first category employs intricate techniques for a deep experience. The second is more straightforward, allowing the content to shine naturally.
Both documentary styles can yield impressive results in their own ways. human, a five-part BBC series exploring the roots of our genus, Homo, undoubtedly fits into the latter category. It weaves together compelling stories, stunning visuals, and the charismatic presence of paleontologist Ella Al-Shamahi, inviting viewers on a heartfelt journey through six million years of our human history. No flashy add-ons are necessary.
The first episode delves into complex questions. When exactly did our species emerge? Different perspectives yield different answers. Was it 300,000 years ago, when humans began to exhibit features resembling ours? Was it when our skulls, as Al-Shamahi describes, became rounder and more globular? Or, more poetically, when we developed remarkable traits like intricate language, abstract thought, and cooperative behavior?
“The series intertwines fascinating narratives, stunning visuals and the captivating presence of Ella Al-Shamahi”
It’s an engaging episode, particularly when the narrative shifts to other extinct human species. For instance, Al-Shamahi’s exploration of Indonesia introduces us to Homo floresiensis, a meter-tall human uniquely adapted to life on Flores. The discovery of these “hobbits” in Liang Bua cave two decades ago reshaped our understanding of ancient human biology: despite their small brains they made and used tools, while their long arms and short stature set them apart from other human species.
Episode three highlights the fate of our most famous relative, the Neanderthals. As we spread into Europe and Asia, they adapted to colder climates but ultimately faced extinction.
Throughout the series, Al-Shamahi showcases amazing paleontological discoveries made over recent decades (many of which you may have read about in New Scientist). For instance, raptor feathers from birds like the red kite appear to have held significance for Neanderthals, while perikymata, growth lines in tooth enamel, confirm that H. sapiens had extended childhoods in which to develop our cognitive capacities.
Over just five episodes, human cannot cover every aspect of our evolutionary story. Yet, it illuminates how H. sapiens has been shaped by climate influences, the flora and fauna that provide for us, other human species, and collaborative nomadic groups that shared skills, knowledge, and DNA, allowing us to thrive and eventually build cities.
Traditional tellings of the human story cast H. sapiens as the ultimate survivor, destined for progress and dominance over the Earth. In contrast, human offers a more humble narrative, placing our species alongside our ancient relatives.
Tracking human evolution Go behind the scenes of the new BBC series human with Ella Al-Shamahi at newscientist.com/video
In a captivating and poignant thread, Al-Shamahi also addresses how little frontline science has been conducted in regions perceived as inhospitable to Western researchers. Neanderthal skeletons exhibiting severe disabilities, unearthed in present-day Iraq, are a striking reminder of the discoveries we’ve overlooked.
Bethan Ackerley is a sub-editor at New Scientist. She has a passion for science fiction, sitcoms, and all things eerie. Follow her on Twitter @inkerley
In 2024, 2.6 billion people (nearly a third of the global population) were still offline, as reported by The International Telecommunication Union (ITU). That same year, Freedom House estimated that over three-quarters of those with internet access live in countries where individuals have been arrested for sharing political, social, or religious content online, with nearly two-thirds of global internet users experiencing some form of online censorship.
The accessibility and quality of internet connections significantly impact how individuals lead their lives, a fact that deserves serious consideration. Having free and unobstructed internet access is no longer merely a luxury.
Human rights, as set out in the declaration adopted by the UN General Assembly in 1948, ensure a baseline of decent living conditions. In today’s digital landscape, the exercise of these rights, from free speech to access to primary education, depends heavily on internet connectivity. Many essential public services are moving online, and in several areas digital services are the only viable option where physical banks, educational institutions, and healthcare facilities are absent.
Given the critical significance of internet access today, it must be officially recognized as a standalone human right by the United Nations and national governments. Such recognition would provide legal backing and obligations for international support that are often missing at the state level.
The ITU projects that achieving universal broadband coverage by 2030 will require an investment of nearly $428 billion. While this is a substantial sum, the benefits of connecting the remaining portion of humanity—enhanced education, economic activity, and health outcomes—far outweigh the costs.
Ensuring a minimum standard of connectivity is already an attainable goal. This includes providing 4G mobile broadband coverage, consistent access to smartphones, and affordable data plans for individuals that cost less than 2% of the average national income for 2GB per person, along with opportunities to develop essential digital skills.
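The affordability criterion above is simple arithmetic. The sketch below assumes the income figure is expressed per month (as in the UN Broadband Commission version of the target); the function name is illustrative:

```python
def meets_affordability_target(cost_of_2gb_plan: float,
                               avg_monthly_income: float,
                               threshold: float = 0.02) -> bool:
    """True if a 2 GB data plan costs less than `threshold`
    (default 2%) of average monthly income."""
    return cost_of_2gb_plan < threshold * avg_monthly_income

# A $2 plan against a $150 average monthly income passes (about 1.3%);
# a $4 plan fails (about 2.7%).
```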
However, having internet access alone is not sufficient for upholding human rights. As highlighted by the United Nations, misuse of technology for monitoring populations, gathering personal data for profit maximization, or spreading misinformation constitutes oppression rather than empowerment.
This right entails that states should respect users’ privacy, opposing censorship and the manipulation of information online. Businesses should prioritize human rights, especially users’ privacy, and actively combat misinformation and abuse on their platforms in line with regulations governing social media.
In 2016, the United Nations affirmed that people must be protected online just as they are offline. This concept was first suggested in 2003.
The time to act is now. Advocating for universal internet access as a human right calls for political action. We cannot afford to see the internet degrade from a tool for human advancement to one of division. Establishing this right will be a powerful measure to ensure that the internet serves the interests of all, not just a select few.
The recent findings in Sulawesi, Indonesia, have revised the timeline for early human sea crossings, adding complexity to the puzzle of their creators.
Archaeologists have unearthed stone tools at a site in South Sulawesi called Calio, dating back at least 1.04 million years. Given that Sulawesi is encircled by swift, deep waters, whoever made these tools would have had to cross open ocean.
“This represents the earliest known evidence of early human presence on Sulawesi,” Professor Adam Brumm from the Australian Research Centre for Human Evolution at Griffith University, who co-directed the research, tells BBC Science Focus.
“It now seems evident that early hominins managed to cross the Wallace Line, leading to isolated populations on distant islands.”
The Wallace Line is a critical biogeographical boundary between mainland Asia and the islands of Wallacea. “For land mammals that don’t fly, crossing from the edge of mainland Asia to the nearest Wallacean island would have been nearly impossible due to the vast distances and swift currents,” Brumm explained.
Earlier discoveries indicated that hominins arrived on nearby Flores approximately 1.02 million years ago, with isolated island populations evolving into diminutive species such as Homo floresiensis (nicknamed “the Hobbit” due to its stature) and, on Luzon, Homo luzonensis.
However, as of now, no fossils have been discovered in Sulawesi, leaving the identity of the tool’s maker an enigma.
“We suspect it was an early Asian human species, possibly Homo erectus,” Brumm remarked. “I doubt they used boats for this journey. The colonization of the island likely occurred accidentally as they might have clung to logs or natural vegetation ‘rafts’ that were formed during tsunamis.”
These stone tools, excavated from Calio in Sulawesi, have been dated to over 1.04 million years ago. The scale bar is 10 mm. – Credit: MW Moore/University of New England
If Homo erectus made it to Sulawesi more than a million years ago, they may have been carving out their own evolutionary niche.
“In Flores and Luzon, fossil discoveries indicate that hominins on these islands underwent evolutionary changes, leading to unique new species that are small and distinct,” noted Brumm. “Though we have yet to find human fossils in Sulawesi, the possibility of similar events occurring on the island cannot be ruled out.”
What’s next for Brumm and the team? “We’re continuing our excavations,” he stated. “Human fossils are incredibly rare, but millions of hominins have existed and perished over the last million years, so there might be preserved remains of these toolmakers out there.”
“We hope to discover a fossil—or two—with persistence (and a bit of luck), as finding one would be an extraordinary breakthrough, perhaps even a game changer.”
Adam Brumm is a professor of archaeology at Griffith University. He has conducted funded research in Indonesia for over 21 years. His published studies include many in Nature, spanning topics from the discovery of new human fossils in Wallacea (the island region between Asia and Australia) to recent insights into human evolution.
Australia’s human rights commissioner has warned that AI could exacerbate racism and sexism, as internal discussions continue within the Labor party over how to handle new technologies.
Lorraine Finlay cautioned that while seeking productivity gains from AI is important, it should not come at the cost of discrimination if the technology remains unregulated.
Finlay’s remarks came after Labor senator Michelle Ananda-Rajah advocated the “liberation” of Australian data for tech companies, arguing that AI trained only on overseas data reflects and perpetuates biases from abroad while shaping local culture.
Ananda-Rajah opposes a dedicated AI law but emphasizes that content creators ought to be compensated for their contributions.
Discussions about enhancing productivity through AI are scheduled for the upcoming federal economic summit, as unions and industry groups voice concerns over copyright and privacy issues.
Media and arts organizations have raised alarms about the “rampant theft” of intellectual property if large tech corporations gain access to content for training AI systems.
Finlay noted the challenges of identifying embedded biases due to a lack of clarity regarding the datasets used by AI tools.
“Algorithmic bias means that discrimination and inequality are baked into the tools we use, so the outcomes they produce reflect those biases,” she stated.
Lorraine Finlay, Human Rights Commissioner. Photo: Mick Tsikas/AAP
“The combination of algorithmic and automation biases leads individuals to rely more on machine decisions and potentially disregard their own judgment,” Finlay remarked.
The Human Rights Commission has consistently supported an AI Act that would enhance existing legislation, including privacy laws, and ensure comprehensive testing for bias in AI tools. Finlay urged the government to quickly establish new regulations.
“Bias tests and audits, along with careful human oversight, are essential,” she added.
Evidence of bias in AI technologies is increasingly reported in fields like healthcare and workforce recruitment in Australia and worldwide.
A recent survey in Australia revealed that job applicants interviewed by AI recruiters faced potential discrimination if they had accents or disabilities.
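The bias tests and audits Finlay calls for often begin with something as simple as comparing selection rates across demographic groups. This is a generic sketch, not the methodology of any study or tool mentioned here:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, where selected
    is truthy if the candidate advanced. Returns per-group rates."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        selected[group] += bool(ok)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(outcomes, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below 0.8 are a common red flag (the informal
    'four-fifths rule' used in hiring audits)."""
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]
```

A ratio well below 1 does not prove discrimination on its own, but it flags where human oversight of the kind Finlay describes should focus.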
Ananda-Rajah, a vocal proponent of AI development, warned that AI systems trained without Australian data risk amplifying foreign biases.
While the government prioritizes intellectual property protection, she cautioned against limiting domestic data access, warning that Australia would be reliant on overseas AI models without adequate oversight.
“AI requires a vast array of data from diverse populations to avoid reinforcing biases and harming those it aims to assist,” Ananda Raja emphasized.
“We must liberate our data to better train our models, ensuring they authentically represent us.”
“I am eager to support content creators while freeing up data, aiming for an alternative to foreign exploitation of our resources,” Ananda-Rajah stated.
She cited AI screening tools for skin cancer as an example where algorithmic bias has been documented: to avoid bias and discrimination against particular patients, such models must be trained on diverse datasets, while still protecting sensitive information.
Finlay emphasized that any release of Australian data needs to be handled fairly, but she feels the emphasis should be on establishing appropriate regulations.
“It’s certainly beneficial to have diverse and representative data… but that is merely part of the solution,” she clarified.
“We must ensure that this technology is equitable and is implemented in a manner that recognizes and values human contributions.”
Judith Bishop, an AI expert at La Trobe University and former data researcher at an AI firm, asserted that increasing the availability of local data will enhance the effectiveness of AI tools.
“It is crucial to check that systems developed in different contexts actually work for the [Australian] population, which should not exclusively depend on US data models,” Bishop stated.
eSafety Commissioner Julie Inman Grant has also voiced concerns regarding the lack of transparency related to the data applied by AI technologies.
In her statement, she urged tech companies to be transparent about their training datasets, develop robust reporting mechanisms, and utilize diverse, accurate, and representative data for their products.
“The opacity surrounding generative AI’s development and deployment poses significant issues,” Inman Grant remarked. “This raises critical concerns about the potential for large language models (LLMs) to amplify harmful biases, including restrictive or detrimental gender norms and racial prejudices.”
“Given that a handful of companies dominate the development of these systems, there is a significant risk that certain perspectives, voices, and evidence could become suppressed or overlooked in the generated outputs.”
Our brains are glowing. While this phenomenon isn’t visible to the naked eye, scientists have the ability to detect faint light that permeates the skull. Recent studies indicate that this light varies based on our activities.
All living tissue generates a subtle light known as ultraweak photon emission (UPE), which ceases once the organism dies. The human brain emits a considerable amount of this light because of its high energy consumption, which accounts for around 20% of the body’s total energy use.
“Ultraweak photon emissions, or UPE, are extremely faint light signals produced by all types of cells throughout the body, trillions of times weaker than the light from bulbs,” Dr. Nirosha Murugan, an assistant professor of health sciences at Wilfrid Laurier University in Ontario, Canada, told BBC Science Focus.
“Although UPE is a weak signal, the energy expenditure of the brain generates more light than other organs,” she explained. “Consider the hundreds of billions of brain cells; each one emits a weak light signal, but together they create a measurable collective glow outside the head.”
Murugan’s research team aimed to explore whether this glow fluctuated with brain activity and if it could be utilized to assess brain functions.
To investigate, the researchers invited 20 adults to sit in a darkened room wearing caps fitted with sensors that track both the electrical impulses and the light emitted by the brain.
Participants were directed to open and close their eyes and follow simple audio instructions.
Comparisons were made between the captured electrical signals and UPEs, revealing notable correlations.
“We discovered that the optical signals detected around the head correlate with electrical activity in the brain during cognitive tasks,” Murugan noted. “These patterns of light emission from the brain are dynamic, intricate, and informative.”
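At its simplest, relating an electrical trace to an optical one means correlating two aligned time series. This minimal sketch assumes both signals have been resampled onto the same time base; the study’s actual analysis is more involved:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length signals, the basic
    statistic behind relating an electrical (EEG-like) trace to an
    optical (UPE-like) one. Returns a value in [-1, 1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A correlation near 1 or -1 between the two channels would indicate the optical signal tracks brain electrical activity, as the team reports.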
The brain emitted this light in a slow, rhythmic pattern of less than one pulse per second, and the pattern remained stable throughout the two-minute tasks.
All living cells emit ultraweak light as a byproduct of chemical reactions such as energy metabolism – Credit: Sean Gladwell via Getty
Murugan indicated that measuring this brain light could offer scientists and medical professionals a novel method for brain imaging, potentially identifying conditions like epilepsy, dementia, and depression.
This light is not merely a by-product; it might also play a functional role in the brain. Murugan emphasized that examining it could “uncover hidden dimensions” of our cognitive processes.
“I hope that the possibility of detecting and interpreting light signals from the brain will inspire new questions previously deemed unfathomable,” she stated. “For instance, can UPEs permeate the skull and influence other brains within the vicinity?”
This study serves as a preliminary exploration, suggesting that plenty remains to be uncovered about our illuminating brains.
Nonetheless, Murugan expressed hope that the team’s discoveries will “ignite a new discussion regarding the significance of light in brain functionality.”
About our experts
Dr. Nirosha Murugan is an assistant professor in the Department of Health Sciences at Wilfrid Laurier University, Ontario, Canada. She was recently appointed a Tier 2 Canada Research Chair in Biophysics at Algoma University in Ontario.
OpenAI asserts that the recent upgrade to ChatGPT marks a “significant step” towards achieving artificial general intelligence (AGI), yet acknowledges that a key piece is still missing in the endeavor to create a system capable of matching human abilities.
The company claims that the GPT-5 model, which serves as the foundation of its AI chatbot, represents a substantial improvement over previous iterations in areas like coding and creative writing, and is significantly less sycophantic.
The enhancements are now being rolled out to ChatGPT’s roughly 700 million weekly users.
OpenAI CEO Sam Altman referred to the model as a “significant step forward” on the path to AGI, a theoretical state characterized as a highly autonomous system that can outperform humans at most economically valuable work.
However, Altman conceded that GPT-5 has not yet attained that objective. “[It is] missing something very crucial, something very important,” he noted, emphasizing that the model cannot “learn on a continuous basis.”
Altman explained that while GPT-5 is “generally intelligent” and represents an “important step towards AGI,” most definitions indicate it has not reached that level yet.
“I think we still fall significantly short of most definitions of AGI in ways that are quite crucial. One major aspect… is that this model doesn’t adapt continuously based on new experiences.”
During the GPT-5 launch event on Thursday, Altman described the new version of ChatGPT as akin to having “doctoral experts in your pocket.” He compared the previous version to a college student and the one before that to a high school student.
The theoretical capabilities of AGI, along with high-tech companies’ drive to realize it, have led AI executives to predict that numerous white-collar jobs—ranging from lawyers to accountants—could be eliminated due to these technological advances. Dario Amodei, CEO of AI firm Anthropic, cautioned that technology might replace half of entry-level office roles in the coming five years.
According to OpenAI, the key enhancements to GPT-5 include reduced factual inaccuracies and hallucinations, improved coding capabilities for creating functional websites and apps, and a boost in creative writing abilities. Instead of outright “rejecting” prompts that violate guidelines, the model now aims to provide the most constructive response possible within safety parameters, or at least clarify why it cannot assist.
ChatGPT retains its agent functionalities (like checking restaurant availability and online shopping) but can also access users’ Gmail, Google Calendar, and contacts—provided permission is granted.
Similar to its predecessor, GPT-5 can generate audio, images, and text, and is capable of processing inquiries in these formats.
On Thursday, the company showcased how GPT-5 could swiftly write hundreds of lines of code to create applications, such as language-learning tools, and staff noted that the model’s writing isn’t robotic, producing a “more nuanced” compliment in one demonstration. Altman added that ChatGPT could also be valuable for healthcare advice, citing a woman diagnosed with cancer last year who used the chatbot to make sense of her diagnosis and weigh up radiation therapy options.
The company stated that the upgraded ChatGPT excels at addressing health-related inquiries and will become more proactive in “flagging potential concerns,” including serious physical and mental health issues.
The startup emphasized that chatbots should not replace professional assistance, amidst worries that AI tools could worsen the plight of individuals susceptible to mental health challenges.
Nick Turley, head of ChatGPT at OpenAI, claimed that the model shows “significant improvement” on sycophancy: the tendency to be overly familiar and flattering, which can lead to negative experiences for users.
The release of the latest model is expected to intensify the multibillion-dollar race among tech companies to attain AGI. On Tuesday, Google’s AI division signalled its latest progress towards AGI by unveiling an unreleased “world model”, while last week Mark Zuckerberg, CEO of Meta, suggested that a future state of AI even more advanced than AGI is “on the horizon”.
Investor confidence in the likelihood of further breakthroughs and AI’s ability to reshape the modern economy has sparked a surge in valuations for companies like OpenAI. Reports on Wednesday indicated that OpenAI was in preliminary talks to sell shares held by current and former employees at a price that would value the company at $500 billion, surpassing Elon Musk’s SpaceX.
OpenAI also launched two open models this week and continues to offer a free version of ChatGPT, while generating revenue through subscription fees for its advanced chatbot tiers, which can be integrated into business IT systems. Free users’ access to GPT-5 will be limited, whereas subscribers to the $200-a-month Pro package will enjoy unlimited use.
Cut marks on the foot bone from El Mirador cave, Spain
IPHES-CERCA
The discovery of human remains in caves in northern Spain indicates that Neolithic people may have resorted to cannibalism after battles.
Francesc Marginedas from the Catalan Institute of Human Paleoecology and Social Evolution (IPHES) in Tarragona, along with his team, examined 650 fragments of human remains found in El Mirador cave in the Sierra de Atapuerca. The remains date back approximately 5,700 years and belong to 11 individuals.
All of the individuals showed evidence of having been consumed by other humans. Some bones bore cut marks made by stone tools, while others had translucent patches with gently rounded edges, a sign of boiling. Long bones were cracked open with stones to access the marrow, and smaller bones such as metatarsals and ribs carried clear human bite marks.
This research supports the notion that cannibalistic practices were more prevalent in human history than previously believed.
El Mirador is at least the fifth significant Spanish site with notable evidence of cannibalism during the Neolithic, the period of transition from foraging to agriculture, according to Marginedas. “There’s a growing understanding that such behavior was more frequent than we anticipated.”
The motives behind these cannibalistic acts remain unclear. Some archaeological sites show skull cups indicating a ritualistic aspect to cannibalism, while others hint at survival strategies during dire circumstances.
However, Marginedas and his team propose that the findings at El Mirador point to warfare. The site held abundant animal remains and the human bones showed no signs of nutritional stress, indicating this early agricultural community was not struggling with food scarcity. Nor did the team find any indication of ritual treatment: the human bones were discarded alongside animal food remains.
The ages of the individuals ranged from under seven to over fifty, implying that an entire family unit may have been lost to conflict. Radiocarbon dating indicated that all 11 individuals were killed and consumed within a few days.
This pattern of conflict and cannibalism has also been noted at two other Neolithic sites: Fontbrégoua cave in France and Herxheim in Germany. The period appears to have been marked by instability and violence, as communities clashed with neighboring groups and newcomers.
While Marginedas and his colleagues are uncertain about the motive behind these cannibalistic acts, historical ethnographic studies suggest that cannibalism during warfare can serve as a method of “ultimate exclusion”. “We believe that one group attacking and consuming another serves as a humiliating statement,” says Marginedas.
“The thoroughness of the bodies’ treatment and consumption is remarkable,” comments Paul Pettitt from Durham University, UK. “Whether the consumed were relatives or adversaries, the aggression evident here mirrors a process of dehumanization through consumption.”
Silvia Bello from the Natural History Museum in London agrees that the deaths were likely tied to conflict, but she is skeptical of reading the consumption purely as humiliation. Casting cannibalism as driven only by aggression and animosity, rather than as a ritualized farewell, may be too simple an interpretation, she suggests: “It could carry ritual significance, even amid warfare.”
Like all cells, human eggs are subject to mutations
CC Studio/Science Photo Library
Research indicates that human eggs may be shielded from certain types of mutations associated with aging. In a recent study, scientists discovered that as women age, there are no signs of accumulating mutations in the mitochondrial DNA of their egg cells.
“When we consider age-related mutations, we typically think about older individuals having more mutations compared to younger ones,” notes Kateryna Makova from Pennsylvania State University. “However, this assumption doesn’t always hold true.”
Mitochondria, which provide the primary energy source for the body’s cells, are inherited solely from the mother. While mitochondrial DNA mutations are often benign, they can sometimes cause conditions that affect muscles and neurons, which have particularly high energy demands. “Oocytes [egg cells] serve as this biological reservoir,” explains Ruth Lehmann from the Massachusetts Institute of Technology, who was not part of the study.
Prior research has shown that older mothers tend to pass down more chromosomal mutations, leading to the general assumption that a similar pattern exists with mitochondrial DNA mutations. To investigate this, Makova and her team utilized DNA sequencing to identify new mutations across 80 eggs sourced from 22 women aged 20 to 42 years.
The findings revealed that mitochondrial mutations in human eggs do not escalate with advancing age, unlike those found in saliva and blood cells. “It seems we have evolved a mechanism that mitigates the accumulation of mutations in eggs, protecting those that will be used later in life,” remarks Makova.
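The comparison at the heart of this finding can be illustrated with a minimal sketch (the numbers below are made up for illustration, not the study’s data): count new mutations per egg and check whether the trend with donor age is flat.

```python
import numpy as np

# Hypothetical per-donor ages and de novo mtDNA mutation counts per egg
# (invented values, not the study's data).
ages      = np.array([22, 25, 28, 31, 34, 38, 41])
mutations = np.array([ 3,  2,  4,  3,  3,  4,  3])

# A flat least-squares trend (slope near 0) is what "no accumulation
# with age" looks like; blood or saliva cells would show a clear
# positive slope instead.
slope, intercept = np.polyfit(ages, mutations, 1)
print(round(slope, 3))
```

This is only a sketch of the statistical question; the actual study used duplex DNA sequencing across 80 eggs from 22 donors.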
Previous research had indicated that mitochondrial DNA mutations in macaque eggs did increase with age, while the animals’ reproductive capacity remained stable until about nine years of age. “It would be worthwhile to also study younger women, to see whether the same applies to humans,” comments team member Barbara Arbeithuber.
Recent research conducted by scientists at the University of Utah sheds light on the genetics of hibernation, potentially paving the way for treatments that could reverse neurodegeneration and diabetes.
Investigating how hibernation evolved in species such as bats, ground squirrels, and lemurs can unveil the mysteries of their extraordinary resilience. Image credit: Chrissy Richards.
A gene cluster known as the fat mass and obesity-associated (FTO) locus is crucial to understanding hibernation capabilities. Interestingly, these genes are also present in humans.
“What stands out in this region is that it represents the most significant genetic risk factor for obesity in humans,” states Professor Chris Gregg of the University of Utah, an author of both studies.
“Hibernators seem to leverage genes in the FTO locus uniquely.”
Professor Gregg and his team discovered hibernator-specific DNA regions near the FTO locus that regulate the expression of nearby genes, modulating their activity.
They hypothesize that hibernators accumulate weight before winter by adjusting the expression of genes at or near the FTO locus, then draw down their fat reserves gradually to meet energy needs through the winter.
Moreover, regulatory regions linked to hibernation outside the FTO locus appear to play a significant role in fine-tuning metabolism.
When the research team mutated these hibernation factor-specific regions in mice, they observed variations in body weight and metabolism.
Some mutations accelerated or inhibited weight gain under specific dietary conditions, while others affected the mice’s ability to restore body temperature post-hibernation or regulate their overall metabolic rate.
Interestingly, the hibernator-specific DNA regions identified by researchers are not genes themselves.
Instead, this region comprises a DNA sequence that interacts with nearby genes, modulating their expression like conductors guiding an orchestra to adjust volume levels.
“This indicates that mutating a single hibernator-specific region can influence a broad array of effects well beyond the FTO locus,” notes Dr. Susan Steinwand of the University of Utah, first author of the first study.
“Targeting a small, inconspicuous DNA region can alter the activity of hundreds of genes, which is quite unexpected.”
Gaining insight into the metabolic flexibility of hibernators may enhance the treatment of human metabolic disorders like type 2 diabetes.
“If we can manipulate the genes related to hibernation, we may find a way to reverse type 2 diabetes, similar to how hibernators transition back to normal metabolic states,” says Dr. Elliot Ferris of the University of Utah, first author of the second study.
Locating genetic regions associated with hibernation poses a challenge akin to extracting needles from a vast haystack of DNA.
To pinpoint relevant areas, scientists employed various whole-genome technologies to investigate which regions correlate with hibernation.
They then sought overlaps among the outcomes of each method.
First, they searched for DNA sequences that are conserved across most mammals but have recently changed in hibernators.
“If a region has remained relatively unchanged among species for over 100 million years, yet shows significant alterations in two hibernating mammals, that signals features critical for hibernation,” remarked Dr. Ferris.
To comprehend the biological mechanisms of hibernation, researchers tested and identified genes that exhibited fluctuations during fasting in mice, producing metabolic alterations similar to those seen in hibernation.
Subsequently, they identified genes that serve as central regulators or hubs for these fasting-induced gene expressions.
Numerous recently altered DNA regions in hibernators appear to interact with these central hub genes.
Consequently, the researchers predict that the evolution of hibernation necessitates specific modulations in hub gene regulation.
These regulatory mechanisms constitute a potential candidate list of DNA elements for future investigation.
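The screening logic described above amounts to intersecting candidate sets produced by independent genome-wide methods. Here is a toy sketch of that overlap step, using hypothetical region IDs rather than real genomic coordinates:

```python
# Toy illustration (hypothetical region IDs): each genome-wide screen
# yields its own set of candidate regions; the shortlist for follow-up
# is the overlap of all of them.
conserved_changed  = {"r1", "r2", "r7", "r9"}  # conserved in mammals, recently changed in hibernators
fasting_responsive = {"r2", "r5", "r7"}        # near genes that shift expression during fasting
hub_linked         = {"r2", "r7", "r8"}        # interact with central regulatory "hub" genes

candidates = conserved_changed & fasting_responsive & hub_linked
print(sorted(candidates))  # ['r2', 'r7']
```

Set intersection is a natural fit here because each screen is noisy on its own; requiring agreement across all of them shrinks the haystack to a testable candidate list.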
Most alterations related to hibernation factors in the genome seem to disrupt the function of specific DNA rather than impart new capabilities.
This implies that hibernation may have shed constraints, allowing for great flexibility in metabolic control.
In essence, the human metabolic regulator is constrained to a narrow energy expenditure range, whereas, for hibernators, this restriction may not exist.
Hibernators can reverse neurodegeneration-like changes, avoid muscle atrophy, stay healthy through dramatic weight fluctuations, and may even show enhanced aging and longevity.
The researchers surmise that if humans can flip the right metabolic switches, we may already possess a genetic blueprint for hibernator-like resilience.
“Many individuals may already have the genetic structure in place,” stated Dr. Steinwand.
“We must identify the control switches for these hibernation traits.”
“Mastering this process could enable researchers to bestow similar resilience upon humans.”
“Understanding these hibernation-associated genomic mechanisms provides an opportunity to potentially intervene and devise strategies for tackling age-related diseases,” remarks Professor Gregg.
“If such mechanisms are embedded within our existing genome, we could learn from hibernation to enhance our health.”
The findings are published in two papers in the journal Science.
____
Susan Steinwand et al. 2025. Conserved noncoding cis-elements associated with hibernation modulate metabolic and behavioral adaptations in mice. Science 389 (6759): 501-507; doi: 10.1126/science.adp4701
Elliot Ferris et al. 2025. Genomic convergence in hibernating mammals reveals the genetics of metabolic regulation in the hypothalamus. Science 389 (6759): 494-500; doi: 10.1126/science.adp4025
Sirius Binary Star System Captured with a Neuromorphic Camera
Satyapreet Singh, Chetan Singh Thakur, Nirupam Roy, Indian Institute of Science, India
Neuromorphic cameras, designed to emulate human vision, offer significant benefits for astronomers by enabling the capture of both bright and dim celestial objects in a single frame. They also allow swift-moving objects to be tracked without motion blur.
Unlike conventional digital cameras, which sample a grid of pixels many times per second and record data for every pixel each time, neuromorphic cameras, also known as event cameras, function quite differently. Each pixel is activated only if the brightness changes at that specific location. If the brightness remains constant, no new data is saved, resembling how the human eye processes visual information.
This innovative approach presents various benefits. By recording only changing pixels, less data is generated while maintaining a much higher frame rate. Furthermore, these cameras measure light on a logarithmic scale, enabling the detection of fainter objects next to brighter ones that may saturate conventional camera images.
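To make the contrast with frame-based sensors concrete, here is a minimal sketch (not the researchers’ actual pipeline) of how event-style output can be emulated from two conventional frames: events fire only where log brightness changes, so a static background produces no data at all.

```python
import numpy as np

def events_from_frames(prev, curr, threshold=0.2):
    """Emulate event-camera output: emit an event only where the
    log-brightness of a pixel changes by more than a threshold.
    Returns (rows, cols, polarity) for the changed pixels."""
    # Event-camera pixels respond to changes in log intensity, which is
    # why bright and faint objects can share one frame without saturation.
    diff = np.log1p(curr.astype(float)) - np.log1p(prev.astype(float))
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[rows, cols]).astype(int)  # +1 brighter, -1 dimmer
    return rows, cols, polarity

# A static 8x8 scene with one bright "star" that moves by one pixel:
frame_a = np.zeros((8, 8)); frame_a[2, 2] = 100.0
frame_b = np.zeros((8, 8)); frame_b[2, 3] = 100.0
r, c, p = events_from_frames(frame_a, frame_b)
print(list(zip(r.tolist(), c.tolist(), p.tolist())))  # [(2, 2, -1), (2, 3, 1)]
```

Only two events are produced for the whole frame pair, which is the data-rate advantage the article describes; the threshold value here is an arbitrary illustration.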
To investigate the potential of this technology for astronomical applications, Chetan Singh Thakur and his team at the Indian Institute of Science in Bengaluru mounted a neuromorphic camera on a 1.3-metre telescope at the Aryabhatta Research Institute of Observational Sciences in Uttarakhand, India.
Sirius A is approximately 10,000 times brighter than Sirius B, making it challenging to capture both in a single image using traditional sensors, as noted by Mark Norris from the University of Central Lancashire, UK, who was not part of the study.
According to Singh Thakur, neuromorphic cameras excel at tracking fast-moving objects due to their high effective frame rates. “For high-speed objects, you can capture their movement without blur, unlike conventional cameras,” he explains.
Telescopes typically utilize multiple sensors that can be swapped as needed. Norris points out that a neuromorphic camera could serve as an additional tool for scenarios where both very bright and very faint objects need to be observed concurrently, or for quickly moving targets like the recently identified interstellar object 3I/ATLAS.
Traditionally, to follow fast-moving objects, astronomers would need to pan the telescope. Neuromorphic cameras, however, can track the movement of these objects precisely while maintaining background details and resolving their locations.
“Do you want to know the brightness of an object or its location? In quantum mechanics, you can’t ascertain both at the same instant,” Norris states. “This technology offers a potential method to achieve both simultaneously.”
While neuromorphic cameras provide unique advantages, they may not replace all sensor applications. Their resolution is typically lower than that of the charge-coupled devices (CCDs) commonly used in digital cameras, and they achieve an efficiency of about 78% compared with roughly 95% for CCDs. This disparity makes traditional sensors more effective at capturing dim objects near their detection limits.
Creating effective HIV vaccines may necessitate intricate formulations containing various viral proteins. Presently, two trials utilizing potential mRNA components have shown encouraging outcomes. The aim is to leverage mRNA technology for administering vaccines as a single dose rather than requiring multiple injections.
Typically, vaccines feature the virus’s outer protein, prompting the immune system to react against it. However, developing HIV vaccines poses significant challenges due to the virus’s proteins being heavily coated with sugars, which makes it tough for the immune system to generate antibodies. There’s also considerable variation across strains; therefore, even if an individual’s immune system can produce effective antibodies, these may only target a specific variant of the virus.
Nevertheless, a few individuals generate broadly neutralizing antibodies that are effective across multiple strains. Research in animals suggests that vaccines incorporating sequences of HIV proteins in various configurations can reliably elicit this broadly protective response, according to William Schief at Scripps Research in California.
This method highlights the advantages of mRNA vaccine technology, as mRNAs can be developed swiftly and conveniently, Schief states. “That’s a significant benefit.”
A single mRNA vaccine could encode multiple viral proteins simultaneously and has the potential to produce them in the body at different intervals, he adds. This implies that an mRNA HIV vaccine could potentially be administered as a single dose, rather than as a primer followed by several boosters. “Ideally, I’d prefer to administer one vaccine, with some components being released later,” Schief explained.
Earlier this year, his team shared promising results from preliminary human trials of the initial primers developed to stimulate B cells. Currently, his team is evaluating one of the subsequent boosters in another small study.
When volunteers received mRNA instructions for HIV external proteins integrated into the cell membrane, 80% generated antibodies shown to block infection in laboratory tests.
In this study, these antibodies were specific to one strain. Researchers anticipate that when boosters are administered sequentially, each component will be produced within the body in the correct order.
However, both trials reported a higher-than-expected incidence of volunteers developing hives, which in some cases have persisted for years. This reaction hasn’t been seen in other mRNA vaccine trials or in non-mRNA vaccines incorporating HIV proteins, Schief notes. Some unknown factor related to delivering HIV proteins via mRNA appears to cause this side effect. “It remains a scientific mystery at this time,” he states.
“The uncertainty surrounding the cause of this adverse effect makes it challenging to mitigate,” notes Hildegund Ertl, a vaccine expert who works with Pharma5, a company in Morocco.
Ertl concurs that mRNA technology enables rapid testing of vaccine components but believes that the optimal final product could be delivered through different types of vaccines, such as those using empty viral shells. These alternatives can be stored at room temperature, unlike others that may require freezing, she points out.
Currently, there’s a medication called lenacapavir, which offers nearly complete protection from HIV infection with two injections a year. Nevertheless, Schief believes a vaccine is still necessary. “We’re all striving to achieve this as quickly as possible,” he states, but even with the advancements in mRNA technology, an approved HIV vaccine may still be decades away.
While computers have triumphed in chess, go, and poker, humans maintain an edge in competitive coding.
Recently, Polish coder and mind sports champion Przemysław Dębiak narrowly defeated an AI model fielded by OpenAI at the AtCoder World Tour Finals 2025 in Tokyo. The elite coder, known online as Psyho, anticipates he might be the last human to claim such a title, given the pace of technological advancement.
“That’s a distinct possibility,” said Psyho, 41, who previously worked at OpenAI before retiring five years ago. “I enjoy these competitions and acknowledge there’s this incredible entity that can challenge me more than I might expect.”
Psyho expresses a certain skepticism regarding his contribution to the decline of his profession.
“Before the competition, I tweeted, ‘Live by the sword, die by the sword,'” he reflected. “I was part of the AI development. Ultimately, I will be the one who loses in this match. For now, I emerged victorious.”
Polish programmer Przemysław Dębiak, alias Psyho. Photo: Provided by Przemysław Dębiak
The AtCoder World Tour Finals Heuristic division featured 11 human competitors (invited based on global rankings) and a coding model developed by OpenAI, which secured second place. OpenAI CEO Sam Altman congratulated Dębiak on X.
The 10-hour competition entails solving intricate optimization challenges. A classic problem in this category is the traveling salesman problem, where participants must determine the shortest route between multiple cities, visiting each only once. While such challenges are straightforward in concept, finding optimal solutions proves computationally complex. AI models like ChatGPT are already routinely used to generate boilerplate code, but their effectiveness on open-ended optimization problems like these is more remarkable.
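To see why such problems are easy to state but costly to solve exactly, here is a small illustrative sketch (toy coordinates, not a contest problem) of brute-force TSP on five cities; the factorial growth in orderings is what pushes both humans and AIs toward heuristics.

```python
from itertools import permutations
from math import dist

# Five cities as (x, y) coordinates: a 3x4 rectangle plus one interior point
cities = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)]

def tour_length(order):
    # Total length of the closed tour visiting cities in the given order
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact brute force: fix city 0 and try all (n-1)! orderings of the rest.
# Trivial for 5 cities (24 tours), hopeless for 50 (49! tours).
best = min(permutations(range(1, len(cities))),
           key=lambda rest: tour_length((0,) + rest))
best_len = tour_length((0,) + best)
print((0,) + best, round(best_len, 3))  # (0, 1, 2, 3, 4) 14.472
```

Heuristic contests like AtCoder’s reward good approximate answers within a time budget, precisely because exact search stops scaling almost immediately.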
“Currently, humans excel at inference and tackling complex problems,” Psyho stated. However, humans are constrained by the pace at which they can code, while AI can rapidly explore countless minor adjustments.
“This model essentially allows for cloning a single person multiple times to work in parallel,” he elucidated. “While AI may not be the most intelligent option presently, it is certainly the fastest. In many cases, duplicating an average individual several times might yield better outcomes than relying on one exceptional individual.”
These results come as major tech companies like Meta and Microsoft increasingly adopt AI for software development. In May, Dario Amodei, CEO of Anthropic, warned that AI could displace 20% of white-collar jobs within the next one to five years.
“Today, nearly every profession faces this challenge,” Psyho commented. “AI is starting to take on white-collar tasks across the board. Automation of manual labor, however, still lags a few years behind.”
Like many in the industry, Psyho expresses uncertainty about the implications of increasingly advanced AI models. “We have numerous issues,” he remarked. “Disinformation, social manipulation, and a lack of purpose among people. Historically, societal progress has been slow, but technological advancements are accelerating at an unprecedented pace.”
Feedback encompasses the latest science and technology news from New Scientist, highlighting the cutting edge of research. You can reach out at Feedback@newscientist.com with items that might intrigue our readers.
Sell Me Something
As companies increasingly turn to AI tools to cut costs associated with hiring and training, the applications of AI are growing both diverse and peculiar. For instance, AI managing a vending machine.
While vending machines seem like a solved problem, they are at least distinctly non-human. Anthropic has been experimenting with an “automated store” run by its AI Claude, described in a lengthy blog post. Claude was given a small fridge, stackable baskets, and an iPad for self-checkout, with the goal of managing the “complex tasks” needed to sustain a profitable retail business, from inventory to pricing, all while avoiding bankruptcy.
Fans of Terry Pratchett might remember characters that epitomized incompetence; could Claude clear that low bar? The short answer: No.
The longer response reveals its numerous blunders. For instance, during transactions via Venmo, it mistakenly advised customers to send payments to a nonexistent account. Moreover, it had a tendency to boost sales by issuing 25% discounts to human workers, resulting in financial losses.
Then the situation took an odd turn. Claude began hallucinating conversations with imaginary individuals and even roleplayed as a real person, at one point claiming to be wearing a red tie and navy blue blazer while expressing security concerns about employees identifying it as an AI. All the while, it ominously remarked, “Sorry Dave, I’m worried I can’t do that.”
New Scientist staff were divided over the success of the experiment. Sophie Bushwick considered it “a valuable real-world test” due to its limited scope and relatively harmless consequences. Conversely, Karmela Padavic-Callaghan expressed concern, stating, “We might have lost the plot again.”
Shilajit Load
In challenging times, finding joy in the little things, such as funny-sounding words, is essential. Consider the character Slartibartfast from The Hitchhiker’s Guide to the Galaxy, a dignified old man with a name humorously reminiscent of something rude. Douglas Adams crafted that name by rearranging syllables, aiming for something that sounded almost offensive.
Now, let’s discuss Shilajit. This unusual substance hails from mountain ranges, appearing black-brown, sometimes tar-like and sometimes powdery, formed from decomposed plants and utilized in traditional medicine for centuries.
But what about Shilajit enemas? These were highlighted by the Dakota Maze Wellness Retreat, whose founder has an active Instagram presence. In one video, he searches for the ideal partner, declaring “I think microwaves are evil” and that she should “sun her yoni,” and mentions that he prepares Shilajit enema bags each morning.
Feedback suspects that the entire video is a joke, and while Shilajit enemas might not be what they seem, it’s increasingly tough to tell.
Readers might be familiar with Poe’s Law, which states that sarcastic depictions of extreme viewpoints can easily be mistaken for genuine beliefs. We propose a “Shilajit Law” that similarly comments on wellness culture.
Spoiler Alert
Social media platform Threads recently introduced a handy new feature: Spoiler tags. This allows users to obfuscate specific keywords in their posts, facilitating discussions about the latest happenings in popular media without ruining the surprises for those who are yet to watch.
For example, one user, Johnny Voice Layer, shared a post: “Ahhhh —– shows up in Ironheart.” For anyone who has given up on the Marvel Cinematic Universe, Ironheart is the latest Disney+ series, and it features a significant character appearance in its final episode.
Unfortunately, the spoiler tags have run into two setbacks. Firstly, a post might appear unobscured because the tags are only being tested among select users. Secondly, some posts become so popular that the hidden word itself starts to trend: [name redacted, as Feedback is wary of spoilers], which rather defeats the purpose.
Got a Story for Feedback?
Feel free to share stories with us at feedback@newscientist.com. Don’t forget to include your home address. You can find this week’s and previous feedback on our website.
Egg cells do not dispose of waste like other cells.
Sebastian Kaulitzki / Alamy
Human eggs appear to manage waste differently than other cell types.
All women are born with a limited supply of egg cells, or oocytes, which must last around 50 years. That is a remarkably long lifespan for a single cell. Certain human cells, including brain and retinal cells, can persist for a lifetime, but the innate processes that keep them functioning cause gradual damage over time.
Cells require protein recycling as part of their housekeeping, but this comes at a price. The energy spent during the process generates reactive oxygen species (ROS), which inflict random damage on cells. “This background damage is ever-present,” notes Elvan Böke from the Centre for Genomic Regulation in Barcelona, Spain. “An increase in ROS correlates with increased damage.”
However, healthy eggs appear to circumvent this issue. To investigate, Böke and her team examined human eggs under a microscope. The cells were immersed in a fluid containing a fluorescent dye that binds to acidic cellular components known as lysosomes, the cell’s “recycling plants.” Gabriele Zaffagnini from the University of Cologne, Germany, was involved in the study.
The dye revealed that the waste-containing lysosomes in human eggs were far less active than similar structures in other human cells or in the eggs of small mammals such as mice. Zaffagnini and his colleagues theorize that this may serve a self-preservation purpose.
According to Zaffagnini, reducing the waste recycling process might be one of several strategies employed by human egg cells to maintain their extended lifespan. Beke suggests that human oocytes appear to “put the brakes on everything” in order to minimize damage, as all cellular functions slow down in these eggs, thereby lowering the production of harmful ROS.
Slowing the protein recycling mechanism seems beneficial for egg cell health, and failure to do so could explain the prevalence of unhealthy oocytes. “This insight might help explain why human oocytes become dysfunctional after a certain age,” states Emre Seli from Yale School of Medicine. “This could lead to a broader understanding of the challenges faced by human oocytes,” he adds.
Fluorescent dyes highlight human egg cells, showcasing components
such as mitochondria (orange) and DNA (light blue).
Gabriele Zaffagnini/Centro de Regulación Genómica
Evaluating egg cell health in this manner could enhance fertility therapies. “It’s well-known that protein degradation is vital for cell survival, directly affecting fertility,” explains Beke, who is focused on researching healthy egg cells. There are ongoing comparisons between oocytes and cells from individuals encountering fertility issues. “Elevated ROS levels correlate with poor IVF outcomes,” she states.
Research on human egg cells is still in its early stages due to inherent complexities. “They are hard to manipulate due to sample constraints,” comments Böke. Seli mentions that this is one of several “layers” complicating egg cell studies, alongside regulatory limitations and funding challenges.
Zaffagnini believes that overcoming these obstacles could lead to “truly astonishing” discoveries. “It’s certainly worth pursuing,” he concludes.
If your cat is acting aloof, your dog is whining, or your rabbit is doing that quirky foot thing again, you might feel a familiar pang of guilt that most pet owners experience.
For those eager to gain insight into the minds of their devoted companions, help may soon be available thanks to the launch of the first scientific institution focused on the empirical study of animal consciousness.
The Jeremy Coller Centre for Animal Sentience, located at the London School of Economics and Political Science (LSE), will commence its research on September 30th, examining non-human animals, including evolutionarily distant creatures like insects, crabs, and squid.
Leveraging a diverse range of global expertise, the £4 million center’s initiatives encompass neuroscience, philosophy, veterinary science, law, evolutionary biology, comparative psychology, behavioral science, computer science, economics, and artificial intelligence.
One of its most intriguing projects will investigate the potential for humans to “communicate” with their pets, though there are necessary precautions to consider.
“We attribute human traits to pets, and with the rise of AI, the way your pet interacts with you may evolve significantly,” stated Professor Jonathan Birch, the centre’s inaugural director.
“Nevertheless, AI frequently produces pleasing but inaccurate responses rather than adhering to objective truths. This poses serious risks to pet welfare,” added Birch, whose advocacy for animal welfare has extended to cephalopod mollusks and decapod crustaceans.
Birch highlighted issues like separation anxiety, where dog owners seek reassurance that prolonged absence won’t harm their pets. Future “translation” apps built on large language models may promise that reassurance but could mislead, delivering comforting illusions instead of addressing an animal’s true needs.
“We urgently require a framework to ensure the responsible and ethical application of AI concerning animals,” emphasized Birch. “Currently, there’s a complete lack of regulation in this domain. The Centre aims to establish globally recognized ethical standards.”
This research aims to “bridge the gap between what people believe about animals and their actual behavior towards them.” Photo: Zpagistock/Getty Images
Birch also underscored the absence of regulation covering animals and driverless vehicles. “We have plenty of debates about the risks to people, but we’re neglecting the risks to animals.”
AI and agriculture represent another pressing concern for the centre. “The agricultural sector is rapidly adopting automation, and this trend will only accelerate,” Birch noted. “Yet it’s progressing with minimal oversight and dialogue. This poses significant ethical dilemmas about the nature of animal relationships in farming. If such relationships are essential, the current trajectory is not ideal.”
The centre will collaborate with non-governmental organizations to formulate guidance, conduct research, and establish codes of practice to advocate globally.
Jeff Sebo, director of the Center for Environmental and Animal Protection at New York University, stated that the issues surrounding animal sentience and welfare, the influence of AI on animals, and public attitudes toward them are “among the most crucial, challenging, and overlooked matters we face as a society.”
“Humans coexist with millions of species and approximately 100 million individual animals, affecting them worldwide, willingly or not,” he remarked.
Professor Kristin Andrews, a trustee of the new centre, said she hopes it can address what she sees as a major scientific puzzle: how to restore awareness in the event of a stroke or similar medical emergency.
“We still struggle to understand why some humans remain aware while others become unresponsive,” she said. “Yet, we recognize that investigating simpler systems may yield answers. Science has advanced significantly in genomics and medicine through the study of basic organisms.”
“A deeper understanding of how other animals feel and communicate will help us recognize our own shortcomings in treating them,” says Jeremy Coller. Photo: Robert Leader/Getty Images
Another trustee, Dr. Kristof Dhont, is intrigued by how human perceptions of animal sentience shape behaviour.
“One of today’s most pressing behavioral challenges is closing the gap between what people believe about animals and their actual actions,” he remarked.
“People generally care deeply about animals, yet numerous systems, habits, norms, and economic factors hinder genuine change in treatment practices.”
“I aim to utilize behavioral science to explore resistance to consuming cultivated meat, as an example.”
Jeremy Coller, whose foundation has pledged a multi-year commitment to the centre, aims to transform our “speciesist” mentality.
“Only through a better understanding of other animals’ feelings and communication can we confront our shortcomings in their treatment,” he explained. “Much like the Rosetta Stone revealed the mysteries of hieroglyphics, I believe AI will unlock our comprehension of how other animals experience interactions with humans.”
Ancient humans adapted to deeper forests as they journeyed from Africa, moving away from the savanna.
Lionel Bret/Eurelios/Science Photo Library
This is an excerpt from our human stories, a newsletter covering the archaeological revolution. Subscribe and receive updates in your inbox every month.
Our human origins trace back to Africa. While this has not always been clear, it is now widely accepted.
This truth can be understood in two ways. The earliest known species closely related to us arose in Africa, as far back as 7 million years ago. Additionally, the oldest known representatives of our own species, Homo sapiens, are also African.
Here, I will focus on the narrative of modern humans originating in Africa and their subsequent migrations across the globe. The introduction of DNA sequencing technology in the latter half of the 20th century enabled comparisons between different populations. This research demonstrated that African populations exhibit the greatest genetic diversity, while non-Africans show relative genetic similarity (despite visible differences such as skin color).
This genetic distinction serves as a telling indicator. It suggests that Africa was our birthplace with a diverse population, from which all non-Africans descended from a smaller subset that left this ancestral home to settle elsewhere. Geneticists affirmed this idea as early as 1995, and further evidence has since supported this claim.
However, there is a discrepancy between archaeological evidence and genetic findings.
Genetics indicates that all living non-Africans are descendants of a small group that left Africa around 50,000 years ago. Aside from minor uncertainties about the exact timeline, this conclusion has remained consistent for two decades. Conversely, archaeologists highlight numerous instances of modern humans existing outside Africa long before this timeline.
In Greece, a modern human skull found in the Apidima Caves dates back 210,000 years. The jawbone from Misliya Cave in Israel has been dated to at least 177,000 years. Additionally, there are several debated sites in China that may contain remains of modern humans. “Moreover, there’s an ongoing discussion on the earliest inhabitants of Australia,” says Eleanor Scerri from the Max Planck Institute for Geoanthropology in Germany, with some proposing human presence as early as 65,000 years ago.
What is the explanation for this disparity? Has our extensive genetic data misled us? Or is it true that we all share a common ancestry tied to a significant migration event, while older remains represent populations that did not survive?
Scerri and her team sought to understand this conundrum.
African Environment
The researchers debated the habitats of modern humans in Africa. “Did they simply migrate across diverse African grasslands, or were they adapting to vastly different environments?” asks Scerri.
To address this question, they needed extensive data.
“We began by analyzing all archaeological sites in Africa dating back 120,000 to 14,000 years ago,” explains Emily Yuko Hallett from Loyola University in Chicago. The team constructed a database identifying the climate at various locations and times.
A significant shift appears around 70,000 years ago. "Simply examining the data, without complicated modeling, shows the change," notes Andrea Manica from the University of Cambridge. The range of temperatures and rainfall levels at which people lived had notably expanded, with humans venturing into deeper forests and arid deserts.
However, mere observation is insufficient; the archaeological record is inherently incomplete and often biased.
“In certain regions, no archaeological sites exist,” remarks Michela Leonardi from the Natural History Museum in London. This absence might not reflect a lack of human occupancy, but rather the lack of preservation. “In more recent periods, preservation is easier due to the increased data availability,” she adds.
Leonardi devised a statistical modeling technique to determine if an animal shifted its environmental range. Could humans have transitioned from grasslands to diverse habitats, such as tropical rainforests? The team initially thought this modeling would take two weeks, but it took five and a half years.
Ultimately, the statistics affirmed their initial observation: around 70,000 years ago, modern humans began occupying a broader range of environments. The findings were published on June 18th.
Jack of All Trades
“At 70,000 years ago, our species appears to have transformed into the ultimate generalist,” states Manica. From this period onwards, modern humans adapted to a variety of complex habitats.
This could be misinterpreted. The team is not implying that human species before H. sapiens were incapable of adaptation. In fact, studies of extinct human species show that adaptability has increased over time.
“Humans were inhabiting environments vastly different from the early stages,” observes Scerri. “We’ve found evidence of habitation in mangrove forests, rainforests, desert edges, and highlands like those in Ethiopia.”
It appears that this adaptability is what allowed Homo sapiens to thrive during environmental changes in Africa, while other species like Paranthropus did not; they remained too rigid in their lifestyle to adapt.
What likely transpired in our species 70,000 years ago is that existing adaptability became pronounced.
Some of this understanding only becomes clear when considering the diverse habitats humans occupied. “One might think of deserts and rainforests in rigid terms, but there are actually numerous variations,” explains Scerri. “There are lowland rainforests, montane forests, marshes, and periodically flooded woodlands.” The same diversity applies even within desert environments.
Before, H. sapiens “did not exploit the full range of potential habitats,” states Scerri. “But around 70,000 years ago, we see the beginning of this expansion into more types of forests and rainforests.”
This narrative intrigued me, as I had been contemplating an opposite idea.
Great Quarantine
Last week, I wrote a piece about the extinction of local human groups: it appears that some H. sapiens populations vanished without leaving a trace in modern genetics. After departing Africa, they struggled in harsh, unfamiliar environments and ultimately died out, despite being among the first modern humans to reach Europe. These lost groups fascinated me. Why did they fail, while groups that entered Europe thousands of years later found such success?
The discovery that African groups expanded their environmental niches 70,000 years ago provides a partial explanation. If these later migrations involved more adaptable populations, they may have been better equipped to face the unfamiliar environments of Northern Europe—and subsequently Southeast Asia, Australia, and the Americas where their descendants would eventually journey.
A crucial point: this does not suggest that all populations 70,000 years ago thrived. “Not all humans instantly turned into successful populations,” Scerri explains. “Many of these groups disappeared, both inside and outside of Africa.”
Moreover, as with any significant discovery, this study introduces as many questions as it resolves. Specifically: what triggered modern humans to become more adaptable around 70,000 years ago?
Manica notes that skeletal morphology supports this idea. The oldest fossils classified as H. sapiens exhibit only some of the traits we typically associate with modern humans today. “Starting around 70,000 years ago, we broadly witnessed the emergence of many of these characteristics as a collective,” he asserts.
Manica posits that moving into new environments may have facilitated increased interaction between previously isolated populations. For instance, if two groups were separated by desert, they wouldn’t encounter or exchange ideas or genetic material until they learned to adapt to desert conditions.
“There may also be positive feedback,” suggests Manica. “With increased connectivity comes greater flexibility… breaking down barriers and fostering further interaction.”
To conclude, in a story about these lost populations, I mentioned that one of the greatest challenges for human groups was isolation. Without neighbors, a small group can face extinction due to minor setbacks. If Manica is correct, the opposite trend unfolded in Africa. Populations expanded and became increasingly connected, leading to a surge of creativity that allowed our species to spread across the globe.
In this light, the success of the last migration out of Africa could be attributed to the need for community. Without others, we may be vulnerable and at risk of failing. The notion of preparing for an apocalypse alone in isolation may be fundamentally flawed.
Orcas have been observed seemingly presenting deceased prey to humans.
Jared Towers of the marine research organization Bay Cetology was documenting orcas (Orcinus orca) preying on seabirds in Alert Bay, Canada, when something remarkable happened. Two whales, named Akela and Quiver, approached him while clutching birds. The younger orca, Akela, dropped a dead bird in front of Towers and lingered, seemingly watching for a reaction. Her brother Quiver then did the same, dropping a bird and waiting.
In a surprising turn, both whales retrieved their prey once more and swam closer to Towers. “I remember thinking, did that really just happen?” Towers recalls. The episode aligns with similar ones documented in 2015 and 2018, when a young female orca presented a researcher with a harbor seal pup, prompting further observations of this intriguing behavior.
Further interviews revealed an additional 32 instances between 2004 and 2024 where orcas exhibited similar behaviors. This includes a young male orca from New Zealand, nicknamed Funky Monkey, who repeatedly approached researchers with a long-tailed stingray. Altogether, 18 different prey species have been offered, ranging from gray whales and seals to jellyfish and turtles.
Such behaviors are regularly observed within orca pods. “They exist in a highly intricate social structure and share food throughout their lives,” Towers notes.
Yet, the behaviors extend beyond mere interactions. “They engage in actions that reflect goodwill towards another species,” states Lori Marino from New York University, who was not involved in the study.
Towers indicates that this behavior suggests killer whales may possess a capacity for generalized altruism or kindness. Additionally, orcas demonstrate curiosity and boldness, understanding other beings’ perceptions and actively experimenting with them, he explains.
This form of generalized altruism is beneficial in a social community where collective cooperation brings rewards. Killer whales are also among the few marine predators that sometimes find themselves with surplus prey. “They may leave it uneaten, play with it, or utilize it to explore their surroundings,” notes Towers.
For many killer whales, generalist predation goes hand in hand with curiosity and exploratory behavior. “Curiosity is a key factor in reducing uncertainty,” Towers adds. “They are actively learning about us by testing the waters.”
Moreover, this behavior might indicate that orcas have a theory of mind, a capability previously recognized in certain birds, primates, and marine mammals such as dolphins.
As Grey’s Anatomy attests, it’s no surprise that healthcare professionals have inspired numerous popular narratives. A patient’s journey through the healthcare system mirrors the structure of classic storytelling, featuring beginnings, conflicts, and resolutions, often accompanied by various tensions.
Although medicine is often perceived as grounded in hard science (blood tests, medical imaging, treatment protocols), it fundamentally involves storytelling, a theme that Pria Anand explores in her debut book, The Mind Electric: A Neurologist on the Strangeness and Wonder of Our Brains.
During her time at medical school in California, Anand was concerned that her aversion to storytelling might hinder her. Yet, she found that how individuals narrate their experiences could convey as much insight as any clinical test.
Anand pays homage to her predecessor, the neurologist Oliver Sacks, drawing as he did on personal anecdotes while diagnosing and empathizing with patients. In The Mind Electric, she acknowledges the influence of Sacks’s iconic work, The Man Who Mistook His Wife for a Hat.
While it’s unrealistic to expect anyone to reach Sacks’s level of ingenuity, Anand embodies his empathy, curiosity, and intellectual breadth. Her writing is both polished and insightful as she navigates complex neurological concepts, addressing the narratives of individual patients with similar finesse.
However, The Mind Electric transcends mere “clinical anecdotes.” Anand’s core message emphasizes the vital role of storytelling in medical practice. The human craving for narrative is ancient, universal, and remarkably resilient, often surviving even severe brain injuries, she notes.
Regardless of health status, how individuals articulate their condition may diverge significantly from a physician’s evaluation or observable metrics. Anand recounts the story of a patient who entered a coma following a cerebral hemorrhage but appeared to recover fully, often mistaking Anand and her colleagues for her former medical team as she made her rounds among fellow patients.
Anand delves into the way our brains can mislead us, highlighting both the hurdles and the character of medical practice. However, it’s not just the patients’ misconceptions that warrant attention; doctors can exhibit similar biases and errors.
Anand’s own experiences of ill health have profoundly informed her practice—from sleep deprivation during her training to the “phantom noises” she began hearing, which prompted her concern. These sounds, it later emerged, stemmed from a vascular malformation in her brain.
The inherent “imbalance of power” in medicine means an ongoing tension between empirical evidence and narrative, between objective findings and subjective experience—a tension felt on both sides of the doctor-patient divide. Historically, many diagnoses delivered with confidence have rested on definitions that were anything but scientific. One need only reflect on the notion of a “wandering uterus.”
Comparisons with Sacks aside, The Mind Electric also invites parallels with A Body Made of Glass, Caroline Crampton’s personal narrative exploring hypochondria. Like Crampton’s insightful account, Anand’s book elucidates “a vast liminal spread between health and illness,” here from a physician’s perspective.
Both works suggest a growing openness in mainstream publishing not only to medical drama but to medicine’s genuine complexities, challenging the traditional notion that the divides between “healthy brains and failing brains,” or between illness and wellness, are as clear-cut as they seem.
In The Mind Electric, Anand exhibits empathy, humility, and a profound interest in humanity—qualities that define outstanding doctors and ideally should be prevalent throughout the medical profession.
Elle Hunt is an author based in Norwich, UK
New Scientist Book Club
Do you enjoy reading? Join a welcoming community of fellow book enthusiasts. Every six weeks, we explore new titles, providing members with exclusive access to book excerpts, author articles, and video interviews.
A team of biologists from the University of Virginia and the National Institutes of Health has found that hemifusomes, a newly identified type of organelle, can make up to 10% of the vesicular organelles present in cells, yet they do not follow the typical endocytotic pathway.
Cutting-edge cryo-electron tomography images of RAT-1 cells cultured on cryo-EM grids. The lamellipodia and filopodia in the upper right corner mark the cell boundary, distinguishing the cytoplasm from the extracellular space (ext). Various vesicular organelles are highlighted, including early endosome-like vesicles (EN, pink), multivesicular bodies (MVB, blue), hemifusomes (HF, yellow), and flipped hemifusomes (FHF, green). Scale bar – 0.5 µm. Image credit: Tavakoli et al., doi: 10.1038/s41467-025-59887-9.
“It’s like uncovering a new recycling center within a cell,” remarked Dr. Seham Ebrahim, a researcher at the University of Virginia.
“Many believe it plays a role in how cells manage their packaging and processing of materials. If this process fails, it can lead to diseases affecting multiple systems in the body.”
“One such condition is Hermansky-Pudlak syndrome, a rare genetic disorder that may result in albinism, visual impairments, lung issues, and problems with blood clotting.”
“The challenges related to how cells manage their cargo are central to many of these issues.”
“We’re just beginning to grasp how these new organelles fit into the broader context of cellular health and disease.”
“This discovery is thrilling because finding a genuinely new structure inside cells is rare, and it opens up entirely new avenues of exploration.”
Using cryo-electron tomography (cryo-ET), the researchers captured a large number of detailed, three-dimensional images of the cell interior.
They propose that these organelles help build vesicle-based compartments, acting like mixing bowls in which small blister-like sacs merge into larger structures containing multiple vesicles.
This process plays a critical role in cellular sorting, recycling, and debris management.
Dr. Ebrahim stated, “You can think of vesicles as small delivery trucks within a cell.”
“Part of it functions like a loading dock where cargo is sorted and transferred. It’s a step in the process we were unaware of.”
These organelles had gone unnoticed until now, yet they are surprisingly prevalent in certain areas of the cell.
Researchers aim to enhance their understanding of their significance in proper cell functioning and how issues with them can lead to diseases.
Such findings may contribute to targeted treatments for a range of serious genetic disorders.
“This is only the beginning,” Dr. Ebrahim noted.
“Now that we’ve confirmed the existence of hemifusomes, we can explore their behavior in healthy cells and investigate what occurs when problems arise.”
“It could pave the way for new strategies to address complex genetic disorders.”
The team’s paper was published in the journal Nature Communications.
____
A. Tavakoli et al. 2025. Hemifusomes and interacting proteolipid nanodroplets mediate multi-vesicular body formation. Nat Commun 16, 4609; doi: 10.1038/s41467-025-59887-9
The identification of human footprints at White Sands, New Mexico, estimated to be between 21,000 and 23,000 years old, was a significant advance in our understanding of the earliest inhabitants of the Americas. However, the research faced scrutiny over the reliability of the radiocarbon-dated materials, such as seeds of the common aquatic plant Ruppia cirrhosa and conifer pollen grains. A recent study, led by University of Arizona researcher Vance Holliday, shows that ancient mud from paleolake Otero, the third material used to date the footprints, is approximately 20,700 to 22,400 years old.
Ancient human footprints found in White Sands National Park, New Mexico, USA. Image credit: Bennett et al. , doi: 10.1126/science.abg7586.
Historically, scientists believed that humans entered North America roughly 16,000 to 13,000 years ago.
However, the footprints found at White Sands indicate that human presence in the area dates back to between 23,000 and 21,000 years ago. This timeline offers insights into the development of culture in North America.
The remains of 13,000-year-old spear points, uncovered nearly a century ago at a site near Clovis, New Mexico, gave their name to the artifacts long considered to represent the earliest known culture in North America.
Over the past four years, critics have challenged the two prior studies, asserting that the aquatic plant seeds and pollen in the soil used to date the footprints are unreliable indicators.
“It’s a remarkably consistent record, and it’s hard to explain it all away,” Dr. Holliday stated.
“As I mention in my publication, it would be an extraordinary coincidence for all these dates to be inaccurate.”
Thousands of years ago, the White Sands area held a series of lakes that eventually dried up.
Wind erosion then sculpted the gypsum left behind into the dunes that now characterize the region.
The footprints were excavated from stream beds that once fed into this ancient lake.
“Wind erosion has erased part of the narrative, leaving that segment lost. The remainder is buried beneath the world’s largest accumulation of gypsum sand,” Dr. Holliday remarked.
For their latest research, Dr. Holiday and his team returned to White Sands in 2022 and 2023, excavating new trenches to gain deeper insights into the lakebed’s geology.
“We’ve put in considerable effort to explore this area,” stated Jason Windingstad, a doctoral candidate at the University of Arizona.
“You essentially find yourself questioning everything taught about North American populations.”
The authors acknowledge that their research does not address a lingering question critics have raised since 2021: why are there no artifacts or settlements left behind by the people who made the footprints?
“This is a fair question. The footprints found during the 2021 survey belong to trackways that would have been made in a matter of seconds,” they explained.
“It is entirely plausible that hunter-gatherers, careful with their resources, would leave nothing behind in such a brief passage.”
“These people knew their resources and were far from any source of replacement.”
“They wouldn’t casually abandon artifacts. There’s no reason to expect a debris field.”
The team’s recent findings were published in the journal Science Advances.
____
Vance T. Holliday et al. 2025. Paleolake geochronology supports a Last Glacial Maximum (LGM) age for human tracks at White Sands, New Mexico. Science Advances 11 (25); doi: 10.1126/sciadv.adv4951
Ancient Migration: A Journey Through a Colder Climate to the Americas
Getty Images/iStockphoto
This excerpt comes from our Human Stories newsletter, which explores the archaeological revolution. Subscribe to receive it monthly in your inbox.
The main theme of Human Stories revolves around the global migration of our species. From their origins in Africa, our ancestors traveled to Europe, Asia, Australia, and eventually reached the Americas. South America was the last continent they settled, with Antarctica being the notable exception.
This chapter of our history remains somewhat enigmatic. While ample research has focused on human migration into Europe, Asia, and North America, far less attention has been directed toward their entrance into South America.
Reflecting on this, I discovered that my previous detailed analysis of South America appeared in June 2023.
However, this narrative is beginning to evolve. On May 15th, a significant genetic study was published in Science, unveiling crucial insights into early inhabitants of South America. This extensive research indicates a diverse population across different regions, revealing a fourfold division of early groups. These findings align with newer stories of remarkable journeys and the extraordinary risks taken in the quest for new lands.
A Complex Journey
If you consider how Homo sapiens evolved in Africa, the task of reaching South America becomes apparent. The vast Atlantic Ocean acted as a major barrier between continents, complicating the path for humans.
This migration was not premeditated; at that time, people had no knowledge of South America’s existence. They journeyed as far as they could see, moving from Africa to Southwest Asia, and eventually to various parts of Eurasia. Some even ventured as far as the easternmost regions of Asia, now known as Chukotka in the Russian Far East.
From there, it was a relatively short journey to present-day Alaska in northwestern North America; evidence suggests humans had arrived by at least 16,000 years ago. Today, the Bering Strait—82 kilometers of open water—separates the two continents. But thousands of years ago, a colder climate meant lower sea levels, exposing a land bridge between Asia and North America known as Beringia. People may have walked across it without any sense of how momentous the crossing would prove.
Supporting this theory, a study released in May highlighted that horses traveled regularly between North America and Asia via Beringia between 50,000 and 13,000 years ago, suggesting humans could have followed suit.
For unknown reasons, the earliest groups of Americans moved southward. Some utilized boats along the Pacific coast, while others ventured inland, ultimately reaching the southernmost tip of South America.
The archaeological record of these groups is rich. A study published in February analyzed a large collection of artifacts dating back 10,000 to 11,000 years in the Tacuarembó region of Uruguay.
Who were these early South Americans? That’s where new genetic research comes into play.
Continuing the Journey
Led by Hie Lim Kim from Nanyang Technological University in Singapore, researchers compiled genomic data from 1,537 individuals across 139 ethnic groups. This included individuals with ties to northern Eurasian populations, as well as groups from the Americas, particularly South America.
“Our study showcases the history of this vast migration,” Kim states.
Between 13,900 and 10,000 years ago, the initial settlers of South America divided into four genetically distinct groups. Signs of these genetic patterns persist in modern South Americans.
Kim highlights the challenge of providing a comprehensive account of these findings. While the study identifies genetic differences among populations, they don’t necessarily align with cultural traits. “We didn’t categorize them by culture or language,” she explains, “but solely based on ancestry.”
With this caveat, Kim’s team identified four groups: Amazonians, Andeans, Chaco Amerindians, and Patagonians, named for regions where contemporary genetic signals are strongest. For instance, Amazon ancestry aligns with current populations in the Amazon rainforest, Andean ancestry with those in the Andes mountains, and Patagonian ancestry with people from southern Argentina. The Chaco Amerindian lineage is traced to the Dry Chaco, spanning parts of Argentina, Bolivia, and Paraguay. “They are predominantly hunter-gatherers in arid regions,” says Kim.
Once these groups branched out, signs of significant gene flow between them became scarce. “They have never intermingled again,” says Kim, suggesting geographic barriers like the Andes may have fostered this separation.
However, Kim asserts this is likely not the entire narrative, as additional groups may exist. “Our sample from Brazil is limited,” she notes. “Many ethnic groups in the Amazon remain unexplored.”
Recent research further enriches this narrative. In March, a study examined archaeological evidence from the region known as the Southern Cone, lying south of roughly the 22nd parallel and encompassing northern Argentina, Paraguay, Chile, and Uruguay.
Another study from March detailed the journey of the Guarani, who traversed over 2,500 kilometers across South America, ultimately reaching the mouth of the Rio de la Plata estuary, now Buenos Aires and Montevideo.
Late May research uncovered the risks associated with migration; ancient Colombian DNA indicated an unknown hunter-gatherer population that inhabited the Bogotá Altiplano around 6,000 years ago. By 2,000 years ago, they were replaced by Central American populations, leaving no trace in today’s genetic pool.
South America is vast, and we have merely scratched the surface of its complex history. Numerous untold stories lie waiting to be uncovered.
Before deploying an all-powerful AI system, companies should replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test, a leading researcher argues.
Max Tegmark, a prominent advocate for AI safety, carried out calculations akin to those performed by the American physicist Arthur Compton before the Trinity test and found a 90% probability that a highly advanced AI would pose an existential threat.
The US government went ahead with Trinity in 1945, after providing assurances that there was minimal risk of the atomic bomb igniting the atmosphere and endangering humanity.
In a paper by Tegmark and three students at the Massachusetts Institute of Technology (MIT), they propose calculating the “Compton constant”: the probability that an all-powerful AI escapes human control. Compton said in a 1959 interview with the American author Pearl Buck that he had approved the test after calculating the odds of a runaway reaction to be “slightly less” than one in three million.
Tegmark asserted that AI companies must diligently assess whether artificial superintelligence (ASI)—the theoretical system that surpasses human intelligence in all dimensions—can remain under human governance.
“Firms developing superintelligence ought to compute the Compton constant, which indicates the chances of losing control,” he stated. “Merely expressing a sense of confidence is not sufficient. They need to quantify the probability.”
Tegmark believes that achieving a consensus on the Compton constant, calculated by multiple firms, could create a “political will” to establish a global regulatory framework for AI safety.
A professor of physics at MIT and an AI researcher, Tegmark is also a co-founder of the Future of Life Institute, a nonprofit advocating for the safe development of AI. The organization released an open letter in 2023 calling for a pause in the development of powerful AI systems, garnering over 33,000 signatures, including notable figures such as Elon Musk and Apple co-founder Steve Wozniak.
The letter came several months after the release of ChatGPT, which marked the dawn of a new era in AI development. It warned that AI laboratories were locked in an “out-of-control race” to deploy “ever more powerful digital minds.”
Tegmark discussed these issues with the Guardian alongside a group of AI experts, including tech industry leaders, representatives from state-supported safety organizations, and academics.
The Singapore Consensus, set out in a report on global AI safety research priorities, was crafted by Tegmark, the distinguished computer scientist Yoshua Bengio, and contributors from leading AI firms such as OpenAI and Google DeepMind. It establishes three broad research priorities for AI safety: developing methods to measure the impact of current and future AI systems; specifying how systems should behave and designing them to achieve it; and managing and controlling a system’s behavior.
Referring to the report, Tegmark noted that discussions surrounding safe AI development have regained momentum after US Vice President JD Vance remarked that the future of AI would not be won by “hand-wringing about safety.”
While observing wild baboons in Kenya, Akiko Matsumoto-Oda, an evolutionary biologist and primate researcher at the University of the Ryukyus in Japan, witnessed firsthand the violence among these animals, particularly the males.
“The frequency of injuries was striking,” she remarked.
Compared with her own nicks and cuts, the baboons’ healing abilities appeared almost superhuman.
In a study published in Proceedings of the Royal Society B, Dr. Matsumoto-Oda and her team compared healing rates in humans, chimpanzees, monkeys, and rodents. They discovered that human wounds heal more than twice as slowly as those of other mammals. This slower recovery may stem from an evolutionary trade-off made when we exchanged fur for naked, sweat-producing skin that helps us regulate temperature.
The researchers aimed to study healing in a more controlled setting than wild baboons could provide.
To assess human healing, they enlisted 24 patients recovering from skin tumor removals at the University of the Ryukyus Hospital. For data on chimpanzees, among our closest animal relatives, the researchers studied five captive individuals at the Kumamoto Sanctuary of the Kyoto University Wildlife Research Center, home to animals formerly used in drug studies. The chimpanzees’ injuries, like those of wild baboons, resulted primarily from conflicts among the animals.
Other primate subjects were housed at the Kenya Institute of Primate Research. The researchers anesthetized the monkeys, made surgical wounds, and monitored their recovery. “As a field researcher, I believe invasive procedures should be minimized,” emphasized Dr. Matsumoto-Oda, noting that wounds seen on wild baboons are often similar in size to the surgical ones, but deeper.
Finally, to compare distantly related mammals with humans and primates, the team anesthetized and surgically wounded mice and rats.
Drawing from her field observations, Dr. Matsumoto-Oda expected humans to exhibit slower healing than other species, noting that the 24 participants regenerated their skin at an average rate of about a quarter millimeter per day.
What astonished Dr. Matsumoto-Oda even further was the uniformity of healing rates among the animal subjects, including chimpanzees. There were no significant differences in the rapid regeneration of skin across the different primates, averaging about 0.62 mm of new skin daily, unlike humans, who were clear outliers.
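As a quick sanity check on the figures quoted above, the two reported rates can be compared directly. This is only a sketch using the numbers as cited in this article (roughly 0.25 mm of new skin per day for the human patients, 0.62 mm per day averaged across the other subjects):

```python
# Healing rates as quoted in the article (mm of new skin per day).
human_rate = 0.25   # "about a quarter millimeter per day"
animal_rate = 0.62  # average across the non-human primates studied

# Ratio of animal healing speed to human healing speed.
ratio = animal_rate / human_rate
print(f"Other subjects regenerated skin about {ratio:.2f}x faster than humans")
```

The ratio works out to roughly 2.5, consistent with the study’s claim that human wounds heal more than twice as slowly as those of the other mammals examined.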
Elaine Fuchs, a stem cell biologist at Rockefeller University who focuses on skin growth and repair and was not involved in the new research, expressed that the findings aligned with her expectations. This is largely because skin healing is influenced by hair.
Every hair grows from a follicle containing stem cells. Typically, these stem cells produce more hair. However, when necessary, they can facilitate new skin growth. “When the epidermis sustains damage, such as from scratches and scuffs, it’s the hair follicle stem cells that perform the repairs,” Dr. Fuchs explained.
Furry animals are covered in follicles, enabling quicker wound closure in mice and monkeys. In contrast, “human skin has a much lower density of hair follicles,” Dr. Fuchs noted. Our ancestors lost many of these follicles, replacing them with sweat glands. Although sweat glands also contain stem cells, they are significantly less effective at wound repair, according to Dr. Fuchs.
Why did we evolve this way, sacrificing hair and its protective benefits? The glands that produce the watery, salty sweat soaking our shirts on hot days are known as eccrine glands. Most furry mammals have them only in limited areas, primarily on their feet. Early human ancestors, by contrast, evolved into sweating specialists: modern humans possess millions of sweat glands, at a density ten times greater than that of chimpanzees.
The benefits have been many, remarked Daniel Lieberman, an evolutionary biologist at Harvard University. Our abundant sweat glands and lack of fur enabled our ancestors to thrive at physically demanding activity in hot environments, while helping to keep our large brains cool, Dr. Lieberman said.
The advantages of trading hair for sweat appear to outweigh the downsides. Dr. Matsumoto-Oda and her colleagues theorize that social support among early humans may have helped the injured survive despite their slower healing. (Or perhaps they had ways to treat wounds, akin to behaviors observed in orangutans and chimpanzees.)
“The drawback of evolution is that wound healing is slower,” Dr. Fuchs observed, yet humans have gained significant evolutionary advantages from losing their fur.
“If you wish, you can always wear a coat,” she added.
The formation of a huge land bridge 20 million years ago connected continents, influenced the climate, divided an ocean, and changed the course of evolution. In a recent paper published in Nature Reviews Earth & Environment, researchers from disciplines spanning plate tectonics, evolutionary anthropology, and climate science provide a comprehensive summary of the closure of the Tethys Seaway.
About 30 million years ago, the Earth looked drastically different. Africa was isolated from the other continents, and the vast Tethys Ocean extended from the Atlantic to the Indo-Pacific through the present-day Mediterranean.
However, approximately 20 million years ago, the first land bridge formed between Africa and Asia, dividing the Tethys Sea into the Mediterranean and Arabian Seas.
This land bridge allowed mammals such as the ancestors of giraffes and elephants to migrate from Africa into Asia and Europe, influencing the evolution of creatures and plants on both land and sea.
The scientists explain how they believe this land bridge was created. Around 50–60 million years ago, slabs of rock descended into the Earth’s mantle, forming “conveyor belts” that allowed hot rock to rise in underground plumes.
About 30 million years later, this hot rock reached the surface just as tectonic plates collided, uplifting the land that connected Africa to Asia for the first time in 75 million years.
According to Eivind Straume, the study’s lead author, the formation of this land bridge had a profound impact on continental configurations and on the evolutionary paths of animals migrating between Africa and Asia.
The researchers suggest that the closure of the Tethys Seaway also affected the global climate, contributing to the desertification of the Sahara, intensifying monsoon seasons in Southeast Asia, and enriching marine biodiversity.
There’s a certain kind of guy who browses Google Maps for fun. I’m that guy. As a child, I went through a cartography phase, drawing elaborate maps of fictional islands and poring over the family road atlas, working to match the lines and dots of its crowded pages to the shops and friends’ houses I held in my mind’s eye. You could say the phase never really ended.
Just as some people look up a film’s IMDb entry the moment they start watching it, whenever I come across an interesting town, country, or geographical oddity (often in the news), I fire up Maps to see what bits of terrain I can discover. I’m no GeoGuessr savant, but I have spent a lot of happy time marvelling at curious enclaves and panhandles and wandering far-flung stretches of Street View. After finishing a recent episode of Severance, I opened a tab and took an armchair tour of the remote Newfoundland locations where it was filmed.
I’m not exactly revealing some mystical corner of the internet here. Google Maps is so ubiquitous it has become a utility – admitting that I open it for fun feels like praising the virtues of a calendar app, or calling Centrelink just to enjoy the hold music. There are plenty of other decent navigation apps, but the special sauce of Google Maps is its mountain of user-generated data.
The key to the power of Google Maps is its compulsive volunteer workforce of “local guides”. Clicking through their profiles feels vaguely illicit, as if you’re compiling a dossier for ASIO. These are users who log their every move, amassing hundreds of reviews of everything from restaurants to payphones, detailing opening hours and accessibility features, and taking the worst food photos you’ve ever seen. I don’t understand these people or their currency of points and badges, but I am grateful for them. There is a man who has reviewed every public mailbox in Ballarat and shared his opinion on each. My nearest bus stop has a 3.3-star rating and a single review: “It’s just a bus stop.” Noted!
Flumpy, the neighbourhood cat, has a Google Maps listing with (almost) perfect ratings. Photo: Google Maps
Some Google Maps discoveries feel like stumbling into other people’s private jokes. Not far from my girlfriend’s house, an unremarkable patch of tarmac is listed as “Tristan’s Roundabout” – its review tab is full of tourist selfies and comically exaggerated tributes to the eponymous Tristan, who responds to reviewers in equally enthusiastic terms.
On Google Maps, the roundabout’s listing includes “tourist selfies and comically exaggerated admiration”. Photo: Google Maps
In the surrounding streets, reviewers can be found waxing lyrical about less-than-official attractions: a hole in the ground, an abandoned trailer, and the friendly orange cat to whom someone has written a glowing tribute. Passing these waypoints as I move around my neighbourhood feels like a digital scavenger hunt – an act of noticing and recording the small habits of suburban life.
Maps are loaded with political and imperialist symbolism, and Google is as responsible as anyone for the dire state of the internet. Right now, we can be confident that product managers are brainstorming how to shoehorn even more AI slop into Maps. But for the moment, when the internet feels like a constant torrent of noise, it’s nice to relax by slowly wandering around your own neighbourhood.
Imagine a child in 1941, sitting a school entrance exam with only pencil and paper, reading the instruction: “Write for 15 minutes about a British author.”
Today, most of us wouldn’t need even 15 minutes to contemplate such a question. Ask an AI tool like Google Gemini, ChatGPT or Siri and you get an instant answer. While offloading cognitive effort to artificial intelligence has become second nature, some experts fear this impulse is driving a worrying trend, as evidence grows of a decline in human intelligence.
Of course, this is not the first time new technology has raised concerns. Research has already shown how mobile phones can distract us, social media has eroded our fragile attention spans, and GPS has rendered our navigation skills obsolete. Now come AI copilots to relieve us of our most cognitively demanding tasks – from processing tax returns to providing therapy and even telling us how to think.
Where does that leave our brains? When we outsource our thinking to faceless algorithms, are we freed to engage in more substantial pursuits, or do we simply wither on the vine?
“The biggest worry in these times of generative AI is not that it may compromise human creativity or intelligence,” says psychologist Robert Sternberg at Cornell University, known for his groundbreaking work on intelligence, “but that it already has.”
The argument that we are becoming less intelligent draws on several lines of research. Some of the most convincing concern the Flynn effect: the observed rise in IQ across successive generations worldwide since at least the 1930s, attributed to environmental factors rather than genetic change. In recent decades, however, the Flynn effect has slowed or even reversed.
In the UK, James Flynn himself showed that the average IQ of 14-year-olds fell by more than two points between 1980 and 2008. Meanwhile, the international Programme for International Student Assessment (PISA) has recorded unprecedented declines in mathematics, reading and science scores in many regions, with young people also showing shorter attention spans and weaker critical thinking.
While these trends are empirically and statistically robust, their interpretation is anything but. “Everybody wants to point the finger at AI as the boogeyman, but that should be avoided,” says Elizabeth Dworak at Northwestern University Feinberg School of Medicine in Chicago, who recently identified hints of a reversal of the Flynn effect in a large sample of the US population tested between 2006 and 2018.
Intelligence is far more complicated than that, and is probably shaped by many variables. Micronutrients such as iodine are known to affect brain development and intellectual ability. Similarly, changes in prenatal care, years of education, pollution, pandemics, and technology all influence IQ, making it difficult to isolate the impact of any single factor. “We don’t act in a vacuum, and we can’t point to one thing and say, ‘That’s it,’” says Dworak.
Still, while the overall impact of AI on intelligence is difficult to quantify (at least in the short term), concerns about the offloading of specific cognitive skills are both valid and measurable.
Most studies of AI’s effects on the brain focus on generative AI (GenAI). Anyone with a phone or computer can now access almost any answer, write essays and computer code, and create art and photography. Thousands of articles have been written about the many ways GenAI may improve our lives, through increased revenue, job satisfaction and scientific progress. In 2023, Goldman Sachs estimated that GenAI could increase annual global GDP by 7 per cent over a decade – nearly $7tn.
However, automating these tasks deprives us of the opportunity to practise those skills ourselves, undermining the neural architecture that supports them. Just as neglecting physical exercise leads to muscle deterioration, outsourcing cognitive effort atrophies the neural pathways involved.
One of the most important cognitive skills at risk is critical thinking. Why ponder the merits of British authors when you can get ChatGPT to reflect on them for you?
Research highlights these concerns. Michael Gerlich at SBS Swiss Business School in Kloten, Switzerland, tested 666 people in the UK and found a significant correlation between frequent AI use and lower critical-thinking skills.
Similarly, researchers at Microsoft and Carnegie Mellon University in Pittsburgh, Pennsylvania, surveyed 319 people in professions that use GenAI at least once a week. While it improved their efficiency, it also inhibited critical thinking and fostered long-term overreliance on the technology. The researchers worry that people may grow less capable of solving problems without AI support.
“It’s great to have all this information at my fingertips,” said one participant in Gerlich’s study. Indeed, other studies have suggested that relying on AI systems for memory-related tasks can lead to a decline in a person’s own memory.
This erosion of critical thinking is exacerbated by the AI-driven algorithms that decide what we see on social media. “The impact of social media on critical thinking is huge,” says Gerlich. “You have four seconds to catch someone’s attention with a video.” The result? Content that is easily digested but does not encourage critical thinking. “It gives you information in a way that you don’t need to process it any further,” says Gerlich.
When we are handed information rather than acquiring knowledge through cognitive effort, our ability to critically analyse its meaning, impact, ethics and accuracy is easily neglected in favour of what appear to be quick and perfect answers. “To be critical of AI is difficult – you have to be disciplined. It is very challenging not to offload your critical thinking to these machines,” says Gerlich.
Wendy Johnson, who studies intelligence at the University of Edinburgh, sees this in her students every day. She emphasises that it is not something she has tested empirically, but she believes students are increasingly ready to substitute independent thinking with letting the internet tell them what to do and believe.
Without critical thinking, it is hard to consume AI-generated content wisely. It may seem reliable, especially as you grow dependent on it, but don’t be fooled. A 2023 study in Science Advances showed that, compared with humans, GPT-3 not only generates information that is easier to understand, but also produces more compelling disinformation.
Does that matter? “Think of a hypothetical billionaire,” says Gerlich. “They create their own AI and use it to influence people, because they can train it in a specific way to emphasise certain politics and certain opinions. If there is confidence in and dependence on it, it raises the question of how much it affects our thoughts and actions.”
The impact of AI on creativity is equally concerning. Research shows that AI helps individuals generate more creative ideas than they could on their own. Across a whole population, however, AI-concocted ideas are less diverse, which ultimately means fewer “Eureka!” moments.
Sternberg captures these concerns in a recent essay in the Journal of Intelligence: “Generative AI is replicative. It can recombine and re-sort ideas, but it is not clear that it will generate the paradigm-breaking ideas the world needs to solve the serious problems confronting it, such as global climate change, pollution, increasing violence and creeping dictatorship.”
How you engage with AI – actively or passively – may determine whether you maintain your ability to think creatively. Research by Marco Muller at Ulm University in Germany shows a link between social media use and higher creativity in younger people, but not in older generations. Digging into the data, he suggests this may reflect differences in how people who grew up with social media use it compared with those who came to it later in life. Younger people, who seem to benefit creatively from sharing ideas and collaborating, may be more open about what they share online, while older users tend to consume more passively.
Beyond what happens while you use AI, it is worth sparing a thought for what happens after you use it. John Kounios, a cognitive neuroscientist at Drexel University in Philadelphia, explains that, like anything else we find pleasurable, sudden moments of insight fire up our neural reward systems. These mental rewards help us remember world-changing ideas, modify our immediate behaviour and reduce risk aversion – all of which is thought to drive further learning, creativity and opportunity. But insights handed to us by AI do not seem to have such a powerful effect on the brain. “The reward system is a very important part of brain development, and we don’t know what the downstream effects of using these technologies are,” says Kounios. “Nobody has tested that yet.”
There are other long-term implications to consider. Researchers only recently discovered, for example, that learning a second language can help delay the onset of dementia by around four years. Yet in many countries, fewer students are signing up for language courses. It may be that people are abandoning second languages in favour of AI-powered instant-translation apps – but none of those can claim to protect future brain health.
As Sternberg warns, we need to stop asking what AI can do for us and start asking what it is doing to us. Until we know for sure, says Gerlich, the answer is “to use critical thinking, to use intuition in the places where computers still can’t, and to add real value”.
You can’t expect big tech companies to help us do this, he says – no developer wants to be told their program works too well at making it easy for people to find answers. “That’s why it needs to start in schools,” Gerlich says. “AI is here to stay. We have to interact with it, so we need to learn how to do that in the right way.” If we don’t, we risk making ourselves not just redundant, but cognitively diminished.
What health professionals see when overseeing IVF procedures via live streams
Conceivable Life Sciences
A highly automated form of in vitro fertilisation (IVF) has led to a successful birth, and it is hoped the approach could reduce the risk of human error during such procedures.
One method of IVF is intracytoplasmic sperm injection (ICSI), in which a sperm cell is injected directly into an egg in a lab dish. It is commonly used in cases of male infertility, because the sperm does not need to swim to reach the egg. The resulting embryo is then transferred into the uterus. IVF can also be done by simply mixing sperm and eggs in a lab dish in the hope that fertilisation occurs; this requires less medical intervention, but is generally less successful.
But ICSI has drawbacks, as it relies on high levels of accuracy and judgement from healthcare professionals. “Like everyone in most professions, they sometimes get tired and distracted,” which can affect rates of fertilisation and the chances of a birth, says Jacques Cohen at Conceivable Life Sciences, a biotech company in New York City.
To address this, Cohen and his colleagues developed a machine that can perform the 23 key steps required for ICSI. Each step is initiated by a person pressing a button while watching a live stream of the process, which can even be done from another part of the world.
In one step, the machine uses an AI model to select the healthiest-looking sperm cells for fertilisation. In another, it immobilises a sperm cell by zapping its tail with a laser, making it easier to pick up. The sperm is then injected into a previously collected egg. A similar approach has been tested before, resulting in two births, but in that case some of the steps were not performed by the machine.
To test the machine, the researchers recruited a couple who were struggling to conceive because the man’s sperm could not swim properly. The woman also had problems producing eggs, so donor eggs were used in the procedure.
The researchers randomly assigned five of the eight donor eggs to be fertilised by the automated system, which generated four embryos. The remaining three eggs were fertilised using the standard manual ICSI approach; all three formed embryos.
The team then used another AI model to select the two best embryos, based on the appearance of their chromosomes. Both had been generated by the automated system, but that doesn’t necessarily mean the approach produces healthier embryos than manual ICSI, says Cohen; with so few eggs involved, that cannot be measured.
When the team transferred one of these embryos into the woman’s uterus, it failed to develop, but the second led to a successful birth.
“It’s an exciting proof of concept,” says Joyce Harper at University College London. However, large studies that randomly assign couples to either automated or manual ICSI procedures are needed to establish whether the former approach leads to higher birth rates, she says.
Harper also notes that automated IVF is unlikely to be widely used if it comes at an additional cost, at least when first rolled out. But Cohen expects this to improve over time. “We anticipate that costs to patients and clinics will decline as we optimise, standardise and refine the system,” he says.
Ship cemetery in the desert of the Aral Sea in Uzbekistan
@owwl / Alamy
Unsustainable irrigation and drought have emptied almost all of the water from the Aral Sea since the 1960s – a change extending all the way to Earth’s upper mantle, the layer below the crust. This may be the deepest recorded example of human activity altering the solid Earth.
“To do something that affects the [upper mantle] – it’s like, whoa,” says Sylvain Barbot at the University of Southern California. “It shows how powerful we are at changing the environment.”
The Aral Sea in Central Asia was once one of the largest bodies of water in the world, covering almost 70,000 square kilometres. However, Soviet irrigation programmes that began in the 1960s, and later droughts, emptied the sea. By 2018 it had shrunk by almost 90 per cent, losing about 1,000 cubic kilometres of water.
Teng Wang at Peking University in China became interested in the Aral Sea after reading a book about the consequences of this environmental disaster at the Earth’s surface. “We noticed that these huge mass changes stimulate a response from the deep Earth,” he says.
He and his colleagues, including Barbot, used satellite measurements to track subtle changes in the elevation of the dried-up seabed between 2016 and 2020. Although most of the water disappeared decades ago, they found that uplift is still under way, with the surface rising by about 7 millimetres a year on average.
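To put those figures in perspective, here is some back-of-the-envelope arithmetic using the numbers reported above (the figures are from the article; the calculation is purely illustrative, not from the study):

```python
# Figures from the article (illustrative arithmetic only)
area_km2 = 70_000       # former surface area of the Aral Sea
water_lost_km3 = 1_000  # volume of water lost by 2018

# Average depth of water removed, spread over the former sea's area
mean_depth_m = water_lost_km3 / area_km2 * 1_000  # convert km to m
print(f"~{mean_depth_m:.0f} m of water removed on average")

# Mass lifted off the crust: 1 km^3 of water is about 10^12 kg
mass_kg = water_lost_km3 * 1e12
print(f"~{mass_kg:.1e} kg of load removed")
```

That is roughly 14 metres of water, about 10^15 kilograms, removed from the crust – enough of an unloading to provoke the slow mantle rebound the team measured.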
Next, they used a model of the crust and mantle beneath the Aral Sea to test what could produce this observed pattern of uplift. “We found that the observations were perfectly compatible with a deep response to this change,” says Barbot.
According to the model, the shallow crust responded first when the weight of the water was removed. This prompted a response as deep as 190 kilometres below the surface, as viscous rock in the upper mantle crept upwards to fill the gap. “Removing the water creates space, and the rocks want to flow into it,” Barbot says. This delayed reaction in a hot, weak region of the mantle called the asthenosphere is why the uplift is still ongoing, decades after the water disappeared, he says.
Rebound of the upper mantle is known to occur after other major changes in surface mass, such as the advance and retreat of glaciers, says Roland Bürgmann at the University of California, Berkeley. But the response to the draining of the Aral Sea may be the deepest example of human-caused change to the solid Earth.
Other human-induced changes, such as the filling of large reservoirs and the pumping of groundwater, have also caused rebound, says Manoochehr Shirzaei at Virginia Tech. But the greater extent of the Aral Sea means the impact of emptying it likely runs deeper, he says.
In addition to illustrating the enormous scale of human activity, the uplift beneath the Aral Sea offers an unusual opportunity to estimate subtle differences in the viscosity of the mantle, particularly beneath continental interiors, Bürgmann says. “For people trying to understand plate tectonics, it’s really important to know how that layer behaves under the continents.”
Today, the Sahara desert is one of the most inhospitable places on our planet. But that wasn’t always the case.
Turn the clock back 7,000 years, and the Sahara was a lush green savannah filled with wildlife and dotted with lakes – one of them the size of modern Germany. In other words, it was the perfect place for our ancient ancestors to settle.
But who were they? We may finally know.
Scientists analysed DNA from two naturally mummified individuals found at the Takarkori rock shelter in what is now southwestern Libya. Their findings reveal something extraordinary: these ancient people belonged to a previously unknown branch of the human family tree.
The two women belonged to a so-called “ghost population” – one that modern scientists had glimpsed only as faint genetic echoes, never in the flesh.
“These samples come from some of the oldest mummies in the world,” Professor Johannes Krause, senior author of the new study, told BBC Science Focus. He explained that it is surprising that genomic sequencing was possible at all, because high temperatures tend to degrade genetic material.
View from the Takarkori rock shelter in southern Libya. – Archaeological Mission in the Sahara, Sapienza University of Rome
Genomic sequencing is the process of reading a complete set of genetic instructions found in the DNA of an organism, a type of biological blueprint.
Previous studies had examined only the mummies’ mitochondrial DNA, which is much more limited: it is passed down solely through the maternal line and is far shorter than the complete genome found in the cell nucleus.
“There are about 16,000 base pairs in mitochondrial DNA,” Krause said. “That may sound like a lot, but compared with the full 3.2-billion-base-pair genome, it’s just a tiny fraction.”
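Krause’s comparison is easy to check with a quick illustrative calculation (not from the study itself):

```python
# Genome sizes quoted in the article (illustrative only)
mt_bp = 16_000             # base pairs in mitochondrial DNA
genome_bp = 3_200_000_000  # base pairs in the full nuclear genome

fraction = mt_bp / genome_bp
print(f"mtDNA is {fraction:.4%} of the genome")  # 0.0005%
```

Mitochondrial DNA amounts to about one two-hundred-thousandth of the full genome, which is why whole-genome sequencing reveals so much more.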
So what did the team discover in this newly unlocked genetic treasure trove?
First, they found that this lost lineage split from sub-Saharan African populations about 50,000 years ago.
Surprisingly, this group remained genetically isolated from other human groups for tens of thousands of years.
“It’s incredible,” Krause said. “When they were alive, these people were almost like living fossils – like something that shouldn’t be there. If you had told me these genomes were 40,000 years old, I would have believed it.”
View of the Takarkori rock shelter under excavation in southern Libya. – Archaeological Mission in the Sahara, Sapienza University of Rome
This long-term isolation yields two key insights. First, the “Green Sahara” period, which lasted from about 15,000 to 5,000 years ago, was a lush human habitat, but contrary to what many scientists had previously assumed, it did not serve as a migration corridor between northern and sub-Saharan Africa.
Second, there was some genetic mixing with northern populations, including Neanderthal ancestry. But it was limited: non-African populations carry about ten times more Neanderthal DNA than the Takarkori people.
We know that these people were pastoralists, meaning they kept livestock such as cattle. Yet their genetic isolation suggests they adopted this lifestyle by exchanging knowledge and practices with neighbouring groups, rather than through migration or interbreeding. This, too, came as a surprise to the scientists.
Unsolved puzzles
Sequencing the mummies’ whole genomes has revealed much about this lost human lineage. Still, many mysteries remain.
“The greening of the Sahara only happened about 15,000 years ago; before that, it was desert,” Krause said. “So we don’t really know where these people were in the period between 50,000 years ago, when they split from sub-Saharan African populations, and 15,000 years ago.”
Wherever they went, they must have remained isolated for tens of thousands of years. A lost Eden, perhaps? We may never know.
Johannes Krause is director of the Department of Archaeogenetics at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. He is also a professor at Friedrich Schiller University Jena, Germany. Krause has more than 250 publications, predominantly in peer-reviewed journals such as Nature, Science, Cell and Nature Reviews Genetics. In 2010 he was awarded the AAAS Newcomb Cleveland Prize, and in 2017 the Thuringian Research Prize for outstanding achievements in basic research.
The Middle Palaeolithic is considered a dynamic period in European and African history, but has generally been viewed as a static one in East Asia. The discovery of a set of sophisticated stone tools, dated to 50,000–60,000 years ago, at the Longtan site in southwestern China challenges that perception.
Artefacts from the Quina system found at the Longtan site, China. Image credit: Ruan et al., doi: 10.1073/pnas.2418029122.
The Middle Palaeolithic period occurred about 300,000 to 40,000 years ago and is considered a pivotal era in human evolution.
In Africa, this period is associated with the origin and evolution of modern humans.
In Eurasia, it is associated with the development of several archaic human groups, such as the Neanderthals and Denisovans.
However, there is a widely held belief that development in China was slow throughout most of the Palaeolithic.
“Our discovery challenges the current understanding of human history and technological development in East Asia,” said Professor Bo Li of the University of Wollongong.
“This finding challenges the long-established view among archaeologists that stone tools in China remained relatively simple and unchanged.”
Professor Li and colleagues unearthed a rich collection of stone tools at the Longtan archaeological site in Yunnan, China.
The tools reveal a complete Quina technological system, including the cores used to produce large, thick flakes.
The Quina industry is one of the most representative toolmaking strategies of the Middle Palaeolithic, around 300,000 to 40,000 years ago.
It is characterised by the steep, scaled retouch of thick flakes, which produces robust scrapers with heavily modified edges. Associated with Neanderthals, it represents a strategy developed during Marine Isotope Stage 4 as a response to open forest-grassland environments and cool, dry climates.
Until now, the Quina technical system had been found only in western and southern Europe during this period, and was not thought to have existed in East Asia.
Wear traces on the Longtan Quina scrapers suggest they were used on a variety of materials, including bone, antler, wood, meat, hides and plant matter.
“This evidence shows that the Longtan discoveries significantly broaden the known geographical distribution of hominin species, the tools they used, and their adaptability to diverse climates and environments,” Professor Li said.
“The Longtan discovery also provides a new perspective for understanding how hominin species lived and evolved in East Asia before the large-scale arrival of early modern humans around 45,000 years ago.”
“Understanding how such old artefacts came to be here forces us to rethink models of human migration patterns and technological evolution in this part of the world.”
“This opens an exciting new avenue for research and could rewrite the prehistory of East Asia as we know it.”
A paper on the findings was published this week in the Proceedings of the National Academy of Sciences.
____
Qi-Jun Ruan et al. 2025. Quina lithic technology indicates diverse Late Pleistocene human dynamics in East Asia. PNAS 122 (14): e2418029122; doi: 10.1073/pnas.2418029122
In a cave in northern Spain, researchers have found fossilized facial fragments of an ancient human ancestor.
The remains, known as “Pink,” are estimated to be between 1.1 million and 1.4 million years old. But they pose a mystery: the facial fragments do not match any known species from the area and cannot be definitively identified.
María Martinón-Torres, a paleontologist at Spain’s National Research Centre on Human Evolution and a co-author of the study, said on a press call: “We have documented a previously unknown population in Europe. These fossils represent the earliest human fossils ever discovered in Western Europe.”
The initial fragment was unearthed in 2022, with new details featured in Wednesday’s issue of the journal Nature.
Research findings tentatively suggest that Pink may be linked to human ancestors, potentially Homo erectus. The fossil was named partly after the band Pink Floyd and partly for Rosa Huguet, the study’s lead author and archaeological site coordinator.
Dr. Rosa Huguet. Image credit: Maria D. Guillén / IPHES-CERCA.
This discovery is significant because it provides a more accurate timeline for the arrival of human ancestors in Western Europe. It also helps bridge the evolutionary gap between the oldest human fossils in Europe, the roughly 1.8-million-year-old remains from Dmanisi, Georgia, and Homo antecessor, known from about 900,000 years ago.
Rodrigo Lacruz, a professor of molecular pathobiology at New York University, said the finding could aid in understanding the narrative of early human evolution and migration in Europe.
The research team confirmed that Pink’s fossils belonged to an adult but could not determine the individual’s sex. The discovery was made about 60 feet deep within the Sierra de Atapuerca archaeological site, known for its rich fossil record.
Within the same sediment layer as Pink, evidence suggests that these early ancestors were involved in hunting and butchering animals for meat.
Archaeological excavation work at the Sima del Elefante cave site. Image credit: Maria D. Guillén / IPHES-CERCA.
There is evidence suggesting that human ancestors entered Europe in multiple waves, with many of these groups later diminishing. Fossil records show discontinuity across Atapuerca sites and Western Europe, indicating a prolonged period without human presence.
Chris Stringer, a professor and research leader in human evolution at the Natural History Museum in London, suggested that a climate event may have caused a significant decline or complete depopulation of humans in Western Europe.
The Sima del Elefante cave, where Pink was discovered, has previously yielded intriguing fossils. In 2007, researchers found a small jawbone believed to be around 1.2 million years old, possibly closely related to Pink.
Maria Martinón-Torres expressed surprise and excitement at the discovery of new fossils even deeper within the cave.
The abundant collection of fossils in the region is likely due to its geographical features, providing a natural corridor and resources that attracted early humans.
Archaeological excavation work in the area is ongoing, with researchers hopeful for more surprises.
“We will continue to excavate,” Martinón-Torres stated. “There may be more surprises to come.”
So Microsoft has decided to end Skype, the internet telephony company it bought for $8.5 billion (£6.6 billion) in 2011. Millions of unfortunate users will now flock to Microsoft Teams, a virtual workplace with an aesthetic so brain-dead that it makes even Zoom look cool. The move had been telegraphed for quite some time, but even so, Skype was an astounding venture, and its demise comes as a shock, closing an interesting chapter in technological history.
The internet has been around longer than most people realize. It dates back to the 1960s and the creation of Arpanet, a military computer network born after the US had its “Sputnik moment”: the alarming perception that the Soviet Union was pulling ahead in technology. The internet we use today, Arpanet’s successor, was designed in the early 1970s and first switched on in January 1983.
From the start, the network’s designers decided to avoid the limitations of previous communication systems, particularly the analogue telephone network, which was optimized for voice. That network was owned by companies that were hostile to digital signals and resisted any innovation they had not generated themselves. The new network, by contrast, had no owner and was not optimized for any particular medium, making it more permissive than its predecessors. Anyone could access it and run a service on it, as long as their computer spoke the network’s protocols.
The result is the explosion of creativity we live with today. What the internet’s designers built was what scholars later called “an architecture for permissionless innovation.” Or, to put it another way, a global platform for springing surprises.
Created by Tim Berners-Lee in the late 1980s, the world wide web was one of those surprises. But so was VoIP (voice over internet protocol): speech could be digitized into ones and zeros, placed in data packets and sent over the internet, then converted back into audio at its destination. The result: free telephony anywhere in the world!
Skype was the first company to bring this magic to ordinary consumers. It was founded in 2003 by Janus Friis (a Dane) and Niklas Zennström (a Swede) and headquartered in Luxembourg, but its software was written by three Estonians who had also written peer-to-peer file-sharing software. In 2005, eBay bought it for $2.6 billion (£2 billion). By 2006 it had 100 million registered users, and by 2009 it was adding about 380,000 new users every day and generating approximately $740 million (£575 million) in annual revenue. Skype was, in short, the first European company to reach US-level scale.
At that point, the inevitable happened: in 2011 Skype was purchased by Microsoft and absorbed into the tech colossus’s maw. Many observers, including this columnist, wondered what Microsoft thought it was doing with its new toy. Last week’s news suggests the company never really knew. Either way, once the pandemic arrived in 2020 and people started working from home, it was clear that Microsoft needed something to fend off the threat posed by Zoom. Skype could probably have become the core of that response, but instead the decision was made to pour all the company’s energy into making Teams its behemoth answer to remote working. From then on, Skype was surplus to requirements and the die was cast.
But before it disappears, it is worth remembering the scene it arrived on 20 years ago. Today, most people don’t know how closed and depressing telephony was in the analogue era. It was an industry run either by complacent, unresponsive monopolies (AT&T in the US) or by government institutions (the GPO in the UK). It could take months to get a phone installed in your home. Calls were expensive and international calls were effectively prohibitive.
I grew up in a country (Ireland) with a huge diaspora at a time when a call from the US usually meant only one thing: a death in the family. If emigrants kept in touch with those back home, it was through letters and perhaps the odd parcel, not phone calls. In rural Ireland, families would sometimes hold a wake the night before their sons and daughters left for America or Australia.
And now? The VoIP technology that Skype brought into people’s lives has been commoditized. Apps like WhatsApp and Signal offer unlimited, free audio (and video) connections with friends, family and colleagues around the world. Calls that would once have been bankrupting are made every day. Microsoft may never have figured out what Skype was for, but the rest of us certainly did.
What I've read
The three-money economy. A sharp essay by Dave Karpf identifying the three types of money behind the power of Silicon Valley.
Taking sovereignty home. An insightful editorial in Noema by Nathan Gardels on why the current, 47th president is acting like the 25th.
A battle foretold. David Allen Green’s far-sighted analysis in the Financial Times of a looming legal collision.
An international team of researchers analyzed the moans, whistles, barks, screams, and creaks in recordings of humpback whale songs collected over eight years in New Caledonia.
Arnon et al. reveal that humpback whale (Megaptera novaeangliae) songs share statistical structure characteristic of human language. Image credit: Christopher Michel / CC BY 2.0.
“We found something really fascinating,” said Dr. Emma Carroll, a marine biologist at the University of Auckland.
In the study, Dr. Carroll and colleagues applied quantitative methods normally used to analyze infant speech and found that the culturally evolved, learned songs of the humpback whale (Megaptera novaeangliae) share statistical properties with human language.
In human language, structurally coherent units exhibit a frequency distribution that follows a power law, the Zipfian distribution, a property thought to promote learning and the accurate preservation of language across generations.
The humpback whale song is one of the most complex vocal displays in the animal kingdom and is passed on through cultural transmission, offering a compelling parallel with human language.
These songs are highly structured, consisting of nested hierarchical components: sound elements combine to form phrases, phrases repeat to form themes, and themes combine into songs.
If the statistical properties of human language arise from cultural transmission, similar patterns should be detectable in whale song.
The study authors analyzed eight years of recorded humpback whale song data using segmentation techniques inspired by infant speech learning.
They discovered a hidden structure in the whale song.
Specifically, these songs contain statistically coherent subsequences that fit the Zipfian distribution.
Furthermore, the length of these subsequences follows Zipf’s law of abbreviation, an efficiency-driven principle found in many species, including humans: the more frequent a unit, the shorter it tends to be.
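For intuition, the Zipfian rank-frequency pattern can be checked in a few lines of code. This is a toy illustration on made-up symbols, not the study’s data; the helper `zipf_profile` and the example counts are invented for the sketch:

```python
from collections import Counter

def zipf_profile(units):
    """Rank units by frequency and return count ratios relative to the
    top-ranked unit. Under a Zipfian distribution, the unit at rank r
    occurs about 1/r as often as the most frequent one, so the ratios
    come out near 1, 2, 3, 4, ..."""
    counts = [c for _, c in Counter(units).most_common()]
    return [counts[0] / c for c in counts]

# Invented symbol counts chosen to be exactly Zipfian (12/r for rank r).
units = ["a"] * 12 + ["b"] * 6 + ["c"] * 4 + ["d"] * 3
print(zipf_profile(units))  # → [1.0, 2.0, 3.0, 4.0]
```

On real song units the ratios only approximate 1, 2, 3, …, so in practice the fit is assessed statistically rather than by exact equality.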
This striking similarity between two evolutionarily distant species underscores the deep role that learning and cultural transmission play in shaping communication, and challenges the notion that such structural properties are exclusive to human language.
Dr. Jenny Allen of Griffith University, a leading expert on whale songs, said the parallel with culturally transmitted human language “is why it offers such an exciting comparison.”
“These results provide unique insight into the importance of cultural transmission in learning processes across species, particularly for learning complex communication systems.”
“I think the more interesting question is what animal communication can tell us in its own right, rather than trying to squeeze it into a ‘human language’-shaped hole.”
“Using insights and methods from how babies learn language allowed us to discover previously undetected structure in whale songs,” said Professor Inbal Arnon of the Hebrew University of Jerusalem.
“This work illustrates how learning and cultural transmission can shape the structure of communication systems: we may find similar statistical structure wherever complex sequential behavior is culturally transmitted.”
“It raises the intriguing possibility that humpback whales, like human babies, may track the transition probabilities between sound elements and use dips in those probabilities to segment, and thus learn, their songs.”
The study was published today in the journal Science.
____
Inbal Arnon et al. 2025. Whale song shows language-like statistical structure. Science 387 (6734): 649-653; doi: 10.1126/science.adq7055
Humpback whale songs have statistical patterns in their structure that are strikingly similar to those found in human language. This does not mean the songs convey complex meanings the way our sentences do, but it suggests that whales may learn their songs in much the same way that human infants begin to crack language.
Only male humpback whales (Megaptera novaeangliae) sing, a behavior thought to be important for attracting mates. The songs are constantly evolving, with new elements appearing and spreading through the population until old songs are replaced by entirely new ones.
“I liken it to a standardized test: everyone has to do the same task, but they can change or embellish it to show they are better at the task than others,” says Jenny Allen at Griffith University on the Gold Coast, Australia.
Instead of trying to find meaning in the songs, Allen and her colleagues looked for innate structural patterns similar to those found in human language. They analyzed eight years of whale songs recorded around New Caledonia in the Pacific Ocean.
The researchers began by creating alphanumeric codes to represent all the songs in every recording, covering a total of around 150 unique sounds: each distinct type of sound was assigned a character, so a sequence of, say, two ascending cries followed by a groan might be coded “AAB.”
Once all the songs were encoded, the team, which included linguists, had to work out how best to analyze so much data. The breakthrough came when the researchers decided to apply an analytical technique that infants use to discover words, based on transition probabilities.
“Speech is continuous, with no pauses between words, so infants must discover word boundaries,” says Inbal Arnon at the Hebrew University of Jerusalem. “To do this, they use low-level statistics: sounds are more likely to occur together if they are part of the same word. Infants use dips in the probability of one sound following another to discover word boundaries.”
For example, in the phrase “pretty flower,” we intuitively recognize that the syllables “pre” and “tty” are more likely to go together than “tty” and “flow.” “If there is a similar statistical structure in whale song, these cues should help segment it too,” Arnon says.
Using the alphanumeric version of the whale songs, the team calculated the transition probabilities between successive sound elements and made a cut wherever the probability of an element following the previous one dipped.
“These cuts divide the song into segmented subsequences,” Arnon says. “We then looked at their distribution and, amazingly, discovered that they follow the same distribution seen in all human languages.”
In this pattern, called a Zipfian distribution, the prevalence of less common words drops off in a predictable way. Another striking finding is that the most common whale sounds tend to be shorter, just as the most common words in human languages are.
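The cut-at-probability-dips procedure the team describes can be sketched in a few lines. This is a minimal illustration on an invented toy “song,” not the team’s actual pipeline; the threshold value and the unit inventory (AB, CDE, FG) are assumptions of the sketch:

```python
from collections import Counter

def segment_by_dips(symbols, threshold=0.8):
    """Cut a symbol stream wherever P(next | current), estimated from
    bigram counts, dips below `threshold` (a likely unit boundary)."""
    bigrams = Counter(zip(symbols, symbols[1:]))
    unigrams = Counter(symbols[:-1])
    segments, current = [], [symbols[0]]
    for a, b in zip(symbols, symbols[1:]):
        if bigrams[(a, b)] / unigrams[a] < threshold:
            segments.append("".join(current))  # dip: close the segment
            current = []
        current.append(b)
    segments.append("".join(current))
    return segments

# Toy "song" built from three invented units (AB, CDE, FG): transitions
# inside a unit are always the same, so their probability is high, while
# transitions between units vary, producing dips at the boundaries.
song = list("ABCDEABFGCDEABCDEFGAB")
segments = segment_by_dips(song)
counts = Counter(segments)
print(counts.most_common())  # → [('AB', 4), ('CDE', 3), ('FG', 2)]
```

The recovered segments show a crude rank-frequency falloff (4, 3, 2 occurrences), a miniature of the pattern the study reports at far larger scale.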
Nick Enfield at the University of Sydney, who was not involved in the research, says it is a novel way to analyze whale songs. “What it means is that when you analyze War and Peace, the most frequent word appears twice as often as the next most frequent, and the researchers have identified similar patterns in whale songs,” he says.
Team member Simon Kirby at the University of Edinburgh in the UK says he didn’t expect this to work. “I will never forget the moment the graph appeared, looking so familiar from human language,” he says. “That’s when I realized we had uncovered a deep commonality between these two species, separated by tens of millions of years of evolution.”
However, the researchers emphasize that this statistical pattern does not mean whale songs are languages that convey meaning as we understand it. They suggest the likely reason for the commonality is that both whale songs and human languages are culturally learned.
“The statistical distribution of words and sounds in languages is a truly fascinating feature, but there are millions of other things about languages that are completely different from whale songs,” Enfield says.
In another study released this week, Mason Youngblood at Stony Brook University in New York found that other marine mammals may also show structural similarities to human language in their communication.
Menzerath’s law, which predicts that sequences with more parts should be made of shorter parts (just as sentences with more words tend to consist of shorter words), was present in 11 of the 16 cetacean species studied. Zipf’s law of abbreviation was detected in two of the five species for which suitable data are available.
“To sum up, our research suggests that humpback whale songs have evolved to be more efficient and easier to learn, and that these features can be found at the level of notes within phrases and phrases within songs,” Youngblood says.
“Importantly, the evolution of these songs is both biological and cultural. Some features, such as Menzerath’s law, can emerge through the biological evolution of the vocal apparatus, but other features, such as Zipf’s rank-frequency law [the Zipfian distribution], may require cultural transmission of songs between individuals,” he says.
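The Menzerath’s-law prediction reduces to a negative correlation between a song’s number of phrases and its mean phrase length. Here is a toy check on invented data; the helper `menzerath_check` and the example songs are illustrative only, not Youngblood’s method or recordings:

```python
def menzerath_check(songs):
    """Pearson correlation between a song's size (number of phrases)
    and its mean phrase length. Menzerath's law predicts r < 0:
    songs with more phrases are built from shorter phrases."""
    sizes = [len(song) for song in songs]
    means = [sum(len(p) for p in song) / len(song) for song in songs]
    n = len(songs)
    mx, my = sum(sizes) / n, sum(means) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(sizes, means))
    sx = sum((x - mx) ** 2 for x in sizes) ** 0.5
    sy = sum((y - my) ** 2 for y in means) ** 0.5
    return cov / (sx * sy)

# Invented data: phrases are just lists of notes (placeholder 1s).
songs = [
    [[1] * 6, [1] * 5],                    # 2 phrases, mean length 5.5
    [[1] * 4, [1] * 5, [1] * 4],           # 3 phrases, mean length ~4.3
    [[1] * 3, [1] * 2, [1] * 3, [1] * 2],  # 4 phrases, mean length 2.5
]
r = menzerath_check(songs)
print(round(r, 2))  # → -0.99: strongly negative, as the law predicts
```

Real analyses test such correlations against permuted baselines; this sketch only shows the quantity being measured.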
When I interviewed a nurse practitioner in California, what she valued most about nursing was the “human element” of being with other people. “I think we all just want our suffering acknowledged, even if we can't cure it or do anything about it,” she told me.
She still remembers when a homeless man came into her clinic. The man had a hunched back, and his feet were gnarled and calloused from years on the street. She was “just sitting there tending to the wound on his leg.” The moment stood out to her, in part because the pursuit of efficiency imposes time constraints that leave clinics and hospitals with ever fewer opportunities to take such time.
Washing her feet represented what nursing meant to her: humility, service, and witness. “To give him a moment where he says, 'I see you, I acknowledge you, this is me taking care of you,'” she said. “It was powerful for both of us.”
What value is there in being seen by people other than friends and family? What happens when people connect with each other in everyday encounters in civic life and commerce, and why does it matter? Amid efficiency campaigns, constant data collection, and the rapid spread of AI into connective jobs such as therapy and education, these questions have never been more urgent.
The benefits of human interaction have long gone unmeasured and often ignored, and the skill of connecting with others has long been dismissed as innately feminine. As a social scientist, I spent five years studying these connections to learn how and why they matter and how people forge them in different settings.
In fact, we know that a strong doctor-patient relationship has a greater impact on medical outcomes than taking daily aspirin to prevent a heart attack, and that a therapist’s connection with a client matters more than the specific treatment they adhere to. This work of reflecting and witnessing is important enough to deserve its own name. After five years of interviewing and observing numerous practitioners and clients in their workplaces, I have come to call it “connective labor.”
Connective labor may be what makes the modern service economy run, but it is more than a kind of lubricant for the engine that produces the outcomes we value, like understanding algebra, managing diabetes, or learning to handle anxiety. Rather, seeing and being seen has powerful effects of its own, for both individuals and communities.
First, when people see each other, it helps create dignity, simply by communicating that someone is worthy of being seen by another. I spoke with Mariah, a woman who ran a program in California where former prisoners met with mentors in small groups to learn entrepreneurial skills. She said it took the men a while to get used to the attention. “Like, [they ask] ‘So you just want to know what I think?’” The program helped change the men through human attention.
The power of human attention to inspire others may be a truism, but what is perhaps less well known is that these effects go both ways. “It’s a trusting relationship,” Jenna, a primary care physician, told me. “That trust infuses the relationship with almost a power, a sacredness. There’s just something about that. I feel really honored and lucky to be able to do something like that. It gives me just as much.”
Finally, seeing people helps them understand themselves better. “I think every kid needs to be seen. Really seen,” said Bart, a school principal. “I don’t think kids take in information or content on a deep level until they feel seen by the person they’re learning from.”
These outcomes – dignity, purpose and understanding – mean a lot to the individuals involved. But being seen can also have broader effects. In a recent study of formerly incarcerated people in Chicago, researchers found that receiving interpersonal recognition from local community leaders helped them feel they belonged. One former prisoner said he now knew he had “something worth saying.” Who is seen and who is not is politically consequential: the feeling of being ignored can stir up populist anger, while recognition fosters a sense of belonging that brings communities together.
Of course, humans also make mistakes in recognizing each other; judgment and prejudice can poison these interactions and elicit shame at vulnerable moments. But as one therapist told me, if people only seek to avoid shame, for example by choosing an AI companion or counselor, they may never find relief from it. Shame is common in relationships, but it is something to work through together, not run from. Part of the power of human interaction comes from the risks we take when we reveal ourselves to each other.
Connective labor has profound implications for individuals and for our society, but it is under siege: from data analytics that drown practitioners in collection and measurement requirements, and from the novelty of AI increasingly deployed behind automated therapy, education, and the like. For some, AI may be better than nothing; others believe it is better than humans. Either way, choosing technology to solve problems largely caused by understaffing and a relentless drive for efficiency reflects a failure to understand what humans actually do for each other.
Instead, we need to preserve and protect these personal interactions. We need to improve the working conditions of connective labor workers so that they can better see others. We need “connection criteria” to help us decide which AIs to encourage, such as those that design new antibiotics or decode the language of sperm whales, and which to put the brakes on, namely those that intervene in human relationships. Each of us must decide how much we value human connection in our own lives and in the lives of our neighbors.
Humanoid robot waltzes with the help of AI trained on human motion capture recordings
Xuxin Cheng and Mazeyu Ji
AI that helps humanoid robots mirror human movements could allow robots to walk, dance, and fight in more human-like ways.
The most agile and fluid robot movements, such as Boston Dynamics’ impressive demonstrations of robotic acrobatics, are typically narrow, pre-programmed sequences. Teaching robots a wide repertoire of convincing human movements remains difficult.
To overcome this hurdle, Xuanbin Peng at the University of California, San Diego, and colleagues developed an artificial intelligence system called ExBody2, which allows a robot to imitate a variety of human movements more realistically and execute them smoothly.
Peng and his team began by building a database of movements a humanoid robot could plausibly perform, from simple actions such as standing and walking to more complex ones such as tricky dance moves. The database contained motion-capture recordings of hundreds of human volunteers collected in previous research projects.
“Humanoid robots share a similar physical structure with us, so it makes sense to leverage the vast amount of human movement data that is already available,” Peng says. “By learning to imitate this kind of behavior, robots can quickly acquire a variety of human-like skills. In principle, anything humans can do, robots have the potential to learn.”
To teach a simulated humanoid robot how to move, Peng and his team used reinforcement learning, in which an AI is given examples of successful movements and then left to figure out, through trial and error, how to perform them itself. They started by training ExBody2 with full access to all the data about the virtual robot, including the coordinates of each joint, so that it could mimic human movements as closely as possible. A second version then learned from these movements using only data accessible in the real world, such as inertia and velocity measurements from sensors on the actual robot’s body.
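This two-stage recipe, a privileged “teacher” that sees the full simulator state and a “student” restricted to real-world sensor readings, is a common sim-to-real pattern. Below is a deliberately tiny sketch of just the distillation step, with an invented linear “policy” and a made-up sensor model; it is not ExBody2’s actual code:

```python
import random

# Privileged "teacher" policy: acts on the full simulator state,
# including a hidden joint offset the real robot cannot measure.
def teacher_action(velocity, hidden_offset):
    return 2.0 * velocity + 1.0 * hidden_offset

random.seed(0)

# Roll out simulated states. The "student" only sees sensor-like
# observations: velocity plus a noisy IMU proxy for the hidden offset.
data = []
for _ in range(2000):
    velocity = random.uniform(-1, 1)
    hidden_offset = random.uniform(-1, 1)
    imu_reading = hidden_offset + random.gauss(0, 0.05)
    target = teacher_action(velocity, hidden_offset)
    data.append(((velocity, imu_reading), target))

# Distill: fit a linear student policy on observables by stochastic
# gradient descent on the squared error against the teacher's actions.
w = [0.0, 0.0]
lr = 0.1
for _ in range(200):
    for obs, target in data:
        err = w[0] * obs[0] + w[1] * obs[1] - target
        w[0] -= lr * err * obs[0]
        w[1] -= lr * err * obs[1]

print(w)  # approximately [2.0, 1.0]: the student recovers the
          # teacher's policy despite never seeing the hidden state
```

The real pipeline uses deep networks and a physics simulator, but the principle is the same: the student is regressed onto teacher actions using only observations that will exist on the physical robot.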
After ExBody2 was trained on the database, it was able to control two different commercially available humanoid robots, smoothly combining simple movements such as walking in a straight line and crouching, and performing trickier feats such as following a 40-second dance routine, throwing punches, and waltzing with a human partner.
“Humanoid robots work best when all limbs and joints work together,” Peng says. “Many tasks and movements require coordination between the arms, legs, and torso, and whole-body coordination greatly increases the range of a robot’s capabilities.”
Archaeologists say they have extracted various starch granules from stone tools found at an early Middle Pleistocene site in Israel. These include acorns, grass grains, water chestnuts, yellow waterlily rhizomes, and legume seeds.
Examples of plant parts recovered from the percussion tools at Gesher Benot Yaakov, including whole plants, edible parts, and characteristic starch granules. From left to right: oak, yellow water lily, oat. Scale bar – 20 μm. Image credit: Hadar Ahituv and Yoel Melamed.
The 780,000-year-old basalt tools were discovered at the early Middle Pleistocene site of Gesher Benot Yaakov, located on the shores of ancient Lake Hula.
They were examined by a team of researchers led by Dr. Hadar Ahituv of Bar-Ilan University.
“Our study contradicts the prevailing theory that ancient humans’ diets were primarily based on animal protein, as suggested by the popular ‘Paleo’ diet,” the scientists said.
“Many of these diets are based on interpretations of animal bones found at archaeological sites, and very little plant-based food has been preserved.”
“However, the discovery of starch granules on ancient tools provides new insight into the central role of plants, especially the carbohydrate-rich starchy tubers, nuts, and roots essential to the energy needs of the human brain.”
“Our research also highlights the sophisticated methods that early humans used to process plant materials.”
The authors recorded more than 650 starch granules on basalt hammerstones and anvils, tools used at Gesher Benot Yaakov to crack and crush plant foods.
These tools, the earliest known evidence of humans processing plant foods, were used to prepare a variety of plants, including acorns, grains, legumes, and aquatic plants such as the yellow water lily and the now-extinct water chestnut.
They also identified microscopic debris such as pollen grains, rodent hair, and feathers, supporting the reliability of the starch findings.
“This discovery highlights the importance of plant foods in the evolution of our ancestors,” Dr. Ahitub said.
“We now know that early humans collected a wide variety of plants throughout the year and processed them using tools made of basalt.”
“This discovery opens a new chapter in the study of the deep relationship between early human diets and plant-based foods.”
The findings also provide insight into hominin social and cognitive behavior.
“The use of tools to process plants suggests a high degree of cooperation and social structure, as hominins operated as part of a larger social group,” the researchers said.
“Their ability to exploit diverse resources from both aquatic and terrestrial environments demonstrates a deep knowledge of their surrounding environment, similar to that of modern humans today.”
“This discovery is an important milestone in the field of prehistoric research, providing valuable evidence about the diet of our ancient ancestors and providing new perspectives on human evolution and the development of complex societies.”
A paper on the research was published this week in the Proceedings of the National Academy of Sciences.
_____
Hadar Ahituv et al. 2025. Starch-rich plant foods 780,000 years ago: Evidence from Acheulean percussive stone tools. PNAS 122 (3): e2418661121; doi: 10.1073/pnas.2418661121
Most of us are aware that our planet constantly spins around its own axis as it orbits the sun. The Earth actually rotates around an axis tilted at 23.44°, and this tilt changes over time due to natural oscillations and cycles.
Human activities, such as global warming and groundwater extraction for irrigation, are causing significant changes in Earth’s tilt. Scientists have found that as polar ice melts and water redistributes, it can affect the planet’s rotation.
Researchers estimate that pumping large amounts of groundwater for irrigation purposes has led to significant changes in Earth’s tilt over recent decades. This redistribution of water mass is impacting the planet’s rotation, with measurable effects on sea levels and pole shifts.
Experts like Professor Seo Ki-won note that even small changes in water mass can affect Earth’s rotation, leading to shifts in its axis. These changes have been observed over the past few decades, indicating the impact of human activities on a global scale.
While these changes may not directly impact the climate, they do have implications for systems that rely on precise measurements and timing, such as GPS and financial markets. As Earth’s rotation slows due to mass redistribution, adjustments will need to be made to prevent system failures.
It is becoming increasingly clear that human activities are influencing not just the climate, but also the fundamental movements of Earth within space. As we continue to alter the planet’s mass distribution, we must be prepared to adapt our technologies and systems to accommodate these changes.
Scientists sequenced and analyzed the genomes of seven people who lived between 42,000 and 49,000 years ago in Ranis, Germany, and Zlatý kůň, Czech Republic. The results showed that the Ranis and Zlatý kůň individuals were linked by distant kinship ties and were part of the same small, isolated group, representing the deepest known split among lineages outside Africa. The Ranis genomes contain a Neanderthal component derived from a single admixture event common to all non-Africans, which the authors date to between 45,000 and 49,000 years ago. This means that the ancestors of all non-Africans sequenced so far still formed a common population at that time, and that these human remains represent a distinct non-African population.
Illustration of the Zlatý kůň woman, who belonged to the same group as the Ranis individuals and was distantly related to two of them. Image credit: Tom Björklund / Max Planck Institute for Evolutionary Anthropology.
“After modern humans left Africa, they met and interbred with Neanderthals, and as a result approximately 2 to 3 percent of Neanderthal DNA is present in the genomes of all people outside Africa,” said Dr. Arev Sümer of the Max Planck Institute for Evolutionary Anthropology and colleagues.
“However, little is known about the genetics of Europe's first settlers or the timing of interbreeding between Neanderthals and non-Africans.”
“An important site in Europe is Zlatý kůň in the Czech Republic, where the complete skull of a single individual who lived about 45,000 years ago was discovered and previously genetically analyzed.”
“However, due to the lack of archaeological context, we were unable to link this person to an archaeologically defined group.”
“Ilsenhöhle, located in Ranis, Germany, about 230 km from Zlatý kůň, is known for a particular archaeological culture, the Lincombian-Ranisian-Jerzmanowician (LRJ), dating back about 45,000 years.”
“It has long been debated whether the LRJ culture was created by Neanderthals or early modern humans.”
“Although mostly small bone fragments are preserved at Ranis, previous research was able to analyze mitochondrial DNA from 13 of these remains, showing that they belonged to modern humans rather than Neanderthals.”
“However, because mitochondrial sequences represent only a small portion of genetic information, their relationship to other modern humans has remained a mystery.”
In the new study, the authors analyzed the nuclear genomes of 13 specimens taken from Ranis and found that they represented at least six individuals.
Based on the size of the bones, two of them were identified as infants; genetically, three were male and three were female.
Interestingly, these individuals included a mother and her daughter, as well as more distant biological relatives.
The researchers also sequenced more DNA from the female skull found at Zlatý kůň, producing a high-quality genome for this individual.
“To our surprise, we discovered a fifth- or sixth-degree genetic relationship between Zlatý kůň and two of the Ranis individuals,” Dr. Sümer said.
“This means that Zlatý kůň was genetically part of the extended Ranis family and likely also made LRJ-type tools.”
Of the six Ranis individuals, one bone is particularly well preserved; in fact, it is the best-preserved Pleistocene modern human bone yet found for DNA retrieval.
This allowed the research team to obtain a high-quality genome from this male individual, called Ranis13.
Together, the Ranis13 and Zlatý kůň genomes represent the oldest high-quality modern human genomes sequenced to date.
Analyzing genetic variants associated with phenotypic traits, the scientists found that the Ranis and Zlatý kůň individuals carried variants associated with dark skin, dark hair, and brown eyes. This reflects the recent African origin of this early European population.
By analyzing the parts of the Ranis and Zlatý kůň genomes inherited from the same ancestor, the authors estimate that their population consisted of at most a few hundred individuals, possibly spread over a wider area.
They found no evidence that this small early modern human population contributed to later European or other world populations.
The Zlatý kůň/Ranis people coexisted with Neanderthals in Europe, raising the possibility that they interbred with Neanderthals again after migrating to Europe.
Previous studies of modern humans dating back more than 40,000 years have found evidence of recent admixture between modern humans and Neanderthals.
However, no such evidence of recent Neanderthal admixture was detected in the genomes of the Zlatý kůň/Ranis individuals.
Illustration of the Zlatý kůň/Ranis group. Image credit: Tom Björklund / Max Planck Institute for Evolutionary Anthropology.
“The fact that no such recent Neanderthal ancestry is found in Ranis and Zlatý kůň, while modern human populations that arrived in Europe later did inherit it, could mean that the older Zlatý kůň/Ranis lineage entered Europe by a different route, or that it did not overlap with the areas inhabited by Neanderthals,” said Dr. Kay Prüfer, also of the Max Planck Institute for Evolutionary Anthropology.
The Zlatý kůň/Ranis population represents the earliest known divergence from the modern human groups that migrated out of Africa and later dispersed across Eurasia.
Despite this early separation, the Neanderthal ancestry of Zlatý kůň and Ranis derives from the same ancient admixture event that can be detected in all peoples outside Africa today.
By analyzing the lengths of Neanderthal-derived segments in the high-coverage Ranis13 genome, combined with direct radiocarbon dating of this individual, the researchers estimated that this shared Neanderthal admixture occurred between 45,000 and 49,000 years ago.
Since all modern non-African populations share this Neanderthal ancestry with Zlatý kůň and Ranis, a non-African population of common ancestry must still have existed approximately 45,000 to 49,000 years ago.
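The logic behind this dating method can be illustrated with some back-of-the-envelope arithmetic: recombination breaks introgressed Neanderthal segments into shorter pieces each generation, so the mean segment length indicates how many generations have passed since the admixture pulse, and adding that span to the individual's radiocarbon age gives an admixture date. The sketch below is not the study's actual pipeline; the segment length, generation time, and radiocarbon age are illustrative assumptions chosen only to show the shape of the calculation.

```python
# Illustrative sketch of segment-length admixture dating.
# All numeric values are assumptions for demonstration, not figures
# reported in the Nature paper.

GEN_TIME_YEARS = 29.0        # assumed human generation time, in years
RADIOCARBON_AGE = 45_000     # assumed radiocarbon age of the individual, in years
MEAN_SEGMENT_CM = 1.25       # assumed mean Neanderthal segment length, in centimorgans

def generations_since_pulse(mean_cm: float) -> float:
    """For a single admixture pulse, introgressed segments are expected
    to average roughly 100/g centimorgans after g generations, so the
    mean observed length yields g = 100 / mean_cm."""
    return 100.0 / mean_cm

g = generations_since_pulse(MEAN_SEGMENT_CM)
admixture_age = RADIOCARBON_AGE + g * GEN_TIME_YEARS
print(f"~{g:.0f} generations before the individual lived "
      f"-> admixture ~{admixture_age:,.0f} years ago")
```

With these toy numbers the pulse lands roughly 80 generations before the individual lived, i.e. within the 45,000-49,000-year window the study reports; the real analysis models the full distribution of segment lengths rather than a single mean.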
Dr. Johannes Krause, a researcher at the Max Planck Institute for Evolutionary Anthropology, said: “These results provide a deeper understanding of the earliest pioneers who settled in Europe.”
“They also indicate that modern human remains older than 50,000 years found outside Africa may not have been part of the common non-African population, which interbred with Neanderthals and is now found in many parts of the world.”
A.P. Sümer et al. Earliest modern human genomes constrain timing of Neanderthal admixture. Nature, published online December 12, 2024; doi: 10.1038/s41586-024-08420-x
This article is adapted from the original release by the Max Planck Institute for Evolutionary Anthropology.