Severe Heat Impacts Children’s Early Learning

Extreme heat and poverty can hinder child development

Riccardo Renato Niels Mayer/Alamy

Research indicates that young children exposed to extreme heat tend to know fewer words, letters, and numbers, suggesting that global warming could impair early human development.

Specifically, when average monthly maximum temperatures reached 32°C (90°F) or higher, children aged 3 and 4 were 2.8 to 12.2 percent less likely to meet developmental benchmarks compared to those in environments with maximum temperatures below 26°C (79°F).

“This marks the first instance in the literature demonstrating that excessive heat influences not just physical health but also developmental capabilities,” stated Jorge Quartas from New York University.

Quartas and his team analyzed data from 19,600 children surveyed by UNICEF across Georgia, Gambia, Madagascar, Malawi, Sierra Leone, and the State of Palestine, referencing the early childhood development index. They assessed children’s abilities in naming letters, reading simple words, and recognizing numbers from 1 to 10.

The researchers correlated this data with climate records while adjusting for variables such as poverty, maternal education, and baseline temperatures. Notably, even temperatures of 30°C (86°F) began to adversely affect literacy and numeracy skills, with heat also impeding children’s social, emotional, and physical development to a lesser degree.

“Minor effects in early childhood can become more pronounced over time,” Quartas explains. For instance, children who struggle with number recognition might find it challenging to learn math concepts, potentially falling behind academically.

Heat-related stress remains the primary cause of weather-related fatalities, claiming nearly 500,000 lives annually. A recent rapid assessment estimated that the heatwave in June and July was responsible for 2,300 deaths across 12 European cities, primarily occurring among those aged 65 and older.

The findings also revealed that the impacts of heat extend even to the prenatal period: temperatures of 33°C (91°F) during early pregnancy correlated with a 5.6 percent reduction in the likelihood of children meeting developmental benchmarks.

Children from poorer, urban households with limited access to water resources were found to be more heavily affected by the heat. “Climate change and excessive heat serve as amplifiers of existing threats,” Quartas articulated. “These children are already at a disadvantage.”

Nonetheless, the study may not comprehensively address barriers such as violence and political instability, which can also impede childhood development, as noted by Giulia Pescarini from the London School of Hygiene and Tropical Medicine.

Further investigations are needed to clarify how heat impacts development, she suggests, noting that low-income households might lack air conditioning, and parents may experience increased stress during heat events.

Pescarini emphasizes that a better understanding of who is affected and how can aid in developing adaptive strategies to support these vulnerable groups.

Source: www.newscientist.com

Transformative Choice: Jared Kaplan on Permitting Autonomous AI Learning

By the year 2030, humanity will face a critical decision regarding the “ultimate risk” of allowing artificial intelligence systems to self-train and enhance their capabilities, according to one of the foremost AI experts.

Jared Kaplan, chief scientist and co-founder of the $180bn (£135bn) US startup Anthropic, emphasized that crucial choices are being made concerning the level of autonomy granted to these evolving systems.

This could potentially spark a beneficial “intelligence explosion” or signify humanity’s loss of control.

In a conversation addressing the intense competition to achieve artificial general intelligence (AGI), also referred to as superintelligence, Kaplan urged global governments and society to confront what he termed the “biggest decision.”

Anthropic belongs to a network of leading AI firms striving for supremacy in the field, alongside OpenAI, Google DeepMind, xAI, Meta, and prominent Chinese competitors led by DeepSeek. Its chatbot, Claude, is one of the most popular AI assistants and has gained significant traction among business clients.

Kaplan predicted that a decision to “relinquish” control to AI could materialize between 2027 and 2030. Photo: Bloomberg/Getty Images

Kaplan stated that aligning swiftly advancing technology with human interests has proven successful to date, yet permitting technology to recursively enhance itself poses “the ultimate risk, as it would be akin to letting go of AI.” He mentioned that a decision regarding this could emerge between 2027 and 2030.


Kaplan transitioned from a theoretical physicist to an AI billionaire in just seven years. During an extensive interview, he also conveyed:

  • AI systems are expected to handle “most white-collar jobs” in the coming two to three years.

  • His 6-year-old son is unlikely to outperform AI in academic tasks, such as writing essays or completing math exams.

  • It is natural to fear a scenario where AI can self-improve, leading humans to lose control.

  • The competitive landscape around AGI feels tremendously overwhelming.

  • In a favorable outcome, AI could enhance biomedical research, health and cybersecurity, productivity, grant additional leisure time, and promote human well-being.

Kaplan met with the Guardian at Anthropic’s office in San Francisco, where the interior design, filled with knitted rugs and lively jazz music, contrasts with the existential concerns surrounding the technology being cultivated.

San Francisco has emerged as a focal point for AI startups and investment. Photo: Washington Post/Getty Images

Kaplan, a physicist educated at Stanford and Harvard, joined OpenAI in 2019 following his research at Johns Hopkins University and Cologne, Germany, and co-founded Anthropic in 2021.

He isn’t alone at Anthropic in expressing concerns. One of his co-founders, Jack Clark, remarked in October that he considers himself both an optimist and a “deeply worried” individual, describing AI as a genuine and enigmatic entity rather than a simplistic and predictable mechanism.

Kaplan conveyed his strong belief that AI systems can be aligned with human interests up to the level of human cognition, although he harbors concerns about what happens beyond that boundary.

He explained: “If you envision creating this process using an AI smarter or comparable in intelligence to humans, it becomes about creating smarter AI. We intend to leverage AI to enhance its own capability. This suggests a process that may seem intimidating. The outcome is uncertain.”

The advantages of integrating AI into the economy are being scrutinized. Outside Anthropic’s headquarters, a sign from another tech corporation pointedly posed a question about returns on investment: “All AI and no ROI?” A September Harvard Business Review study indicated that AI “workslop”, subpar AI-generated work that requires human corrections, was detrimental to productivity.

The most overt benefit appears to be the application of AI to computer programming tasks. In September, Anthropic unveiled Claude Sonnet 4.5, an advanced model geared toward computer coding, creating AI agents, and autonomous computer use.

Attackers exploited the Claude Code tool to target various organizations. Photo: Anthropic

Kaplan commented that the model can handle complex, multi-step programming tasks for 30 continuous hours, and that AI integration has, in specific instances, doubled the speed of the company’s own programmers.

However, Anthropic revealed in November that it suspected a state-supported Chinese group of misusing the Claude Code tool, which not only assisted humans in orchestrating cyberattacks but also executed approximately 30 attacks largely on its own, some of which were successful. Kaplan articulated that permitting an AI to train another AI is “a decision of significant consequence.”

“We regard this as possibly the most substantial decision or the most alarming scenario… Once no human is involved, certainty diminishes. You might begin the process thinking, ‘Everything’s proceeding as intended, it’s safe,’ but the reality is it’s an evolving process. Where is it headed?”

He identified two risks associated with recursive self-improvement, as the method is often called, if it is allowed to operate uncontrollably.

“One concern is regarding potential loss of control. Is the AI aware of its actions? The fundamental inquiries are: Will AI be a boon for humanity? Can it be beneficial? Will it remain harmless? Will it understand us? Will it enable individuals to maintain control over their lives and surroundings?”


The second risk pertains to the security threat posed by self-trained AI that could surpass human capabilities in scientific inquiry and technological advancement.

“It appears exceedingly unsafe for this technology to be misappropriated,” he stated. “You can envision someone wanting this AI to serve their own interests. Preventing power grabs and the misuse of technology is essential.”

Independent studies of cutting-edge AI models, including ChatGPT, have demonstrated that the length of tasks they can execute is expanding, doubling roughly every seven months.
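As a rough arithmetic illustration of what a seven-month doubling time implies, the minimal sketch below (our own, not taken from the cited studies) compounds the growth factor over a few years:

```python
# Minimal illustrative sketch: exponential growth implied by a seven-month
# doubling time. The 7-month figure comes from the article; the function
# and time horizons are our own assumptions, not the studies' methodology.
def task_length_multiplier(months: float, doubling_months: float = 7.0) -> float:
    """Factor by which executable task length grows after `months`."""
    return 2.0 ** (months / doubling_months)

for years in (1, 2, 3):
    print(f"after {years} year(s): x{task_length_multiplier(12 * years):.1f}")
# after 1 year(s): x3.3, after 2 year(s): x10.8, after 3 year(s): x35.3
```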

Kaplan expressed his worry that the rapid pace of advancement might not allow humanity sufficient time to acclimatize to the technology before it evolves significantly further.

“This is a source of concern… individuals like me could be mistaken in our beliefs and it might all stop here,” he remarked. “The best AI might be the one we possess presently. However, we genuinely do not believe that is the case. We anticipate ongoing improvements in AI.”

He added, “The speed of change is so swift that people often lack adequate time to process it or contemplate their responses.”

During its pursuit of AGI, Anthropic is in competition with OpenAI, Google DeepMind, and xAI to develop more sophisticated AI systems. Kaplan remarked that the atmosphere in the Bay Area is “certainly intense with respect to the stakes and competitiveness in AI.”

“Our perspective is that the trends in investments, returns, AI capabilities, task complexity, and so forth are all following this exponential pattern. [They signify] AI’s growing capabilities,” he noted.

The accelerated rate of progress increases the risk of a competitor making an error and falling behind. “The stakes are considerable in terms of staying at the forefront and not losing ground on [the curve] of exponential growth. You could quickly find yourself significantly behind, particularly regarding resources.”

By 2030, an estimated $6.7 trillion is anticipated to be necessary for global data centers to meet increasing demand. Investors are eager to back the companies closest to the forefront.

Significant accomplishments have been made in utilizing AI for code generation. Photo: Chen Xin/Getty Images

At the same time, Anthropic advocates for AI regulation. The company’s mission statement emphasizes “the development of more secure systems.”

“We certainly aim to avoid a situation akin to Sputnik where governments abruptly realize, ‘Wow, AI is crucial’… We strive to ensure policymakers are as knowledgeable as possible during this evolution, so they can make informed decisions.”

In October, Anthropic’s stance led to a confrontation with the Trump administration. David Sacks, an AI adviser to the president, accused Anthropic of “fear-mongering” and of promoting state-level regulations that would benefit the company while being detrimental to startups.

After Sacks suggested the company was positioning itself as an “opponent” of the Trump administration, Kaplan, alongside Dario Amodei, Anthropic’s CEO, countered that the company had publicly supported Trump’s AI initiatives and was collaborating with Republicans, aspiring to maintain America’s dominance in AI.

Source: www.theguardian.com

German Court Rules ChatGPT Violates Copyright Law by ‘Learning’ from Song Lyrics

A court in Munich has determined that OpenAI’s ChatGPT breached German copyright laws by utilizing popular songs from renowned artists to train its language model, which advocates for the creative industry have labeled a pivotal ruling for Europe.

The Munich regional court sided with the German music copyright association GEMA, finding that ChatGPT had used protected lyrics from well-known musicians to “learn” from them.

GEMA, an organization that oversees the rights of composers, lyricists, and music publishers with around 100,000 members, initiated legal action against OpenAI in November 2024.

This case was perceived as a significant test for Europe in its efforts to prevent AI from harvesting creative works. OpenAI has the option to appeal the verdict.


ChatGPT lets users pose inquiries and issue commands to a chatbot, which replies with text that mimics human language patterns. The foundational model of ChatGPT is trained on widely accessible data.

The lawsuit focused on nine of the most iconic German hits from recent decades, which ChatGPT employed to refine its language skills.

This included Herbert Grönemeyer’s 1984 synthpop hit “Männer” (“Men”) and Helene Fischer’s “Atemlos durch die Nacht” (“Breathless Through the Night”), which became the unofficial anthem of the German team during the 2014 World Cup.

The judge ruled that OpenAI must pay undisclosed damages for unauthorized use of copyrighted materials.

Kai Welp, GEMA’s general counsel, mentioned that GEMA is now looking to negotiate with OpenAI about compensating rights holders.

The San Francisco-based company, co-founded by Sam Altman and Elon Musk, argued before the Munich court that its language model learns from the entire training dataset rather than storing or copying specific songs.

OpenAI contended that since the outputs are created in response to user prompts, the users bear legal responsibility, an argument the court dismissed.

GEMA celebrated the ruling as “Europe’s first groundbreaking AI decision,” indicating that it might have ramifications for other creative works.

Tobias Holzmüller, GEMA’s CEO, remarked that the verdict demonstrates that “the internet is not a self-service store, and human creative output is not a free template.”

“Today, we have established a precedent to safeguard and clarify the rights of authors. Even AI tool operators like ChatGPT are required to comply with copyright laws. We have successfully defended the livelihood of music creators today.”

The Berlin law firm Laue, representing GEMA, stated that the court’s ruling “creates a significant precedent for the protection of creative works and conveys a clear message to the global tech industry,” while providing “legal certainty for creators, music publishers, and platforms across Europe.”


The ruling is expected to have ramifications extending beyond Germany as a legal precedent.

The German Journalists Association also praised the decision as a “historic triumph for copyright law.”

OpenAI responded that it would contemplate an appeal. “We disagree with the ruling and are evaluating our next actions.” The statement continued, “This ruling pertains to a limited set of lyrics and does not affect the millions of users, companies, and developers in Germany who utilize our technology every day.”

Furthermore, “We respect the rights of creators and content owners and are engaged in constructive discussions with various organizations globally that can also take advantage of this technology.”

OpenAI is currently facing lawsuits in the U.S. from authors and media organizations alleging that ChatGPT was trained on their copyrighted materials without consent.

Source: www.theguardian.com

Scientists Say Learning Music Can Reverse Brain Aging, Even in Older Adults

Recent research indicates that older adults who play musical instruments tend to have healthier brains.

One investigation examined the impacts of decades of music practice, while another focused on learning new instruments later in life.

In both studies, engaging in music was linked to better brain health and a decrease in age-related cognitive decline.

The first study was published in PLOS Biology and involved collaboration between Canadian and Chinese researchers. They recruited 50 adults with an average age of 65, half of whom had been playing instruments for at least 32 years, while the others had no musical experience.

Additionally, they included 24 young adults with an average age of 23 who had no musical training.

The researchers utilized magnetic resonance imaging (MRI) to assess blood flow in the brains of the participants.

During the scans, participants listened to a recording of a speaker amid background noise in which 50 other voices were present, and were tasked with identifying what the main speaker was saying.

The scans revealed that older musicians’ brains responded to challenges similarly to those of the younger participants.

Nonetheless, the older musicians still showed some signs of cognitive aging. Specifically, they exhibited strong neural connections on the right side of the brain that non-musicians lacked, a form of compensation that could place additional strain on the brain.

“Just as a well-tuned instrument doesn’t need to be played louder to be heard, the brains of older musicians remain finely tuned thanks to years of training,” stated co-author Dr. Yi Du from the Chinese Academy of Sciences.

“Our findings suggest that musical experience helps mitigate the additional cognitive strain typically associated with age-related challenges, particularly in noisy environments.”

A 2025 YouGov poll revealed that 25% of UK adults can play at least one instrument, with the guitar being the second most favored instrument after the piano.

As individuals age, cognitive functions such as memory, learning, and perception often deteriorate, eventually contributing to dementia.

However, researchers posit that cognitive reserve—the brain’s capability to manage damage and decline—can enhance resilience against this deterioration.

The precise mechanisms remain unclear, as noted by Morten Scheibye-Knudsen, Associate Professor of Aging at the University of Copenhagen, Denmark, in an interview with BBC Science Focus.

Some studies suggest that “exercising” the brain through activities like playing instruments, learning new languages, and solving puzzles can improve brain health, but results from other research have been inconsistent.

“Overall, we advocate for brain training, but the evidence is not conclusive,” Scheibye-Knudsen remarked.

Conversely, another recent study, published in Imaging Neuroscience, indicated that musical practice can enhance brain health, even when individuals start playing in later life.

According to a 2024 poll from the University of Michigan, 17% of US adults aged 50-80 engage in playing instruments at least several times a year – Credit: DMP via Getty

Researchers at Kyoto University in Japan followed up an earlier study in which 53 elderly individuals (average age 73) took music lessons for four months. The initial findings indicated no significant differences in brain health among participants.

Four years later, the same participants underwent MRI scans (13 of whom had maintained their music practice).

Those who ceased playing their newly learned instruments showed declines in memory performance, with a noticeable reduction in the volume of the putamen—a brain region associated with motor function, learning, and memory.

However, those who continued playing music over the four years exhibited no cognitive decline.

Scheibye-Knudsen noted that the study demonstrates that “playing an instrument not only helps preserve cognitive function as we age, but it may also directly contribute to maintaining the structural integrity of the brain.”

He added, “Engaging in music beyond what this study covered offers additional advantages, such as enhanced social interaction.”

“I encourage people to start making music; it’s never too late to learn.”

About Our Experts

Morten Scheibye-Knudsen is an associate professor of aging at the University of Copenhagen, Denmark, and leads the Scheibye-Knudsen Research Group. He also serves as the president of the Nordic Aging Association.

Source: www.sciencefocus.com

Machine Learning Aids in Discovering New Planets

Astronomers are keen to discover planets that closely resemble Earth in size, composition, and temperature, but the quest faces numerous challenges. Earth-like planets are small and rocky, making them hard to detect, and current planet-hunting methods tend to favor gas giants, complicating matters further. For a planet to have temperatures similar to Earth’s, it must orbit its host star at a distance similar to Earth’s orbit around the Sun, which means it takes about a year to complete one orbit. This raises an additional challenge for astronomers: locating Earth-like planets around a star requires telescopes to be dedicated to monitoring it for more than a year.
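The roughly one-year figure follows from Kepler’s third law: for a Sun-like star, the orbital period in years is the square root of the cube of the orbital distance in astronomical units. A minimal back-of-the-envelope sketch, with a function and numbers of our own choosing rather than anything from the study:

```python
import math

# Kepler's third law for a star of mass M (in solar masses):
# P[years]^2 = a[AU]^3 / M. Illustrative only; not code from the study.
def orbital_period_years(a_au: float, stellar_mass_msun: float = 1.0) -> float:
    return math.sqrt(a_au ** 3 / stellar_mass_msun)

print(orbital_period_years(1.0))   # 1.0 year at Earth's distance from a Sun-like star
print(orbital_period_years(0.05))  # ~0.011 years (about 4 days) for a close-in planet
```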

To maximize efficiency and reduce time spent on monitoring, scientists are seeking ways to identify promising stars for in-depth searches before committing resources. A team of astronomers explored whether observable characteristics of planetary systems could indicate the presence of Earth-like planets. They found that the arrangement of known planets, along with their masses, radii, and distances from their host star, could help predict the likelihood of Earth-like planets existing in those systems.

The team tested their approach using machine learning. They began by compiling a sample of planetary systems, some with Earth-like planets and some without. Since astronomers have discovered only about 5,000 stars that host orbiting planets, this sample was too small to train machine-learning models effectively. Consequently, the team generated three sets of synthetic planetary systems using the Bern model, a computational framework that simulates how planets form.

The Bern model starts each system with 20 clumps of dust around 600 meters (approximately 2,000 feet) across. These seeds kickstart the accumulation of gas and dust into full-sized planets over a span of 20 million years, and each system then evolves to a stable state over more than 10 billion years, yielding a synthetic planetary system that astronomers can use in their datasets. Using this model, the team created 24,365 systems with Sun-sized stars, 14,559 systems with similar stars, and 14,958 systems with other types of stars, and further subdivided each group into systems containing Earth-like planets and systems without them.

With these larger datasets in hand, the team used a machine-learning technique known as a random forest to categorize planetary systems by their potential to host Earth-like planets. In a random forest, many decision trees, each trained on a subsection of the full training dataset, vote to classify each input as either true or false. The team set up the algorithm so that a planetary system that could host one or more Earth-like planets should be categorized as “true,” and they evaluated the algorithm’s accuracy using a metric known as the precision score.

The random forests made decisions based on specific characteristics of each synthetic planetary system. These included the system’s total planet count, whether similar systems have been observed by astronomers, the masses and orbital distances of planets more than 100 times Earth’s mass, and the properties of the host stars. The team allocated 80% of the synthetic planetary systems as training data, reserving the remaining 20% for initial testing of the completed algorithm.
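As a rough sketch of this kind of pipeline (not the team’s actual code, and with placeholder features and labels), scikit-learn’s random forest can be trained on an 80/20 split and scored with precision:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder features standing in for system properties (planet count,
# largest-planet mass and distance, host-star properties); the real study
# used Bern-model synthetic systems.
X = rng.random((5000, 4))
y = (X[:, 0] > 0.5).astype(int)  # placeholder "hosts an Earth-like planet" label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)  # the 80/20 split described above
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Precision: of the systems the forest flags as "true", how many really are.
print(precision_score(y_test, model.predict(X_test)))
```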

The findings revealed that the random forest models accurately predicted where Earth-like planets are likely to exist with an impressive precision score of 0.99. Building on this success, they tested the model against data from 1,567 stars of similar sizes, each with at least one known orbiting planet. Out of these, 44 met the algorithm’s threshold for having Earth-like planets, suggesting that the majority of systems in this subset are stable enough to host such planets.

The team concluded that their models can effectively identify candidate stars for hosting Earth-like planets; however, they issued a caution. One concern is that the synthesis of planetary systems is time-consuming and resource-intensive, limiting the availability of training data. A more significant caution is rooted in the assumption that the Bern model accurately simulates the layered structure of planets. They urged researchers to rigorously validate their models for future theoretical work.

Source: sciworthy.com

The Constraints of Machine Learning in Analyzing Galaxies That Are Difficult to Observe

Progress in artificial intelligence (AI) has been a major focus of the news over the past couple of years, with models such as ChatGPT and DALL·E being what many people associate with AI. Astronomers use AI tools to analyze vast data sets that would be impractical to go through manually. Machine learning (ML) algorithms are crucial for categorizing data based on parameters derived from previous studies. One example of ML usage is the identification of elusive patterns in sky surveys, though the limitations of this method for classifying objects in space are not thoroughly understood.

To address these limitations, a group of scientists led by Pamela Marchand-Cortes at the University of La Serena in Chile tested the capabilities of ML. They used ML models such as Rotation Forest, Random Forest, and LogitBoost to categorize objects beyond the Milky Way based on their properties, aiming to see whether ML could accurately reproduce classifications that had already been made manually. The challenge lay in a dense region of sky obscured by dust in the Milky Way, known as the “Zone of Avoidance.” The team’s experiment showed that ML had difficulty categorizing objects in this challenging area.
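A hedged sketch of this kind of test appears below: train a classifier on the well-observed sky and inspect its per-class performance on the obscured subset. Rotation Forest and LogitBoost are not available in scikit-learn, so a random forest stands in, and all of the data is synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
X = rng.random((2000, 4))                                  # stand-in source features
y = (X[:, 0] + 0.3 * rng.random(2000) > 0.8).astype(int)   # 1 = galaxy, 0 = other
obscured = X[:, 1] > 0.8       # pretend these rows fall in the Zone of Avoidance

# Train on the well-observed sky only, then check per-class performance
# on the obscured subset to see where the classifier struggles.
clf = RandomForestClassifier(random_state=1).fit(X[~obscured], y[~obscured])
print(classification_report(y[obscured], clf.predict(X[obscured]),
                            target_names=["other", "galaxy"]))
```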

The team gathered and analyzed data from X-ray images to manually identify objects and compare ML’s performance. ML correctly identified large objects like galaxies in only a few instances, showcasing its limitations. Despite the potential for ML to assist in studying obscured regions of the universe, the team recommended training AI models with diverse samples to enhance accuracy in future research.

Source: sciworthy.com

New study finds bumblebees can acquire intricate skills through social learning

Culture refers to behaviors that are socially learned and persist within a group over long periods of time. Growing evidence suggests that animal culture, like human culture, may be cumulative. However, the accumulated culture of humans contains behaviors so complex that they exceed the ability of any individual to discover them independently within a lifetime. New research shows that buff-tailed bumblebees (Bombus terrestris) can learn from trained conspecifics how to open a novel two-step puzzle box to obtain food, even though they fail to open it independently.

Buff-tailed bumblebees (Bombus terrestris) socially learn behaviors that are too complex to innovate alone. Image credit: Ralphs Fotos.

“This groundbreaking research opens new avenues for understanding the evolution of intelligence and social learning in animals,” said Professor Lars Chittka, a researcher at Queen Mary University of London.

“This challenges long-held assumptions, paves the way to further explore the cognitive wonders hidden in the insect world, and even suggests the exciting possibility of cumulative culture among these seemingly simple creatures.”

Professor Chittka and his colleagues designed a two-step puzzle box that required bumblebees to perform two different actions in sequence to access a sweet reward at the end.

Training bees to do this was no easy task, and the researchers had to help them by adding additional rewards along the way.

This temporary reward was eventually taken away, and the bees were forced to open the entire box before getting the treat.

Surprisingly, while individual bees struggled to solve the puzzle from the beginning, bees allowed to observe trained demonstrator bees quickly learned to complete the entire sequence, including the unrewarded first step, and obtained the reward at the end.

This study shows that bumblebees have a level of social learning that was previously thought to be unique to humans.

They can share and acquire behaviors that are beyond the cognitive capacity of individuals. This ability is thought to underpin the vast and complex nature of human culture, and was previously thought to be exclusive to us.

“This is a very difficult task for bees,” said study lead author Dr. Alice Bridges, a researcher at Queen Mary University of London and the University of Sheffield.

“They had to learn two steps to get the reward, and the first action in the sequence was not rewarded.”

“Initially, we had to train the demonstrator bees with a temporary reward included, which highlights the complexity of the task.”

“But other bees learned the sequence from the social observations of these trained bees, without ever experiencing the reward of the first step.”

“But when we tried to get other bees to open the box without a bee trained to show them the solution, they couldn't open it at all.”

This study opens up exciting possibilities for understanding the emergence of cumulative culture in the animal kingdom, beyond individual learning.

Cumulative culture refers to the gradual accumulation of knowledge and skills over generations, allowing increasingly complex behaviors to develop.

The ability of bees to learn such complex tasks from demonstrators suggests potential pathways for cultural transmission and innovation beyond the bees' individual learning abilities.

“This challenges the traditional view that only humans can socially learn complex behaviors beyond individual learning,” says Professor Chittka.

“It is increasingly likely that many of the most remarkable achievements of social insects, such as the nesting structures of honey bees and wasps and the agricultural habits of ants that farm aphids and fungi, first spread through imitation of clever innovators and eventually became part of each species’ behavioral repertoire.”

A paper on this research was published in the journal Nature on March 6, 2024.

_____

A.D. Bridges et al. Bumblebees socially learn behaviour too complex to innovate alone. Nature, published online March 6, 2024; doi: 10.1038/s41586-024-07126-4

Source: www.sci.news

Storks refine migratory routes through experiential learning

Storks in their breeding grounds in Germany

Christian Ziegler/Max Planck Institute for Animal Behavior

As storks grow older, they choose faster and more direct migration routes. This suggests that storks are learning by experience to perfect these routes.

“We were able to track these animals and get detailed information about when and where they go,” says Ellen Aikens at the University of Wyoming. “But we wanted to learn more about how migration is refined and developed over the lifespan of storks.”

White storks (Ciconia ciconia) breed mainly in Europe but fly to central or southern Africa during the winter. From 2013 to 2020, Aikens and her colleagues captured 258 young storks at five breeding sites in Germany and Austria. They attached tags that tracked the birds’ locations before releasing them.

In total, the team was able to record 301 migration events from 40 storks, with all storks completing at least two consecutive migrations.

After analyzing the data, the researchers found that young birds tend to spend more time exploring new places and trying different routes each year.

“The reason behind this is that during early life they collect information to better understand their environment,” Aikens said. “Because they haven’t yet bred, they have less time pressure to move into the territory they need to breed or build nests.”

However, as the storks grew, their paths gradually became straighter and they began to fly much faster in order to reach their destination faster.

“This suggests that they are progressively upgrading their routes to shorter and more direct ones, but this comes at the cost of energetically more expensive flight,” Aikens said. She says this change occurs because, as storks mature, they need to compete with other storks for quality nesting sites in order to breed successfully.

“Storks learn the same way we learn,” Aikens says. “We should appreciate more how wise and how wonderful it is that they are able to complete these journeys successfully and do better over the years.”

Source: www.newscientist.com