How Google’s DeepMind Tool Accelerates Hurricane Behavior Predictions

As Tropical Storm Melissa wreaked havoc south of Haiti, meteorologist Philippe Papin from the National Hurricane Center (NHC) firmly believed it was on the verge of evolving into a formidable hurricane.

In his capacity as lead forecaster, he predicted that within just 24 hours the storm would intensify into a Category 4 hurricane and turn toward Jamaica’s coastline. No NHC forecaster had made such a call up to that point. It was a daring prediction, and it would soon be validated.

However, Mr. Papin had an ace up his sleeve: artificial intelligence, specifically Google’s DeepMind hurricane model, released in June. As predicted, Melissa grew into an extraordinarily strong storm that devastated Jamaica.

NHC forecasters are increasingly depending on Google DeepMind. On the morning of October 25th, Mr. Papin elaborated on this in a public forum. He also shared on social media that Google’s model was central to his confidence: “Approximately 40 out of 50 members of the Google DeepMind ensemble predict Melissa will reach Category 5. While we are cautious about predicting its intensity due to track uncertainty, it remains a strong possibility.”
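The "40 out of 50 members" figure above is the standard way forecasters turn an ensemble into a probability: count the members that cross a threshold. Here is a minimal, hypothetical sketch of that calculation (the member wind speeds are made up; 137 knots is the Saffir-Simpson Category 5 threshold):

```python
# Hypothetical sketch of ensemble probability, as in the 40-of-50 example.
# Wind speeds are illustrative; 137 kt is the Saffir-Simpson Cat 5 cutoff.

def ensemble_probability(peak_winds_kt, threshold_kt=137):
    """Fraction of ensemble members whose predicted peak wind
    meets or exceeds a category threshold (137 kt = Category 5)."""
    hits = sum(1 for w in peak_winds_kt if w >= threshold_kt)
    return hits / len(peak_winds_kt)

# 50 illustrative members, 40 of which reach Category 5 strength.
members = [150] * 40 + [120] * 10
print(ensemble_probability(members))  # 0.8
```

A 0.8 result corresponds to the roughly 80% confidence Papin described, which is why a large majority of agreeing members justified such an aggressive forecast.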


“Rapid intensification is likely as the storm traverses very warm ocean waters, characterized by the highest ocean heat content in the entire Atlantic Basin.”

Google DeepMind’s first AI model designed specifically for hurricanes has now beaten traditional weather forecasters at their own game. Across all 13 Atlantic storms so far this year, it has outperformed human forecasters in track prediction.

Ultimately, Melissa made landfall in Jamaica as a Category 5 hurricane, one of the most powerful Atlantic landfalls recorded in nearly two centuries. Mr. Papin’s audacious forecasts may have given Jamaicans critical time to brace for disaster, potentially safeguarding lives and property.

Google DeepMind has been revolutionizing weather forecasting in recent years, and the parent forecasting system on which the new hurricane model is based also excelled at identifying last year’s large-scale weather patterns.

Google’s models function by discovering patterns that traditional, slower, physics-based weather models may overlook.

“They operate much faster than their physics-based counterparts, with increased computational efficiency that saves both time and resources,” remarked former NHC forecaster Michael Rowley.

“This hurricane season has demonstrated that emerging AI weather models can be competitive, and in some instances, more accurate than the slower, traditional physics-based models that have long been our standard,” Rowley noted.

It’s important to note that Google DeepMind exemplifies machine learning—not generative AI like ChatGPT. Machine learning processes large data sets to identify patterns, allowing models to generate answers in minutes using standard computing resources. This stands in stark contrast to the flagship models employed by governments for decades, which take hours to compute using some of the world’s largest supercomputers.

Nevertheless, the fact that Google’s model has quickly surpassed traditional models is nothing short of remarkable for a meteorologist devoted to forecasting the planet’s most powerful storms.


Former NHC forecaster James Franklin expressed his admiration: “The sample size is now significant enough to conclude this isn’t merely beginner’s luck.”

Franklin indicated that Google DeepMind has eclipsed all other models in tracking hurricane paths globally this year, though, as with many AI models, its high-end intensity predictions can sometimes miss the mark. Earlier this year, Hurricane Erin rapidly intensified to Category 5 in the northern Caribbean, and Typhoon Kalmaegi recently struck the Philippines.

Looking ahead, Franklin mentioned his intention to engage with Google during the upcoming offseason to enhance DeepMind’s output by providing additional internal data for better assessment of its predictions.

“What concerns me is that while these predictions appear very accurate, the model’s output operates like a black box,” Franklin remarked.

No private or commercial entity had ever before developed a leading weather model, and researchers cannot scrutinize this one’s methods. Unlike most models built and maintained by governments, which are available to the public at no cost, Google publishes DeepMind’s forecasts in real time on a dedicated website while keeping its methodologies largely concealed.

Google is not alone in harnessing AI for challenging weather forecasting issues. Governments in the US and Europe are also working on their own AI weather models, demonstrating enhanced capabilities compared to previous non-AI versions.

The next frontier in AI weather forecasting appears to be startups tackling sub-seasonal forecasts and other problems that have so far proven difficult, such as improving advance warning of tornado outbreaks and flash floods, a goal supported by US government funding. Additionally, a company named WindBorne Systems is launching weather balloons to fill gaps in the U.S. weather observation network, recently diminished by the Trump administration.

Source: www.theguardian.com

Research Shows Accurate Age Predictions Can Be Made with Just 50 DNA Molecules

Researchers at Hebrew University used a deep learning network to analyze DNA methylation patterns, estimating chronological age (defined as postnatal time) with a median accuracy of 1.36 to 1.7 years for individuals under 50. The work is published in the journal Cell Reports.



Using ultra-deep sequencing of over 300 blood samples from healthy individuals, the research indicates that age-dependent methylation changes occur in a probabilistic yet coordinated, block-like fashion across clusters of CpG sites. Image credit: Ochana et al., doi: 10.1016/j.celrep.2025.115958.

“We observe that our DNA leaves measurable marks over time,” commented Professor Tommy Kaplan from Hebrew University.

“Our model interprets these marks with remarkable precision.”

“The essence lies in how our DNA changes through a process known as methylation, the chemical tagging of DNA with methyl groups (CH3).

“By focusing on two vital regions of the human genome, our team successfully decoded these changes at the level of individual molecules, employing deep learning to generate accurate age estimations.”
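The reported "median accuracy" of 1.36 to 1.7 years is the median absolute error between predicted and true ages. This illustrative sketch (not the study's code; the ages below are invented) shows how that metric is computed:

```python
# Illustrative sketch: median absolute error between predicted and
# chronological ages. All values here are made up for demonstration.
from statistics import median

def median_absolute_error(predicted, actual):
    """Median of the absolute prediction errors across individuals."""
    return median(abs(p - a) for p, a in zip(predicted, actual))

true_ages = [24.0, 31.5, 40.2, 18.9, 47.3]
predicted = [25.1, 30.0, 41.8, 19.2, 45.9]
print(median_absolute_error(predicted, true_ages))  # ~1.4
```

Unlike the mean, the median error is robust to a few badly mispredicted individuals, which is why it is a common headline metric for epigenetic age models.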

In this research, Professor Kaplan and his team examined blood samples from over 300 healthy subjects and analyzed data from the decade-long Jerusalem Perinatal Study.

The model developed by the team showed consistent performance across various factors, including smoking, weight, gender, and diverse indicators of biological aging.

In addition to potential medical applications, this technique could transform forensic science by enabling experts to estimate the age of suspects based on DNA traces.

“This provides us with a new perspective on cellular aging,” stated Yuval Dor, a professor at Hebrew University.

“It’s a striking example of the intersection between biology and artificial intelligence.”

The researchers found new patterns in how DNA changes over time, suggesting that cells record the passage of time in both random and coordinated bursts, akin to biological clocks.

“It’s not solely about knowing your age,” explained Professor Ruth Shemmer of Hebrew University.

“It’s about comprehending how cells and molecules keep track of time.”

“This research could redefine our approach to health, aging, and identity,” added the scientist.

“From assisting physicians in treatment based on an individual’s biological timeline to equipping forensic investigators with advanced tools for crime-solving, the capability to decipher age from DNA paves the way for groundbreaking advancements in science, medicine, and law.”

“Moreover, it enhances our understanding of the aging process and brings us closer to unraveling our body’s internal clock.”

____

Bracha-Lea Ochana et al. Time is encoded by changes in methylation at clustered CpG sites. Cell Reports, published online July 14, 2025. doi: 10.1016/j.celrep.2025.115958

Source: www.sci.news

Antarctic Ocean Ice Loss Accelerates Ocean Warming Beyond Predictions

Recent Summers Show Antarctic Sea Ice Cover at Unprecedented Lows

Nature Picture Library / Alamy

The decline of sea ice around Antarctica has led to a doubling of icebergs calved from the ice sheet and increased spikes in seawater temperatures, exacerbating the effects of heat accumulation in the Southern Ocean.

In recent years, sea ice extent at both poles has sharply decreased. In 2023, the Antarctic winter sea ice area fell 1.55 million square kilometers short of the expected average.

That loss is equivalent to an area of ice nearly 6.5 times the size of the UK disappearing. 2024 saw similarly low figures, and 2025 is also expected to be a harsh year.

Edward Doddridge from the University of Tasmania and his team are investigating the implications of the long-term loss of the protective buffer that Antarctic sea ice provides.

The researchers found that the average temperature of the Southern Ocean between latitudes 65° and 80° south has increased by 0.3°C since 2016, a warming they attribute to summer sea ice losses.

Alarmingly, the heat from a year with particularly low sea ice does not dissipate by the next year. Instead, it continues to warm the ocean for at least the following three years, resulting in even greater temperature increases than expected, according to Doddridge.

“For some time, we’ve known that summer sea ice loss contributes to ocean warming because ice and its reflective snow cover keep heat at bay,” explains Doddridge.

“The fact that the ocean retains warming effects for three years complicates the consequences of warming in the Southern Ocean.”

Moreover, the dramatic reduction in sea ice may accelerate the loss of inland ice. When the sea surface freezes, the ice dampens Southern Ocean swells, preventing them from reaching the ice shelves that fringe Antarctica. Once that protective barrier disappears, the coastal ice shelves become more susceptible to breaking apart.

The research found that for every additional 100,000 square kilometers of sea ice lost, six more icebergs larger than one square kilometer were formed. “We saw twice as many icebergs in periods of low sea ice,” said Doddridge.
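The six-icebergs-per-100,000-km² figure is a simple linear rate, so it can be scaled to any deficit. As a purely illustrative back-of-envelope calculation (applying the rate to the 2023 winter deficit mentioned earlier, which the study itself may not do):

```python
# Back-of-envelope sketch of the linear relationship reported above:
# roughly six additional large icebergs per 100,000 km^2 of sea ice lost.

def extra_icebergs(sea_ice_lost_km2, rate_per_100k_km2=6):
    """Scale the reported calving rate to a given sea ice deficit."""
    return sea_ice_lost_km2 / 100_000 * rate_per_100k_km2

# Naively applying the 1.55 million km^2 deficit from 2023:
print(extra_icebergs(1_550_000))  # 93.0
```

The point of the arithmetic is just to show how quickly a continent-scale ice deficit translates into dozens of additional large icebergs.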

Additionally, the reduction in sea ice significantly impacts species that rely on moving between the ocean and solid ice for survival. The study indicates that species like the emperor penguin (Aptenodytes forsteri) and crabeater seal (Lobodon carcinophagus) may face severe challenges.

Scientific investigation in Antarctica is also becoming more difficult, as sea ice is crucial for safely resupplying research stations.

Nerilie Abram from The Australian National University remarks that “this analysis shows very few positives surrounding the loss of sea ice and its impact on the environment.”

“In years with extremely low sea ice, the Antarctic ecosystem continues to experience effects for years afterward. This isn’t just a one-time event,” Abram asserts. “There are numerous ways this loss of ocean ice influences Antarctic ecosystems.”


Source: www.newscientist.com

AI from DeepMind outperforms current weather predictions in accuracy

Weather forecasting today relies on simulations that require large amounts of computing power.

Petrovich9/Getty Images/iStockphoto

Google DeepMind claims its latest weather forecasting AI can predict faster and more accurately than existing physics-based simulations.

GenCast is the latest in DeepMind's ongoing research project to improve weather forecasts using artificial intelligence. The model was trained on 40 years of historical data from the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 archive, which includes regular measurements of temperature, wind speed, and barometric pressure at various altitudes around the world.

Data up to 2018 was used to train the model, and then 2019 data was used to test predictions against known weather conditions. The company found that it outperformed ECMWF's industry standard ENS forecasts 97.4% of the time, and 99.8% of the time when forecasting more than 36 hours ahead.

Last year, DeepMind collaborated with ECMWF to create an AI that outperformed the “gold standard” high-resolution HRES 10-day forecast more than 90% of the time. Previously, DeepMind developed a “nowcasting” model that used five minutes of radar data to predict the probability of rain over a given one-square-kilometer area from five to 90 minutes in advance. Google is also working on ways to use AI to replace small parts of deterministic models to speed up calculations while maintaining accuracy.

Existing weather forecasts are based on physical simulations run on powerful supercomputers to deterministically model and estimate weather patterns as accurately as possible. Forecasters typically run dozens of simulations with slightly different inputs in groups called ensembles to better capture the variety of possible outcomes. These increasingly complex and large numbers of simulations are computationally intensive and require ever more powerful and energy-consuming machines to operate.

AI has the potential to provide lower-cost solutions. For example, GenCast uses an ensemble of 50 possible futures to create predictions. Using custom-built, AI-focused Google Cloud TPU v5 chips, each prediction takes just 8 minutes.
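The ensemble idea described above, running the same model many times from slightly perturbed starting conditions and reading the spread as uncertainty, can be sketched in a few lines. The "model" below is a toy stand-in (not GenCast or ENS), and all numbers are invented:

```python
# Minimal sketch of ensemble forecasting: perturb the initial
# conditions, re-run the (toy) model, and summarize the spread.
import random

def toy_forecast(initial_temp_c):
    """Toy deterministic 'model': warms the input by a fixed 2 degrees."""
    return initial_temp_c + 2.0

def run_ensemble(initial_temp_c, n_members=50, perturbation=0.5, seed=0):
    """Run n_members forecasts from randomly perturbed initial states."""
    rng = random.Random(seed)
    return [
        toy_forecast(initial_temp_c + rng.uniform(-perturbation, perturbation))
        for _ in range(n_members)
    ]

members = run_ensemble(15.0)
mean = sum(members) / len(members)
spread = max(members) - min(members)
print(f"ensemble mean {mean:.1f} C, spread {spread:.1f} C")
```

In real physics-based systems each member is a full supercomputer simulation, which is why a 50-member ensemble is so expensive and why an 8-minute-per-member AI alternative is attractive.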

GenCast operates on grid cells approximately 28 kilometers across near the equator. Since the data used in this study were collected, ECMWF's ENS has been upgraded to a resolution of just 9 kilometers.

Ilan Price at DeepMind says AI doesn't have to follow suit, and could provide a way forward without collecting more detailed data or performing more intensive calculations. “If you have a traditional physics-based model, that's a necessary requirement to solve the physical equations more accurately, and therefore to get more accurate predictions,” Price says. “With machine learning, it is not always necessary to go to higher resolution to get more accurate simulations and predictions from your model.”

David Schultz, a researcher at the University of Manchester in the UK, says AI models offer an opportunity to make weather forecasts more efficient, but cautions that they are often over-hyped and that it is important to remember they rely heavily on training data from traditional physics-based models.

“Will [GenCast] revolutionize numerical weather forecasting? No, because in order to train the model, you first have to run a numerical weather prediction model,” says Schultz. “These AI tools wouldn't exist if ECMWF didn't exist in the first place and hadn't created the ERA5 reanalysis, with all the investment that went into it. It's like saying, ‘I can beat Garry Kasparov at chess. But only after studying every move he's ever played.’”

Sergey Frolov, a researcher at the National Oceanic and Atmospheric Administration (NOAA), believes further advances in AI will require higher-resolution training data. “What we're basically seeing is that all of these approaches are being held back [from advancing] by the fidelity of the training data,” he says. “And the training data comes from operational centers like ECMWF and NOAA. To move this field forward, we need to generate more training data using higher-fidelity physics-based models.”

But for now, GenCast offers a faster way to produce predictions at lower computational cost. Kieran Hunt, a researcher at the University of Reading in the UK, believes ensembles can improve the accuracy of AI predictions, just as a collection of physics-based forecasts produces better results than a single forecast.

Hunt points to the UK's record temperature of 40°C (104°F) in 2022 as an example. A week or two in advance, only one ensemble member predicted it, and it was considered an anomaly. Then, as the heat wave approached, more of the predictions converged on it, providing early warning that something unusual was about to happen.

“You can get away with it a little bit if you have one member showing something really extreme: that might happen, but it probably won't,” Hunt says. “I don't think it's necessarily a step change; it's the combination of new AI approaches with tools we've been using in weather forecasting for a while to ensure quality. There is no doubt that this will yield better results than the first wave of AI weather forecasting.”


Source: www.newscientist.com

DESI’s Latest Observations Confirm General Relativity’s Predictions

Astronomers using the Dark Energy Spectroscopic Instrument (DESI) on NSF's Nicholas U. Mayall 4-meter Telescope at Kitt Peak National Observatory have mapped how nearly 6 million galaxies cluster together over 11 billion years of the universe's history. Their results provide one of the most rigorous tests of Albert Einstein's theory of general relativity to date.

This artist's impression shows the evolution of the universe, starting with the Big Bang on the left and continuing with the emergence of the Cosmic Microwave Background. The formation of the first stars ends the Dark Ages of the universe, followed by the formation of galaxies. Image credit: M. Weiss / Harvard-Smithsonian Center for Astrophysics.

“General relativity has been very well tested at the scale of the solar system, but we also needed to test whether our assumptions work on even larger scales,” said Dr. Pauline Zarouk, a cosmologist at CNRS and the Institute for Nuclear and High Energy Physics.

“Studying the rate at which galaxies formed allows us to directly test our theories, and so far the data are consistent with what general relativity predicts on cosmological scales.”

In the new study, Dr. Zarouk and her colleagues found that gravity behaves as predicted by Einstein's theory of general relativity.

This result validates our main model of the universe and limits the possibility of a modified theory of gravity. Modified gravity theories have been proposed as an alternative way to explain unexpected observations, such as the accelerated expansion of the universe, which is usually attributed to dark energy.

This complex analysis uses around 6 million galaxies and quasars, allowing researchers to look up to 11 billion years into the past.

Today's results provide an expanded analysis of DESI's first year of data. DESI created the largest 3D map of the universe to date in April, revealing hints that dark energy may be evolving over time.

April's results examine a particular feature of how galaxies cluster together, known as baryon acoustic oscillations (BAOs).

The new analysis expands the scope by measuring how galaxies and matter are distributed across the universe at different scales.

The study also improved constraints on the mass of neutrinos, the only fundamental particle whose mass has not yet been precisely measured.

Neutrinos slightly affect the clustering pattern of galaxies, an effect that can be measured thanks to the quality of the DESI data.

The DESI constraints are the most stringent to date and complement those from laboratory measurements.

The study required months of additional work and cross-checking. As in the previous study, the team used a blinded analysis, keeping the results hidden from the scientists until the end to reduce unconscious bias.

“This research is one of the important projects of the DESI experiment, teaching us not only fundamental aspects of particles but also fundamental aspects of the large-scale universe, such as the distribution of matter and the behavior of dark energy,” said Dr. Stephanie Juneau, an astronomer at NSF's NOIRLab and a member of the DESI Collaboration.

“By comparing the evolution of the distribution of matter in the universe with existing predictions, such as Einstein's theory of general relativity and competing theories, we are further narrowing down the possibilities for the gravitational model.”

“Dark matter makes up about a quarter of the universe, and dark energy makes up another 70%, but we don't actually know what either is,” said Mark Maus, a PhD student at Berkeley Lab and the University of California, Berkeley.

“The idea that we can take pictures of the universe and address these big fundamental questions is amazing.”

The DESI Collaboration shared its results today in several papers on arXiv.org.

Source: www.sci.news

New Tool from NOAA and CDC Reveals Heat Predictions and Risk Levels

CDC Director Mandy Cohen emphasized the importance of tools and guidelines that help individuals find places to stay cool when air conditioning is unavailable, recognize symptoms of heat illness, and properly manage medications. At a press conference on Monday, Cohen highlighted the importance of understanding how drugs interact with heat.

“While heat can impact our health, it is crucial to remember that heat-related illness and death are preventable,” Cohen stated.

Heat-related deaths outnumber those caused by other extreme weather events such as floods, hurricanes, and tornadoes in the United States each year. The record-breaking heat experienced last summer highlighted the threat of scorching temperatures, particularly in the South and Southwest regions of the country.

NOAA officials expressed optimism that the new resources will assist communities in preparing for the upcoming summer season. The agency anticipates above-average temperatures in May and June across the United States, indicating another hot summer ahead.

“It is never too early to start preparing for heat-related challenges,” emphasized NOAA Administrator Rick Spinrad during a briefing.

NOAA’s HeatRisk tool categorizes heat risks on a scale from 0 (green) to 4 (magenta), with 4 indicating extreme and/or prolonged heat impacts. The tool considers factors such as maximum and minimum temperatures as well as the combined effects of heat during both day and night. It is tailored to provide location-specific heat outlooks as environmental conditions vary from one place to another.
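The 0-to-4 color scale described above is essentially a lookup from a numeric risk level to a color and an audience-facing meaning. A hedged sketch of that mapping follows; the level descriptions are abbreviated paraphrases of NOAA's published categories, and the lookup itself is purely illustrative, not NOAA's implementation:

```python
# Illustrative mapping of the HeatRisk 0-4 scale; descriptions are
# abbreviated paraphrases of NOAA's category definitions.
HEATRISK_LEVELS = {
    0: ("green", "little to no risk from expected heat"),
    1: ("yellow", "minor risk, mainly for heat-sensitive individuals"),
    2: ("orange", "moderate risk for those without cooling or hydration"),
    3: ("red", "major risk affecting much of the population"),
    4: ("magenta", "extreme and/or prolonged heat impacts"),
}

def describe_heatrisk(level):
    """Render a risk level as a human-readable summary string."""
    color, meaning = HEATRISK_LEVELS[level]
    return f"HeatRisk {level} ({color}): {meaning}"

print(describe_heatrisk(4))
```

Keeping the scale categorical rather than raw-temperature-based is what lets the same level mean something locally appropriate in both Phoenix and Seattle.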

The forecast also includes historical data to provide context on the predicted temperatures relative to past records during the same time of year.

NOAA National Weather Service Director Ken Graham highlighted that the HeatRisk tool can assist individuals in making informed decisions about outdoor activities based on the heat risk level. The tool aims to complement heat watches and warnings issued by government agencies by offering additional context for users.

The initial prototype of the HeatRisk tool was developed for California by the National Weather Service in 2013 and expanded to include Western states in 2017. It is currently available as a trial tool across the continental United States.

Members of the public are encouraged to submit feedback on the tool by September 30th to the National Weather Service.

Source: www.nbcnews.com

Anticipating the Future: 8 AI Predictions for 2024

2023 was a shocking year for AI, which moved from niche to mainstream technology faster than any technology before it. But 2024 will be the year the hype becomes reality, as people reckon with the capabilities and limitations of AI as a whole. Here are some ways we think that could happen.

OpenAI becomes a product company

After November's management shake-up, OpenAI will be a different company. It may not look like it from the outside, but the trickle-down effect of Sam Altman taking fuller charge will be felt at every level. One way we expect that to manifest is in the idea of “shipping.” You can see it in the GPT Store: originally scheduled for release in December, it was understandably delayed by the executive turmoil, and this “AI app store” will be promoted heavily as the platform for AI toys and tools. Never mind Hugging Face or other open-source platforms; OpenAI has a great model in Apple, and it will follow that model all the way to the bank. Expect more moves like this from OpenAI in 2024, as the prudence and academic reserve exhibited by the previous board gives way to an unseemly hunger for markets and customers. Other big companies working on AI will likely follow this trend (for instance, we expect Gemini/Bard to land in a ton of Google products), but we suspect it will be most pronounced at OpenAI.

Agents, generated videos, and generated music graduate from quaint to experimental

Some niche applications of AI models, such as agents and generative multimedia, will grow beyond “meh” status in 2024. If AI is going to help you do more than summarize things or create lists, it will need access to spreadsheets, ticket-buying interfaces, transit apps, and more. In 2023, several attempts were made at this “agent” approach, but none really caught on. I don't expect anything to truly take off in 2024 either, but agent-based models should look a little more convincing than they did last year. We'll also see some clutch use cases in notoriously tedious processes, such as submitting insurance claims. Video and audio generation will likewise find niches where their shortcomings are less obvious. In the hands of skilled creators, the lack of photorealism will not be an issue, and AI video will be used in fun and interesting ways. Similarly, generative music models are likely to be adopted in some major productions, such as games, where professional musicians can leverage the tools to create endless soundtracks.

The limitations of monolithic LLM become clearer

So far, there has been a lot of optimism about the capabilities of large language models, and they have indeed proven better than almost anyone expected, with capabilities that have so far scaled as more compute is added. But 2024 will be the year something gives. There is so much research happening at the forefront of this field that it is impossible to predict exactly where, but the seemingly magical “emergent” abilities of LLMs will be further studied and better understood in 2024, and quirks like their inability to multiply large numbers will start to make more sense. At the same time, returns on parameter count are beginning to diminish: while training a 500-billion-parameter model might technically yield better results, the compute required could probably be deployed more effectively elsewhere. A single monolithic model is unwieldy and expensive to update, whereas a mixture of experts, a collection of smaller, more specialized and perhaps multimodal models, is much easier to update piecemeal and may prove nearly as effective.

Marketing meets reality

The simple fact is that it will be very difficult for companies to live up to the hype built in 2023. Marketing claims about the machine learning systems companies have deployed to keep up will come under quarterly and annual review, and there is a strong possibility that many will be found inadequate. We expect significant customer withdrawal from AI tools whose benefits do not justify their costs and risks. At the other end of the spectrum, we may see litigation and regulatory action against AI service providers who cannot substantiate their claims. Capabilities will continue to grow and advance, but it is unlikely that every product of 2023 will survive, and a round of consolidation is coming as the wave's more erratic riders falter and are absorbed.

Source: techcrunch.com

Astronomers make breakthrough discovery in planet formation, conflicting with theoretical predictions

Recent observations of the young star DG Taurus reveal a smooth protoplanetary disk in which no planets have yet formed, suggesting that the system is on the brink of this process. The findings show unexpected dust grain growth patterns and provide new insights into the early stages of planet formation. Credit: SciTechDaily.com

Astronomers have become very good at finding signs of planet formation around stars. However, to fully understand planet formation, it is important to examine cases where this process has not yet begun.

Looking for something and not finding it can sometimes be even harder than finding it, but new detailed observations of the young star DG Taurus reveal that it has a smooth protoplanetary disk with no signs of planet formation. This absence may indicate that DG Taurus is on the eve of planet formation.

Image of radio emission intensity from the disk around DG Taurus observed with ALMA. Rings have not yet formed within the disk, suggesting that planets are about to form. Credit: ALMA (ESO/NAOJ/NRAO), S. Ohashi et al.

Protoplanetary disk and planet growth

Planets form around protostars, which are young stars that are still forming, in disks of gas and dust known as protoplanetary disks. Planets grow so slowly that it is impossible to observe their evolution in situ. Therefore, astronomers observe many protostars at slightly different stages of planet formation to build theoretical understanding.

This time, an international research team led by Satoshi Ohashi of the National Astronomical Observatory of Japan (NAOJ) used the Atacama Large Millimeter/submillimeter Array (ALMA) to conduct high-resolution observations of the protoplanetary disk surrounding the relatively young protostar DG Taurus, located 410 light-years away in the direction of Taurus. The researchers found that DG Taurus has a smooth protoplanetary disk with none of the rings that would indicate planet formation. This led the team to believe that the DG Taurus system could begin forming planets in the future.

Unexpected discoveries and future research

The researchers found that during this pre-planetary stage, dust grains within about 40 astronomical units of the central protostar (roughly twice the radius of Uranus's orbit) remain small, but beyond this radius the grains begin to grow, which is the first step in planet formation. This runs counter to the theoretical expectation that planet formation begins in the inner disk.

These results provide surprising new information about dust distribution and other conditions at the beginning of planet formation. Studying more examples in the future will further deepen our understanding of planet formation.

Reference: “Dust Enrichment and Grain Growth in a Smooth Disk around the DG Tau Protostar Revealed by ALMA Triple Bands Frequency Observations,” Satoshi Ohashi, Munetake Momose, Akimasa Kataoka, Aya Higuchi, Takashi Tsukagoshi, Takahiro Ueda, Claudio Codella, Linda Podio, Tomoyuki Hanawa, Nami Sakai, Hiroshi Kobayashi, Satoshi Okuzumi, Hidekazu Tanaka, August 28, 2023, The Astrophysical Journal.
DOI: 10.3847/1538-4357/ace9b9

This research was funded by the Japan Society for the Promotion of Science, the German Research Foundation, and the European Union.

Source: scitechdaily.com