Scientists Forecast Wildfire Smoke to Become the Costliest Climate-Related Health Threat

Wildfire smoke already contributes to tens of thousands of deaths each year and, by mid-century, is projected to harm U.S. residents more than any other climate change-related threat, including extreme heat.

That is the conclusion of a new research paper presenting extensive modeling of the growing toll of wildfire smoke on public health in the U.S.

The study, published Thursday in the journal Nature, estimates that wildfire smoke currently contributes to an average of more than 41,400 excess deaths per year, more than twice what previous research had suggested.

By mid-century, the study's authors project an additional 26,500 to 30,000 deaths per year as human-driven climate change exacerbates wildfire risks.

Marshall Burke, an environmental social sciences professor at Stanford University and one of the study's authors, says that when the damages are quantified economically, they exceed other climate change-related costs identified in previous studies, including agricultural losses, heat-related deaths and energy expenses.

Numerous studies indicate that wildfire smoke exposure results in severe health issues. Tiny smoke particles can infiltrate the lungs and enter the bloodstream, raising the risk of asthma, lung cancer, and other chronic respiratory conditions. Wildfire smoke is also associated with premature births and miscarriages.

This research paints a stark picture of a country increasingly filled with smoke. Fires in the western U.S. and Canada release smoke into the atmosphere, spreading across regions and undermining decades of efforts to curb industrial air pollution through clean air regulations.

Dr. Joel Kaufman, a professor at the University of Washington School of Medicine, commented on the study, noting, “This poses a new threat that can be directly linked to climate change. That’s the crucial point here.”

The study projects that wildfire smoke-related deaths could rise by 64% to more than 73% by mid-century, depending on the emissions scenario.

“Regardless of mitigation efforts, we are likely to experience more smoke by 2050,” Burke added, though emphasizing that emission reduction efforts will have long-term benefits.

Kaufman noted that over the past five to ten years, accumulating evidence indicates that wildfire smoke is at least as detrimental as other forms of air pollution.

“We previously assumed wood burning was less harmful,” Kaufman explained. “These findings indicate that wildfire smoke could be more toxic,” particularly when wildfires consume structures, vehicles, and other human-made materials.

Kaufman pointed to the fires that burned through Los Angeles earlier this year, which started in vegetation but went on to consume homes, vehicles and plastics, creating "another toxic mixture." The new research does not differentiate the sources of future wildfire smoke.

The implications of this research could influence public policy.

The Environmental Protection Agency is attempting to rescind a key legal determination known as the "endangerment finding," part of a broader rollback of environmental regulations. The 2009 finding established that greenhouse gases such as carbon dioxide and methane are driving global warming and endanger public health and welfare, and it underpins the EPA's authority to regulate greenhouse gas emissions under the Clean Air Act.

Dr. John Balmes, a spokesperson for the American Lung Association and a professor at the University of California San Francisco School of Medicine, expressed that this new study could serve as a “counterargument” against such actions.

The proposal to rescind the finding is moving through a lengthy regulatory process that includes a period of public comment. Balmes mentioned that he referenced the study in a letter opposing the EPA's proposed changes.

“It reinforces our claims regarding wildfires tied to climate change and their associated public health consequences,” Balmes stated.

On Wednesday, the National Academies of Sciences, Engineering, and Medicine released a report confirming that human-induced global warming is causing harm and will continue to do so. The committee behind the report said the evidence is beyond scientific dispute.

The White House did not respond to requests for comments. The EPA stated that the administration is “committed to reducing the risks of catastrophic wildfires,” prioritizing strategies such as prescribed burns, fuel treatment, and debris cleanup to prevent these events.

“The EPA welcomes all public feedback on its proposal to rescind the 2009 endangerment finding until September 22, 2025, and looks forward to hearing diverse perspectives on this matter,” a spokesperson noted in an email.

In the new study, researchers estimated the annual excess deaths attributable to wildfire smoke by linking three models: one that assesses climate change's impact on fire activity, another that translates changes in fire activity into smoke dispersion, and a third that quantifies health outcomes from prolonged smoke exposure.

Researchers used data from 2011-2020 as a baseline to forecast future conditions under various climate scenarios, utilizing datasets that included all U.S. deaths within that period, both satellite and ground-level data on smoke dispersion, and global climate models.
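The study's actual pipeline is far more elaborate, but the basic shape of such a chain (a fire-activity projection feeding a smoke-exposure estimate, which in turn feeds a standard concentration-response mortality calculation) can be sketched in a few lines of Python. Every function, coefficient and population figure below is an illustrative placeholder, not a value from the paper.

```python
import numpy as np

def excess_deaths_from_smoke(delta_pm25, baseline_deaths, beta=0.008):
    """Toy concentration-response step: excess deaths attributable to a change in
    annual-average smoke PM2.5 (micrograms per cubic metre).
    beta is an illustrative log-linear risk coefficient, not the study's."""
    relative_risk = np.exp(beta * delta_pm25)
    attributable_fraction = 1.0 - 1.0 / relative_risk
    return baseline_deaths * attributable_fraction

def project_excess_deaths(warming_c, baseline_deaths=3_000_000):
    """Illustrative three-stage chain: climate scenario -> fire activity -> smoke -> deaths."""
    fire_activity_multiplier = 1.0 + 0.3 * warming_c         # placeholder stage 1
    smoke_pm25 = 2.0 * fire_activity_multiplier              # placeholder stage 2 (ug/m3)
    return excess_deaths_from_smoke(smoke_pm25, baseline_deaths)   # stage 3

print(f"{project_excess_deaths(warming_c=2.0):,.0f}")
```

The point of the sketch is only the structure: the realism of the final number depends entirely on the fire, smoke-transport and health models plugged into each stage.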

The study assumes that people will take similar protective measures against smoke exposure as they do today.

This study has its limitations, as it primarily relies on a set of models to draw national conclusions. It does not track individual deaths linked to smoke exposure or catalog their health effects.

Results from this study were published alongside another study in Nature that used a similar methodology but took a global perspective. That separate research team estimates that premature deaths from wildfire smoke worldwide could reach about 1.4 million annually by century's end, approximately six times the current figure.

Source: www.nbcnews.com

NASA and IBM Develop AI to Forecast Solar Flares Before They Reach Earth

Solar flares pose risks to GPS systems and communication satellites

NASA/SDO/AIA

AI models developed with NASA satellite imagery are now capable of forecasting the sun’s appearance hours ahead.

“I envision this model as an AI telescope that enables us to observe the sun and grasp its ‘mood,’” says Juan Bernabé-Moreno at IBM Research Europe.

The sun’s state is crucial because bursts of solar activity can bombard Earth with high-energy particles, X-rays, and extreme ultraviolet radiation. These events have the potential to disrupt GPS systems and communication satellites, as well as endanger astronauts and commercial flights. Solar flares may also be accompanied by coronal mass ejections, which can severely impact Earth’s magnetic field, leading to geomagnetic storms that could incapacitate power grids.

Bernabé-Moreno and his team at IBM and NASA created an AI model named Surya, from the Sanskrit word for ‘sun,’ using nine years of data from NASA’s Solar Dynamics Observatory, a satellite that captures ultra-high-resolution images of the sun across 13 wavelength channels. The model learned to recognize patterns in this visual data and to forecast how the sun will appear in future observations.

When tested against historical solar flare data, the Surya model demonstrated a 16% improvement in accuracy for predicting whether a flare would occur within the next day compared with traditional machine learning models. The model can also produce visualisations of how a flare will look up to two hours in advance.
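As a rough illustration of the “forecast the sun’s next image” framing, here is a toy next-frame predictor in PyTorch. The channel count echoes the 13 wavelength channels mentioned above, but the architecture, image size and data are invented stand-ins, not the actual Surya model or the SDO data pipeline.

```python
import torch
import torch.nn as nn

N_CHANNELS, HISTORY, SIZE = 13, 4, 64   # illustrative values only

class ToySolarForecaster(nn.Module):
    """Predict the next multi-channel solar image from the last few frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(N_CHANNELS * HISTORY, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, N_CHANNELS, kernel_size=3, padding=1),
        )

    def forward(self, frames):
        # frames: (batch, HISTORY, N_CHANNELS, height, width)
        b, t, c, h, w = frames.shape
        stacked = frames.reshape(b, t * c, h, w)   # stack the history along channels
        return self.net(stacked)                   # predicted next multi-channel frame

model = ToySolarForecaster()
history = torch.randn(2, HISTORY, N_CHANNELS, SIZE, SIZE)   # fake image sequence
next_frame = model(history)
print(next_frame.shape)   # torch.Size([2, 13, 64, 64])
```

A real foundation model of this kind is trained on years of observations and evaluated on downstream tasks such as flare prediction; the toy above only shows the input-output shape of the problem.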

“The strength of AI lies in its capacity to comprehend physics in unconventional ways. It enhances our intuition regarding physical processes,” remarks Lisa Upton at the Southwest Research Institute in Colorado.

Upton is especially eager to explore whether the Surya model can aid in predicting solar activity across the sun and at its poles, areas that NASA instruments cannot directly observe. While Surya does not explicitly aim to model the far side of the sun, it has shown promise in forecasting what the sun will look like several hours ahead as sections rotate into view, according to Bernabé-Moreno.

However, it remains uncertain whether AI models can overcome existing obstacles in accurately predicting how solar activity will influence Earth. Bernard Jackson from the University of California, San Diego, points out that there is currently no means to directly observe the magnetic field composition between the Sun and Earth, a crucial factor determining the direction of high-energy particles emanating from the star.

As stated by Bernabé-Moreno, this model is intended for scientific use now, but future collaborations with other AI systems that could leverage Surya’s capabilities may allow it to support power grid operators and satellite constellation owners as part of early warning frameworks.


Source: www.newscientist.com

Centuries-Old Equations Forecast Flow—Until They Fail

The Navier-Stokes equations provide predictions for fluid flow

Liudmila Chernetska/Getty Images

This is an extract from Lost in Space-Time, our monthly physics newsletter in which physicists and mathematicians take over the keyboard to share intriguing ideas from across the universe. You can sign up for Lost in Space-Time here.

The Navier-Stokes equations have been used to model fluid dynamics for roughly 200 years, yet I still find them perplexing. That is a strange admission, given their importance in building rockets, developing medications and addressing climate change. But bear with me; it pays to think like a mathematician here.

The equations are effective. If they weren't, we wouldn't rely on them across such diverse applications. However, getting results is not the same as understanding them.

This situation parallels many machine learning algorithms. We can set them up, write the training code and observe the outputs, yet once we hit "go" they adjust themselves at every step to optimize their results. That is why we often call them "black boxes": we can see what goes in and what comes out, but the mechanics in between remain obscure.

The same uncertainty looms over the Navier-Stokes equations. While we understand the processes behind fluid dynamics better than we understand many machine learning methods, thanks in part to outstanding computational fluid dynamics solvers, these equations can still yield chaotic, nonsensical results. Identifying why this occurs is a significant problem in mathematics: it is one of the seven Millennium Prize Problems, the most challenging unresolved questions in the field, which makes deciphering the Navier-Stokes anomaly a million-dollar endeavor.

To grasp the challenge, let's delve into the Navier-Stokes equations, particularly the version used to model "incompressible Newtonian fluids." Think of water: unlike air, it resists compression. (A more general version exists, but I will focus on this variant, as it is the one I spent four years of my doctoral thesis on.)

These equations may seem daunting, but they stem from two well-established principles of the universe: mass conservation and Newton's second law. The first equation expresses conservation of mass: as a parcel of fluid moves and changes shape, no mass is created or destroyed.

The second equation is a more elaborate version of Newton's famed f = ma, applied to parcels of fluid with density ρ. It states that the rate of change of the fluid's momentum (the left-hand side) equals the force applied to it (the right-hand side). Simply put, the left side is mass times acceleration; the right side accounts for pressure (p), viscosity (μ) and any external forces (f).
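Written out in the standard textbook form, the incompressible equations referred to above are

\[ \nabla \cdot \mathbf{u} = 0, \]

\[ \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f}, \]

where u is the fluid's velocity field, p the pressure, ρ the density, μ the viscosity and f any external force per unit volume.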

So far, so good. These equations derive from solid universal laws and function admirably—until they don’t.

2D liquid flows at right angles

NumberPhile

Consider a setup where a 2D fluid flows around a right angle. As the fluid approaches the corner, it is compelled to pivot along the channel. You could replicate this experiment in a laboratory setting, and many do around the globe. The fluid smoothly adapts its path, and life as we know it persists.

But what happens when you apply the Navier-Stokes equations to this scenario? These equations model fluid behavior and reveal how velocity, pressure, density, and related attributes progress over time. Yet, upon inputting this setup, the calculations suggest an infinite angular velocity. This isn’t just excessively large; it’s beyond comprehension—endless.

A model of 2D fluid flow around a right angle using the Navier-Stokes equations

Keaton Burns, Dedalus

What’s happening? This result is absurd. I have conducted this experiment and observed that nothing unusual occurred. So, why did the equations fail? This is precisely where mathematicians get intrigued.

When I visit schools to discuss university applications, students invariably inquire about the admission processes at institutions like Oxford or Cambridge (I participate in selection interviews for both). I share my criteria for evaluating a strong applicant, emphasizing the importance of “thinking like a mathematician.” Breaking equations fascinates mathematicians for a reason.

It’s remarkably useful when a model operates successfully in 99.99% of cases, producing meaningful, viable results that tackle real-world problems. Despite its occasional failure, the Navier-Stokes equations remain indispensable for engineers, physicists, chemists, and biologists, aiding in solving intricate matters.

Designing a quicker Formula 1 car requires harnessing airflow dynamics. Developing a fast-acting drug necessitates understanding blood flow patterns. Predicting carbon dioxide's effect on climate demands insight into how the atmosphere and oceans interact. Each of these scenarios involves a fluid, a substance that flows to fill whatever contains it, which is why the Navier-Stokes equations are critical across such varied applications.

However, equations that can describe such a vast range of scenarios, each with its own complex dynamics, are necessarily elaborate themselves, and that complexity is part of why we understand them so poorly. Indeed, the Navier-Stokes equations are one of the Millennium Prize Problems, and the Clay Mathematics Institute frames the million-dollar question precisely as a need for deeper understanding:

“Waves follow our boat as we meander across the lake, and turbulent air currents follow our flight in a modern jet. Mathematicians and physicists believe that an explanation for and the prediction of both the breeze and the turbulence can be found through an understanding of solutions to the Navier-Stokes equations.”

How can we enhance our comprehension of equations? By experimenting until they break, something I often suggest to high school students. The cracks represent your gateway. Continue probing until the facade shatters, revealing the hidden treasures beneath.

Consider the historical context of solving quadratic equations, particularly finding the values of x that satisfy ax² + bx + c = 0. Many will recognize this from their GCSE studies and know that quadratic equations typically yield two roots.

The quadratic formula usually works correctly, producing two solutions once you substitute values for a, b and c. However, certain conditions can render it ineffective, such as when b² − 4ac < 0, which asks for the square root of a negative number. We have found a circumstance where the equation fails.
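For reference, the roots are given by the standard quadratic formula, and the quantity under the square root, the discriminant, decides whether real solutions exist:

\[ x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}. \]

For example, x² + 1 = 0 (a = 1, b = 0, c = 1) has discriminant −4, so it has no real roots; its solutions are the imaginary numbers ±i.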

But how is this possible? Mathematicians from the 16th and 17th centuries proposed utilizing instances where quadratic equations seemed faulty to define “imaginary numbers,” stemming from negative square roots. This insight catalyzed the emergence of complex numbers and the rich mathematical frameworks that followed.

In essence, we often learn more from failures than from successes. For the Navier-Stokes equations, the rare malfunctions include the infinite velocity predicted for flow around a right-angled corner. Similar behaviour arises when modelling vortex reconnection or the splitting of soap films: real phenomena, reproducible in the lab, for which the Navier-Stokes equations predict that some quantity becomes infinite.

Such apparent failures could uncover deeper truths about our mathematical models, though the question remains open. The breakdowns might reflect insufficient resolution in numerical simulations, or faulty assumptions about how individual fluid molecules behave.

Conversely, these breakdowns may enlighten aspects of the Navier-Stokes equation’s inherent structure, bringing us a step closer to unlocking their mysteries.

Tom Crawford is a mathematician at the University of Oxford and a speaker at this year's New Scientist Live.


Source: www.newscientist.com

The Limited Impact of the Tsunami on the U.S. Does Not Indicate an Inaccurate Forecast

The 8.8 magnitude earthquake off the coast of Russia's Kamchatka Peninsula generated water waves traveling at jetliner speeds toward Hawaii, California and Washington state on Wednesday.
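The "jetliner speed" comparison is consistent with the textbook shallow-water wave relation: in the open ocean a tsunami travels at roughly

\[ c = \sqrt{gh} \approx \sqrt{9.8\ \mathrm{m/s^{2}} \times 4500\ \mathrm{m}} \approx 210\ \mathrm{m/s} \approx 760\ \mathrm{km/h}, \]

comparable to a commercial jet's cruising speed, before slowing sharply as it reaches shallow coastal water.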

Yet, when the tsunami reached the U.S., it appears not to have inflicted widespread devastation, with some areas where warnings were issued showing no signs of significant flooding.

This doesn’t mean the tsunami was a “bust” or poorly predicted, according to earthquake and tsunami researchers.

“When you hear ‘tsunami warning,’ people often think of dramatic scenes from movies, and when it arrives at just three feet, they might wonder, ‘What’s going on?’,” remarked Harold Tobin, director of the Pacific Northwest Seismic Network and a professor at the University of Washington. “We should view this as a success; we received a warning, but the situation wasn’t catastrophic.”

Here’s what you should know.

How intense was the Kamchatka earthquake? What caused the initial discrepancies?

Initially, the US Geological Survey assessed the Kamchatka earthquake at magnitude 8.0, which was later adjusted to 8.8.

“It’s not unusual for major earthquakes to see such adjustments in the first moments,” Tobin explained. “Our standard methods for calculating earthquake sizes can quickly saturate, akin to turning up the volume on a speaker until it distorts.”

A buoy located approximately 275 miles southeast of the Kamchatka Peninsula gave the first indication that the tsunami was bigger than the initial magnitude estimate implied.

This buoy belongs to the National Oceanic and Atmospheric Administration's DART (Deep-ocean Assessment and Reporting of Tsunamis) system and is connected to a seafloor pressure sensor roughly four miles deep.

That sensor detected waves measuring 90 centimeters (over 35 inches), which caught the attention of tsunami researchers.

Vasily Titov, a senior tsunami modeler at NOAA's Pacific Marine Environmental Laboratory, said the reading brought to mind the 2011 Tohoku earthquake and tsunami, which claimed nearly 16,000 lives in Japan.

Subsequent modeling confirmed Wednesday's earthquake at magnitude 8.8, according to USGS calculations.

In comparison, Tohoku was significantly larger.

Tobin estimated that the Kamchatka quake released two to three times less energy than the Japan quake, and that the tsunami generated by the Tohoku event was approximately three times as severe.
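Tobin's factor is consistent with the standard relation between moment magnitude and radiated seismic energy, taking Tohoku as roughly magnitude 9.0 to 9.1 (a textbook estimate, not a figure from the article):

\[ \frac{E_{\mathrm{Tohoku}}}{E_{\mathrm{Kamchatka}}} = 10^{1.5\,\Delta M} \approx 10^{1.5 \times 0.2} \approx 2.0 \quad\text{to}\quad 10^{1.5 \times 0.3} \approx 2.8. \]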

He further noted that the Tohoku event “created a notably large seafloor displacement.”

Tobin speculated that the Kamchatka quake likely had less seafloor displacement than what could occur in a worst-case 8.8 scenario, though more research is needed for substantiation.

Emergency services personnel assess damage on Sakhalin Island in Russia's Far East after the earthquake. Russia's Ministry of Emergency via Getty Images / AFP

How did researchers generate predictions? How accurate were they?

Within two hours, researchers produced tsunami predictions for various "warning points" along the Pacific rim and U.S. coasts, forecasting tide gauge levels and flooding.

The tsunami took around eight hours to reach Hawaii and twelve hours to arrive at the California coast.

Titov, who helped develop the model used by forecasters at the tsunami warning centers in Hawaii and Alaska, explained that the model relies on seismic data and a network of more than 70 DART buoys around the Pacific rim, more than half of which are operated by the U.S.

Titov indicated that the model projected tsunami waves hitting Hawaii’s North Shore region at approximately two meters.

“Hawaii was predicted to have waves of about 2 meters [6.5 feet], and actual measurements were around 150 centimeters, or 1.5 meters (5 feet). That aligns perfectly with our expectations,” Titov stated.

A similar trend was observed in parts of California, according to Titov.

Assessments of flooding are still coming in, and it will take time to determine how well the model performed.

“We know there were floods in Hawaii. We can’t ascertain the full extent yet, but initial reports seem to align closely with our predictions,” Titov shared.

The coastline at the Pacifica Municipal Pier in California on Wednesday, after tsunami alerts were triggered by the earthquake. Tayfun Coskun / Anadolu via Getty Images

Why did residents in Hawaii evacuate for a 5-foot wave?

Yong Wei, a tsunami modeler and senior research scientist at the University of Washington and NOAA’s tsunami research center, indicated that 1.5 meters (5 feet) of tsunami waves could be highly perilous, particularly in Hawaii’s shallow waters.

Tsunami waves carry significantly more energy than typical wind-driven waves, with far longer wavelengths and far more time between crests, so each wave pushes a much greater volume of water ashore.

Wei noted that tsunami waves of this size could surge several meters inland, producing hazardous currents and endangering boats and other objects.

Visitors stand on the balcony of the Alohilani Resort facing Waikiki Beach in Hawaii, responding to warnings of potential tsunami waves. Nicola Groom / Reuters

“People can get hurt. If you ignore the warning and stay, even a wave of two meters can be deadly,” Wei warned. “Being on the beach can expose you to powerful currents that may pull you into the ocean, which can lead to fatalities.”

Tobin expressed that he viewed the initial warning as conservative yet necessary.

“It’s essential not to belittle warnings. If nothing happens, people shouldn’t think, ‘Oh, we had alerts and nothing transpired.’ Warnings need to be cautious, allowing for some margin of error.”

Was this earthquake a surprise?

No. The Kamchatka Peninsula has a long history of seismic activity.

The area had been considered due for another earthquake, and a series of recent quakes had pointed to heightened risk, researchers said.

In 1952, before plate tectonics was well understood, a 9.0 magnitude quake struck the Kamchatka Peninsula in a similar location, generating a tsunami that devastated the town of Severo-Kurilsk.

“The Russian populace was caught off guard. It was an immensely powerful quake, leading to a massive tsunami, and they were unprepared,” McInnes shared.

McInnes explained that the tsunami reached between 30 and 60 feet in height along the southern section of the peninsula.

“Thousands perished, and the town suffered considerable destruction,” stated Joanne Bourgeois, a professor emeritus of sedimentology at the University of Washington.

How will the tsunami warning system function if an earthquake threatens your area?

The Kamchatka tsunami arose from a massive earthquake along a subduction zone fault, where one tectonic plate is pushed beneath another. A comparable fault lies off the U.S. West Coast: the Cascadia Subduction Zone, stretching from Northern California to northern Vancouver Island.

“It’s like a mirror image across the Pacific Ocean,” remarked Tobin, adding that a magnitude 8.8 earthquake is certainly a plausible scenario for Cascadia.

In fact, Cascadia has the potential to produce significantly larger earthquakes, as modeling suggests it could generate tsunami waves reaching heights of 100 feet.

Typically, earthquakes in subduction zones yield tsunamis that reach the nearby coast within 30 minutes to an hour, and forecasters are developing better methods for estimating tsunami impacts along the U.S. West Coast before flooding occurs.

Titov emphasized that enhancing predictions will necessitate advancements in underwater sensors, improved computing infrastructure, and AI algorithms.

Tobin noted that the success of Wednesday's tsunami warning should inspire more investment in underwater sensors and earthquake monitoring stations along subduction zones.

“This incident highlights the significant role of NOAA and USGS. Many questioned these agencies’ relevance, but without NOAA, no alert would have been issued. The next warning could be for a more imminent threat. They truly demonstrated their importance,” he asserted.

Source: www.nbcnews.com

Study Reveals Your Brain’s Biological Age Can Forecast Your Lifespan

Researchers have devised a technique to assess the biological age of the brain, revealing it to be a key indicator of future health and longevity.

A recent study analyzed blood samples from about 45,000 adults, measuring the levels of roughly 3,000 proteins. Many of these proteins correlate with particular organs, including the brain, enabling the estimation of each organ system's "biological age."

If an organ's protein profile deviated significantly from what would be expected for a person's chronological age, the organ was categorized as either extremely aged or extremely youthful.

Among the various organs assessed, the brain emerged as the most significant predictor of health outcomes, according to the research.

“The brain is the gatekeeper of longevity,” said Professor Tony Wyss-Coray, a senior author of the research, newly published in Nature Medicine. “An older brain correlates with a higher mortality rate, while a younger brain suggests a longer life expectancy.”

Participants exhibiting a biologically aged brain were found to be 12 times more likely to receive an Alzheimer’s diagnosis within a decade compared to peers with biologically youthful brains.

Additionally, older brains increased the risk of death from any cause by 182% over a 15-year span, whereas youthful brains were linked to a 40% decrease in mortality.

Wyss-Coray emphasized that evaluating the brain and other organs through the lens of biological age marks the dawn of a new preventive medicine era.

“This represents the future of medicine,” he remarked. “Currently, patients visit doctors only when they experience pain, where doctors address what’s malfunctioning. We are transitioning from illness care to wellness care, aiming to intervene before organ-specific diseases arise.”

The team is in the process of commercializing this test, which is anticipated to be available within the next 2-3 years, starting with major organs like the brain, heart, and immune system.


Source: www.sciencefocus.com

NOAA Speeds Up Hiring for Forecast Positions Following National Weather Service Cuts

As some weather forecast offices discontinue overnight staffing, the National Weather Service is swiftly reassigning personnel internally, working to fill over 150 vacancies and address critical staffing gaps.

On Tuesday, the National Oceanic and Atmospheric Administration considered initiating a "reallocation period" to fill key positions that have remained unstaffed since the Trump administration's decisions to dismiss probationary employees and to incentivize veteran National Weather Service (NWS) employees to retire early.

The agency is actively recruiting to fill five pivotal meteorologist roles overseeing field offices, including locations in Lake Charles, Louisiana; Houston, Texas; and Wilmington, Ohio.

Meanwhile, at least eight of the 122 weather forecasting offices nationwide (including those in Sacramento, California; Goodland, Kansas; and Jackson, Kentucky) have announced plans to halt or scale back overnight operations in the coming six weeks, according to Tom Fahy, legislative director of the National Weather Service Employees Organization, which monitors staffing levels at the agency.

Critics of the recent cuts argue that the efforts to reassign meteorologists and other staff indicate severe reductions in services, negatively impacting vital public safety operations.

“This has never occurred before. We have always been an agency dedicated to providing 24/7 service to American citizens,” Fahy stated. “The potential risk is extremely high. If these cuts continue within the National Weather Service, lives could be lost.”

The National Weather Service acknowledged adjustments to its service levels and staffing but asserted that it continues to fulfill its mission and maintain the accuracy of forecasts.

“NOAA and NWS are dedicated to minimizing the impact of recent staffing changes to ensure that core mission functions persist,” the agency stated. “These efforts encompass temporary modifications to service levels and both temporary and permanent internal reallocations of meteorologists to offices with urgent needs.”

Fahy revealed that 52 of the nation’s 122 weather forecasting offices currently have staffing vacancy rates exceeding 20%.

The latest update on field office leadership, published on Wednesday, highlighted vacancy challenges, with 35 meteorologist positions at forecast offices remaining unfilled.

Since the new administration assumed power, the National Weather Service has reduced its workforce by more than 500 employees through voluntary early retirement packages for senior staff and the dismissal of probationary hires.

“Our greatest fear is that the weather offices will remain extremely understaffed, prompting unnecessary loss of life,” the director expressed earlier this month.

Recently retired NWS employees have voiced concerns that staffing levels have dropped below critical thresholds amid service freezes and the dismissal of many early-career professionals in probationary roles.

Alan Gerard, a former director at NOAA's National Severe Storms Laboratory who accepted early retirement in March, likened the NWS's reassignment strategy to rearranging deck chairs, arguing that it fails to solve the fundamental problem.

“They are merely shifting personnel from one office to another, which might address short-term crises, but that's no sustainable solution,” Gerard remarked. “There's no real influx of new staff.”

Brian LaMarre, who recently retired from the NWS Tampa Bay Area office in Florida, says he understands the desire to modernize and streamline services.

In fact, LaMarre was involved in efforts to reorganize certain aspects of the service before the Trump administration took office.

The agency had plans to modernize its staffing structures by launching a “mutual assistance” system, allowing local forecast offices to request and offer aid during severe weather events or periods of understaffing.

“Many of these initiatives are now being expedited out of urgency,” LaMarre commented. “It's like rearranging your living room furniture while the house is on fire; that's the situation we are facing.”

LaMarre emphasized the necessity for the NWS to resume hiring, as numerous forecasters in their 50s and 60s opted for voluntary retirement, taking extensive experience with them. Concurrently, the agency has shed much of its cohort of probationary employees, many of whom were just starting their careers.

“Eliminating probationary positions severely limits the agency's future potential,” LaMarre stated. “That's where fresh, innovative talent is cultivated, making recruitment essential.”

Source: www.nbcnews.com

Even basic bacteria can forecast seasonal shifts

A scanning electron microscope image of Synechococcus cyanobacteria

Eyes of Science/Science Photo Library

Despite being one of the simplest life forms on Earth, cyanobacteria are able to predict and prepare for seasonal changes based on the amount of light they receive.

It has been known for over a century that complex organisms can use day length as a cue to future environmental conditions – for example, days shortening before cold weather sets in. Phenomena such as plant and animal migration, flowering, hibernation and seasonal reproduction are all guided by such responses, known as photoperiodism, but this has not previously been seen in simpler life forms such as bacteria.

Luisa Jabbour, then at Vanderbilt University in Nashville, Tennessee, and her colleagues exposed the cyanobacterium Synechococcus elongatus to artificially varied day lengths. They found that bacteria that experienced simulated short days were two to three times better able to survive icy temperatures, priming them for winter-like conditions.

By testing shorter and longer durations, the researchers found that it took four to six days for a response to appear.

Because these organisms can produce new generations within a matter of hours, their cells must pass on information about the length of daylight to their offspring, but researchers don’t yet understand how this information is transmitted.

Cyanobacteria, which capture energy from sunlight through photosynthesis, have been around for more than two billion years and are found almost everywhere on Earth.

“The fact that organisms as ancient and simple as cyanobacteria have a photoperiodic response suggests that this phenomenon evolved much earlier than we had imagined,” says Jabbour, who is now at the John Innes Centre in Norwich, UK.

The team also looked at how gene expression patterns change in response to changes in day length, suggesting that photoperiodism likely evolved by exploiting existing mechanisms to cope with acute stresses such as bright light and extreme temperatures.

These findings also have implications for the evolution of circadian rhythms, the internal clocks that regulate day-night cycles, says team member Carl Johnson at Vanderbilt University.

“I think we’ve always thought that circadian clocks evolved before organisms were able to measure the length of days and nights and predict the changing of seasons,” he says, “but the fact that photoperiodism evolved in such ancient, simple organisms, and that our gene expression results are linked to stress response pathways that seem to have evolved very early in life on Earth, suggests that photoperiodism may have evolved before the circadian clock.”


Source: www.newscientist.com

Neuroscientists Find That the Sleeping Brain Can Forecast Experiences to Come

Kamran Diba, a neuroscientist in the anesthesiology department at the University of Michigan, and his colleagues have found that during sleep, some neurons not only replay the recent past but also anticipate future experiences.

To dynamically track the spatial tuning of neurons offline, Maboudi and colleagues used a novel Bayesian learning approach based on spike-triggered average decoded positions in population recordings from freely moving rats.

“Certain neurons fire in response to certain stimuli,” Dr. Diba said.

“Neurons in the visual cortex fire when presented with an appropriate visual stimulus, and the neurons we study show location preference.”

In their study, Dr. Diba and his co-authors aimed to understand how these specialized neurons form representations of the world after new experiences.

Specifically, the researchers tracked sharp wave ripples, patterns of neural activity known to play a role in consolidating new memories and, more recently, shown to tag which parts of a new experience will be stored as a memory.

“In this paper, for the first time, we observe individual neurons stabilizing spatial representations during rest periods,” said Rice University neuroscientist Dr. Caleb Kemere.

“We imagined that some neurons might change their representation, mirroring the experience we've all had of waking up with a new understanding of a problem.”

“But to prove this, we needed to trace how individual neurons achieve spatial tuning – the process by which the brain learns to navigate new routes and environments.”

The researchers trained rats to run back and forth on a raised track with liquid rewards at each end, and observed how individual neurons in the animals' hippocampus “spiked” in the process.

By calculating the average spike rate over multiple round trips, the researchers were able to estimate a neuron's place field – the area of the environment that a particular neuron is most "interested" in.
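A minimal sketch of that occupancy-normalised estimate, using synthetic position and spike data rather than anything from the study (whose actual Bayesian method is considerably more sophisticated):

```python
import numpy as np

def place_field(position, spike_times, sample_times, n_bins=50, track_length=2.0):
    """Estimate a place field as the occupancy-normalised firing rate per spatial bin."""
    dt = np.median(np.diff(sample_times))                 # seconds per position sample
    bins = np.linspace(0.0, track_length, n_bins + 1)

    occupancy, _ = np.histogram(position, bins=bins)      # samples per bin
    occupancy = occupancy * dt                            # seconds spent in each bin

    spike_positions = np.interp(spike_times, sample_times, position)
    spike_counts, _ = np.histogram(spike_positions, bins=bins)

    rate = np.divide(spike_counts, occupancy,
                     out=np.zeros_like(occupancy), where=occupancy > 0)
    return bins[:-1], rate                                # bin edges, spikes per second

# Fake session: a rat shuttling on a 2 m track, one neuron firing near 1.2 m.
rng = np.random.default_rng(1)
t = np.arange(0, 600, 0.02)                               # 10 minutes sampled at 50 Hz
pos = 2.0 * np.abs(((t / 30.0) % 2.0) - 1.0)              # triangle-wave shuttling
spikes = t[(np.abs(pos - 1.2) < 0.1) & (rng.random(t.size) < 0.4)]

edges, rate = place_field(pos, spikes, t)
print(edges[np.argmax(rate)])                             # peak sits near 1.2 m
```

The study's key move, described below, is that this kind of tuning estimate normally requires the animal's behavior, which is exactly what is missing during sleep.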

“The key point here is that place fields are inferred using the animal's behavior,” Dr. Kemere said.

“I’ve been thinking for a long time about how we can assess neuronal preferences outside the maze, such as during sleep,” Dr. Diba added.

“We addressed this challenge by relating the activity of individual neurons to the activity of all the other neurons.”

The scientists also developed a statistical machine learning approach that uses other neurons they examined to infer where the animals were in their dreams.

The researchers then used the dreamed locations to estimate the spatial tuning process of each neuron in the dataset.

“The ability to track neuronal preferences in the absence of stimulation was a significant advance for us,” Dr. Diba said.

This method confirmed that the spatial representation formed during the experience of a novel environment remained stable in most neurons throughout several hours of sleep following the experience.

But, as the authors suspected, there was more to the story.

“What I liked most about this study, and why I found it so exciting, was that it showed that stabilizing memories of experiences isn’t the only thing these neurons do during sleep. It turns out some of them are doing other things after all,” Dr. Kemere said.

“We can see these other changes that occur during sleep, and then when we put the animals back into the environment, we can see that these changes actually reflect something that the animals learned while they were asleep.”

“It’s as if the animal is exposed to that space a second time while they’re sleeping.”

This is important because it provides a direct look at the neuroplasticity that occurs during sleep.

“It appears that brain plasticity and rewiring can occur on very fast timescales,” Dr. Diba said.

The study was published in the journal Nature.

_____

K. Maboudi et al. 2024. Retuning of hippocampal representations during sleep. Nature 629, 630-638; doi: 10.1038/s41586-024-07397-x

Source: www.sci.news

Blockchain experts forecast which tokens will generate profits

As Polkadot (DOT) adoption soars, investors are looking to diversify with promising tokens. Learn about RCO Finance (RCOF) and how it can help you maximize your profits.

Despite being a revolutionary technology, Polkadot (DOT) has struggled to gain adoption since its launch. The coin showed a downward trend through most of 2023. However, the trend reversed in the fourth quarter of 2023 with increased user and developer adoption.

This marked the beginning of significant growth for Polkadot (DOT), reflecting the platform's growing influence in the cryptocurrency scene. In the last 24 hours, trading volume increased 37.51% to $225 million.

Given the market trends, crypto analysts expect Polkadot (DOT) to continue its bull run soon. Investors are also considering adding tokens with solid growth potential, such as RCO Finance (RCOF), to their portfolios.

Future expectations for the Polkadot (DOT) bull run

The current market price of Polkadot (DOT) is $6.98 and is expected to rise further in the coming days. Cryptocurrency experts predict that the asset will skyrocket, increasing by 229.98% to $22.82 by next month.

Based on technical analysis, Polkadot (DOT) has gained 33% in the past 30 days.

They predict prices will remain around $6.92 at the 2025 low end. Their analysts predict that Polkadot (DOT) will hit a high of $32.90. If Polkadot (DOT) reaches the upper end of the target price, investors who buy this coin today could earn a return of 377.90% by next year.

Sure, a 300% ROI on a token would be great, but other assets could yield even more returns in the long run. Investors who prioritize portfolio diversification are always looking for new profitable projects to get into early.

AI-powered investment platform to discover promising altcoins

RCO Finance (RCOF) is a decentralized stock trading platform that allows investors to buy and trade cryptocurrencies and stocks without the need for fiat currency. It offers both novice and seasoned investors a range of investment opportunities that improve on traditional platforms.

RCO Finance (RCOF) allows users to participate in various DeFi activities such as providing liquidity, yield farming, and automated market making. Investors can also buy real-world stocks, bonds, commodities, real estate, and other alternative investments while managing all their assets from one place.

It also features an AI trading platform that prioritizes user experience and recommends profitable investment opportunities. AI tools study trends in the cryptocurrency market and provide valuable insights into which trades to exit and optimal trading times based on changing market dynamics.

Both beginners and seasoned crypto traders can take advantage of this tool to expand their portfolios with promising assets that can yield huge profits.

Investors look forward to a 1000% return on this hidden gem

Apart from the potential profits from buying stocks and cryptocurrencies through RCO Finance (RCOF), early investors who purchase RCOF pre-sale tokens stand to gain even more. RCOF is currently in its first pre-sale stage, priced at $0.0127 per token.

As the token completes more pre-sale stages, the price will rise rapidly. When popular crypto trading platforms eventually list RCO Finance (RCOF), investors who bought it cheaply during this initial pre-sale stage could reap substantial profits.

For more information on the RCO Finance presale, please see below.

Access RCO Finance Presale

Join the RCO Finance Community

Source: www.the-blockchain.com

Pregnancy Strap that Monitors Heart Rate Could Forecast Preterm Birth

Scientists used fitness tracker WHOOP to monitor heart rate during pregnancy

WHOOP

Wearing a wrist-strap heart rate tracker during pregnancy may help doctors predict who is at risk for premature birth.

In previous research, Shon Rowan at West Virginia University and his colleagues recruited 18 women to wear heart-tracking wrist straps from the brand WHOOP throughout their pregnancies.

All of the women gave birth at term, and the tracking data showed that heart rate variability (the variation in the time interval between heartbeats) clearly decreased during the first 33 weeks of pregnancy and then steadily increased until birth.

Rowan was curious to see whether the same pattern occurred in people who give birth prematurely, so he teamed up with Emily Capodilupo at WHOOP in Boston, Massachusetts, for a larger study. They and their colleagues analyzed tracker data provided by 241 pregnant women between the ages of 23 and 47 in the United States and 15 other countries. It is unclear whether this data includes any transgender men.

All participants were pregnant with one child born between March 2021 and October 2022. In total, more than 24,000 heart rate variability records were provided.

Similar to the previous study, those who gave birth at term showed an obvious switch in heart rate variability around 33 weeks of gestation, or an average of seven weeks before delivery.

However, the 8.7% of participants who gave birth prematurely had much less consistent patterns of heart rate variability, Rowan said. The switch from decreasing to increasing variability occurred at different points in pregnancy, but, as with those who delivered at term, it appeared on average about seven weeks before birth, even though the birth was premature.
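A minimal sketch of the kind of signal involved, using made-up weekly heart rate variability averages rather than WHOOP data or the study's method: find the week where the downward trend reverses, then project a delivery window roughly seven weeks later.

```python
import numpy as np

def hrv_switch_week(weeks, hrv):
    """Week at which a smoothed HRV series stops falling and starts rising,
    plus a crude delivery estimate seven weeks later."""
    padded = np.pad(hrv, 1, mode="edge")
    smoothed = np.convolve(padded, np.ones(3) / 3, mode="valid")   # 3-week moving average
    switch = int(weeks[np.argmin(smoothed)])
    return switch, switch + 7   # the studies report the switch ~7 weeks before delivery

# Fake weekly HRV averages (ms): falling until ~week 33, then rising, plus noise.
rng = np.random.default_rng(0)
weeks = np.arange(10, 41)
hrv = 60 - 0.8 * (weeks - 10) + 2.2 * np.clip(weeks - 33, 0, None)
hrv = hrv + rng.normal(0, 1.0, weeks.size)

switch_week, projected_delivery_week = hrv_switch_week(weeks, hrv)
print(switch_week, projected_delivery_week)   # roughly 33 and 40
```

In practice, as the article notes, the switch in people who deliver early does not sit reliably at 33 weeks, which is precisely why tracking the reversal per individual could be informative.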

In the future, the device could identify pregnancies that require closer monitoring or benefit from administering drugs such as steroids to help the fetus' lungs develop, Rowan said.

People could also plan to stay near hospitals that provide specialized care, which could be especially helpful for those who live in remote areas, he says.

“Once we are able to remotely monitor some of their health using things like the WHOOP tracker, and we start to see changes in that [heart rate variability], then you might be able to be a little more proactive,” says Rowan.


Source: www.newscientist.com

Researchers develop a way to forecast which organs are likely to fail first

New research suggests that scientists may now be able to predict which organs will fail first, providing an opportunity for doctors to target aging organs earlier, before disease symptoms appear.

A study published in Nature found that one in five healthy adults over the age of 50 have at least one aging organ, increasing their risk of developing disease in that organ over the next 15 years. This discovery provides insight into the aging process of the body.

How does aging occur at different rates in the body?

We all have two ages: the chronological age that increases by one each year and the “biological age,” which is more flexible and changes based on health status. By studying biological signs within the body, scientists can determine a person’s biological age.

In a study of 5,678 people, researchers at Stanford Medicine determined the biological age of their organs by analyzing proteins in the blood, revealing that if a person’s organs are older than others of the same age, they are at a higher risk of disease.

Each organ in our body ages at a different rate, and certain proteins in the blood are associated with specific organs. Scientists developed a machine learning algorithm that uses combinations of these proteins to predict a person's biological age and verified its accuracy on 4,000 people.
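A minimal sketch of the general idea, with synthetic data standing in for the study's proteomics (the real model, protein panel and cohort are far more elaborate): fit a regression from organ-associated protein levels to chronological age, then treat the gap between predicted and actual age as that organ's "age gap".

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic cohort: 5,000 people, 200 "organ-associated" protein levels whose
# values drift with age plus individual variation.
n_people, n_proteins = 5000, 200
age = rng.uniform(30, 80, n_people)
loadings = rng.normal(0, 0.05, n_proteins)
proteins = age[:, None] * loadings[None, :] + rng.normal(0, 1.0, (n_people, n_proteins))

X_train, X_test, age_train, age_test = train_test_split(
    proteins, age, test_size=0.2, random_state=0
)

# Predict chronological age from the protein panel.
model = Ridge(alpha=1.0).fit(X_train, age_train)
predicted_age = model.predict(X_test)

# The "age gap": positive values mean the protein profile looks older than the person is.
age_gap = predicted_age - age_test
print(f"mean absolute error: {np.abs(age_gap).mean():.1f} years")
print(f"share looking >5 years older than their age: {(age_gap > 5).mean():.2%}")
```

Restricting the input to proteins linked to a single organ, and repeating the fit organ by organ, is what turns this from one overall "biological age" into the per-organ ages the study describes.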

The study focused on the biological age of 11 important organs and revealed that people with rapidly aging organs are at a higher risk of disease and mortality. The research team hopes to replicate these findings in a larger group of people to detect which organs are aging at an accelerated rate, allowing for early treatment.


Source: www.sciencefocus.com

AI trained on extensive life stories has the ability to forecast the likelihood of early mortality

Data covering Denmark’s entire population was used to train an AI that predicts people’s life outcomes

Francis Joseph Dean/Dean Photography/Alamy Stock Photo

Artificial intelligence trained on personal data covering Denmark's entire population can predict people's likelihood of dying more accurately than existing models used in the insurance industry. Researchers behind the technology say it has the potential to support early prediction of social and health problems, but that it must be kept out of the hands of large corporations.

Sune Lehmann Jorgensen at the Technical University of Denmark and his colleagues used a rich Danish dataset covering the education, doctor and hospital visits, resulting diagnoses, income, and occupation of 6 million people from 2008 to 2020.

They converted this dataset into words that can be used to train large-scale language models, the same technology that powers AI apps like ChatGPT. These models work by looking at a set of words and statistically determining which word is most likely to come next, based on a large number of examples. In a similar way, the researchers' Life2vec model can look at the sequence of life events that form an individual's history and determine what is most likely to happen next.

In the experiment, Life2vec was trained on all of the data except the final four years, which were held back for testing. The researchers took data on a group of people aged 35 to 65, half of whom died between 2016 and 2020, and asked Life2vec to predict who lived and who died. Its predictions were 11% more accurate than those of existing AI models and of the actuarial life tables used in the financial industry to price life insurance policies.
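A minimal sketch of the framing, with invented event tokens and a deliberately simple bag-of-events classifier standing in for Life2vec's transformer (none of the tokens, labels or numbers below come from the study): encode each person's history as a sequence of event tokens and train a model to predict death within the held-out window.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each person's history as a sequence of made-up event tokens
# (the real model uses detailed Danish registry events, not these labels).
histories = [
    "EDU_highschool JOB_carpenter INCOME_q2 DIAG_hypertension HOSP_visit",
    "EDU_university JOB_engineer INCOME_q4 DOCTOR_visit",
    "EDU_highschool JOB_fisher INCOME_q1 DIAG_copd HOSP_visit HOSP_visit",
    "EDU_university JOB_teacher INCOME_q3 DOCTOR_visit DIAG_asthma",
]
died_within_window = [1, 0, 1, 0]   # toy labels for the held-out years

# Bag-of-events stand-in for a sequence model: Life2vec itself uses a
# transformer over ordered events, which this deliberately simplifies away.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(histories, died_within_window)

new_person = ["EDU_highschool JOB_carpenter INCOME_q1 DIAG_copd HOSP_visit"]
print(model.predict_proba(new_person)[0, 1])   # estimated probability of death
```

The ordering of events is exactly what the bag-of-events shortcut throws away, and exploiting that ordering with a language-model-style architecture is the study's central idea.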

The model was also able to predict personality test results for a portion of the population more accurately than AI models trained specifically to do the job.

Jorgensen believes the model has consumed enough data that it has a good chance of shedding light on a wide range of topics in health and society. This means it can be used to predict and detect health problems early, or by governments to reduce inequalities. But he stresses that it can also be used by companies in harmful ways.

“Obviously, our model should not be used by an insurance company, because the whole idea of insurance is that, by sharing the lack of knowledge about who will be the unlucky person who suffers some kind of incident, dies or loses their backpack, we can share this burden to some extent,” says Jorgensen.

But such technology already exists, he says. “Big tech companies that have large amounts of data about us are likely already using this information and using it to make predictions about us.”

Matthew Edwards at the Institute and Faculty of Actuaries, a UK professional body, says that while insurers are certainly interested in new forecasting techniques, the bulk of their decision-making relies on a type of model called a generalized linear model, which he says is rudimentary compared with the AI used in this research.

“If you look at what insurance companies have been doing for years, decades, centuries, they’ve taken the data they have and tried to predict life expectancy from that,” Edwards said. “But we are deliberately conservative in adopting new methodologies, because when we are creating policies that are likely to be in place for the next 20 or 30 years, the last thing we want is to make any significant mistakes. Everything can change, but slowly, because no one wants to make mistakes.”


Source: www.newscientist.com