Research Indicates Our Universe Is Already Entering a Slowdown Phase

A recent study from Yonsei University in Seoul, South Korea, challenges the long-accepted conclusion that dark energy is driving an accelerating expansion of the universe. The researchers found no evidence that the expansion is currently accelerating. If validated, the finding could significantly alter our understanding of dark energy, help resolve the “Hubble tension,” and reshape our picture of the universe’s past and future.

The expansion of the universe may be slowing down, not accelerating. Image credit: M. Weiss / Harvard-Smithsonian Center for Astrophysics.

For over three decades, astronomers have generally accepted that the universe is expanding at an increasing rate due to a hidden force dubbed dark energy, which functions as a sort of anti-gravity.

This conclusion, derived from distance measurements of far-off galaxies using Type Ia supernovae, earned the Nobel Prize in Physics in 2011.

However, Professor Young-Wook Lee of Yonsei University and his team have presented new evidence that Type Ia supernovae, long treated as the universe’s “standard candles,” are significantly affected by the age of their progenitor stars.

“Our findings indicate that the universe is currently in a phase of decelerating expansion, and that dark energy is evolving at a much faster rate than previously assumed,” stated Professor Lee.

“If verified, these outcomes would signify the most substantial shift in cosmology since the identification of dark energy 27 years ago.”

Even after standard brightness corrections, supernovae from younger stellar populations appear systematically dimmer, while those from older populations appear brighter.

Using a larger sample of 300 host galaxies, the researchers confirmed these findings at very high significance (99.999% confidence), indicating that the dimming of distant supernovae reflects not only cosmological effects but also the astrophysical properties of the progenitor stars.
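For readers used to sigma-based significance, a 99.999% two-sided confidence level corresponds to roughly 4.4 standard deviations under a Gaussian assumption (which the paper’s own statistics may or may not use). A quick sketch of the conversion:

```python
import math

def two_sided_confidence(sigma: float) -> float:
    """Fraction of a normal distribution lying within ±sigma of the mean."""
    return math.erf(sigma / math.sqrt(2))

def sigma_for_confidence(conf: float) -> float:
    """Invert two_sided_confidence by bisection on [0, 10]."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if two_sided_confidence(mid) < conf:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(sigma_for_confidence(0.99999), 2))  # ≈ 4.42
```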

After correcting for this systematic bias, the supernova data no longer aligned with the classic ΛCDM cosmology model that includes a cosmological constant.
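Schematically, such a correction removes a linear trend between standardized magnitude and progenitor age. The sketch below is purely illustrative: the slope and reference age are placeholders, not the paper’s fitted values, and the real analysis is considerably more involved.

```python
def age_corrected_magnitude(m_std: float, age_gyr: float,
                            slope: float = -0.04, ref_age: float = 5.0) -> float:
    """Remove an assumed linear progenitor-age trend from a standardized
    supernova magnitude.

    slope (mag/Gyr) and ref_age (Gyr) are illustrative placeholders, not
    the paper's fitted values. With a negative slope, supernovae from
    populations older than ref_age are shifted to fainter (larger)
    magnitudes, undoing their assumed intrinsic brightness excess.
    """
    return m_std - slope * (age_gyr - ref_age)

# At the reference age the magnitude is unchanged; older hosts are
# corrected faintward.
print(age_corrected_magnitude(19.0, 5.0))   # 19.0
print(age_corrected_magnitude(19.0, 10.0))  # 19.2
```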

Instead, the data align more closely with a new model supported by the Dark Energy Spectroscopic Instrument (DESI) project, based on Baryon Acoustic Oscillation (BAO) and Cosmic Microwave Background (CMB) data.

Both the adjusted supernova data and the results from BAO+CMB demonstrate that dark energy diminishes and evolves significantly over time.

Importantly, when the corrected supernova data were integrated with BAO and CMB findings, the traditional ΛCDM model was decisively ruled out.

Most notably, this comprehensive analysis reveals that the universe is not accelerating as much as once believed, but has already transitioned into a state of slowing expansion.

“The DESI project has yielded significant results by merging unadjusted supernova data with baryon acoustic oscillation measurements, concluding that while the universe will decelerate in the future, it is still accelerating at present,” remarked Professor Lee.

“Conversely, our analysis, which incorporates an age-bias correction, indicates that the universe is already entering a slowing phase today.”

“Surprisingly, this aligns with predictions made independently from BAO analyses, a result that has yet to receive much attention.”

To further validate their findings, the researchers are now conducting an evolution-free test using only supernovae from young, contemporaneous host galaxies across the entire redshift range.

Initial results already support their primary conclusion.

“With the Vera C. Rubin Observatory set to discover more than 20,000 new supernova host galaxies within the next five years, accurate age measurements will provide a more robust and conclusive examination of supernova cosmology,” stated Yonsei University professor Chul Chung.

The team’s paper was published today in the Monthly Notices of the Royal Astronomical Society.

_____

Junhyuk Son et al. 2025. Strong progenitor age bias in supernova cosmology – II. Alignment of DESI BAO with signs of a non-accelerating universe. MNRAS 544 (1): 975-987; doi: 10.1093/mnras/staf1685

Source: www.sci.news

The Modest Gains of GPT-5 Suggest a Slowdown in AI Progress

GPT-5 is the latest version of OpenAI’s flagship language model

Cheng Xin/Getty Images

OpenAI has recently unveiled GPT-5, their latest AI model, marking another step in AI evolution rather than a dramatic breakthrough. Following the successful rollout of GPT-4, which significantly advanced ChatGPT’s capabilities and influence, the improvements found in GPT-5 seem marginal, indicating that innovative strategies may be needed to achieve further advancements in artificial intelligence.

OpenAI has described GPT-5 as a notable advancement over its predecessor, citing improvements in programming, mathematics, writing, healthcare, and visual comprehension. The company claims a reduced incidence of “hallucinations,” instances where the AI presents incorrect information as fact. According to OpenAI’s internal metrics, GPT-5 matches or exceeds expert-level performance on complex, economically significant tasks across a range of professions.

Notably, however, GPT-5’s results on public benchmarks are less competitive when compared with leading models from other companies, such as Anthropic’s Claude and Google’s Gemini. Although it has improved from GPT-4, the enhancements are subtler than the leap observed between GPT-3 and GPT-4. Numerous users have expressed dissatisfaction with GPT-5’s performance, citing instances where it struggled with straightforward queries, leading to a chorus of disappointment on social media.

“Many were expecting a major breakthrough, but it seems more like an upgrade,” remarked Mirella Lapata at the University of Edinburgh. “There’s a sense of incremental progress.”

OpenAI has disclosed limited details regarding the internal benchmarks for GPT-5’s performance, making it challenging to assess them scientifically, according to Anna Rogers at the IT University of Copenhagen.

In a pre-release press briefing, OpenAI chief executive Sam Altman emphasized, “It feels like engaging with an expert on any topic, comparable to a PhD-level specialist.” Yet, Rogers pointed out that benchmarks do not substantiate such claims, and the correlation between advanced degrees and intelligence is questionable. “Highly intelligent individuals do not always hold PhDs, nor does a PhD guarantee superior intelligence,” she noted.

The modest advancements in GPT-5 may reflect broader challenges within the AI development community. Once expected to improve inexorably with scale, large language models (LLMs) appear to be plateauing: recent results have not borne out the assumption that more training data and computational power would yield significant gains. As Lapata noted, “Now that everyone has adopted similar approaches, it’s evident that we’re following a predictable recipe, utilizing vast amounts of pre-training data and refining it during the post-training phase.”

However, whether LLMs are nearing a plateau remains uncertain, as technical design details of models like GPT-5 are not publicly known, according to Nikolaos Aletras at the University of Sheffield. “It’s premature to claim that large language models have reached their limits without concrete technical insights.”

OpenAI is also exploring alternative methods to enhance their offerings, such as the new routing system in GPT-5. Unlike previous versions where users could select from various models, GPT-5 intelligently assesses requests and directs them to the appropriate model based on the required computational power.
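OpenAI has not published how its router works, so the following is only a sketch of the general idea: estimate a request’s difficulty and dispatch it to an expensive reasoning model or a fast, cheap one. The heuristics and model names here are entirely hypothetical.

```python
# Hypothetical request router: send "hard" prompts to a costly reasoning
# model and everything else to a fast, cheap one. The keyword heuristic
# and model names are illustrative, not OpenAI's actual mechanism.
HARD_KEYWORDS = ("prove", "derive", "step by step", "debug")

def route(prompt: str) -> str:
    """Pick a model tier from a crude difficulty estimate."""
    looks_hard = len(prompt) > 500 or any(
        k in prompt.lower() for k in HARD_KEYWORDS
    )
    return "reasoning-model" if looks_hard else "fast-model"

print(route("What is the capital of France?"))                 # fast-model
print(route("Prove that sqrt(2) is irrational, step by step."))  # reasoning-model
```

In practice such routers are typically small learned classifiers rather than keyword rules, but the interface is the same: one entry point, multiple backends of differing cost.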

This strategy could potentially be more widely adopted, as Lapata mentions, “The reasoning model demands significant computation, which is both time-consuming and costly.” Yet, this shift has frustrated some ChatGPT users, prompting Altman to indicate that efforts are underway to enhance the routing process.

Another OpenAI model has recently achieved remarkable scores in elite mathematics and coding contests, hinting at a promising future for AI. This accomplishment was beyond the capabilities of leading AI models just a year ago. Although details on its functioning remain scarce, OpenAI staff have stated that this success implies the model possesses improved general reasoning skills.

These competitions allow us to evaluate models on data not encountered during training, according to Aletras, but they still represent a narrow aspect of intelligence. Enhanced performance in one domain may detrimentally affect results in others, warns Lapata.

GPT-5 has also improved notably on price: it is significantly cheaper than competing models, with Claude models costing roughly ten times more to process an equal volume of requests. However, this could lead to financial issues for OpenAI if revenue is insufficient to sustain the high costs of developing and operating new data centers. “Pricing is extraordinary. It’s so inexpensive; I’m uncertain how they can sustain it,” remarked Lapata.
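The roughly-ten-times comparison is simple per-token arithmetic. The prices and volume below are illustrative placeholders, not either provider’s actual rates:

```python
def monthly_cost(tokens_per_month: int, price_per_million: float) -> float:
    """API cost for a month, given a price per million tokens."""
    return tokens_per_month / 1_000_000 * price_per_million

# Illustrative prices per million tokens (placeholders, not real rates).
cheap, pricey = 1.0, 10.0
volume = 50_000_000  # 50M tokens per month

print(monthly_cost(volume, cheap))   # 50.0
print(monthly_cost(volume, pricey))  # 500.0  -> ten times the bill
```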

Competition among leading AI models is intense. The first company to launch a superior model could secure a substantial market share. “All major companies are vying for dominance, which is a challenging endeavor,” noted Lapata. “You’ve only held the crown for three months.”

Source: www.newscientist.com