Explore the Dark Craters near the Moon’s South Pole
Credit: Science Photo Library / Alamy
Scientists aim to establish a groundbreaking laser system in one of the moon’s coldest craters to significantly enhance the navigation capabilities of lunar landers and rovers.
Ultra-stable lasers are vital for highly precise timing and navigation systems. These lasers operate by reflecting a beam between two mirrors within a cavity, maintaining a consistent beam speed. This precision is largely due to the chamber’s size stability, which neither expands nor contracts. To achieve this, mirrors are typically maintained in a cryogenic vacuum, insulated from external vibrations.
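To make the stability requirement concrete: the cavity’s resonant frequency tracks its mirror spacing, so a fractional length change shifts the laser frequency by the same fraction. The sketch below puts an illustrative number on this; the thermal-expansion coefficient is an assumed order-of-magnitude value, not a measured property of any real cavity.

```python
# Toy estimate linking cavity length stability to laser frequency
# stability: delta_nu / nu = -delta_L / L for a two-mirror cavity.

def fractional_frequency_shift(alpha, delta_T):
    """Fractional frequency shift when a cavity spacer with thermal
    expansion coefficient `alpha` (1/K) warms by `delta_T` (K)."""
    return -alpha * delta_T

# Assumed, order-of-magnitude expansion coefficient for a cryogenic
# silicon spacer (illustrative only).
alpha_si = 1e-10  # 1/K

print(f"{fractional_frequency_shift(alpha_si, 1.0):.1e}")  # shift for a 1 K drift
```

A fractional shift on the order of 1e-10 per kelvin of drift illustrates why the sub-kelvin temperature stability of a permanently shadowed crater matters so much for state-of-the-art cavities.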
The moon hosts numerous craters at its poles, which lack direct sunlight due to minimal axial tilt. Consequently, these permanently shadowed areas are extremely cold, with some craters projected to reach temperatures around -253°C (20 Kelvin) during the lunar winter.
Jun Ye of JILA, along with a research team in Boulder, Colorado, has proposed that these icy conditions, combined with the moon’s near-absence of natural vibrations and its almost non-existent atmosphere, make the craters ideal sites for ultra-stable lasers. The stability of such lunar lasers could surpass that of any terrestrial counterpart.
“The entire environment is incredibly stable,” Ye emphasizes. “Despite variations between summer and winter on the Moon, temperature fluctuations range only from 20 to 50 Kelvin, contributing to a remarkably consistent environment.”
Ye and his research team envision a lunar laser device akin to an optical cavity already developed in JILA’s lab, featuring a silicon chamber equipped with dual mirrors.
Current optical cavity lasers on Earth can maintain coherence, keeping their light waves in step, for only a few seconds. The moon-based laser, by contrast, is projected to sustain coherence for at least a minute, allowing it to serve as a reference laser for a variety of lunar missions: keeping lunar time, for example, or coordinating satellite formations that use lasers for distance measurement. And since light from the moon takes just over a second to reach Earth, Ye notes, it could also serve as a reliable reference for Earth-based activities.
Although implementing this idea poses challenges, the rationale is sound and could greatly benefit future lunar missions. According to Simeon Barber from the Open University, UK, “Recent lunar landers have experienced suboptimal landings due to varying lighting conditions, complicating vision-based systems. Leveraging stable lasers for positioning, navigation, and timing could enhance the reliability of landings in high-latitude areas.”
The Evolution of Generative AI: Meet OpenClaw
Since the launch of ChatGPT, Generative AI has transformed our digital landscape over the past three years. It has spurred a significant stock market boom, integrated into our search engines, and become an essential tool for hundreds of millions of users daily.
Despite its benefits, many still hesitate to use AI tools. But why? While asking AI for text, audio, images, and videos can save time, crafting the right prompts often becomes a burdensome task. Users still grapple with everyday chores like answering emails, booking appointments, and paying bills.
This is where AI’s true power lies: handling the mundane tasks. The promise of “agentic AI” is that people want an efficient, always-on assistant to tackle time-consuming chores. The latest advancement in this field is OpenClaw.
What is OpenClaw?
OpenClaw, previously known as ClawdBot, is an AI agent poised to fulfill AI’s grand promises. Once granted access to your computer files, social media, and email accounts, it can efficiently complete various tasks. This capability is powered by Claude Code, a model released by the AI company Anthropic.
Developed by software engineer Peter Steinberger and launched in late November 2025, ClawdBot initially gained traction but was rebranded due to concerns from Anthropic. After temporarily adopting the name MoltBot, it is now officially known as OpenClaw. (Mr. Steinberger did not respond to multiple interview requests.)
How Does OpenClaw Work?
OpenClaw operates on your computer or a virtual private server and connects messaging apps like WhatsApp, Telegram, and Discord to coding agents powered by models like Anthropic’s Claude. Users often host it on a high-performance device such as the Apple Mac Mini for speed; demand has been strong enough that some retailers report the machines selling out.
Although it can run on older laptops, OpenClaw needs to stay operational 24/7 to execute your specified commands.
Commands are sent through your preferred messaging app, enabling a simple conversational interface. When you message OpenClaw, the AI agent interprets your prompt, generates, and executes commands on your machine. This can include tasks such as finding files, running scripts, editing documents, and automating browser activities. The results are succinctly summarized and sent back to you, creating an efficient communication loop akin to collaborating with a colleague.
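In outline, the loop just described can be sketched as below. All names here are invented stand-ins, not the real OpenClaw API: a real agent would route `plan_with_model` through an LLM such as Claude and have the model write the summary.

```python
import subprocess

def plan_with_model(prompt):
    # Stand-in for the LLM call that turns a chat message into shell
    # commands; a real agent would query a model here.
    return ["echo 'searching files...'", "echo 'done'"]

def summarize(outputs):
    # Stand-in for the model-written summary sent back over chat.
    return " | ".join(outputs)

def handle_message(prompt):
    """Interpret a chat prompt, run the planned commands, reply."""
    results = []
    for cmd in plan_with_model(prompt):
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        results.append(out.stdout.strip())
    return summarize(results)

print(handle_message("find my tax documents"))
```

The convenience and the risk live in the same line: `subprocess.run(..., shell=True)` executes whatever the model plans, which is why the security concerns discussed below follow directly from the design.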
How Can OpenClaw Help You?
OpenClaw serves as an all-in-one assistant for both personal and professional tasks. Users typically start by decluttering the files on their devices before handing it more complex responsibilities. Some report using it to manage busy WhatsApp groups, summarizing the essential information and filtering out the irrelevant.
Other practical applications include:
Comparing supplier prices to minimize household spending.
Automating web browser tasks for seamless transactions.
Facilitating restaurant reservations by calling venues directly.
Preparing initial drafts for presentations while you sleep.
What Are the Risks?
While OpenClaw’s capabilities shine brightest when granted extensive access, this convenience raises significant risks. Experts warn that users may overlook potential vulnerabilities. For instance, OpenClaw could be exposed to prompt injection attacks or hacking if hosted on insufficiently secured virtual servers. This means sensitive data could be compromised.
Alan Woodward, a cybersecurity professor at the University of Surrey, cautions, “I can’t believe people would allow unrestricted access to sensitive software, including email and calendars.”
White hat hackers have already identified several security flaws in OpenClaw, raising concerns that the hands-off approach many users prefer also invites substantial risk.
Is This the Future of AI?
OpenClaw has recently launched its own social network, Moltbook, enabling its AI agents to interact and share insights. While humans can observe, they cannot engage directly in discussions, prompting fears about progression toward artificial general intelligence (AGI), potentially matching or exceeding human capabilities.
As we navigate this new realm, it’s vital to consider the implications of relinquishing extensive data access to AI agents. We may be standing on the brink of a new AI era—an agent capable of managing your life efficiently, if you’re prepared to grant it free access and relinquish control. It’s a thrilling yet daunting prospect.
Quantum batteries are making their debut in quantum computers, paving the way for future quantum technologies. These innovative batteries utilize quantum bits, or qubits, that change states, differing from traditional batteries that rely on electrochemical reactions.
Research indicates that harnessing quantum characteristics may enable faster charging times, yet questions about the practicality of quantum batteries remain. “Many upcoming quantum technologies will necessitate quantum versions of batteries,” states Dian Tan from Hefei National Research Institute, China. “While significant strides have been made in quantum computing and communication, the energy storage mechanisms in these quantum systems require further investigation.”
Tan and his team constructed the battery using 12 qubits formed from tiny superconducting circuits, controlled by microwaves. Each qubit functioned as a battery cell and interacted with neighboring qubits.
The researchers tested two distinct charging protocols, one mirroring conventional battery charging without quantum interactions, while the other leveraged quantum interactions. They discovered that exploiting these interactions led to an increase in power and a quicker charging capacity.
“Quantum batteries can achieve power output up to twice that of conventional charging methods,” asserts Alan Santos from the Spanish National Research Council. Notably, the protocol relies only on nearest-neighbor interactions between qubits, which are typical of superconducting quantum computers, so engineering the beneficial interactions is a practical challenge rather than an exotic one.
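As a rough illustration of the comparison, not the team’s actual experiment, one can simulate a tiny qubit “battery” numerically: charge a register with a uniform drive, once with and once without a nearest-neighbor coupling term, and compare peak charging power. Every Hamiltonian term and coupling strength below is an arbitrary toy choice.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit register."""
    out = np.array([[1.]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

def energy_curve(H, H0, n, times):
    """Stored energy <H0>(t) for |0...0> evolved under H."""
    vals, vecs = np.linalg.eigh(H)
    psi0 = np.zeros(2 ** n)
    psi0[0] = 1.0
    c = vecs.T @ psi0
    energies = []
    for t in times:
        psi = vecs @ (np.exp(-1j * vals * t) * c)
        energies.append(np.real(np.conj(psi) @ H0 @ psi))
    return np.array(energies)

n = 3
H0 = sum((np.eye(2 ** n) - op(Z, i, n)) / 2 for i in range(n))  # battery energy: excited qubits
drive = sum(op(X, i, n) for i in range(n))                      # uniform charging field
coupling = sum(op(Z, i, n) @ op(Z, i + 1, n) for i in range(n - 1))  # nearest-neighbor term

times = np.linspace(0.0, 3.0, 300)
e_parallel = energy_curve(H0 + drive, H0, n, times)             # no interactions
e_coupled = energy_curve(H0 + drive + coupling, H0, n, times)   # with interactions

dt = times[1] - times[0]
print("peak power, parallel:", np.diff(e_parallel).max() / dt)
print("peak power, coupled: ", np.diff(e_coupled).max() / dt)
```

Whether the coupled protocol wins in this toy model depends on the arbitrary parameters chosen; the experiment’s 12-qubit, microwave-controlled protocols are far more sophisticated.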
James Quach from Australia’s Commonwealth Scientific and Industrial Research Organisation adds that previous quantum battery experiments have utilized molecules rather than components in current quantum devices. Quach and his team have theorized that quantum batteries may enhance the efficiency and scalability of quantum computers, potentially becoming the power source for future quantum systems.
However, comparing conventional and quantum batteries remains a complex task, notes Dominik Shafranek from Charles University in the Czech Republic. In his opinion, translating the advantages of quantum batteries into practical applications is currently ambiguous.
Kavan Modi from the Singapore University of Technology and Design asserts that while benefits exist for qubits interacting exclusively with their nearest neighbors, his team’s research indicates these advantages can be negated by real-world factors like noise and sluggish qubit control.
Additionally, the burgeoning requirements of extensive quantum computers may necessitate researching energy transfer within quantum systems, as they might incur significantly higher energy costs compared to traditional computers, Modi emphasizes.
Tan believes that energy storage for quantum technologies, particularly in quantum computers, is a prime candidate for their innovative quantum batteries. Their next goal involves integrating these batteries with qubit-based quantum thermal engines to produce energy for storage within quantum systems.
Artist Representation of Qubits in the Quantum Twins Simulator
Silicon Quantum Computing
A groundbreaking large-scale quantum simulator has the potential to unveil the mechanisms of exotic quantum materials and pave the way for their optimization in future applications.
Quantum computers are set to leverage unique quantum phenomena to perform calculations that are currently unmanageable for even the most advanced classical computers. Similarly, quantum simulators can aid researchers in accurately modeling materials and molecules that remain poorly understood.
This holds particularly true for superconductors, which conduct electricity with remarkable efficiency. The efficiency of superconductors arises from quantum effects, making it feasible to implement their properties directly in quantum simulators, unlike classical devices that necessitate extensive mathematical transformations.
Michelle Simmons and her team at Australia’s Silicon Quantum Computing have successfully developed the largest quantum simulator to date, known as Quantum Twin. “The scale and precision we’ve achieved with these simulators empower us to address intriguing challenges,” Simmons states. “We are pioneering new materials by crafting them atom by atom.”
The researchers designed multiple simulators by embedding phosphorus atoms into silicon chips. Each atom acts as a quantum bit (qubit), the fundamental component of quantum computers and simulators. The team meticulously configured the qubits into grids that replicate the atomic arrangement found in real materials. Each iteration of the Quantum Twin consisted of a square grid containing 15,000 qubits, surpassing any previous quantum simulator in scale. While similar configurations have been built using thousands of cryogenic atoms in the past, Quantum Twin breaks new ground.
By integrating electronic components into each chip via a precise patterning process, the researchers managed to control the electron properties within the chips. This emulates the electron behavior within simulated materials, crucial for understanding electrical flow. Researchers can manipulate the ease of adding an electron at specific grid points or the “hop” between two points.
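The two knobs described here, the ease of adding an electron at a grid point and the hop between points, are exactly the on-site energy and hopping amplitude of a tight-binding (Hubbard-type) model. Below is a minimal single-particle sketch with arbitrary illustrative parameters, not the device’s actual values.

```python
import numpy as np

def tight_binding_grid(n, t=1.0, eps=0.0):
    """Hamiltonian for an n x n square lattice: on-site energy `eps`
    on every site, hopping amplitude `t` between nearest neighbors."""
    N = n * n
    H = eps * np.eye(N)
    for x in range(n):
        for y in range(n):
            i = x * n + y
            if x + 1 < n:                   # hop to the neighbor below
                j = (x + 1) * n + y
                H[i, j] = H[j, i] = -t
            if y + 1 < n:                   # hop to the neighbor at right
                j = x * n + (y + 1)
                H[i, j] = H[j, i] = -t
    return H

H = tight_binding_grid(3)
bands = np.linalg.eigvalsh(H)               # single-particle energy levels
print(bands.min(), bands.max())
```

Real materials add electron-electron interactions on top of this single-particle picture, which is precisely what makes them hard to simulate classically and attractive to emulate directly in hardware.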
Simmons noted that while conventional computers struggle with large two-dimensional simulations and complex electron property combinations, the Quantum Twin simulator shows significant potential for these scenarios. The team tested the chip by simulating the transition between conductive and insulating states—a critical mathematical model explaining how impurities in materials influence electrical conductivity. Additionally, they recorded the material’s “Hall coefficient” across different temperatures to assess its behavior in magnetic fields.
With its impressive size and variable control, the Quantum Twins simulator is poised to tackle unconventional superconductors. While conventional superconductors function well at low temperatures or under extreme pressure, some can operate under milder conditions. Achieving a deeper understanding of superconductors at ambient temperature and pressure is essential—knowledge that quantum simulators are expected to furnish in the future.
Moreover, Quantum Twins can also facilitate the investigation of interfaces between various metals and polyacetylene-like molecules, holding promise for advancements in drug development and artificial photosynthesis technologies, Simmons highlights.
Researchers from Korea University are paving the way for more efficient and cost-effective renewable energy generation by utilizing gold nanospheres designed to capture light across the entire solar spectrum.
Hung Lo et al. introduced plasmonic colloidal superballs as a versatile platform for broadband solar energy harvesting. Image credit: Hung Lo et al., doi: 10.1021/acsami.5c23149.
Scientists are exploring novel materials that efficiently absorb light across the solar spectrum to enhance solar energy harvesting.
Gold and silver nanoparticles have been identified as viable options due to their ease of fabrication and cost-effectiveness, yet current nanoparticles primarily absorb visible wavelengths.
To extend absorption into additional wavelengths, including near-infrared light, researcher Seungwoo Lee and colleagues from Korea University propose the innovative use of self-assembled gold superballs.
These unique structures consist of gold nanoparticles aggregating to form small spherical shapes.
The diameter of the superball was meticulously adjusted to optimize absorption of sunlight’s diverse wavelengths.
The research team first employed computer simulations to refine the design of each superball and predict the overall performance of the superball film.
Simulation outcomes indicated that the superball could absorb over 90% of sunlight’s wavelengths.
Next, the scientists created a film of gold superballs by drying a solution containing these structures on a commercially available thermoelectric generator, a device that converts heat, here supplied by absorbed sunlight, into electricity.
Films were produced under ambient room conditions—no cleanroom or extreme temperatures needed.
In tests using an LED solar simulator, the average solar absorption rate of the superball-coated thermoelectric generator reached approximately 89%, nearly double that of a conventional thermoelectric generator featuring a single gold nanoparticle membrane (45%).
“Our plasmonic superball offers a straightforward method to harness the entire solar spectrum,” said Dr. Lee.
“Ultimately, this coating technology could significantly reduce barriers for high-efficiency solar and photothermal systems in real-world energy applications.”
The team’s research is published in the journal ACS Applied Materials & Interfaces.
_____
Ro Kyung Hoon et al. 2026. Plasmonic Supraball for Scalable Broadband Solar Energy Generation. ACS Applied Materials & Interfaces 18 (1): 2523-2537; doi: 10.1021/acsami.5c23149
Ozempic is a well-known name, primarily approved for diabetes treatment in the UK and US, yet it is commonly prescribed ‘off-label’ for weight loss. This medication has essentially become synonymous with a groundbreaking new category of weight loss drugs.
Injectable medications like Ozempic, Wegovy, Mounjaro, Zepbound, Rybelsus, and Saxenda can facilitate significant weight loss, approaching 20% of a person’s body weight in certain individuals.
Now, the next generation of weight loss solutions has arrived, and they are available in pill form.
The debut of these tablets occurred in the United States, with Novo Nordisk (the producer of Ozempic) launching Wegovy tablets on January 5, 2026. Their quick rise in popularity resulted in over 18,000 new prescriptions issued in the first week alone.
But Wegovy won’t stand alone for long. Eli Lilly’s competing drug, orforglipron, is projected to gain FDA approval this spring, and several alternatives are in development.
(Currently, these tablets are not available in the UK; however, UK policies are anticipated to follow the FDA’s example.)
The mechanism of these tablets mirrors that of injectables. The active compounds, known as “incretins” (like Wegovy’s semaglutide and Mounjaro’s tirzepatide), deceive the body into feeling full by imitating natural satiety hormones.
As digestion slows, you naturally eat less, and weight loss follows without a constant battle against hunger.
Now available in pill form, this medication promises similar life-altering effects and protection against obesity-related illnesses, all while being more affordable than ever.
Is it too good to be true? Experts caution that while the pill presents notable risks, it also brings substantial benefits.
Read more:
Can Weight Loss Drugs Transform the Landscape of Treatment?
These tablets could signify a new chapter in the management of obesity, providing broader access to life-altering healthcare.
“Not everyone prefers injectable medications,” states Dr. Simon Cork, a senior lecturer in appetite and weight regulation at Anglia Ruskin University in the UK. “Injections can be uncomfortable for many patients, making oral administration a more appealing option.”
Besides comfort, switching from injections to pills could massively reduce monthly costs. Those using weight loss drugs today often spend hundreds of dollars each month on injections.
Weight loss pills can be stored at room temperature in standard pill blister packs, making them more accessible – Credit: Getty Images
Thanks to the absence of needles and refrigeration needs, these pills can be produced and distributed at lower costs, providing weight loss solutions to millions who previously faced exorbitant prices.
“Overall, these pills are expected to be significantly more affordable than current injection therapies,” says Cork.
This trend is already visible in the US, where Wegovy pens are priced at $349 (approximately £250) per month, whereas Wegovy tablets retail for $149 (around £110).
In the UK, nearly 95% of incretin users incur high private fees. According to Professor Giles Yeo from the University of Cambridge, the NHS often cannot prescribe these expensive medications to all patients who need them.
“Patients may need to maintain these drugs for extended periods, which exacerbates the financial barrier, particularly for those from disadvantaged backgrounds most susceptible to obesity,” Cork noted. “I hope that these oral medications will democratize access.”
Addressing Long-Term Challenges
However, these drugs may not be the most effective options, even as their availability increases.
Incretins tend to offer lower efficacy in pill form. Injectable Wegovy has demonstrated a capacity to help users lose 15% of body weight after 68 weeks, while Wegovy tablets showed only 13.6% weight loss across 64 weeks.
The efficacy of pills may not match that of the newest injected drugs either. Retatrutide, still in development, has produced a 24% reduction in body weight in just 48 weeks.
Administering these drugs through pills poses inherent challenges. Oral medications must traverse the stomach and liver before entering circulation, resulting in the manufacturer needing to increase the amount of active ingredient to achieve desired outcomes.
Consequently, weight loss results from pills may not be as rapid as from injections. Nevertheless, a significant complaint regarding injections—that discontinuing them often leads to weight regain—may see improvement.
A 2022 study revealed that participants who halted Wegovy injections regained up to two-thirds of their lost weight within one year.
The emergence of the pill could provide a solution. A recent study, Eli Lilly’s ATTAIN-MAINTAIN trial, showed that orforglipron tablets helped participants stabilize their weight after stopping injectable therapy.
“Many might rely on these medications to maintain weight loss,” Yeo suggests.
Cork adds, “Injectables can be utilized for optimal weight loss, and pills can help maintain this weight affordably.”
Most incretins mimic the natural satiety hormone GLP-1, but new treatments are targeting multiple hormones for enhanced effectiveness – Credit: Getty Images
The Risks and Concerns of the Pill Revolution
While these drugs possess the potential to catalyze significant positive change, their widespread availability also raises risks for vulnerable populations.
“The major danger is these drugs entering the wrong hands,” warns Yeo. “Since there’s no weight limit to how these drugs might impact individuals, a 300-pound person aiming to lose 50 pounds could utilize it as well as a 16-year-old girl weighing 75 pounds.”
“Pills can easily be trafficked, making them accessible to anyone. It’s essential to establish strict regulations around their distribution,” he urges.
Cork shares concerns over side effects. Incretins can provoke various symptoms, including nausea, vomiting, constipation, and diarrhea. Clinical trials found that three-quarters of participants experienced digestive issues.
Moreover, there are rare but serious risks such as pancreatitis, gallstones, and gastroparesis. Additionally, interactions with other medications, including contraceptives, could affect their efficacy.
“The risk of pancreatitis is low, around 1%,” Cork notes. “But with millions potentially using these drugs, this risk becomes concerning without appropriate oversight.”
Though these warnings are sobering, they remain speculative. The actual impact of these drugs is still uncertain.
“2026 is poised to be a crucial year in understanding the efficacy, prevalence, and applications of these medications,” Yeo concludes. “Time will tell how things unfold.”
In today’s digital landscape, hostility often overshadows collaboration. Remarkably, Wikipedia—a publicly editable encyclopedia—has emerged as a leading knowledge resource worldwide. “While it may seem improbable in theory, it remarkably works in practice,” states Anusha Alikhan from the Wikimedia Foundation, the nonprofit behind Wikipedia.
Founded by Jimmy Wales in 2001, Wikipedia continues to thrive, although co-founder Larry Sanger left the project the following year and has since expressed ongoing criticism, claiming it is “overrun by ideologues.”
Nonetheless, Sanger’s opinions are not widely echoed. Wikipedia boasts over 64 million articles in 300+ languages, generating an astonishing 15 billion hits monthly. Currently, it ranks as the 9th most visited website globally. “No one could have anticipated it would become such a trusted online resource, yet here we are,” Alikhan commented.
Building trust on a massive scale is no small achievement. Although the Internet has democratized access to human knowledge, it often presents fragmented and unreliable information. Wikipedia disrupts this trend by allowing anyone to contribute, supported by approximately 260,000 volunteers worldwide, making an impressive 342 edits per minute. A sophisticated system grants broader editing rights to responsible contributors, fostering trust that encourages collaboration even among strangers.
Wikipedia also actively invites special interest groups to create and edit content. For instance, the Women in Red project tackles gender disparities, while other initiatives focus on climate change and the history of Africa. All articles uphold strict accuracy standards, despite critics like Sanger alleging bias.
As an anomaly in the technology sector, Wikipedia operates without advertising, shareholders, or profit motives. It has maintained this unique position for over two decades with great success.
However, the rise of artificial intelligence poses new challenges. AI can generate misleading content, strain Wikipedia’s infrastructure as its text is scraped for model training, and cut into traffic and donations as AI-driven search summaries answer readers’ questions directly.
Revolutionary simulations from Maynooth University astronomers reveal that, in the dense and turbulent early universe, “light seed” black holes could consume matter swiftly enough to grow into the supermassive black holes found at the centers of early galaxies.
Computer visualization of a baby black hole growing in an early universe galaxy. Image credit: Maynooth University.
Daksar Mehta, a PhD candidate at Maynooth University, stated: “Our findings indicate that the chaotic environment of the early universe spawned smaller black holes that underwent a feeding frenzy, consuming surrounding matter and eventually evolving into the supermassive black holes observed today.”
“Through advanced computer simulations, we illustrate that the first-generation black holes, created mere hundreds of millions of years after the Big Bang, expanded at astonishing rates, reaching masses up to tens of thousands of times that of the Sun.”
Dr. Louis Prowl, a postdoctoral researcher at Maynooth University, added: “This groundbreaking revelation addresses one of astronomy’s most perplexing mysteries.”
“It explains how black holes formed in the early universe could quickly attain supermassive sizes, as confirmed by observations from NASA/ESA/CSA’s James Webb Space Telescope.”
The dense, gas-rich environments of early galaxies facilitated brief episodes of “super-Eddington accretion,” in which a black hole swallows matter faster than radiation pressure would normally allow. Despite the intense radiation this rapid feeding produces, the simulated black holes continued to devour material effectively.
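To see why growth speed matters, note that Eddington-limited accretion gives exponential growth, M(t) = M0·exp(t / t_S), with a Salpeter e-folding time t_S of roughly 45 million years at 10% radiative efficiency. The back-of-envelope sketch below is illustrative, not the paper’s calculation.

```python
import math

T_SALPETER_MYR = 45.0  # e-folding time (Myr) at 10% radiative efficiency

def time_to_grow_myr(m0, m1, eddington_ratio=1.0):
    """Myr needed to grow from m0 to m1 solar masses while accreting
    at `eddington_ratio` times the Eddington rate."""
    return T_SALPETER_MYR / eddington_ratio * math.log(m1 / m0)

# A 100-solar-mass "light seed" reaching a million solar masses:
print(round(time_to_grow_myr(100, 1e6), 1))        # Eddington-limited
print(round(time_to_grow_myr(100, 1e6, 3.0), 1))   # 3x Eddington episodes
```

At the Eddington limit such a seed needs over 400 million years, uncomfortably long given the massive black holes Webb observes only a few hundred million years after the Big Bang; sustained super-Eddington bursts close that gap.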
The results uncover a pivotal “missing link” between the first stars and the immense black holes that emerged later on.
Mehta elaborated: “These smaller black holes were previously considered too insignificant to develop into the gigantic black holes at the centers of early galaxies.”
“What we have demonstrated is that, although these nascent black holes are small, they can grow surprisingly quickly under the right environmental conditions.”
There are two classifications of black holes: “heavy seed” and “light seed.”
Light seed black holes start with a mass of only a few hundred solar masses and must grow significantly to transform into supermassive entities, millions of times the mass of the Sun.
Conversely, heavy seed black holes begin life with masses reaching up to 100,000 times that of the Sun.
Previously, many astronomers believed that only heavy seed types could account for the existence of supermassive black holes seen at the hearts of large galaxies.
Dr. John Regan, an astronomer at Maynooth University, remarked: “The situation is now more uncertain.”
“Heavy seeds may be rare and depend on unique conditions for formation.”
“Our simulations indicate that ‘garden-variety’ stellar-mass black holes have the potential to grow at extreme rates in the early universe.”
This research not only reshapes our understanding of black hole origins but also underscores the significance of high-resolution simulations in uncovering the universe’s fundamental secrets.
“The early universe was far more chaotic and turbulent than previously anticipated, and the population of supermassive black holes is also more extensive than we thought,” Dr. Regan commented.
The findings hold relevance for the ESA/NASA Laser Interferometer Space Antenna (LISA) mission, set to launch in 2035.
Dr. Regan added, “Future gravitational wave observations from this mission may detect mergers of these small, rapidly growing baby black holes.”
For further insights, refer to this paper, published in this week’s edition of Nature Astronomy.
_____
D.H. Meter et al. 2026. Growth of light seed black holes in the early universe. Nat Astron, published online January 21, 2026; doi: 10.1038/s41550-025-02767-5
As we entered the new millennium, the number of genes in the human genome was hotly debated. Initial estimates came in far lower than anticipated, prompting a re-evaluation of how our biological complexity evolves.
The Human Genome Project revealed in 2001 that we possess fewer than 40,000 protein-coding genes — a number that has since been adjusted to around 20,000. This finding necessitated the exploration of alternative mechanisms to account for the complexity of our biology and evolution; epigenetics now stands at the forefront.
Epigenetics encompasses the various ways that molecules can interact with DNA or RNA, ultimately influencing gene activity without altering the genetic code itself. For instance, two identical cells can exhibit vastly different characteristics based purely on their epigenetic markers.
Through epigenetics, we can extract even greater complexity from our genome, factoring in influences from the environment. Some biologists are convinced that epigenetics can play a significant role in evolutionary processes.
A notable study in 2019 demonstrated how yeast exposed to toxic substances survived by silencing specific genes through epigenetic mechanisms. Over generations, certain yeast cultures developed genetic mutations that amplified gene silencing, indicating that evolutionary changes began with epigenetic modifications.
Epigenetics is crucial for expanding our understanding of evolutionary theory. Nevertheless, skepticism persists regarding its broader implications, particularly in relation to plants and other organisms.
For instance, Adrian Bird, a geneticist at the University of Edinburgh, has expressed doubts, arguing in a recent paper that there is no clear evidence that environmental stresses such as drought leave heritable marks on mammalian genomes. Though epigenetic markers may be inherited, many are erased early in mammalian development.
Some researchers dispute these concerns. “Epigenetic inheritance is observed in both plants and animals,” asserts Kevin Lala, an evolutionary biologist from the University of St Andrews. In a comprehensive review published recently, Lala and colleagues compiled a wealth of research indicating that epigenetics could play a role across the entire tree of life.
So, why is there such division in the scientific community? Timing may be a factor. “Epigenetic inheritance is an evolving area of study,” observes Lala. While epigenetics has been recognized for decades, its relevance to evolutionary research has only gained traction in the past 25 years, making it a difficult field to assess.
Explore the incredible capabilities of modern AI tools that can summarize documents, generate artwork, write poetry, and even predict protein folding. At the heart of these advancements is the groundbreaking transformer architecture, which revolutionized the field of artificial intelligence.
Unveiled in 2017 at a modest conference center in California, the transformer architecture enables machines to process information in a way that closely resembles human thinking patterns. Historically, AI models relied on recurrent neural networks, which read text sequentially from left to right while retaining only the most recent context. This method sufficed for short phrases, but when dealing with longer and more complex sentences, critical details often slipped through the cracks, leading to confusion and ambiguity.
The introduction of transformers to the AI landscape marked a significant shift, embracing the concept of self-attention. This approach mirrors the way humans naturally read and interpret text. Instead of strictly scanning word by word, we skim, revisit, and draw connections based on context. This cognitive flexibility has long been the goal in natural language processing, aiming to teach machines not just to process language, but to understand it.
Transformers emulate this mental leap effectively; their self-attention mechanism enables them to evaluate every word in a sentence in relation to every other word simultaneously, identifying patterns and constructing meaningful connections. As AI researcher Sasha Luccioni notes, “You can take all the data you get from the Internet and Wikipedia and use it for your own tasks. And it was very powerful.”
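The mechanism described above can be sketched in a few lines. Below is a minimal, illustrative single-head self-attention in NumPy; the toy embeddings and the omission of learned query/key/value projections are simplifications for clarity, not how production transformers are built:

```python
import numpy as np

def self_attention(X):
    """Minimal single-head self-attention: every token attends to every other.

    X: (seq_len, d) matrix of token embeddings. For simplicity, queries,
    keys, and values are the embeddings themselves (real transformers
    apply learned projections first).
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                     # pairwise similarity of all tokens
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over each row
    return weights @ X                                # each output mixes all tokens

# Three toy "word" embeddings; every output row is a weighted blend of all three.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out = self_attention(X)
print(out.shape)  # (3, 2)
```

Because every row of the attention weights sums to one, each output vector is a convex blend of all input tokens, which is exactly the "every word in relation to every other word" behavior described above.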
Moreover, this transformative flexibility extends beyond text. Today’s transformers drive tools that can generate music, render images, and even model molecules. A prime example is AlphaFold, which treats proteins—long chains of amino acids—analogously to sentences. The function of a protein hinges on its folding pattern and the spatial relationships among its constituent parts. The attention mechanism allows this model to assess these distant associations with remarkable precision.
In retrospect, the insight behind transformers seems almost intuitive. Both human and artificial intelligence rely on discerning when and what to focus on. Transformers haven’t merely enhanced machines’ language comprehension; they have established a framework for navigating any structured data in the same manner that humans navigate the complexities of their environments.
Historically, science operated under the notion of a “normal brain,” one that fits standard societal expectations. Those who diverge from this model have often been labeled with a disorder or mental health condition, treated as if they were somehow flawed. In recent years, however, researchers have refined the idea that neurodevelopmental conditions, including autism, ADHD, dyslexia, and movement disorders, should instead be recognized as distinctive variations representing different neurocognitive frameworks.
In the late 1990s, a paradigm shift occurred. What if these “disorders” were simply natural variations in brain wiring? What if human traits existed on a spectrum rather than a stark boundary between normal and abnormal? Those at either end of the spectrum may face challenges, yet their exceptional brains also offer valuable strengths. Viewed through this lens, diverse brains represent assets, contributing positively to society when properly supported.
The concept of neurodiversity gained momentum, sparking lively debates in online autism advocacy groups. By 2013, the Diagnostic and Statistical Manual of Mental Disorders recognized autism as a spectrum condition, abolishing the Asperger’s syndrome diagnosis and classifying it on a scale from Level 1 to Level 3 based on support needs. This shift solidified the understanding of neurodivergent states within medical literature.
Since the early 2000s, research has shown that individuals with autism often excel in mathematical reasoning and attention to detail. Those with ADHD frequently outperform others in creativity, while individuals with dyslexia are adept at pattern recognition and big-picture thinking. Even those with movement disorders have been noted to develop innovative coping strategies.
These discoveries have led many scientists to argue that neurodivergent states are not mere evolutionary happenstance. Instead, our ancestors likely thrived thanks to pioneers, creative thinkers, and detail-oriented individuals in their midst. A group possessing diverse cognitive strengths could more effectively explore, adapt, and survive. Some researchers now propose that the autism spectrum comprises distinct subtypes with varying clusters of abilities and challenges.
While many researchers advocate for framing neurodivergent characteristics as “superpowers,” some caution against overly positive portrayals. “Excessive optimism, especially without supporting evidence, can undermine the seriousness of these conditions,” says Dr. Jessica Eccles, a psychiatrist and neurodiversity researcher at Brighton and Sussex Medical School. Nevertheless, she emphasizes that “with this vocabulary, we can better understand both the strengths and challenges of neurodiversity, enabling individuals to navigate the world more effectively.”
You’ve likely encountered the parable of the blind men and the elephant, where each individual’s perspective is limited to one part, leading to a distorted understanding of the whole. This concept resonates deeply in neuroscience, which has historically treated the brain as a collection of specialized regions, each fulfilling unique functions.
For decades, our insights into brain functionality arose from serendipitous events, such as the case of Phineas Gage, a 19th-century railroad worker whose personality dramatically changed following a severe brain injury. More recent studies employing brain stimulation have linked the amygdala with emotion and the occipital lobe with visual processing, yet this provides only a fragmented understanding.
Brain regions demonstrate specialization, but this does not encapsulate the entire picture. The advent of imaging technologies, particularly functional MRI and PET scans in the late 1990s and early 2000s, revolutionized our comprehension of the brain’s interconnectedness. Researchers discovered that complex behaviors stem from synchronized activity across overlapping neural networks.
“Mapping brain networks is playing a crucial role in transforming our understanding in neuroscience,” states Luiz Pessoa from the University of Maryland.
This transformative journey commenced in 2001 when Marcus Raichle, now at Washington University in St. Louis, characterized the Default Mode Network (DMN). This interconnected network activates during moments of rest, reflecting intrinsic cognitive processes.
In 2003, Kristen McKiernan, then at the Medical College of Wisconsin, and her team found that the DMN shows heightened activity during internally directed mental states, such as daydreaming and introspection, providing a “resting state” benchmark for evaluating overall brain activity. They began to correlate DMN activity with advanced behaviors, including emotional intelligence and theory of mind.
As discoveries proliferated across other networks—pertaining to attention, language, emotion, memory, and planning—our understanding of mental health and neurodiversity evolved. These neural differences are now thought to be linked with various neurological conditions, including Parkinson’s disease, PTSD, depression, anxiety, and ADHD.
Network science has emerged as a pivotal field, enhancing our comprehension of disorders from autism, characterized by atypical social salience networks—those that detect and prioritize salient social cues—to Alzheimer’s disease, where novel research indicates abnormal protein spread via network pathways. We also acknowledge the inspiration it provides for developing artificial neural networks in AI systems like ChatGPT.
Neural networks have not only reshaped our understanding of brain functionalities but also the methodologies for diagnosing and treating neurological disorders. While we might not yet perceive the entirety of the elephant, our view is undeniably clarifying as science progresses.
Revolutionizing Imaging Technology: UConn Scientists Create Lens-Free Sensor with Submicron 3D Resolution
Illustration of MASI’s working principle. Image credit: Wang et al., doi: 10.1038/s41467-025-65661-8.
“This technological breakthrough addresses a longstanding issue in imaging,” states Professor Guoan Zheng, the lead author from the University of Connecticut.
“Synthetic aperture imaging leverages the combination of multiple isolated sensors to mimic a larger imaging aperture.”
This technique works effectively in radio astronomy due to the longer wavelengths of radio waves, which facilitate precise sensor synchronization.
However, at visible wavelengths, achieving this synchronization is physically challenging due to the significantly smaller scales involved.
The Multiscale Aperture Synthesis Imager (MASI) turns this challenge on its head.
Instead of requiring multiple sensors to operate in perfect synchronization, MASI utilizes each sensor to independently measure light, employing computational algorithms to synchronize these measurements.
“It’s akin to multiple photographers capturing the same scene as raw light measurements, which software then stitches together into a single ultra-high-resolution image,” explains Professor Zheng.
This innovative computational phase-locking method removes the dependency on strict interferometric setups that previously limited the use of optical synthetic aperture systems.
MASI diverges from conventional optical imaging through two key innovations.
Firstly, instead of using a lens to focus light onto a sensor, MASI employs an array of coded sensors positioned on a diffractive surface, capturing raw diffraction patterns—the way light waves disperse after encountering an object.
These measurements contain valuable amplitude and phase information, which are decoded using advanced computational algorithms.
After reconstructing the complex wavefront from each sensor, the system digitally adjusts the wavefront and numerically propagates it back to the object’s surface.
A novel computational phase synchronization technique iteratively fine-tunes the relative phase offsets to enhance overall coherence and energy during the joint reconstruction process.
This key innovation enables MASI to surpass diffraction limits and constraints posed by traditional optical systems by optimizing the combined wavefront in the software, negating the need for physical sensor alignment.
As a result, MASI achieves a larger virtual synthetic aperture than any individual sensor, delivering submicron resolution and a wide field of view, all without the use of lenses.
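The core idea of computational phase synchronization can be illustrated with a toy example. This is a deliberately simplified sketch, not the authors' actual algorithm (which iteratively and jointly optimizes offsets across many sensors): two overlapping complex measurements differ by an unknown phase offset, which is estimated from their overlap and removed in software:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground-truth complex wavefront, sampled by two overlapping "sensors"
truth = np.exp(1j * rng.uniform(0, 2 * np.pi, 100))
patch_a = truth[:60]
patch_b = truth[40:] * np.exp(1j * 1.3)   # second sensor carries an unknown phase offset

# Estimate the offset from the overlapping samples (indices 40..59 of the truth)
overlap_a = patch_a[40:]
overlap_b = patch_b[:20]
offset = np.angle(np.vdot(overlap_b, overlap_a))   # phase that best aligns b to a

aligned_b = patch_b * np.exp(1j * offset)
# After alignment, the overlap regions agree and the patches can be stitched
err = np.max(np.abs(aligned_b[:20] - overlap_a))
print(err)
```

The same principle, applied jointly over an array of sensors and iterated during reconstruction, is what lets the measurements combine coherently into one large virtual aperture.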
Traditional lenses for microscopes, cameras, and telescopes force designers to trade off resolution against field of view and working distance; MASI delivers high resolution without requiring the optics to sit close to the object.
MASI captures diffraction patterns from several centimeters away, reconstructing images with unparalleled submicron resolution. This innovation is akin to inspecting the intricate ridges of a human hair from a distance, rather than needing to hold it inches away.
“The potential applications of MASI are vast, ranging from forensics and medical diagnostics to industrial testing and remote sensing,” highlights Professor Zheng.
“Moreover, the scalability is extraordinary. Unlike traditional optical systems, which become increasingly complex, our framework scales linearly, opening doors to large arrays for applications we have yet to conceptualize.”
For more details, refer to the team’s published paper in Nature Communications.
_____
Wang et al. 2025. Multiscale aperture synthesis imager. Nat Commun 16, 10582; doi: 10.1038/s41467-025-65661-8
Researchers from the Center for Applied Space Technology and Microgravity at the University of Bremen and the Transilvania University of Brașov have unveiled a groundbreaking theoretical framework that challenges our understanding of the universe’s accelerating expansion, potentially rendering dark energy obsolete. They suggest that this acceleration may be an intrinsic characteristic of space-time geometry, rather than a result of unknown cosmic forces.
This artist’s impression traces the evolution of the universe from the Big Bang, through the formation of the Cosmic Microwave Background, to the emergence of galaxies. Image credit: M. Weiss / Harvard-Smithsonian Center for Astrophysics.
For over 25 years, scientists have been puzzled by the unexpected observation that the expansion of the universe is accelerating rather than slowing under gravity’s pull.
In the 1990s, astronomers identified this acceleration through observations of distant Type Ia supernovae, leading to the prevalent theory of dark energy, an invisible force believed to drive this expansion.
Nevertheless, the actual nature of dark energy remains elusive within the Standard Model of cosmology.
Dr. Christian Pfeiffer and his team propose that we may better understand this cosmic acceleration by re-evaluating the geometric framework used to describe gravity.
Central to modern cosmology is Einstein’s theory of general relativity, which details how matter and energy shape space-time.
The universe’s evolution is modeled using the Friedmann equation, which derives from Einstein’s principles.
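For reference, the standard general-relativistic Friedmann equation relates the expansion rate to the universe's energy content; in the Standard Model of cosmology, dark energy enters through the cosmological-constant term $\Lambda$:

```latex
\left(\frac{\dot{a}}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho
  \;-\; \frac{k c^{2}}{a^{2}}
  \;+\; \frac{\Lambda c^{2}}{3}
```

Here $a(t)$ is the cosmic scale factor, $\rho$ the energy density, and $k$ the spatial curvature. The Finslerian modification discussed in this article changes this relation so that accelerated expansion can emerge from the geometry itself, without the $\Lambda$ term.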
The researchers introduce an innovative solution based on Finsler gravity, an extension of Einstein’s theory.
This approach enhances our understanding of spacetime geometry and allows for a more nuanced exploration of how matter, especially gases, interacts with gravity.
Unlike general relativity, which depends on rigid geometric forms, Finsler gravity presents a more versatile space-time geometry.
With this methodology, the authors recalibrated the equations governing cosmic expansion.
Informed by the Finsler framework, the modified Friedmann equation predicts the universe’s acceleration phenomena without necessitating the introduction of dark energy.
In essence, the accelerating expansion emerges directly from the geometry of space-time itself.
“This is a promising hint that we may explain the universe’s accelerating expansion partly without dark energy, drawing from generalized space-time geometry,” Pfeiffer remarked.
This concept does not entirely dismiss dark energy or invalidate the Standard Model.
Instead, it implies that some effects attributed to dark energy might have their roots in a deeper understanding of gravity.
“This fresh geometric outlook on the dark energy dilemma provides avenues for a richer comprehension of the universe’s foundational laws,” stated Dr. Pfeiffer.
The research team’s paper is published in the Journal of Cosmology and Astroparticle Physics.
_____
Christian Pfeiffer et al. 2025. From a moving gas to an exponentially expanding universe: the Finsler-Friedmann equation. JCAP 10: 050; DOI: 10.1088/1475-7516/2025/10/050
Researchers from the University of Waterloo and Kyushu University have achieved a groundbreaking advancement in quantum computing by developing a novel method to create redundant, encrypted copies of qubits. This represents a pivotal step towards practical quantum cloud services and robust quantum infrastructure.
Google’s quantum computer – Image credit: Google.
In quantum mechanics, the no-cloning theorem asserts that creating an identical copy of an unknown quantum state is impossible.
Dr. Achim Kempf from the University of Waterloo and Dr. Koji Yamaguchi from Kyushu University emphasize that this fundamental rule remains intact.
However, they have demonstrated a method to generate multiple encrypted versions of a single qubit.
“This significant breakthrough facilitates quantum cloud storage solutions, such as quantum Dropbox, quantum Google Drive, and quantum STACKIT, enabling the secure storage of identical quantum information across multiple servers as redundant encrypted backups,” said Dr. Kempf.
“This development is a crucial step towards establishing a comprehensive quantum computing infrastructure.”
“Quantum computing offers immense potential, particularly for addressing complex problems, but it also introduces unique challenges.”
“One major difficulty in quantum computing is the no-duplication theorem, which dictates that quantum information cannot be directly copied.”
“This limitation arises from the delicate nature of quantum information storage.”
According to the researchers, quantum information functions analogously to splitting passwords.
“If you possess half of a password while your partner holds the other half, neither can be utilized independently. However, when both sections are combined, a valuable password emerges,” Dr. Kempf remarked.
“In a similar manner, qubits are unique in that they can share information in exponentially growing ways as they interconnect.”
“A single qubit’s information is minimal; however, linking multiple qubits allows them to collectively store substantial amounts of information that only materializes when interconnected.”
“This exceptional capability of sharing information across numerous qubits is known as quantum entanglement.”
“With 100 qubits, information can be simultaneously shared in 2^100 different ways, allowing for a level of shared entangled information far exceeding that of current classical computers.”
“Despite the vast potential of quantum computing, the no-cloning theorem restricts its applications.”
“Unlike classical computing, where duplicating information for sharing and backup is a common practice, quantum computing lacks a simple ‘copy and paste’ mechanism.”
“We have uncovered a workaround for the non-replicability theorem of quantum information,” explained Dr. Yamaguchi.
“Our findings reveal that by encrypting quantum information during duplication, we can create as many copies as desired.”
“This method circumvents the no-clonability theorem because when an encrypted copy is selected and decrypted, the decryption key is automatically rendered unusable; it functions as a one-time key.”
“Nevertheless, even one-time keys facilitate crucial applications such as redundant and encrypted quantum cloud services.”
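The password-splitting analogy has a simple classical counterpart, XOR secret sharing, sketched here purely for illustration. This is ordinary classical cryptography, not the researchers' quantum scheme; it only shows why each half of a split secret is useless on its own:

```python
import secrets

def split(secret: bytes):
    """Split a secret into two shares; each share alone is uniformly random."""
    share1 = secrets.token_bytes(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    """XOR the shares back together to recover the secret."""
    return bytes(a ^ b for a, b in zip(share1, share2))

password = b"hunter2"
s1, s2 = split(password)
print(combine(s1, s2) == password)  # True: only together do the shares reveal it
```

In the quantum setting, entanglement plays the role of this sharing, but across exponentially many correlations rather than a simple pairwise XOR.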
In early 2025, excitement surged within the research community with the release of a groundbreaking preprint paper detailing the world’s first fully 3D printed microscope. This innovative device was constructed in just hours and costs a fraction of traditional models.
Dr. Liam Rooney, a professor at the University of Glasgow, explained to New Scientist that the response to their revolutionary microscope has been overwhelming, attracting interest from biomedical researchers, community organizations, and even filmmakers. He stated, “The community response has been remarkable.” This significant research has been published in the Microscope Journal.
For the microscope’s body, the team employed designs from the OpenFlexure project, a public resource for 3D printing scientific instruments. Utilizing a commercial camera and light source, they controlled the entire system using a Raspberry Pi computer.
The true innovation lies in the 3D-printed microscope lenses made from clear plastic, drastically reducing costs and enhancing accessibility. Traditional microscopes can cost thousands; in contrast, this new model can be assembled for less than £50.
“Since January, we have printed approximately 1,000 lenses in various shapes,” remarked team member Gail McConnell, from the University of Strathclyde.
Several companies producing commercial products that require optics have reached out to discuss potential collaborations, as affordable, lightweight 3D-printed lenses are still uncommon in large-scale production. The team has successfully used the microscope to analyze blood samples and tissue sections from mouse kidneys, validating its utility for medical and biological research.
The researchers aim to democratize access to microscopy, and they are making strides toward that goal. Collaboration with a lab at the Kwame Nkrumah University of Science and Technology in Ghana is underway to enhance microscope accessibility for researchers and students across West Africa. Additionally, they’ve secured funding from the UK Institute for Technology Strategy, and are involved in programs designed to upskill and empower students facing educational barriers.
Furthermore, the team has developed a new microscope course through the Strathclyde Light Microscopy Course, aimed at researchers of all experience levels and providing a unique educational opportunity in the UK. Rooney noted, “This is revolutionizing our teaching methods.”
Looking towards the future, there is substantial potential for further enhancements in 3D printed microscopes. The research team is working to improve resolution without raising costs and has already found methods to enhance image contrast by 67%.
McConnell emphasized that the microscope’s design leverages consumer electronics and accessible 3D printing technologies, stating that the future advancements and capabilities are limited only by current 3D printing technology. “As these printers advance, so will our capabilities. The only bottleneck is technology, not creativity,” she explained. “We’re frequently contacted by individuals eager to see new designs.”
Cholesterol management may be achievable by altering just one switch in an individual’s genetic code—potentially for a lifetime.
A pilot study featured in the New England Journal of Medicine demonstrated a novel gene therapy that decreased patients’ low-density lipoprotein (LDL) cholesterol, commonly known as “bad” cholesterol, by nearly 50%, while also reducing triglycerides by an average of 55%.
If forthcoming trials yield similar results, this one-time therapy could serve as an alternative to the combination of medications that millions currently rely on to manage their cholesterol.
LDL cholesterol and triglycerides are lipids produced by the liver; however, excessive accumulation in the bloodstream can lead to fat deposits that may result in cardiovascular diseases, which account for about one-third of deaths in the United States.
“Both LDL cholesterol and triglycerides are linked to severe cardiovascular risks, such as heart attacks, strokes, and mortality,” Steven Nissen, a professor of medicine at the Cleveland Clinic Lerner School of Medicine, told BBC Science Focus.
Nissen was part of a research team focusing on lowering cholesterol levels by targeting the ANGPTL3 gene, associated with LDL cholesterol and triglycerides.
About 1 in 250 individuals possess a mutation that deactivates this gene, leading to lower lipid levels in their blood. Nissen noted, “Importantly, the occurrence of cardiovascular diseases in these individuals is also minimal.”
Thanks to CRISPR gene-editing technology, identifying individuals who might benefit from this mutation is no longer just a matter of chance.
CRISPR selectively modifies DNA by targeting specific genes. – Credit: Getty
Utilizing CRISPR, Nissen and his team developed a treatment to deactivate the ANGPTL3 gene in the liver, which was then infused into 15 patients during an initial safety study.
The treatment significantly reduced participants’ LDL and triglyceride levels within two weeks, and these reductions remained stable after 60 days. Nissen stated, “These changes are anticipated to be permanent.”
Healthcare professionals recommend maintaining LDL cholesterol levels below 100 mg/dL to promote heart health. While lifestyle changes can assist, many individuals, particularly those with genetic tendencies to high cholesterol, find it challenging to reach this target.
While existing medications are effective, no drugs simultaneously lower both LDL cholesterol and triglycerides, often requiring patients to take multiple medications daily for life to manage their cholesterol.
“The next phase of the trial is set to commence in the coming months, involving more patients with elevated LDL cholesterol or triglycerides,” Nissen stated.
If the trials continue to succeed, this therapy could serve as a lasting solution against some of the most significant health threats globally.
Researchers at the University of Sydney, in collaboration with Dewpoint Innovations, have engineered a porous polymer coating that can reflect as much as 97% of sunlight, dissipate heat into the atmosphere, and maintain surface temperatures up to 6°C cooler than the ambient air, even in direct sunlight. This mechanism fosters ideal conditions for atmospheric water vapor to transform into water droplets on these cooler surfaces, much like the condensation seen on a bathroom mirror.
Experimental equipment installed on the roof of the Sydney Nanoscience Hub. Image credit: University of Sydney.
Professor Chiara Neto from the University of Sydney stated: “This innovation not only advances cool roof coating technology, but also paves the way for sustainable, low-cost, decentralized freshwater sources—an essential requirement given the challenges of climate change and rising water scarcity.”
A six-month field study conducted on the roof of the Sydney Nanoscience Hub demonstrated that dew was collected for 32% of the year, enabling a sustainable and reliable water source even during dry spells.
Under optimal conditions, this coating can yield up to 390 mL of water per square meter daily; over the roof of a 12-square-meter home, that is enough to meet one person’s daily hydration needs.
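As a quick sanity check on those figures (using the article's numbers, and assuming roughly 2-3 litres as a person's daily drinking-water needs):

```python
# Back-of-envelope: daily dew yield from a coated roof, using the article's figures
yield_per_m2_ml = 390      # mL of water per square meter per day, best case
roof_area_m2 = 12          # roof area of a small home

daily_yield_l = yield_per_m2_ml * roof_area_m2 / 1000
print(daily_yield_l)       # 4.68 litres per day
```

At 4.68 litres a day, the roof comfortably covers one person's drinking water, with some margin for less-than-optimal conditions.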
This research illustrates the integration of passive cooling techniques and atmospheric moisture collection into scalable paint-like solutions.
The extensive collection area suggests that this coating could have diverse applications in various industries, including water supply for livestock, horticulture for premium crops, cooling through spraying, and hydrogen production.
Unlike conventional white paints, the porous coatings, made from poly(vinylidene fluoride-co-hexafluoropropylene) (PVDF-HFP), do not depend on UV-reflective pigments like titanium dioxide.
Dr. Ming Chiu, Chief Technology Officer of Dewpoint Innovations, remarked, “Our design achieves superior reflectiveness through an internal porous structure, ensuring longevity without the environmental downsides of pigment-based coatings.”
“By eliminating UV-absorbing materials, we have surmounted traditional limitations of solar reflectance while avoiding glare from diffuse reflection.”
“This equilibrium between performance and visual comfort enhances its ease of integration and appeal for real-world applications.”
Throughout six months of outdoor examination, researchers documented minute-by-minute data on cooling and water collection, confirming solid performance that remained stable under the harsh Australian sun—unlike similar technologies that often degrade quickly.
In addition to water harvesting, these coatings could help mitigate urban heat islands, lower energy needs for air conditioning, and provide climate-resilient water sources for regions facing heightened heat and water stress.
“This research also challenges the notion that dew collection is confined to humid environments,” noted Professor Neto.
“While humid conditions are optimal, condensation can also occur in arid and semi-arid areas where humidity increases during the night.”
“It isn’t a substitute for rainfall; rather, it serves as a water source when other supplies are scarce.”
The team’s work was published in the October 30th issue of Advanced Functional Materials.
_____
Ming Chiu et al. A passive cooling paint-like coating to capture water from the atmosphere. Advanced Functional Materials published online October 30, 2025. doi: 10.1002/adfm.202519108
The visible signs of aging, like wrinkles, gray hair, and joint discomfort, are merely surface reflections of more intricate processes happening within our cells. Deep inside your body, every organ experiences its own subtle molecular shifts as you grow older.
Researchers have now developed the most detailed map to date illustrating how this process unfolds.
The findings, based on data from more than 15,000 samples, are detailed in a preprint. The paper, currently awaiting peer review, offers an unprecedented view of how aging modifies our genomic blueprint from head to toe.
A collaborative effort among researchers worldwide has led to the creation of a comprehensive “aging atlas” that maps DNA methylation (chemical tags that regulate gene activity) across 17 different types of human tissues while tracking age-related changes.
“DNA methylation, simply put, is a chemical modification on DNA,” said Dr. Jesse Poganik, co-author of the study and a medical instructor at Harvard Medical School, as reported by BBC Science Focus.
“At a fundamental level, their primary role is to regulate which genes are activated and which are not.”
If you stretched all the DNA in your body, it would span over 300 times the distance from Earth to the sun and back – Photo credit: Getty
Despite a few mutations, each cell shares essentially the same genetic information in the form of its genome. So how do lung cells recognize their identity while stomach cells act as stomach cells? This is where methylation plays a crucial role.
“The methylation or unmethylation status at a specific point on the genome determines whether a particular gene is turned on or off,” Poganik noted.
But what does all this reveal about the aging process?
DNA methylation serves as one of the body’s essential epigenetic mechanisms, acting as a molecular switch that toggles genes on or off without altering the DNA sequence itself. By adding and removing tiny molecules known as methyl groups, cells can adjust which genes are expressed in response to diet, exercise, infections, and other environmental influences.
As time passes, these methylation patterns alter in specific ways, forming the basis of the so-called epigenetic clock, which serves as a molecular measure of biological age. Until now, most of these clocks relied on blood samples, leaving scientists uncertain if other organs followed similar patterns.
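Conceptually, an epigenetic clock boils down to a weighted sum of methylation fractions at selected CpG sites. The sketch below is purely illustrative: the site names, weights, and intercept are invented for this example and are not taken from any published clock.

```python
# Toy epigenetic clock: predicted age = intercept + sum(weight * methylation fraction).
# All site names and coefficients here are hypothetical, for illustration only.
WEIGHTS = {"cg_site_A": 35.0, "cg_site_B": -20.0, "cg_site_C": 50.0}
INTERCEPT = 30.0

def predict_age(betas: dict) -> float:
    """betas maps each CpG site to its methylation fraction (0.0 to 1.0)."""
    return INTERCEPT + sum(WEIGHTS[site] * beta for site, beta in betas.items())

# A sample with these (made-up) methylation levels scores as ~65 "biological years".
print(predict_age({"cg_site_A": 0.6, "cg_site_B": 0.3, "cg_site_C": 0.4}))
```

Real clocks fit hundreds of such weights by penalized regression over thousands of samples; tissue-specific atlases like the one described here supply exactly the kind of training data those models need.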
“DNA methylation patterns differ from tissue to tissue. They are specific to both the tissue and the cell type,” said Professor Nir Eynon, the study’s senior author and research group leader at Monash University, as reported by BBC Science Focus. “Thus, blood measurements don’t necessarily represent what happens in your liver, muscles, or brain.”
This gap prompted the team to gather all publicly available datasets on methylation within reach, complemented by new data from global collaborators.
The analysis covered nearly 1 million points across the genome, encompassing 17 organs, from the brain and heart to the skin, liver, stomach, and retina.
Atlas of Aging
The researchers discovered that the proportion of the genome carrying methylation tags varied significantly across tissues, ranging from approximately 38 percent in the cervix to over 60 percent in the retina. Surprisingly, age-related changes were quite uniform, with most tissues becoming increasingly hypermethylated as they age, resulting in more tagged DNA sites and the silencing of certain genes.
However, two organs defied this trend. Both skeletal muscle and lung tissue can experience a loss of methyl tags over time, leading to excessive or irregular gene expression.
“Most tissues show hypermethylation with age,” Dr. Max Jack, the study’s lead author, explained to BBC Science Focus via email. “Yet when you look more closely at the rates of methylation, distinct tissue-specific patterns emerge.”
Different organs age at varying rates. An aging atlas begins to elucidate why – Credit: Getty
For instance, adipose tissue predominantly shifts toward hypermethylation, while changes are more balanced in the brain. These patterns may illuminate how different organs react to common aging stressors, such as inflammation, according to Jack.
Overall, significant age-related methylation changes were observed in brain, liver, and lung tissues, with skin and colon tissues also showing marked alterations. Conversely, pancreatic, retinal, and prostate tissues exhibited the least detectable age-related changes, possibly due to limited data or greater resilience to aging.
Correlation, Not Causation (For Now)
At first glance, the data imply that some organs age quicker than others. However, researchers caution that these distinctions cannot yet be interpreted as a direct rate of aging.
This is partly due to statistical factors. Some organs represent thousands of samples, while others are represented by only a handful.
Moreover, “We know that methylation changes occur as we age,” Poganik states. “What we don’t know is the extent to which they contribute to aging.”
In other words, while scientists are aware of the methylation alterations linked to aging, it’s still unclear whether those changes induce aging or whether aging triggers those changes.
Poganik believes that alterations in methylation likely account for at least some of the observable phenomena associated with aging. “Even cautious scientists would suggest there’s an element of causation,” he remarks.
The allure of this new atlas lies in its revelation of common molecular themes threading throughout the body, he adds.
“One of the most compelling aspects of this study is that it demonstrates some universality in the aging process. When we analyze various tissues, we encounter numerous similar methylation changes, suggesting a universal quality to aging.”
Nevertheless, he warns that not all alterations are causal. With so many ongoing methylation changes, some are almost certainly part of aging, while others may not hold significance.
The atlas itself may not pinpoint which changes are critical and which are not, but it offers an invaluable collection of data for researchers to delve deeper into the issue than ever before. The atlas is now openly accessible through an online portal for other scientists to explore and utilize.
“We have consistently prioritized open-source research,” Jack states. “With this, we aim to make it accessible to everyone, not only to advance research but also to foster collaboration.”
Going forward, the research team plans to examine some universal associations prevalent across all tissues as we age, alongside other biomarkers that may be influencing the aging process.
“Advancements in aging pale in comparison to those in cancer,” Poganik adds. With the assistance of this atlas, scientists may finally bridge that gap.
New quantum debit cards, which can hold unforgeable quantum money, are constructed using ultracold atoms and particles of light.
While conventional banks must rely on skill to detect counterfeit banknotes, quantum banks can utilize the no-cloning theorem of physics, which renders counterfeiting impossible. This principle, which states that creating identical copies of unknown quantum information is not feasible, led physicist Stephen Wiesner to propose a protocol, published in 1983, for generating secure currency. Julien Laurat and his team at the Kastler Brossel Laboratory in France are actively implementing this groundbreaking concept in advanced experiments.
According to this protocol, banks create banknotes composed of quantum particles, possessing unique properties and existing in specific quantum states, thus ensuring protection against forgery through the no-cloning theorem. Laurat remarks that the protocol showcases an impressive feat of quantum cryptography, though it has not yet been put into practice for actual quantum fund storage.
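The security of Wiesner's protocol can be sketched numerically: the bank secretly records a random basis and bit for each qubit on a note, and a counterfeiter who must measure without knowing the bases corrupts roughly half the qubits, so each forged qubit passes the bank's check with probability only 3/4. A minimal Monte Carlo illustration of that textbook argument (not the Kastler Brossel implementation):

```python
import random

def forged_qubit_passes(rng: random.Random) -> bool:
    """One qubit of a forged note: does it pass the bank's verification?"""
    basis, bit = rng.choice("ZX"), rng.randint(0, 1)   # bank's secret record
    guess = rng.choice("ZX")                           # counterfeiter's measurement basis
    if guess == basis:
        return True  # right basis guessed: the qubit is copied perfectly
    # Wrong basis: the forged copy is prepared in the wrong basis, so the bank's
    # measurement in the true basis yields a random bit (passes half the time).
    return rng.randint(0, 1) == bit

rng = random.Random(0)
trials = 100_000
p = sum(forged_qubit_passes(rng) for _ in range(trials)) / trials
print(f"per-qubit forgery pass rate ~ {p:.3f}")       # close to 3/4
print(f"70-qubit note forgery odds ~ {p**70:.2e}")    # (3/4)^n vanishes fast
```

With n qubits per note, a naive forgery succeeds with probability (3/4)^n, already below one in a hundred million at n = 70, which is why the scheme is considered unforgeable in practice.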
The research team has made storage feasible with quantum memory devices that serve as the equivalent of hard drives. In their experiments, users interact with quantum systems that act as banks by exchanging photons. Each photon can be stored similarly to loading money onto a debit card.
The memory devices used by the team consist of hundreds of millions of cesium atoms, which researchers cool down to nearly absolute zero by bombarding them with lasers. At such extreme temperatures, light can precisely manipulate the quantum state of atoms, but Laurat notes that years were spent identifying the optimal cooling needed for atomic memory to serve as a quantum debit card. Through extensive testing, he and his colleagues demonstrated that users can retrieve photons from atoms without corrupting their states, as long as the process is not tampered with.
Christoph Simon from the University of Calgary emphasizes that the new experiment marks progress toward fully realizing quantum money. However, current quantum memory storage times remain insufficient for practical application. “Another future step is to enhance portability. The long-term goal is to develop quantum memory that can be easily carried, particularly for quantum money applications. But we are not there yet,” he states.
The team is focused on extending storage durations, asserting that the protocol can be employed within quantum networks already being established in metropolitan areas across the globe. Additionally, cutting-edge quantum memory not only facilitates ultra-secure long-distance quantum communication but is also instrumental in connecting various quantum computers to more powerful systems.
For the first time, real-time footage of human embryos being implanted into an artificial uterus has been recorded.
This remarkable achievement, published in the journal Science Advances, offers an unparalleled glimpse into one of the crucial stages of human development.
Implantation failure is a leading cause of infertility, responsible for 60% of miscarriages. Researchers aim to enhance understanding of the implantation process to improve fertility results in both natural conception and in vitro fertilization (IVF).
“We can’t observe this, because implantation happens inside the mother,” stated Dr. Samuel Ojosnegros of the Institute for Bioengineering of Catalonia (IBEC), the lead author of the study, as reported by BBC Science Focus.
“Thus, we required a system to observe how it functions and to address the primary challenges to human fertility.”
Implantation marks the initial phase of pregnancy, where the fertilized egg (developing embryo) attaches to the uterine lining, allowing it to absorb nutrients and oxygen from the mother—vital for a successful pregnancy.
To investigate this process, the research team developed a platform that simulates the natural uterine lining, utilizing a collagen scaffold combined with proteins essential for development.
The study then examined how human and mouse embryos implant onto this platform, uncovering significant differences. Unlike mouse embryos that adhere to the uterine surface, human embryos penetrate fully into the tissue before growing from within.
Video showing the implantation process of mouse embryos (left) and human embryos (right).
“Human embryos are highly invasive,” said Ojosnegros. “They dig a hole in the matrix, embed themselves, and then grow internally.”
The footage indicated that the embryo exerts considerable force on the uterus during this process.
“We observed that the embryo pulls, moves, and rearranges the uterine matrix,” stated Dr. Amélie Godeau, co-first author of the research. “It also responds to external force cues. We hypothesize that uterine contractions in vivo may influence implantation.”
According to Ojosnegros, the force applied during this stage could explain the pain and bleeding many women experience during implantation.
Researchers are currently focused on enhancing the realism of implantation platforms, including the integration of living cells. The goal is to establish a more authentic view of the implantation process, which could boost the likelihood of success in IVF, such as by selecting embryos with better implantation potential.
“We understand more about the development of flies and worms than our own species,” remarked Ojosnegros. “So enjoy watching the film.”
In a departure from conventional solid glass cores, the innovative optical fibers now incorporate an air core encased in precisely crafted glass microstructures to guide light. This advancement boosts transmission speeds by 45%, enabling greater data transfer over longer distances before amplification is required.
Petrovich et al. report microstructured optical waveguides with unprecedented transmission bandwidth and attenuation. Image credit: Gemini AI.
Optical fibers in telecommunications have typically relied on solid silica glass constructs, and despite extensive refinements, their signal loss remains a critical challenge.
This results in about half of the light traveling through the fiber being lost after approximately 20 km, necessitating the use of optical amplifiers for extended distance communication, such as intercontinental terrestrial and undersea connections.
Minimizing signal loss can be achieved within a limited spectrum of wavelengths. This has constrained the data capacity in optical communications over recent decades.
Francesco Poletti and his team from the University of Southampton developed a new type of fiber optic featuring a hollow air core surrounded by intricately designed thin silica rings to effectively guide light.
Laboratory tests revealed that these fibers exhibit an optical loss of 0.091 decibels per kilometer at the commonly utilized optical wavelengths in communications.
Consequently, optical signals with appropriate wavelengths can travel approximately 50% farther before needing amplification.
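The arithmetic behind that figure follows from the logarithmic decibel scale: losing half the optical power corresponds to a drop of 10·log₁₀(2) ≈ 3.01 dB. A quick sketch — the 0.14 dB/km baseline for solid silica fiber is an assumed typical value, not a figure quoted from the paper:

```python
import math

def half_power_distance_km(attenuation_db_per_km: float) -> float:
    # Power decays as P/P0 = 10**(-alpha * L / 10); solve for L where P/P0 = 1/2.
    return 10 * math.log10(2) / attenuation_db_per_km

hollow_core = half_power_distance_km(0.091)   # reported loss of the new fiber
solid_silica = half_power_distance_km(0.14)   # assumed typical solid-fiber loss

print(f"solid fiber:  half the light lost after ~{solid_silica:.1f} km")
print(f"hollow core:  half the light lost after ~{hollow_core:.1f} km")
```

Under that assumed baseline, the half-power point moves from roughly 21.5 km to about 33 km, i.e. around 50 percent farther before amplification is needed, consistent with the claim above.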
This configuration offers a broader transmission window (the range of wavelengths where light propagates with minimal signal loss and distortion) than previous fiber optic technologies.
While this novel optical fiber may demonstrate lower losses due to the use of larger air cores, further investigation is necessary to validate these findings.
“We anticipate that advancements in manufacturing, geometric consistency, and reduced levels of absorbent gases in the core will solidify these new fibers as essential wave-guiding technologies,” the researchers remarked.
“This breakthrough could pave the way for the next major advancement in data communication.”
Their study was published in the journal Nature Photonics.
____
M. Petrovich et al. Broadband optical fiber with attenuation of less than 0.1 decibels per kilometer. Nature Photonics Published online on September 1, 2025. doi:10.1038/s41566-025-01747-5
Cement can self-cool by reflecting light outward and dissipating heat from its surface, offering a comfortable indoor climate without reliance on air conditioning.
Traditional cement often absorbs infrared light from the sun, trapping heat and causing indoor temperatures to rise along with the surrounding air.
To tackle this challenge, Fengyin Du from Purdue University in Indiana and her team developed a unique cement that features tiny reflective mineral crystals called ettringite on its exterior.
This innovative cement releases infrared light instead of retaining it, allowing for rapid heat loss. “It acts like a mirror or radiator, reflecting sunlight and releasing heat into the atmosphere, enabling the building to remain cool without needing air conditioning or power,” Du explains.
Initially, the researchers create small pellets from commonly found minerals like limestone and gypsum. These are ground into a fine powder, mixed with water, and poured into silicone molds that contain small perforations. Air bubbles moving through these holes form slight indentations on the surface, where the reflective ettringite crystals can develop. The aluminum-rich gels in the set cement permit infrared rays to traverse the material.
Du notes the process is easily scalable and enables cement production at lower temperatures, making it $5 less expensive per tonne than conventional Portland cement.
Du and her team evaluated the temperature regulation of their cement on a hot rooftop at Purdue University and observed that its surface temperature was 5.4°C (9.7°F) cooler than the surrounding air and 26°C (47°F) lower than that of Portland cement.
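The paired figures above are temperature differences, which convert between Celsius and Fahrenheit with the 9/5 factor alone; the familiar +32 offset cancels when subtracting two temperatures. A quick check:

```python
def delta_c_to_delta_f(delta_c: float) -> float:
    # A temperature *difference* scales by 9/5 only; the 32-degree offset cancels out.
    return delta_c * 9 / 5

print(delta_c_to_delta_f(5.4))   # 9.72, reported above as 9.7 F
print(delta_c_to_delta_f(26))    # 46.8, reported above as 47 F
```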
Surface dimples of cement viewed under an electron microscope
Guo Lu/Southeast University
“It’s a valuable material,” states Oscar Brousse from University College London. “You enhance the material’s ability to reflect and emit energy, thus efficiently releasing energy that the material has absorbed.”
However, Brousse cautions that surface temperature alone does not capture real-world performance: it remains to be seen whether a 5°C reduction in surface temperature translates into a comparable decrease in air temperature, which could significantly affect local conditions.
Innovative treatments may transform the management of lower back pain by addressing the root causes associated with inflammatory “zombie” cells, according to recent research conducted in mice.
A group of scientists, led by researchers from McGill University in Canada, found that a combination of two medications, O-Vanillin and RG-7112, effectively eliminates zombie cells from mouse spinal tissues, alleviating pain and inflammation symptoms.
“Our results are promising because they indicate that by eliminating the cells driving the problem, rather than merely masking the pain, we can approach lower back pain treatment in a novel manner,” stated the senior author, Professor Lisbet Haglund from McGill’s Department of Surgery.
Zombie cells, also referred to as senescent cells, do not function like typical cells. Rather than undergoing division and death to make way for new cells, they persist in the body.
As we age, these zombie cells can build up, leading to inflammation, pain, and spinal damage.
For the hundreds of millions of adults globally suffering from back pain, current medications often mask the symptoms without addressing the zombie cells behind them.
This new treatment, however, aims to alleviate back pain by targeting and eliminating these lingering zombie cells, thereby addressing the underlying issues.
Aging or zombie cells accumulate in the shock-absorbing discs between each spinal vertebra, releasing inflammatory molecules that damage discs – Credit: Nemes Laszlo/Science Photo Library via Getty
The McGill research team discovered this promising new treatment while working with mice genetically engineered to develop spinal injuries and lower back pain over seven months.
The researchers administered varying doses of O-Vanillin and RG-7112 to these mice. Some received only one of the drugs, while others received a combination of both.
RG-7112 is a medication already established to remove zombie cells in various contexts, though it hasn’t been applied to lower back pain treatment until now.
O-Vanillin, a natural compound sourced from turmeric, is recognized for its anti-inflammatory benefits, but had not been previously tested against zombie cells.
After 8 weeks of treatment, mice receiving both drugs at higher doses exhibited the lowest levels of zombie cells, inflammation, and pain.
Those treated with a single drug showed some improvement, but the results were not as significant as those achieved with the combination therapy.
“The pressing question now is whether these medications can produce the same effects in human subjects,” Haglund remarked.
“It’s 19 feet ahead,” announced the robotic voice from an iPhone held by Moshfik Ahmed, as he navigated through Lord’s Cricket Ground in search of a seat.
“Up the stairs,” directed Ahmed, an English cricketer with visual impairment, as he tapped a white cane on his way to the Edrich Stand without any external assistance. “There’s one landing. We’re positioned at 9 o’clock at the base of the stairs. We’ve reached the fifth row.”
Ahmed was among the first to test the newly installed Wayfinding technology at Lord’s, designed for blind and partially sighted individuals, enabling disabled fans to enjoy live sports.
Waymap, the company behind this app-based navigation tool, asserts that the 31,000-seat cricket stadium is the first sports venue worldwide to offer its personal navigation technology, which is tailored for use in stadiums, shopping centres, and transport systems.
Utilizing a £50,000 camera, Waymap meticulously mapped stairs, corridors, inclines, entrances, and concourses to develop a digital twin of this historic cricket ground, allowing the app to guide users with metre-level precision.
This technology was implemented ahead of next month’s Test match between England and India. The Marylebone Cricket Club, which manages the venue, believes it can assist other cricket enthusiasts in discovering the most accessible routes throughout the premises.
“The concept is fantastic for the visually impaired,” said Ahmed, who tried the app upon the Guardian’s invitation after participating in a showcase match on Wednesday. “If it functions flawlessly, I can navigate to the station independently, cross the street by myself, arrive at the stadium, and find my way using the app. I know many sports enthusiasts who are visually impaired. This will make it completely accessible for them.”
Moshfik Ahmed at Lord’s Cricket Ground. Photo: Sean Smith/Guardian
It was Ahmed’s first experience with the app, which had some initial hiccups. At times, it mistakenly suggested he head in the wrong direction, pointing him to temporarily closed stairs, and even guided him to row 20 of the Edrich Stand instead of row 5.
However, it seemed that both the app and the user were still in the adaptation phase. For instance, the app should be customized to reflect the individual user’s walking pattern, which could clarify the misdirection he experienced.
“It must be precise and dependable,” stated Ahmed, who lost most of his vision in 2017.
“We’re dedicated to delivering an exceptional experience,” said Celso Zuccollo, CEO of Waymap. “Waymap represents a novel navigation approach. It usually requires multiple visits to fully grasp how to use the app effectively.”
“The objective is likely to extend this technology to venues like Wembley, various football stadiums, and we are in discussions with horse racing tracks,” he added.
The app is already available to users of the Washington, DC public transport system. It does not, however, alert users to the movements of other people nearby, which Ahmed noted can pose significant challenges to maneuvering safely and comfortably.
Known as Verve-102, this treatment could revolutionize heart attack prevention and significantly lower LDL cholesterol levels (often referred to as “bad” cholesterol) with a single injection.
While statins can achieve similar cholesterol reductions, they typically require daily administration.
“This is the future,” stated Professor Riyaz Patel, a clinical academic at University College London and a doctor at Barts Health NHS Trust involved in the trial, speaking to BBC Science Focus.
“This is not a fantasy; it’s reality. We are actively implementing it. I was administering this treatment to a patient in the trial.”
Unlike statins, which gradually lower cholesterol, Verve-102 aims for a one-time alteration by “turning off” a specific gene called PCSK9 in the liver. This gene is crucial in managing the levels of LDL cholesterol that the liver can detect and eliminate from the bloodstream.
In simpler terms, a reduction in PCSK9 means less LDL in the bloodstream.
“The results are stunning,” Patel remarked. “This drug disables a small segment of your DNA, and your LDL cholesterol will be permanently 50% lower thereafter. That’s a game-changer!”
Cholesterol builds up in blood vessel walls, leading to plaque formation that can obstruct blood flow.
Elevated LDL cholesterol levels heighten the risk of this buildup, prompting millions (over 40 million in the US and over 7 million in the UK) to take daily medications like statins for cholesterol management.
The VERVE-102 clinical trial included 14 participants with familial hypercholesterolemia, a genetic disorder that heightens the risk of heart disease, heart attacks, and strokes due to extremely high LDL cholesterol levels.
Initial outcomes from Verve-102 injections show that all participants reacted positively to the treatment with no severe side effects.
Responses varied by dosage. The lowest dose group experienced an average LDL reduction of 21%, while the intermediate group showed a 41% reduction, and the high-dose group saw a 53% reduction.
Remarkably, one individual in the high-dose group achieved a 69% reduction in LDL cholesterol after receiving Verve-102.
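The reported reductions are relative, so the absolute effect depends on a patient's starting level. A small illustration — the 190 mg/dL baseline is a hypothetical figure chosen for the example, not a value from the trial:

```python
def ldl_after_treatment(baseline_mg_dl: float, reduction_pct: float) -> float:
    """LDL cholesterol level after a given percentage reduction."""
    return baseline_mg_dl * (1 - reduction_pct / 100)

baseline = 190.0  # hypothetical pre-treatment LDL in mg/dL
for group, pct in [("low dose", 21), ("mid dose", 41),
                   ("high dose", 53), ("best responder", 69)]:
    print(f"{group}: {ldl_after_treatment(baseline, pct):.1f} mg/dL")
```

For such a patient, the high-dose average would bring LDL from 190 down to roughly 89 mg/dL with a single injection.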
Dr. Eugene Braunwald, the Distinguished Hersey Professor of Medicine at Harvard Medical School, who did not take part in the study, noted that the preliminary data is “promising” and indicates “the potential for a new era in cardiovascular disease treatment.”
Verve is actively recruiting participants for further stages of clinical trials involving even higher Verve-102 doses in the UK, Canada, Israel, Australia, and New Zealand. The final results are expected to be revealed in the latter half of 2025.
Read more:
About our experts
Professor Riyaz Patel is a consultant cardiologist and clinical academic scholar at University College London (UCL) and Barts Health NHS Trust. He is a fully funded clinician scientist with the British Heart Foundation and serves as a professor of cardiology at UCL, where he investigates the causes of heart disease, focusing on cardiovascular risks and the genetics of coronary heart disease. He has established and led new cardiovascular prevention services at Barts Heart Center.
Treatment offers protection to mice against venom from common taipans and various other snakes
Matthijs Kuijpers/Alamy
Antibodies derived from a man repeatedly exposed to snake venom exhibit effectiveness against a range of snake bites, suggesting that a universal treatment may soon be achievable.
Conventional anti-venoms are made from animal antibodies; the use of non-human antibodies, however, can lead to serious adverse effects, including potentially fatal allergic reactions. Additionally, it necessitates identifying the specific snake responsible for the bite before administering the anti-venom.
Jacob Glanville from Centivax, a biotechnology firm in San Francisco, California, is exploring broadly neutralizing antibodies that could be developed into anti-venoms effective against multiple or all venomous snakes. “There are 650 venomous snake species, but their venoms involve just 10 common classes of toxins,” Glanville explains.
Researchers began searching for individuals bitten multiple times by different snakes. “Perhaps a daring snake researcher,” remarks Glanville. Media reports introduced the story of Tim Friede, who claims to have “self-administered escalating doses of venom from the world’s deadliest snakes over 700 times.”
“If anyone could yield a wide-ranging neutralizing antibody against snake venom, it would be Tim Friede,” Glanville affirms.
From just 40 milliliters of Friede’s blood, the team “converted immune memory into a library of billions of antibodies,” he adds. They subsequently tested promising candidates against venom from 19 of the deadliest Elapidae family species, including several cobra varieties.
Ultimately, they combined two antibodies derived from Friede’s blood, known as LNX-D09 and SNX-B03, with a toxin inhibitor named varespladib. In experiments on mice, this combination provided comprehensive protection against 13 species, including various cobras, the tiger snake (Notechis scutatus), and the coastal taipan (Oxyuranus scutellatus). It also offered partial protection against six additional species, including the notorious death adder (Acanthophis antarcticus).
The subsequent phase involves testing these treatments on animals brought into Australian veterinary clinics following a snake bite and identifying antibodies that can confer protection against vipers.
Tian Du from the University of Sydney emphasizes that “discovering two antibodies that can inhibit toxins makes for a universal treatment for closely related species.”
Additionally, after learning that the anticoagulant drug heparin can assist individuals in avoiding limb loss following a cobra bite, Du aims to determine whether their treatment can also avert skin and muscle necrosis.
Earlier this month, a mysterious spaceplane named the X-37B landed at Vandenberg Space Force Base near Santa Barbara, California. This experimental project, shrouded in secrecy, has been ongoing for over a decade.
Details about the X-37B and its mission are scarce, but fragments of information have been gradually unveiled over the years, allowing us to piece together the puzzle of what is happening in space.
While the public eye is fixed on the race to the moon by private companies and national space agencies, a more secretive competition is taking place in the background.
The X-37B is just one of many clandestine experiments conducted by countries like the US, Russia, and China. Recent revelations shed light on the features of this mysterious spacecraft and give a glimpse into the future of military space operations.
The X-37B is seen here on the runway after a successful completion of the sixth mission. – Staff Sergeant Adam Shanks / US Space Force
What do we know about the X-37B?
The X-37B, built by Boeing, is a cutting-edge spacecraft born out of NASA’s X-37 program. It embarked on its first flight in 2010 and has since been managed by various US military entities, including the US Space Force.
The US Space Force, established in 2019, recognizes the importance of space in future conflicts and aims to achieve space superiority through operations like space control.
The X-37B, despite not being a weapon itself, plays a crucial role in preparing the US for potential space warfare scenarios. Its capabilities are key in collecting data and testing new technologies in the space domain.
Recent maneuvers like the “aero brake” operation have showcased the agility and versatility of the X-37B, hinting at its potential role in future defense strategies.
While the specifics of the X-37B’s missions remain classified, its significance lies in its contribution to the US military’s readiness for an evolving space landscape.
War in Space: Where does the X-37B fit?
As space becomes increasingly congested with satellites and new technologies, the X-37B’s role in collecting data and testing capabilities is vital for understanding the evolving space environment.
The spacecraft’s ability to operate autonomously and perform complex maneuvers like aero braking sets it apart as a valuable asset in modernizing US space defense strategies.
While countries like China and Russia are also developing secretive space capabilities, the X-37B represents the US’s commitment to maintaining a competitive edge in space while adapting to new threats.
Overall, the X-37B serves as a reminder that space is no longer just a realm of exploration, but a frontier where countries must prepare for defense and strategic advantage.
About our experts
Vivienne Machi: Military space editor at Aviation Week, with a decade of experience covering international military and space technology.
Todd Harrison: Senior fellow at the American Enterprise Institute specializing in defense strategy, budget, and space policy.
Measuring just 4cm square, Google’s new computing chip offers unprecedented speed. In just five minutes, it can complete a task that would take conventional computers 10 septillion years to finish – a mind-boggling number far surpassing the age of our universe.
The chip, named Willow, is the size of an After Eight Mint and could revolutionize drug development by accelerating the experimental phase. Recent advancements suggest that within five years, quantum computing will transform research and development across various industries.
Willow boasts fewer errors, enhancing the potential of artificial intelligence. Quantum computing leverages matter existing in multiple states simultaneously to make vast calculations beyond previous capabilities, expediting advancements in medicine and technology.
However, concerns remain about security vulnerabilities posed by quantum computing – the ability to breach even the most robust encryption systems.
Google Quantum AI, alongside other entities like Microsoft, Harvard University, and Quantinuum, is working on harnessing quantum mechanics for computing. Overcoming challenges in error correction has paved the way for significant speed enhancements and groundbreaking developments.
Quantum processors are evolving rapidly, surpassing traditional computers and unlocking new possibilities for quantum computations. The potential for quantum computers to exist in multiple states simultaneously promises remarkable capabilities across various fields.
Dr Peter Leek, Research Fellow at the University of Oxford and founder of Oxford Quantum Circuits, acknowledges the rapid advancements in quantum computing technology. While applauding Google’s progress in error correction, he highlights the need for practical applicability in real-world scenarios.
As quantum computing approaches practical implementation, collaboration across various fields becomes crucial to navigate challenges and harness the full potential of this groundbreaking technology.
Recent respiratory disease epidemics have attracted a lot of attention, yet most respiratory monitoring is limited to physical signals. Exhaled breath condensate (EBC) is packed with rich molecular information that can reveal various insights into an individual's health. Now, Professor Wei Gao and colleagues at California Institute of Technology have developed EBCare, a mask-based device that monitors EBC biomarkers in real time. For example, the EBCare mask can monitor asthma patients for their levels of nitrite, a chemical that indicates airway inflammation.
This diagram shows how the smart mask detects breathed chemicals, such as nitrite, an indicator of airway inflammation. Images by Wei Gao and Wenzheng Heng, Caltech.
“Monitoring a patient's breathing is routinely done, for example to assess asthma and other respiratory diseases,” Prof Gao said.
“However, this method requires patients to visit a clinic to have a sample taken and then wait for the test results.”
“Since COVID-19, people have started wearing masks. We can leverage this increased use of masks for remote, personalized monitoring to get real-time feedback on one's health from the comfort of one's own home or office.”
“For example, we could use this information to evaluate how effective a medical treatment is.”
To selectively analyze the chemicals and molecules in your breath, you first need to cool them down and condense them into a liquid.
In a clinical setting, this cooling step is separate from the analysis: moist breath samples are cooled in a bucket of ice or a large refrigerated cooler.
The EBCare mask, on the other hand, is self-cooling, according to the team.
The breath is cooled by a passive system that combines hydrogel evaporative cooling and radiative cooling directly on the face mask.
“This mask represents a new paradigm for respiratory and metabolic disease management and precision medicine because wearing it daily allows for easy collection of breath samples and real-time analysis of exhaled chemical molecules,” said Wenzheng Heng, a graduate student at the California Institute of Technology.
“Breath condensate contains soluble gases as well as non-volatile substances in the form of aerosols and droplets, including metabolic products, inflammatory indicators and pathogens.”
Once the breath is converted into liquid, a series of capillaries in a bioinspired microfluidic device immediately transports the liquid to a sensor for analysis.
“We learned how to transport water from plants, which use capillary action to pull water up from the ground,” Professor Gao said.
“The analysis results are then sent wirelessly to an individual's phone, tablet or computer.”
“The smart mask can be prepared at a relatively low cost. The materials are designed to cost just $1.”
To test the masks, the authors conducted a series of human studies, focusing primarily on patients with asthma or COPD.
The researchers specifically monitored the patients' breath for nitrite, a biomarker of inflammation in both diseases.
Results showed that the masks accurately detected biomarkers indicative of inflammation in patients' airways.
In a separate experiment, the masks demonstrated that they could accurately detect subjects' blood alcohol levels, suggesting that they could potentially be used for field DUI checks and other alcohol consumption monitoring.
The team also explored how the mask can be used to assess blood urea levels for the monitoring and management of kidney disease.
As kidney function declines, by-products of protein metabolism, such as urea, accumulate in the blood.
At the same time, the amount of urea in saliva increases; the urea breaks down into ammonia gas, leading to high ammonium concentrations in the breath condensate.
The study showed that the smart mask could accurately detect ammonium levels, closely reflecting the urea concentration in blood.
“Our smart mask platform for EBC collection and analysis represents a major advancement in the potential for real-time monitoring of lung health,” said Professor Harry Rossiter, director of the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center.
“This concept, with the potential to add biosensors for a wide range of compounds in the future, highlights the groundbreaking potential of smart masks in health monitoring and diagnostics.”
No turning point in the history of the universe surpasses the birth of the first stars. As stars flickered into existence some 200 to 400 million years after the Big Bang, the energy they emitted ripped electrons from the atoms of the gas that had cooled as the universe expanded, reheating it in a process called reionization. Then, as the stars burned out and died, they created a cocktail of chemical elements that prepared the universe to give rise to galaxies, planets, and eventually life itself.
It's no wonder astronomers are itching to get a glimpse of this first generation of stars. To start with, they were spectacular: huge and blisteringly bright, thought to be 300 times more massive and 10 times hotter than the Sun. But observing them could also tell us a lot about the mysterious early stages of the Universe, particularly how the universe came to be flooded with supermassive black holes in an incredibly short space of time.
Now we may finally be on the brink. Earlier this year, astronomers reported that the James Webb Space Telescope (JWST), by fixing its excellent field of view on the outer edges of very distant galaxies, may already have seen evidence of the first stars. “The observations we can now make really expand our knowledge,” says Hannah Ubler of the University of Cambridge.
The signal may yet turn out to be a false alarm, but what is exciting is that other researchers are now searching for different features in light from the early universe that could also be signatures of the first stars.
I’ll be 60 in just over 5 years, which is a big deal. I already have one age-related disease (high blood pressure), and the odds are good that I will have been diagnosed with at least one more by then. After that, the symptoms of age will pile up and bring me to my inevitable end. Many of you will no doubt be in a similar situation. We are living longer than ever before, but those extra years don’t necessarily come with good health.
But judging by recent trends, my sons may be luckier. Instead of facing a long list of common diseases in their 70s and 80s, they may be able to celebrate middle age with vaccinations that immunize them against Alzheimer’s, cancer, and hypertension. What’s more, they may even have access to an anti-aging panacea that vaccinates against all of these and more, allowing them to enter old age in better health than most of us today could hope to achieve.
Suddenly, an ancient medical technique looks set to become a game changer in the fight against diseases associated with age. Vaccines, best known as injections against infectious diseases like COVID-19 and measles, are now showing promise for treating non-infectious conditions, particularly those associated with age. The field is advancing rapidly, and there are signs that, if the winds blow right, I and others my age might also benefit from these vaccinations.
Humanity has always dreamed of traveling beyond our solar system to the stars, but the vastness of the universe has kept us grounded. Even our closest stellar neighbor, Proxima Centauri, is a staggering 4.24 light years away, far beyond the reach of conventional rockets within a human lifetime.
Recently, on April 23, NASA launched the Advanced Composite Solar Sail System from New Zealand, a system that uses lightweight sails to propel spacecraft instead of traditional rockets. This development has excited both experts and science fiction fans, as it opens up possibilities for long-distance space travel.
How a solar sail works
Instead of using thrusters and fuel like traditional spacecraft, solar sail systems use large reflective sails that gain momentum from photons emitted by the sun. This enables a spacecraft to accelerate without the limitations of onboard fuel: in space, where there is no air resistance, the sun’s slight but continuous push is all that’s needed for propulsion.
Solar sails operate much like sailing ships, except that they harness the momentum of photons rather than wind. By drawing on the sun’s energy, spacecraft can cover vast distances without carrying propellant.
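As a rough illustration of how gentle that push is, the ideal radiation-pressure force on a flat, perfectly reflective sail at Earth's distance from the sun can be estimated in a few lines. The 80 m² sail area matches the figure quoted for NASA's ACS3 demonstration; the 16 kg spacecraft mass is an assumed round number for illustration, not a published specification.

```python
# Sketch: ideal radiation-pressure force on a perfectly reflective solar sail
# at 1 AU. Sail area follows the 80 m^2 ACS3 demo; the 16 kg mass is assumed.

SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight intensity at 1 AU
C = 299_792_458.0         # m/s, speed of light

def sail_force(area_m2: float, reflectivity: float = 1.0) -> float:
    """Force in newtons on a sail facing the sun at 1 AU.

    A perfect reflector receives twice the photon momentum flux
    (incoming plus reflected), hence the factor (1 + reflectivity).
    """
    return (1.0 + reflectivity) * SOLAR_CONSTANT * area_m2 / C

force = sail_force(80.0)   # roughly 7e-4 N for an 80 m^2 sail
accel = force / 16.0       # m/s^2, assuming a 16 kg CubeSat

print(f"Force: {force:.2e} N, acceleration: {accel:.2e} m/s^2")
```

The force is under a millinewton, but because it never runs out, the velocity gained compounds over months and years.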
How fast can an interstellar probe travel with a solar sail?
The speed of a solar sail system depends on factors like the size of the sail, the spacecraft’s mass, and its distance from the sun. With maneuvers such as close solar slingshots, plus potential laser boosts from Earth, solar sail spacecraft could in principle achieve speeds approaching 20% of the speed of light.
Future solar sail systems could reach speeds up to 20 percent of the speed of light. – Image credit: NASA/Aero Animation/Ben Schweighart
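Taking that 20%-of-light-speed figure at face value, the one-way trip time to Proxima Centauri follows from simple arithmetic; this sketch ignores the years spent accelerating and decelerating and just divides distance by cruise speed.

```python
# Sketch: coasting travel time to a star, ignoring acceleration phases.
# Distance in light years divided by speed as a fraction of c gives years.

def travel_time_years(distance_ly: float, fraction_of_c: float) -> float:
    """One-way coasting travel time in years, as seen from Earth."""
    return distance_ly / fraction_of_c

# Proxima Centauri at the ~20% of light speed quoted for boosted sails:
print(travel_time_years(4.24, 0.20))  # 21.2 years
```

About two decades, which is why solar sails are the first propulsion concept to make interstellar probes look even remotely practical.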
Will humanity ever be able to sail to another planet?
Potentially, solar sail technology could pave the way for human interstellar travel in the future. However, there are challenges, such as sustaining long-term missions for generations and addressing relativistic effects caused by near-light speed travel.
What exactly is NASA's solar sail mission?
NASA’s Advanced Composite Solar Sail System is a demonstration of solar sail technology that aims to test a new lightweight boom made of flexible materials. The mission involves a CubeSat deploying an 80 square meter sail in orbit to gather data for future solar sail missions.
About our experts
Patrick Johnson is an associate professor at Georgetown University with expertise in quantum mechanics. He authored the book “The Physics of Star Wars” and has contributed to scientific journals such as Physical Review.
Twenty years ago, scientists announced the creation of a new miracle substance that would revolutionize our lives. They named it graphene.
Graphene is made up of a single layer of carbon atoms arranged in a hexagonal pattern, making it one of the strongest materials ever produced. It conducts electricity better than copper and has excellent heat conductivity.
The potential applications of graphene seemed limitless, with predictions of ultra-fast processors, quicker battery charging, and stronger concrete. It was even proposed as a solution for potholes in roads.
Professor Andre Geim (left) and Professor Konstantin Novoselov from the University of Manchester discovered graphene. Photo: John Super/AP
The scientists behind the discovery, Andre Geim and Konstantin Novoselov, received the Nobel Prize in Physics in 2010 for their work. The National Graphene Institute was established at the University of Manchester.
Despite the initial hype, the graphene revolution has not materialized as expected. Challenges in scaling up production have hindered its widespread adoption.
Sir Colin Humphreys, a materials science professor at Queen Mary University of London, pointed out that the main issue lies in the difficulty of producing graphene on a large scale.
He explained that the original method of creating graphene was not conducive to mass production and that significant investments by companies like IBM, Samsung, and Intel have been made to develop scalable production methods.
Recent advancements in manufacturing techniques show promise for a graphene resurgence. Companies like Paragraf are now producing graphene-based devices in large quantities.
Graphene-based devices are being used for various applications, including sensors for detecting magnetic fields and differentiating between bacterial and viral infections.
Additionally, graphene devices are expected to be more energy-efficient than current technologies, offering a promising future for the material.
While the graphene revolution may have been delayed, it holds the potential to address pressing global challenges and significantly impact modern life.
Graphene “has the potential to make a real difference to modern life,” says Sir Colin Humphreys, professor of materials science.
Photo: AddMeshCube/Alamy
Hyped science that failed to make the grade
Nuclear power: “Our children will enjoy in their homes electrical energy too cheap to meter.” – Lewis Strauss, then chairman of the U.S. Atomic Energy Commission, in 1954.
Sinclair C5: “This is the future of transportation” – promotional materials for the 1985 Sinclair C5 electric tricycle. First-year sales were predicted at 100,000 units, but only 5,000 were sold and the project was abandoned.
Medical advances: “The time has come to close the book on infectious diseases and declare that the war on epidemics has been won” – Dr. William H. Stewart, Surgeon General of the United States from 1965 to 1969.
Living a healthier life can be achieved in many ways. Simple activities like daily walks, healthy eating, and brain-boosting puzzles like Sudoku can keep your mind and body active. For a more unusual approach, consider neuromodulation, which involves delivering mild electrical pulses to the brain.
Neuromodulation uses a stimulator placed on the head to deliver small electrical pulses to the nervous system. This non-invasive technique is claimed to offer numerous health benefits and has gained traction as a cutting-edge wellness technology.
The concept of neuromodulation has been around for some time, but companies like Parasym and GammaCore have reignited interest in recent years. These companies claim their devices, which can be used conveniently at home, improve mental performance and overall health.
Research from reputable institutions, including University College London (UCL) and Harvard University, supports the effectiveness of some forms of neuromodulation. Even tech entrepreneurs like Bryan Johnson have shown interest in the technology.
What is neuromodulation and how does it work?
Neuromodulation is a technique that alters neural activity by delivering electrical signals to specific areas. Imagine it as a dimmer switch that can increase or decrease nerve or brain activity. This method can excite or inhibit nerves to alleviate pain and modify neural patterns associated with various conditions like epilepsy and Parkinson’s disease.
Companies like Parasym use “auricular vagal neuromodulation therapy” to deliver electrical signals through the ear to target the vagus nerve, which plays a crucial role in connecting the brain, heart, and digestive system.
How technology can slow aging
Neuromodulation can help slow down the aging process by combating chronic inflammation, enhancing cognitive function, and improving cardiovascular health. Research shows promising results in addressing age-related issues like Alzheimer’s disease and heart conditions.
While neuromodulation offers benefits like improved heart rate variability and reduced fatigue and depression, it remains in the early stages of development. Safety concerns and experimental results underscore the need for further research and validation.
Is neuromodulation safe?
Neuromodulation has evolved since its inception in the 1960s, with modern devices providing safer options for users. Implantable devices offer more effective treatment but come with higher risks, including infections and other complications.
Non-invasive wearable devices like those from Parasym are considered safer, with minor side effects like skin irritation being the main concern. These devices require consistent use to deliver optimal results, making them a more accessible but less durable alternative to implantable devices.
While neuromodulation technology shows promise in improving health and well-being, users should weigh the benefits against the costs and potential risks before investing in these innovative devices.
Noland Arbaugh can play chess using his Neuralink implant
Neuralink
Neuralink, the brain-computer interface company founded by Elon Musk, has revealed the identity of its first patient who says its implant “changed his life.” But experts say it’s not yet clear whether Neuralink has done more than replicate existing research efforts.
Who was Neuralink’s first patient?
Musk announced in January that the first human patient had received a Neuralink implant, but few details were released at the time. Now, thanks to a livestream video released by the company, we know who that person is and how the trial is going.
Noland Arbaugh explains in the video that an accident eight years ago dislocated his fourth and fifth vertebrae, leaving him quadriplegic. He previously controlled a computer with a mouth-operated interface; in the video he is shown moving the cursor with his thoughts alone, apparently using the Neuralink implant.
“Once I started imagining the cursor moving, it became intuitive,” Arbaugh says in the video. “Basically, it was like using ‘force’ on the cursor, and I was able to move the cursor anywhere I wanted. I could just look anywhere on the screen and the cursor would move where I wanted it. It was a very wild experience.”
He uses the device for reading, language learning, and computer games such as chess, and claims he uses it for up to eight hours at a time, at which point he needs to charge the device. “It’s not perfect, I’ve run into some problems. But it’s already changed my life,” he says.
What does the implant contain?
Neuralink did not respond to requests for an interview, but its website says the current-generation coin-sized implant, called N1, records neural activity through 1,024 electrodes distributed across 64 threads that extend into the user’s brain. These threads are so fine that they must be placed by a surgical robot.
In a livestream video, Arbaugh said he was discharged from the hospital the day after his implant surgery, and that from his perspective the surgery was a relatively simple process.
The implant uses a small battery that is charged through the skin by an inductive charger and communicates wirelessly with an app on the user’s smartphone.
Does this mean the first human trials were successful?
Reinhold Scherer, a researcher at the University of Essex in the UK, says it is too early to judge whether Neuralink’s first human trial was a success because the company “has not released enough information to form an informed opinion.”
“While the video is impressive, and there is no doubt that it took a lot of research and development work to get to this stage, it is unclear whether what is being shown is new or groundbreaking,” he says. “Although control appears to be stable, most of the studies and experiments presented so far are primarily replications of past studies. Replication is good, but major challenges still remain.”
Who else is working on brain implants?
Neuralink isn’t the only group exploring this idea. A number of academic organizations and commercial startups have already conducted human experiments that have successfully interpreted brain signals and produced some sort of output.
A team at Stanford University in California placed two small sensors just below the surface of the brain of a man who was paralyzed from the neck down. The researchers were able to interpret his brain signals as he imagined writing by hand and translate them into text on a computer screen.
When will Neuralink be available and how much will it cost?
It’s too early to tell, as the technology has a long way to go before it becomes a commercial product, with much testing and certification still to come. But Musk has made it clear that he intends to commercialize it: the first planned product, named Telepathy, would allow users to control their mobile phones and computers.
On Sunday, January 22, 1984, the Los Angeles Raiders defeated the Washington Redskins 38-9 in Super Bowl XVIII. Yet apart from a few older Raiders fans, what most of us remember from that night 40 years ago is a single ad, one that set the tone for the techno-optimism that would dominate the 21st century.
The ad showed an auditorium full of zombie-like figures watching a projection of an elderly leader resembling the Emperor from 1980’s The Empire Strikes Back. As armored police rushed in to stop her, a young, athletic woman wearing red and white (the colors of the flag of Poland, which had waged a massive labor uprising against its Soviet-controlled communist state) spun a hammer and hurled it across the auditorium into the screen bearing the leader’s face.
The ad explicitly referenced George Orwell’s dystopian novel 1984. Meanwhile, then-President Ronald Reagan was beginning his re-election campaign by audaciously confronting the threat of the totalitarian Soviet Union, raising the risk of global nuclear annihilation.
That same month, Apple began selling the Macintosh personal computer. It would change the way we think about the place of computing technology in our lives and set in motion many of the ideological shifts that would drive the 21st century. In many ways, the long 21st century began 40 years ago this week.
From a garage-based startup in Cupertino, California, Apple has steadily grown into the most valuable company in the history of the world, changing the way we experience culture and each other. It was not the only force to do so; like other ruling powers that left their mark on 1984, such as Reagan, Apple was part of a larger change in how we view and govern ourselves. Forty years later, it still shapes daily life in ways few could have imagined at the time.
Before the Macintosh debuted, Apple was highly regarded among computer enthusiasts for producing innovative desktop computers such as the Apple II (1977), which ran programs under the standard operating systems of the time, including the Apple Disk Operating System (a counterpart to MS-DOS, supplied by a small startup called Microsoft), and could be programmed in languages such as BASIC.
Companies like Texas Instruments and Atari had brought user-friendly computers into homes before the Macintosh, and IBM and Commodore had made desktop computers for businesses, but the Macintosh promised something different.
The Macintosh created a mass market for usable computers that seemed more like magic than machines. It was a sealed box that hid the boards and cables behind a sleek design, establishing the design standards for what would become the MacBook and the iPhone, released in 2007 and the most influential and profitable of Apple’s products.
The iPhone represents much of what is appealing and loathsome about 21st-century life. It is a device that does things no other device or technology can do, but it provides all of that in its own controlled environment, one that masks the actual technology and the human agency that created it. For all we know, there may be a little elf in there.
Billions of people now use such devices, but few ever look inside or think about the people who mined the metals and assembled the parts in dangerous conditions. There are now cars and appliances designed to feel like an iPhone: all glass, metal, curves, and icons. None of them offer any clues to the humans who must build or maintain them. Everything seems like magic.
This shift to magic by design has blinded us to the real conditions in which most people in the world work and live. Sealed devices are like gated communities. What’s more, these sealed boxes, equipped with ubiquitous cameras and location sensors and connected through invisible radio signals, serve as a global surveillance system beyond anything Soviet dictators ever dreamed of. We have also entered a world of soft control beyond Orwell’s imagination.
Gated communities began to grow in popularity in the United States during the Reagan administration, offering the illusion of safety against imagined but undefined invaders. They also resembled private states, with exclusive membership and strict rules of etiquette.
Reagan won reelection in a landslide in November 1984. His victory established a nearly unwavering commitment to market fundamentalism and technological optimism that was largely adopted by his critics and even by successors like Bill Clinton and Barack Obama. Outside the United States, ostensibly left-wing leaders such as Greece’s Andreas Papandreou, France’s François Mitterrand, and Britain’s Tony Blair limited their visions of change to what the growing neoliberal consensus allowed.
By the beginning of this century, questioning the techno-optimism embodied by Apple and the faith in neoliberalism secured by Reagan’s hold on the world’s political imagination seemed like mere sulking. Did anyone still doubt the democratizing and liberating potential of computer technology and free markets?
Now, a quarter of the way through this century, it’s clear that the only promises kept were those made to Apple’s shareholders and the heirs of Reagan’s politics. Democracy is in tatters around the world. Networked computers rob relationships, communities, and society of joy and humanity. The economy is more stratified than ever before. Politics excludes any positive vision of a better future.
Of course, you can't blame Apple or Reagan. They simply distilled, harnessed, and sold back to us what we longed for: a simple story of inevitable progress and liberation. If we had heeded the warnings in Orwell's book instead of Apple's ads, we might have learned that simple stories never have happy endings.
We have all experienced vomiting at some stage in our lives, whether from a nasty bout of food poisoning or the norovirus outbreaks that periodically sweep through the population. And we can all agree that it’s miserable.
But imagine what constant nausea and vomiting would do to you physically, mentally, and emotionally if it struck at a critical stage of your life. This is the reality for the 4 in 5 women who experience nausea and vomiting during pregnancy. Even mild cases can cause unpleasant symptoms such as queasiness, loss of appetite, and vomiting.
According to the Office for National Statistics, in 2022 this led to around 20,000 women being hospitalized.
But until recently, little was known about the causes of nausea and vomiting during pregnancy. Anecdotal wisdom holds that the more nausea and vomiting you have, the healthier your pregnancy, and even that it is related to the number of babies you are carrying.
However, real-world evidence shows this is not true. In fact, nausea and vomiting can vary widely in severity and pattern during pregnancy.
Often referred to as “morning sickness,” nausea and vomiting during pregnancy can occur at any time of the day or night. Usually it’s worse for the first 12 weeks, then it calms down. However, for many women, it lasts throughout the pregnancy.
However, after more than 20 years of research in this field, a breakthrough has identified a causal mechanism. The work was driven by Dr. Marlena Fejzo, a geneticist at the Keck School of Medicine at the University of Southern California.
Fejzo was inspired to pursue this research after suffering severe nausea and vomiting during her second pregnancy in 1999. She was unable to eat or drink without vomiting, lost weight rapidly, and became so weak that she could no longer stand or walk.
Doctors, however, were skeptical, suggesting she might be exaggerating her symptoms to get attention. Fejzo was eventually hospitalized, and she miscarried at 15 weeks.
Fejzo went on to conduct genetic research on previously pregnant women in collaboration with 23andMe, a private company that lets individuals submit DNA samples for insights into their health and ancestry.
She identified a link between severe nausea and vomiting during pregnancy (severe enough to require intravenous fluids) and a variant of the gene encoding GDF15, a hormone that acts on the brain stem.
This association pinpointed the need for further research to understand the role of the GDF15 protein in pregnancy.
GDF15 is secreted by the placenta during the first two trimesters of pregnancy and likely plays a role in preventing the mother from biologically rejecting the baby, which is essential for the pregnancy to continue. GDF15 has also been shown to regulate body weight and appetite through the brain; it is produced in excess in cancer patients who suffer severe loss of appetite and weight.
Building on that earlier work, research led by Fejzo and University of Cambridge Professor Stephen O'Rahilly found that high levels of GDF15 were seen in women with severe nausea and vomiting during pregnancy. However, the hormone's effects appear to depend on a woman's sensitivity, which is shaped by her exposure to GDF15 before pregnancy: women with higher pre-pregnancy exposure had high levels of GDF15 during pregnancy but no symptoms of nausea or vomiting.
It has been hypothesized that long-term exposure to GDF15 before pregnancy may have a protective effect and reduce a woman’s sensitivity to the hormonal surge caused by fetal development.
This exposure relationship is striking, and beyond deepening our understanding it suggests a possible treatment: women might be desensitized by increasing their exposure to the hormone before pregnancy, much as some food allergies are treated with controlled exposure therapy.
Many common symptoms affecting women, such as nausea and vomiting during pregnancy, remain poorly understood despite their very high incidence. Women’s healthcare is not a niche, and there is much to understand and learn through this kind of research.
Illustration of Sierra Space’s first Dream Chaser, DC#1 (Tenacity). The Dream Chaser spacecraft developed by Sierra Space for NASA is gearing up for a demonstration mission to the ISS in 2024, with a focus on cargo delivery and in-orbit certification. Credit: Sierra Space
Sierra Space’s Dream Chaser spacecraft is scheduled for a demonstration flight to the International Space Station in 2024, during which it will deliver cargo and undergo various on-orbit tests to prove its operational readiness for future missions. NASA and Sierra Space are making progress toward the maiden flight of the company’s Dream Chaser spacecraft to the station. The unmanned cargo spaceplane is scheduled to begin demonstration missions to the orbiting complex in 2024 as part of NASA’s commercial resupply services.
Dream Chaser and Shooting Star
Manufactured by Sierra Space of Louisville, Colorado, the Dream Chaser cargo system consists of two main elements: the Dream Chaser spacecraft and the Shooting Star cargo module. A lifting-body spacecraft designed to be reused up to 15 times, Dream Chaser is based on the HL-20 spacecraft developed at NASA’s Langley Research Center in Hampton, Virginia. Shooting Star, the spaceplane’s companion cargo module, is designed to support the transportation and disposal of pressurized and unpressurized cargo to the space station. The cargo module can be used only once and is disposed of during reentry.
The Dream Chaser system will launch atop a ULA (United Launch Alliance) Vulcan Centaur rocket from Space Launch Complex 41 at Cape Canaveral Space Force Station in Florida, with its wings folded to fit inside a 5-meter fairing. Fairing panels protect the spacecraft during ascent and are discarded once it reaches orbit. Dream Chaser’s cargo module and wing-mounted solar arrays will deploy before an autonomous rendezvous with the space station. In an emergency, Dream Chaser is designed to be ready for launch within as little as 24 hours.
NASA and Sierra Space are making progress toward the company’s Dream Chaser spacecraft’s maiden flight to the International Space Station. The unmanned cargo spaceplane is scheduled to begin demonstration missions to the orbiting complex in 2024 as part of NASA’s commercial resupply services. Credit: Sierra Space
Mission overview
During the first flight, Sierra Space will conduct in-orbit demonstrations to qualify Dream Chaser for future missions. Teams from NASA’s Kennedy Space Center in Florida, NASA’s Johnson Space Center in Houston, and the Dream Chaser Mission Control Center in Louisville, Colorado will monitor the flight. Sierra Space flight controllers will operate the Dream Chaser spacecraft from launch until it is handed over to the Sierra Space ground operations team at NASA Kennedy after landing.
The far-field demonstrations will be conducted outside the vicinity of the space station, before the spacecraft enters the approach ellipsoid, an invisible 2.5-by-1.25-by-1.25-mile (4-by-2-by-2-kilometer) boundary around the orbiting laboratory. These demonstrations, required before Dream Chaser enters joint operations with NASA's team at the Mission Control Center in Houston, include attitude control, translational maneuvers, and abort capability.
Near-field demonstrations are performed in close proximity to the space station and include activating and using light detection and ranging (LIDAR) sensors, responding to commands sent from the station, retreating from the station on command, and holding position at a series of hold points: first 1,083 feet (330 meters) from the station, then 820 feet (250 meters), and finally 98 feet (30 meters). After successfully completing these demonstrations, Dream Chaser will proceed toward the space station.
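The staged approach above can be sketched as a simple hold-point sequence. This is an illustrative toy model only; the function name, distances list, and logic are assumptions for the example and are not Sierra Space's actual rendezvous software.

```python
# Hold points from the near-field demonstration, in meters from the
# station, ordered from farthest to closest. (Illustrative sketch only.)
HOLD_POINTS_M = [330, 250, 30]

def approach_sequence(start_distance_m: float) -> list:
    """Return the hold points the vehicle pauses at, in order,
    starting from the given distance and closing toward the station."""
    return [d for d in HOLD_POINTS_M if d < start_distance_m]

# Starting well outside the first hold point, the vehicle pauses at
# 330 m, then 250 m, then 30 m before final approach.
print(approach_sequence(4000))  # [330, 250, 30]
```

In the real procedure, the spacecraft holds at each point until cleared to proceed; the sketch only captures the ordering of the pauses.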
As Dream Chaser approaches the orbiting laboratory, it will eventually park approximately 38 feet (11.5 meters) from the space station, where the station's crew will use the Canadarm2 robotic arm to grapple a fixture on the cargo module while teams on the ground monitor the operation. The arm will then attach the cargo module to an Earth-facing port of the Unity or Harmony module.
Dream Chaser will carry more than 7,800 pounds of cargo on its first flight to the International Space Station. On future missions, Dream Chaser is designed to remain at the station for up to 75 days and deliver up to 11,500 pounds of cargo. Cargo can be loaded onto the spacecraft up to 24 hours before launch. Dream Chaser can return more than 3,500 pounds of cargo and experiment samples to Earth, and more than 8,700 pounds of trash can be disposed of during reentry via its cargo module.

Return to Earth

Dream Chaser will remain at the space station for approximately 45 days before being unberthed by Canadarm2. After departure, the spacecraft can land as soon as 11 to 15 hours later, with daily landing opportunities if weather conditions permit.
Dream Chaser’s landing weather criteria typically require crosswinds of less than 17.2 mph (15 knots), headwinds of less than 23 mph (20 knots), and tailwinds of less than 11.5 mph (10 knots). Thunderstorms, lightning, or rain within a 20-mile radius of the runway, or within 10 miles of the approach path, rule out a landing attempt. Detailed flight rules help controllers determine whether a landing opportunity is favorable.
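The limits above amount to a go/no-go check, which can be expressed in a few lines. The function name, parameters, and flat structure are illustrative assumptions; the actual flight rules are far more detailed than this sketch.

```python
def landing_weather_go(crosswind_kt: float, headwind_kt: float,
                       tailwind_kt: float, storm_within_20mi: bool,
                       rain_on_approach_10mi: bool) -> bool:
    """Return True only if every published wind limit is met and no
    thunderstorms, lightning, or rain violate the keep-out zones.
    (Illustrative sketch, not Sierra Space's real flight rules.)"""
    winds_ok = (crosswind_kt < 15 and headwind_kt < 20 and tailwind_kt < 10)
    precip_ok = not (storm_within_20mi or rain_on_approach_10mi)
    return winds_ok and precip_ok

# Calm winds and clear skies: go for landing.
print(landing_weather_go(5, 10, 3, False, False))  # True
# A thunderstorm within 20 miles of the runway rules out the attempt.
print(landing_weather_go(5, 10, 3, True, False))   # False
```

Because every criterion is a hard limit, a single violated threshold is enough to wave off the landing opportunity.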
A combination of Dream Chaser’s 26 reaction control system thrusters will fire to take the spacecraft out of orbit. Dream Chaser will then reenter Earth’s atmosphere and glide, in the style of NASA’s Space Shuttle, to a runway landing at Kennedy’s Launch and Landing Facility, becoming the first spacecraft to land there since the Space Shuttle’s final flight in 2011. Once Dream Chaser is powered down after landing, Sierra Space’s ground operations team will transport it to the Space Systems Processing Facility for inspections, unload the remaining NASA cargo, and begin preparations for its next mission. Sierra Space (formerly Sierra Nevada Corporation) was selected in 2016 to provide NASA’s third commercial cargo resupply spacecraft serving the International Space Station.