Nobel Prize Winner Plans to Develop World’s Most Powerful Quantum Computer

Ryan Wills, New Scientist. Alamy

John Martinis is a leading expert in quantum hardware, who emphasizes hands-on physics rather than abstract theories. His pivotal role in quantum computing history makes him indispensable to my book on the subject. As a visionary, he is focused on the next groundbreaking advancements in the field.

Martinis’s journey began in the 1980s with experiments that pushed the limits of quantum effects, earning him a Nobel Prize last year. During his graduate studies at the University of California, Berkeley, he tackled the question of whether quantum mechanics could apply to larger scales, beyond elementary particles.

Collaborating with colleagues, Martinis developed circuits combining superconductors and insulators, demonstrating that multiple charged particles could behave like a single quantum entity. This discovery opened up the macroscopic quantum regime, which forms the backbone of modern quantum computers developed by giants like IBM and Google. His work led to the adoption of superconducting qubits, now the most common type of quantum bit.

Martinis made headlines again when he spearheaded the Google team that built the first quantum computer to claim quantum supremacy. For nearly five years, no classical computer could reproduce the outputs of the random quantum circuits that machine ran, until improved classical algorithms eventually matched its performance.

Now in his late 60s, Martinis still believes in the potential of superconducting qubits. In 2024, he co-founded QoLab, a quantum computing startup proposing new methods aimed at building a genuinely practical quantum computer.

Carmela Padavich Callahan: Early in your career, you fundamentally impacted the field. When did you realize your experiments could lead to technological advancements?

John Martinis: People assumed that macroscopic variables wouldn’t behave quantum mechanically, and as a novice in the field, I felt it was essential to test that assumption. It was a fundamental quantum mechanics experiment, and it intrigued me even though it initially seemed daunting.

Our first attempt was a quick, simple experiment using the technology of the day. It failed, but I regrouped, learned microwave engineering, and we worked through numerous technical challenges before achieving our later successes.

Over the next decade, our work on quantum devices developed alongside a solid foundation for quantum computing theory, including Shor’s breakthrough algorithm for factoring the large numbers that underpin modern cryptography.

How has funding influenced research and the evolution of technology?

Since the 1980s, the landscape has transformed dramatically. Initially, there was uncertainty about manipulating single quantum systems, but quantum computing has since blossomed into a vast field. It’s gratifying to see so many physicists employed to unravel the complexities of superconducting quantum systems.

Your involvement during quantum computing’s infancy gives you a unique perspective on its trajectory. How does that inform your current work?

Having worked in the field for decades, I possess a deep understanding of the fundamentals. My team at UC Santa Barbara developed early microwave electronics, and I later contributed to foundational technology at Google for cooling superconducting quantum computers. I appreciate both the challenges and opportunities in scaling these complex systems.

Cryostat for Quantum Computers

Mattia Balsamini/Contrasto/Eyeline

What changes do you believe are necessary for quantum computers to become practical? What breakthroughs do you foresee on the horizon?

After my tenure at Google, I reevaluated the core principles behind quantum computing systems, leading to the founding of QoLab, which introduces significant changes in qubit design and assembly, particularly regarding wiring.

We recognized that making quantum technology more reliable and cost-effective requires a fresh perspective on the construction of quantum computers. Despite facing skepticism, my extensive experience in physics affirms that our approach is on the right track.

It’s often stated that achieving a truly functional, error-free quantum computer requires millions of qubits. How do you envision reaching that goal?

The most significant advancements will arise from innovations in manufacturing, particularly in quantum chip fabrication, which is currently outdated. Many leading companies still use techniques reminiscent of the mid-20th century, which is puzzling.

Our mission is to revolutionize the construction of these devices. We aim to minimize the chaotic interconnections typically associated with superconducting quantum computers, focusing on integrating everything into a single chip architecture.

Do you foresee a clear leader in the quest for practical quantum computing in the next five years?

Given the diverse approaches to building quantum computers, each with its engineering hurdles, fostering various strategies is valuable for promoting innovation. However, many projects do not fully contemplate the practical challenges of scaling and cost control.

At QoLab, we adopt a collaborative business model, leveraging partnerships with hardware companies to enhance our manufacturing capabilities.

If a large-scale, error-free quantum computer were available tomorrow, what would your first experiment be?

I am keen to apply quantum computing solutions to challenges in quantum chemistry and materials science. Recent research highlights the potential for using quantum computers to optimize nuclear magnetic resonance (NMR) experiments, as classical supercomputers struggle with such complex quantum issues.

While others may explore optimization or quantum AI applications, my focus centers on well-defined problems in materials science, where we can craft concrete solutions with quantum technologies.

Why have mathematically predicted quantum applications not materialized yet?

While theoretical explorations of qubit behavior are promising, real qubits face significant noise, making practical implementations far more complex. Theorists grasp the theory comprehensively but often overlook the intricacies of hardware development.

Through my training with John Clarke, I cultivated a strong focus on noise reduction in qubits, which proved essential in the experiments demonstrating quantum supremacy. Addressing these challenges requires dedication to understanding the intricacies of qubit design.

As we pursue advancements, a dual emphasis on hardware improvements and application innovation remains crucial in the journey to unlock quantum computing’s full potential.


Source: www.newscientist.com

Archaeologists Develop First 3D Model of Easter Island’s Primary Moai Quarry

Evidence from ethnohistory and recent archaeology indicates that Easter Island (Rapa Nui) had a politically decentralized structure, organized into small kin-based communities that operated with a degree of autonomy throughout the island. This raises significant questions regarding the over 1,000 monumental statues (moai). Was the production process at Rano Raraku, the main moai quarry, centrally managed, or did it reflect the decentralized patterns observed on the island? Archaeologists used a dataset of more than 11,000 UAV images to create the first comprehensive three-dimensional model of the quarry and examine these competing hypotheses.

3D model of Rano Raraku quarry. Image credit: Lipo et al., doi: 10.1371/journal.pone.0336251.

The monumental moai of Easter Island stand as one of the most remarkable archaeological achievements in Polynesia, with over 1,000 megalithic statues spread across a volcanic island of just 63 square miles.

This significant investment in monumental architecture seems paradoxical when compared to ethnohistorical records that consistently depict Rapa Nui society as composed of relatively small, rival kin-based groups rather than a centralized polity.

Early ethnographers described a sociopolitical environment with numerous matas (clans or tribes) maintaining distinct territorial boundaries, independent ceremonial centers, and autonomous leadership structures.

This leads to the question of whether the construction of the moai was similarly decentralized.

In a recent study, Professor Carl Lipo from Binghamton University and his team compiled over 11,000 images of Rano Raraku, a key moai quarry, and developed a detailed 3D model of the site, which includes hundreds of moai at various stages of completion.

“For archaeologists, quarries are like an archaeological Disneyland,” Professor Lipo stated.

“Everything you can imagine about the making of a moai is represented here, as most of the crafting was performed directly on site.”

“This has always been a goldmine of information and cultural significance, yet it remains greatly under-documented.”

“The rapid advancement in technology is astounding,” noted Dr. Thomas Pingel of Binghamton University.

“The quality of this model surpasses what was achievable just a few years ago, and the ability to share such a detailed model accessible from anyone’s desktop is exceptional.”

In-depth analysis of the model revealed 30 distinct quarrying centers, each exhibiting different carving techniques, indicating multiple independent working zones.

There is also evidence of the moai being transported in various directions from the quarry.

These observations imply that moai construction, like the broader societal structure of Rapa Nui, lacked central organization.

“We are observing individualized workshops that cater specifically to different clan groups, focusing on particular areas,” said Professor Lipo.

“From the construction site, you can visually identify that specific groups created a series of statues together, indicating separate workshops.”

This finding challenges the prevalent assumption that such large-scale monument production necessitates a hierarchical structure.

The similarities among the moai appear to be the result of shared cultural knowledge rather than collaborative efforts in carving the statues.

“Much of the so-called ‘Rapanui mystery’ arises from the scarcity of publicly available detailed evidence that would empower researchers to assess hypotheses and formulate explanations,” stated the researchers.

“We present the first high-resolution 3D model of the Rano Raraku Moai Quarry, the key site for nearly 1,000 statues, offering new perspectives on the organization and manufacturing processes behind these massive megalithic sculptures.”

The findings are detailed in a paper published online on November 26, 2025 in the journal PLOS ONE.

_____

C.P. Lipo et al. 2025. Production of megalithic statues (moai) at Rapa Nui (Easter Island, Chile). PLOS ONE 20 (11): e0336251; doi: 10.1371/journal.pone.0336251

Source: www.sci.news

The Competition to Develop the Ultimate Self-Driving Car Heats Up | Technology

Greetings! Welcome to TechScape. I’m your host, Blake Montgomery, writing from Barcelona, where my culinary adventures have, quite literally, turned half of me into ham.

Who will lead the self-driving car industry?

The global rollout of self-driving cars is on the horizon. Next year, leading companies from the United States and China plan to expand their operations considerably and introduce robotaxis in major cities worldwide. These firms are akin to male birds strutting to attract a mate, setting the stage for upcoming worldwide rivalries.

On the U.S. front, we have Waymo, Google’s autonomous vehicle venture, into which it has poured billions over the last 15 years. After extensive testing, the company opened its robotaxi service to the public in San Francisco in June 2024 and has expanded significantly since. Waymo vehicles are now a common sight across much of Los Angeles, with launches planned for Washington, D.C., New York City, and London next year.

On November 2nd, Chinese tech giant Baidu threw down a challenge to Google: it claimed that its autonomous vehicle division, Apollo Go, conducts 250,000 rides weekly, matching Waymo, which only reached that milestone this spring.

Most electric vehicles in China are priced significantly lower than their American counterparts, even without self-driving capabilities. Experts estimate that a single Waymo vehicle costs hundreds of thousands to manufacture, though exact figures remain unclear. “The hardware costs for our vehicles are much less than Waymo’s,” declared the CFO of Pony AI, a leading Chinese self-driving firm, to the WSJ.

To recoup its billion-dollar investment in Waymo, Google must persuade potential customers of its superior quality.

Google is highlighting transparency as a distinguishing factor. Much less data is available about Baidu’s vehicles, raising concerns about their safety record. Baidu asserts that its vehicles have amassed millions of miles without “a single major accident.” Google pointedly referenced this in a statement noted by the Wall Street Journal, questioning how much of the Chinese self-driving companies’ claimed success has been reported to transportation authorities.

However, Apollo Go, which has launched robotaxis in Dubai and Abu Dhabi, is not Waymo’s only rival, as Gulf nations pursue diverse tech partnerships. Vehicles from WeRide, another Chinese autonomous vehicle company, have made their way to the UAE and Singapore. All major players in the Chinese market are pursuing expansion into Europe, according to Reuters. Vehicles built by Momenta and deployed by Uber are slated to begin operations in Germany by 2026. WeRide, Baidu, and Pony AI are also gearing up to introduce robotaxi services in various European locations soon, meaning many more people will encounter self-driving cars in their everyday lives.

Initially, the primary question concerning self-driving cars was: can we create a working vehicle? Now, the focus has shifted to: who will dominate the market?

Read more: Driving competition: Chinese automakers race to take over European roads

This Week in AI

Elon Musk’s loyal shareholders push his pay toward $1 trillion

Martin Rowson on Elon Musk’s new compensation package. Illustration: Martin Rowson/The Guardian

Tesla’s recent performance has been lackluster. The looming end of the U.S. electric vehicle tax credit has resulted in a surge of buyers at dealerships over the past few months, yet the company reported a 37% drop in profits in late October. This decline adds to a series of challenges facing EV manufacturers.

In spite of Tesla’s struggles, shareholders voted in favor of a plan to award Elon Musk up to $1 trillion over the next decade, contingent on his elevating Tesla’s valuation from $1.4 trillion to $8.5 trillion. Should he succeed in this and other objectives, it would mark the largest compensation package in corporate history.

The results of the vote were revealed during the company’s annual shareholder meeting in Austin, Texas, where more than 75% of investors backed the proposal. Enthusiastic chants of “Elon” filled the room following the announcement.

The pay structure ties Musk to Tesla for the next decade, yet his attention has rarely been confined to just one venture. He has remained deeply involved in politics. My colleague Nick Robins-Early details how Musk has aligned himself with the international far right:


Since his departure from the Trump administration, Musk’s political endeavors have included wielding social media as a platform to influence the New York mayoral election and orchestrating a right-wing, AI-generated alternative to Wikipedia. He has expressed concerns over a “homeless industrial complex” of nonprofits purportedly harming California and declared that “white pride should be acceptable.” On X, he stated that Britain is on the brink of civil war and warned of the collapse of Western civilization.

The social and economic repercussions stemming from Musk’s political stance have not deterred his public support for the far right, and he has increasingly showcased these affiliations, all while maintaining in his characteristic obstinacy that being branded a racist or extremist is of no consequence to him.

Read more: How Tesla shareholders rewarded Elon Musk on his path to becoming the world’s first trillionaire

Can you take on the data center?

Google data center located in Santiago. Photo: Rodrigo Arangua/AFP/Getty Images

The data centers fueling the AI revolution are truly colossal: colossal in cost, in physical size, and in the volume of data they hold. Silicon Valley’s leading firms are pouring in hundreds of billions at a rapid pace, which makes the idea of halting their construction seem counterintuitive.

Yet, as data centers expand, resistance is mounting in the United States, the UK, and Latin America, where these facilities are rising in some of the most arid regions globally. Local opposition typically centers on the environmental repercussions and resource use of such monumental constructions.

Paz Peña, a researcher and fellow at the Mozilla Foundation, focuses on the social and environmental effects of data center technology in Latin America. She shared insights with the Guardian at the Mozilla Festival in Barcelona on how communities in Latin America are filing lawsuits to extract information from governments and corporations that prefer to keep it hidden. This dialogue has been condensed for brevity and clarity.

Read my Q&A with Paz Peña here.

Read more: “Cities that draw the line”: A community in Arizona fights against massive data centers

The Broader TechScape

Source: www.theguardian.com

Astronomers Develop 3D Temperature Map of the Exoplanet WASP-18b

A newly released map of WASP-18b, a hot Jupiter exoplanet located approximately 325 light-years from Earth, reveals an atmosphere with distinct temperature zones, including regions so hot that water vapor begins to break apart.

Hot Jupiter WASP-18b. Image credit: NASA’s Goddard Space Flight Center.

The WASP-18b map represents the first implementation of a method known as 3D eclipse mapping, or spectroscopic eclipse mapping.

The technique builds on 2D eclipse mapping: a paper published in 2023 by members of the same research team showed how eclipse mapping could take advantage of the sensitive observations of the NASA/ESA/CSA James Webb Space Telescope.

“This technique is unique in that it can simultaneously survey all three dimensions: latitude, longitude, and altitude,” stated Dr. Megan Weiner Mansfield, an astronomer at the University of Maryland and Arizona State University.

“This enables a greater level of detail than previously possible for studying these celestial objects.”

With this technology, astronomers can now begin to chart the atmospheric variations of many similar exoplanets observable through Webb, resembling how Earth-based telescopes once scrutinized Jupiter’s Great Red Spot and its striped cloud formations.

“Eclipse mapping allows us to capture images of exoplanets whose host stars are too bright for direct observation,” remarked Dr. Ryan Challener, an astronomer at Cornell University and the University of Maryland.

“Thanks to this telescope and groundbreaking technology, we can start to understand exoplanets similarly to the neighboring worlds in our solar system.”

Detecting exoplanets is quite challenging as they typically emit less than 1% of the brightness of their host star.

Eclipse mapping involves measuring tiny changes in the system’s total brightness as the planet passes behind its star, which progressively hides and then reveals regions of the planet.

Scientists can link these minute changes in light to specific regions of the planet, creating brightness maps. The maps can be made at different colors of light and translated into temperatures as a function of latitude, longitude, and altitude.
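
To make the inversion concrete, here is a minimal toy sketch in Python of the idea (not the team’s actual pipeline): the planet’s dayside is treated as a handful of longitudinal slices, and differencing the flux between successive ingress steps recovers each slice’s brightness. The slice count, flux values, and noise-free data are illustrative assumptions.

```python
import numpy as np

# Toy eclipse map: the planet's visible dayside is split into five
# longitudinal slices with unknown brightnesses. As the planet slides
# behind its star, the slices are hidden one at a time, so each ingress
# step removes one slice's contribution from the measured flux.
true_brightness = np.array([0.2, 0.8, 1.5, 0.9, 0.3])  # arbitrary units
n = len(true_brightness)

star_flux = 1000.0  # the star dominates the total signal
# Flux at ingress step k = star + the slices still visible (k..n-1).
flux = np.array([star_flux + true_brightness[k:].sum() for k in range(n + 1)])

# Differencing consecutive steps isolates each slice: a slice's
# brightness is exactly the flux lost at the moment it disappears.
recovered = flux[:-1] - flux[1:]
print(recovered)  # -> approximately [0.2 0.8 1.5 0.9 0.3]
```

Real eclipse mapping solves a noisier, two-dimensional version of this inversion; the 3D extension described here repeats it at many wavelengths, each probing a different altitude.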

“It’s quite difficult because you’re looking for changes where small sections of the planet become obscured and then revealed,” Challener explained.

WASP-18b has a mass approximately 10 times that of Jupiter, completes its orbit in just 23 hours, and achieves temperatures around 2,760 degrees Celsius (5,000 degrees Fahrenheit). Its strong signal makes it an excellent candidate for testing new mapping techniques.

While previous 2D maps relied on a single wavelength, or color, of light, the 3D map re-analyzed the same observations from Webb’s Near Infrared Imager and Slitless Spectrograph (NIRISS) across multiple wavelengths.

“Each color corresponds to different temperatures and altitudes within WASP-18b’s gaseous atmosphere, allowing them to be combined into a 3D map,” Dr. Challener noted.

“Mapping at wavelengths that water absorbs can indicate the layers of water in the atmosphere, while wavelengths that water doesn’t absorb facilitate deeper probing.”

“When combined, these provide a three-dimensional temperature map of the atmosphere.”

The new perspective uncovered spectroscopically distinct zones (with varying temperatures and potentially different chemical compositions) on the visible dayside of WASP-18b (the side that perpetually faces its star due to its tidally locked orbit).

The planet exhibits a circular “hotspot” that receives the most direct stellar light, with winds insufficient to redistribute the heat.

Surrounding the hotspot is a cooler “ring” located closer to the planet’s visible outer edge.

Interestingly, the measurements indicated that water vapor levels within the hotspot were lower than the average for WASP-18b.

“We believe this suggests that the heat in this area is so intense that water is beginning to decompose,” explained Challener.

“This was anticipated by theory, but it’s exhilarating to confirm it through actual observations.”

“Further observations from Webb could enhance the spatial resolution of this pioneering 3D eclipse map.”

“Already, this technique will aid in refining temperature maps of other hot Jupiters, which comprise hundreds of the more than 6,000 exoplanets discovered to date.”

Dr. Mansfield expressed: “It’s thrilling that we now possess the tools to visualize and map the temperature of another planet in such intricate detail.”

“We can apply this technique to other exoplanet types. For instance, even if a planet lacks an atmosphere, we might be able to use this method to map surface temperatures and discern its composition.”

“While WASP-18b was more predictable, we believe there’s potential to observe phenomena we never anticipated before.”

The map of WASP-18b is detailed in a paper published in the journal Nature Astronomy.

_____

R.C. Challener et al. Horizontal and vertical exoplanet thermal structures from JWST spectroscopic eclipse maps. Nat Astron, published online October 28, 2025; doi: 10.1038/s41550-025-02666-9

Source: www.sci.news

Young Children Develop Problem-Solving Skills with Self-Invented Sorting Algorithms

Complex problem solving can arise sooner in child development than previously believed

PlusOnevector/Alamy

Research reveals that four-year-olds can devise efficient strategies for complex challenges, such as independently creating sorting methods akin to those used by computer scientists. The researchers assert that these abilities appear much earlier than once thought, warranting a reevaluation of developmental psychology.

Past experiments, led by Swiss psychologist Jean Piaget and popular in the 1960s, required children to physically arrange sticks by length. His findings indicated that structured strategies didn’t emerge until around age seven, with younger children tending to experiment haphazardly through trial and error.

In contrast, recent work by Huiwen Alex Yang and his team at the University of California, Berkeley, shows that some four-year-olds can create algorithmic solutions for the same task, with more than a quarter of children exhibiting these skills by age five.

“Perhaps we haven’t given our children enough credit,” Yang states. “We must delve deeper into their reasoning capabilities.”

In a study involving 123 children aged 4 to 9, researchers asked them to sort digital images of bunnies by height. Initially, the children could view groups of bunnies and directly compare their heights, allowing all of them to sort correctly using straightforward strategies.

However, once the heights were obscured, the children had to compare only two bunnies at a time while being informed whether their order was correct. This approach necessitated the development of new strategies, as they couldn’t see the entire group simultaneously.

The researchers examined how children applied these new strategies, looking for evidence of known, demonstrably efficient solutions. Overall, children performed well above chance. Remarkably, they independently rediscovered at least two efficient sorting algorithms recognized in computer science: selection sort and shaker sort.
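
For readers who want to see exactly what the children rediscovered, here is a minimal Python sketch of the two algorithms, written so that they only ever compare two items at a time, mirroring the task in which just two bunnies could be compared at once. The list encoding and print-outs are illustrative, not the study’s interface.

```python
# Both sorts rely solely on pairwise comparisons, like the bunny task.

def selection_sort(heights):
    """Repeatedly find the smallest remaining item and move it forward."""
    a = list(heights)
    for i in range(len(a)):
        smallest = i
        for j in range(i + 1, len(a)):
            if a[j] < a[smallest]:  # one pairwise comparison
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a

def shaker_sort(heights):
    """Bubble-sort passes that alternate direction (cocktail shaker sort)."""
    a = list(heights)
    lo, hi = 0, len(a) - 1
    while lo < hi:
        for i in range(lo, hi):          # left-to-right pass
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
        hi -= 1
        for i in range(hi, lo, -1):      # right-to-left pass
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
        lo += 1
    return a

print(selection_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
print(shaker_sort([5, 2, 4, 1, 3]))     # [1, 2, 3, 4, 5]
```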

In 34% of trials, children’s sequences of comparisons matched known sorting algorithms for at least part of the time. Out of a total of 667 tests, the children used selection or shaker sorting in 141 instances, with some employing combinations of both strategies. Notably, 67 out of 123 children demonstrated at least one recognizable algorithm, and 30 children used both at different stages of the experiment.

Nonetheless, the age of the children directly influenced how many used algorithms. Only 2.9% of four-year-olds applied identifiable methods, while this rose to 25.5% among five-year-olds and 30.7% for six-year-olds. By age nine, over 54% were using identifiable algorithms.

“This has long been a challenge to Piaget,” remarks Andrew Bremner from the University of Birmingham, UK. He acknowledges Piaget’s groundbreaking contribution of setting out developmental stages for learning, but emphasizes that Piaget often designed experiments without proper controls. “Critics have been eager to illustrate that children can achieve more than Piaget claimed.”

Essentially, while Piaget’s broad picture of child development was right, his estimates of the ages at which children reach certain milestones were overly pessimistic. This latest study strengthens the evidence for earlier development, and, interestingly, it revolves around sorting, which Bremner describes as one of the last bastions of Piaget’s work now shown to apply to younger children than once believed.

“Children can successfully navigate this particular problem much sooner than we anticipated,” states Bremner. “They do not approach the world as mere blank slates, but rather implement strategic techniques in problem-solving.”

Sam Wass from the University of East London points out that Piaget contended that children need a comprehensive grasp of a complex system before they can devise strategies to engage with it, a notion this research increasingly calls into question.

“This research signifies a significant trend in psychology that contests the assumption that intricate thoughts and understanding are prerequisites for executing complex behaviors,” notes Wass. “The study illustrates that complex behaviors may emerge from a far simpler array of rules.”


Source: www.newscientist.com

Using Lasers, Fiber Optics, and Subtle Vibrations to Develop Earthquake Warning Systems

When the Mendocino earthquake struck off the California coast in 2024, it shook structures to their foundations, triggered a 3-inch tsunami, and sparked an intriguing scientific investigation in the server room of a nearby police station.

More than two years before the quake, scientists had installed a device known as a distributed acoustic sensing interrogator at the Arcata Police Station near the coast. The device sends laser pulses through a fiber optic cable that provides internet connectivity to the station and detects how the light scatters back.

Recently, researchers revealed in a study published in the journal Science that data collected from fiber optic cables could effectively be used to “image” the Mendocino earthquake.

This research demonstrates how scientists can convert telecommunication cables into seismometers, providing detailed earthquake data at the speed of light. Experts noted that this rapidly advancing technology has the potential to enhance early earthquake warning systems, extending the time available for individuals to take safety measures, and could be critical for predicting major earthquakes in the future.
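
As a simplified illustration of the principle (a Python sketch, not the study’s actual processing chain), a DAS interrogator effectively turns each stretch of fiber into a strain sensor, and a classic short-term/long-term average (STA/LTA) ratio can then flag the arrival of seismic energy on a channel. The sampling rate, amplitudes, and trigger threshold below are assumptions chosen for the demo.

```python
import numpy as np

# Simulated strain-rate record for one DAS channel: background noise
# with a seismic arrival injected partway through.
rng = np.random.default_rng(0)
fs = 100                               # samples per second (assumed)
trace = rng.normal(0, 1e-9, 60 * fs)   # 60 seconds of noise
trace[3000:3400] += 2e-8 * np.sin(2 * np.pi * 5 * np.arange(400) / fs)

def sta_lta(x, n_sta, n_lta):
    """Classic short-term / long-term average power ratio for triggering."""
    power = x ** 2
    sta = np.convolve(power, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(power, np.ones(n_lta) / n_lta, mode="same")
    return sta / np.maximum(lta, 1e-30)

ratio = sta_lta(trace, n_sta=fs // 2, n_lta=10 * fs)
trigger = np.argmax(ratio > 5.0)       # first sample above threshold
print(f"arrival flagged at t = {trigger / fs:.1f} s")
```

In a real deployment, thousands of such channels along one cable would trigger in sequence, which is what lets the rupture be tracked, and warnings sent, at the speed of light through the fiber.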

James Atterholt, a research geophysicist for the US Geological Survey and lead author of the study, stated, “This is the first study to image the seismic rupture process from such a significant earthquake. It suggests that early earthquake warning alerts could be improved using telecom fibers.”

The study suggests that pairing such devices with the extensive network of telecommunications cables operated by companies such as Google, Amazon, and AT&T could make monitoring submarine earthquakes, which is often costly, far more affordable.

Emily Brodsky, a professor of geoscience at the University of California, Santa Cruz, asserted that “early earthquake warnings could be dramatically improved tomorrow” if scientists can establish widespread access to existing communication networks.

“There are no technical barriers to overcome, and that’s precisely what Atterholt’s research emphasizes,” Brodsky said in an interview.

In the long term, leveraging this technology through fiber optic cables could enable researchers to explore the possibility of forecasting some of the most devastating earthquakes in advance.

Scientists have observed intriguing patterns in underwater subduction zones prior to significant earthquakes, including Chile’s magnitude 8.1 quake in 2014 and the 2011 Tohoku earthquake and tsunami in Japan.

Both of these major earthquakes were preceded by what are known as “slow slip” events that gradually release energy over weeks or months without causing noticeable shaking.

The scientific community is still uncertain about what this pattern signifies, as high-magnitude earthquakes (8.0 or greater) are rare and seldom monitored in detail.

Effective monitoring of seismic activity using telecommunications networks could enable scientists to accurately document these events and assess whether discernible patterns exist that could help predict future disasters.

Brodsky remarked, “What we want to determine is whether the fault will slip slowly before it gives way entirely. We keep observing these signals from afar, but what we need is an up-close and personal instrument.”

While Brodsky emphasized that it’s still unclear whether earthquakes in these extensive subduction zones can be predicted, she noted that the topic is a major source of scientific discussion, with the new fiber optic technology potentially aiding in resolving this issue.

For nearly 10 years, researchers have been investigating earthquake monitoring through optical fiber cables. Brodsky stated that the study highlights the need for collaboration among the federal government, scientific community, and telecommunications providers to negotiate access.

“There are valid concerns; they worry about people installing instruments on their highly valuable assets and about the security of cables and privacy,” Brodsky explained, regarding telecom companies. “However, it is evident that acquiring this data also serves the public’s safety interests, which makes it a regulatory issue that needs to be addressed.”

Atterholt clarified that fiber optic sensing technology is not intended to replace traditional seismometers, but rather to complement existing data and is more cost-effective than placing seismometers on the seabed. Generally, using cables for earthquake monitoring does not interfere with their primary function of data transmission.

Jiaxuan Li, an assistant professor of geophysics and seismology at the University of Houston who was not involved in the study, noted that there are still technical challenges to implementing distributed acoustic sensing (DAS), which currently functions over distances of approximately 90 miles.

Li also pointed out that similar methods are being employed in Iceland to monitor magma movements in volcanoes.

“We utilized DAS to facilitate early warnings for volcanic eruptions,” Li explained. “The Icelandic Meteorological Office is now using this technology for issuing early alerts.”

Additionally, the technique showed that the Mendocino event was a rare “supershear” earthquake, in which the rupture advances along the fault faster than the seismic shear waves it generates. Atterholt likened it to a fighter jet exceeding the speed of sound.

New research has serendipitously uncovered patterns associated with Mendocino, providing fresh insights into this phenomenon.

“We still have not fully grasped why some earthquakes become supershear while others do not,” Atterholt reflected. “This could potentially alter the danger level of an earthquake, but the correlation remains unclear.”

Source: www.nbcnews.com

NASA and IBM Develop AI to Forecast Solar Flares Before They Reach Earth

Solar flares pose risks to GPS systems and communication satellites

NASA/SDO/AIA

An AI model developed with NASA satellite imagery can now forecast the sun’s appearance hours ahead.

“I envision this model as an AI telescope that enables us to observe the sun and grasp its ‘mood,’” states Juan Bernabé-Moreno from IBM Research Europe.

The sun’s state is crucial because bursts of solar activity can bombard Earth with high-energy particles, X-rays, and extreme ultraviolet radiation. These events have the potential to disrupt GPS systems and communication satellites, as well as endanger astronauts and commercial flights. Solar flares may also be accompanied by coronal mass ejections, which can severely impact Earth’s magnetic field, leading to geomagnetic storms that could incapacitate power grids.

Bernabé-Moreno and his team at IBM and NASA created an AI model named Surya, from the Sanskrit word for “sun,” using nine years of data from NASA’s Solar Dynamics Observatory, a satellite that captures ultra-high-resolution images of the sun across 13 wavelength channels. The model learned to recognize patterns in this visual data and to forecast how the sun will appear hours into the future.
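
As a rough, hypothetical sketch of this kind of setup (a tiny next-frame forecaster in PyTorch; only the 13-channel input echoes the SDO wavelength bands mentioned above, while the architecture, resolution, and random stand-in data are illustrative, not Surya’s):

```python
import torch
import torch.nn as nn

# Toy "solar weather" forecaster: given the current multi-channel view of
# the sun, predict what the same channels will look like hours later.
# The 13 input channels echo the 13 SDO wavelength bands; everything
# else (depth, resolution, data) is an illustrative assumption.
class NextFrameNet(nn.Module):
    def __init__(self, channels: int = 13):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = NextFrameNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on random stand-in data (batch, channels, height, width).
now = torch.randn(4, 13, 64, 64)     # current solar images
later = torch.randn(4, 13, 64, 64)   # the same view a few hours ahead
loss = loss_fn(model(now), later)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```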

When tested against historical solar flare data, the Surya model demonstrated a 16% improvement in accuracy for predicting whether a flare would occur within the next day, compared with traditional machine learning models. The model can also generate visualizations of how a flare will look up to two hours in advance.

“The strength of AI lies in its capacity to comprehend physics in unconventional ways. It enhances our intuition regarding physical processes,” remarks Lisa Upton at the Southwest Research Institute in Colorado.

Upton is especially eager to explore whether the Surya model can aid in predicting solar activity across the whole sun, including at its poles, areas that NASA instruments cannot directly observe. While Surya does not explicitly aim to model the far side of the sun, it has shown promise in forecasting what the sun will look like several hours ahead as new sections rotate into view, according to Bernabé-Moreno.

However, it remains uncertain whether AI models can overcome existing obstacles in accurately predicting how solar activity will influence Earth. Bernard Jackson from the University of California, San Diego, points out that there is currently no means to directly observe the magnetic field composition between the Sun and Earth, a crucial factor determining the direction of high-energy particles emanating from the star.

As stated by Bernabé-Moreno, this model is intended for scientific use now, but future collaborations with other AI systems that could leverage Surya’s capabilities may allow it to support power grid operators and satellite constellation owners as part of early warning frameworks.


Source: www.newscientist.com

Scientists Develop a Second Novel Carbon Molecule

Researchers have stabilized ring-shaped carbon molecules by adding “bumpers” to protect the atoms.

Harry Anderson

A new variety of all-carbon molecule can now be studied at standard room temperature, only the second such feat since the synthesis of the spherical buckyball 35 years ago. The advance may lead to materials that offer substantial efficiencies for emerging electronic and quantum technologies.

Carbon molecules shaped into closed rings can display unique chemical characteristics and, like buckyballs and carbon nanotubes, can conduct electricity in unexpected ways. But these rings are fragile and often disintegrate before researchers can analyze them.

“Cyclic carbons are fascinating molecules that we’ve been endeavoring to create for quite some time,” said Harry Anderson from Oxford University. Until now, the molecules did not survive long enough to be studied, but Anderson and his team have discovered a method to stabilize cyclic carbon at room temperature.

The process involves modifying the cyclic carbon structure. The researchers achieved this with rings consisting of 48 carbon atoms, known as cyclo[48]carbon, or C48. They augmented the C48 by incorporating “bumpers” that prevent the 48-atom ring from reacting with itself or with other molecules.

“There are no unnecessary embellishments,” remarked Max von Delius from Ulm University, Germany. “Simplicity possesses an exquisite elegance.”

The new configuration, called cyclo[48]carbon [4]catenane, remains stable for approximately two days, allowing researchers to investigate C48 for the first time. Interestingly, the molecule’s 48 carbons behaved as if they were part of an infinite chain, an arrangement that lets charge move freely between atoms.

This remarkable conduction ability suggests that cyclic carbon could be utilized in a variety of next-generation technologies, including transistors, solar cells, semiconductors, and quantum devices. Nonetheless, further inquiry is necessary to validate this potential.

Innovative techniques for stabilizing cyclic carbon may also inspire other scientists to explore exotic carbon molecules. “I believe there is likely a competitive race happening right now,” said von Delius. “Consider this elongated ring as a stepping stone toward the creation of an infinite chain.”

Von Delius further explained that a solitary chain of carbon atoms could prove to be an even better conductor than rings like C48. “It’s truly remarkable, and it represents the next significant advancement,” he stated.


Source: www.newscientist.com

Biotechnology Firms Seek to Develop the “ChatGPT of Biology”: Does It Deliver?

Basecamp researchers gather genetic data in Malta

Greg Funnell

A British biotech firm, Basecamp Research, has spent recent years gathering extensive genetic data from microorganisms inhabiting extreme environments worldwide, uncovering more than a million new species and billions of previously unknown genes. This vast database of planetary biodiversity is intended to help train a “ChatGPT of biology” that can answer questions about life on Earth, although its effectiveness remains uncertain.

Jörg Overmann from the Leibniz Institute DSMZ, which houses one of the world’s most extensive collections of microbial cultures, asserts that while an increase in known genetic sequences is beneficial, it likely won’t lead to significant discoveries in drug development or chemistry without deeper insights into the organisms from which the sequences originated. “In the end, I’m skeptical that a better understanding of unique features will be achieved merely through brute force in the sequencing domain,” he remarks.

Recent years have seen a surge in machine learning models aimed at identifying patterns and predicting relationships within vast biological datasets. The most well-known of these is AlphaFold, developed at Google DeepMind, which can predict the 3D structure of proteins from sequence data alone and earned its creators the 2024 Nobel Prize in Chemistry.

This approach of applying machine learning across genomic data has grown significantly, but according to Francis Din at the University of California, Berkeley, progress has been limited. One reason is the underrepresentation of biodiversity data. “Current biological models are primarily trained on datasets that favor well-studied species (e.g., E. coli, mice, humans), leading to poor prediction capabilities for traits associated with sequences from other branches of the Tree of Life,” she explains.

Basecamp researchers aim to bridge this biodiversity gap. Their expanding database now includes samples from over 120 locations across 26 countries, as detailed in a report by the company. Jonathan Finn, the company’s Chief Science Officer, notes that their sampling efforts target extreme environments that have yet to be thoroughly examined, spanning from the icy depths of the Arctic Ocean to the warm jungle hot springs. “Most of the samples we’re prioritizing are prokaryotic: bacteria, microorganisms, and their viruses,” Finn states. “We are also aware that some fungi are present.”

Genetic analyses of these samples have illuminated gene variations that are broadly shared across the Tree of Life. Based on this research, the company estimates that their data encompasses over a million species of genetic information not found in public genomic databases utilized for training AI models. This includes around 9.8 billion newly identified genes, increasing the overall known gene count tenfold, each potentially encoding useful proteins, according to the researchers.

“By providing these models with richer data, we enhance our understanding of biological mechanisms,” Finn explains. “We aim to create a ChatGPT for Biology.”

It’s estimated that Earth hosts trillions of microorganism species, many of which remain poorly characterized. Thus, it’s not unexpected that the company has identified such a wealth of novel life forms. “As we explore more, discovering diverse gene variants becomes almost inevitable,” notes Leopold Parts at the Wellcome Sanger Institute in the UK.

Nevertheless, Basecamp promotes the notion that all newly discovered materials might hold value. It’s not alone in this sentiment. “This is among the most thrilling advances I’ve encountered in quite some time,” remarks Nathan Frey, a machine learning researcher at Genentech, a US biotech firm. He emphasizes that most AI biology projects focus on algorithm improvement or generating additional lab data rather than venturing out to collect samples directly from nature.

However, skepticism remains about whether this database will yield the meaningful advances the company aspires to. For starters, it is uncertain how much of this newfound protein diversity reflects valuable new functions, such as enzymes that can degrade plastic or proteins useful for gene editing. “They must demonstrate that this novelty has practical utility,” cautions Parts.

Moreover, if the new genes differ significantly from known genes, Overmann doubts how easily existing tools can predict their functions or how such data can be used to train new models. “We can’t discern the functions of most of these genes,” he states. The company may have created a valuable new repository of biological data, but without traditional lab work, even the most advanced AI may still struggle to interpret it.


Source: www.newscientist.com

Discover “Monster” Tumors That Can Develop Hair, Teeth, and Organs

This concept may surprise you, but certain tumors can indeed develop parts of your body, or at least fragments of them.

These peculiar growths, known as teratomas, originate from germ cells, which possess the extraordinary capability to transform into any type of tissue.

Germ cells typically evolve into sperm or eggs; however, when their development is disrupted, they can create a disorganized mass of tissue.

The term “Teratoma” is derived from the Greek word Teras, which means “monster,” aptly reflecting its nature.

These tumors feature an astonishing array of components, ranging from hair and teeth to muscle tissues and even organ-like structures such as the thyroid and eyes.

While fully functional organs are exceedingly rare, the intricate nature of these tumors is undeniable.

Teratomas are most frequently observed in the ovaries and testes, but they can also appear in the midline of the body, such as the mediastinum (the chest area that houses the heart) and the base of the spine.

The majority of teratomas are benign and can be easily excised, though a small percentage—particularly those in men—can become malignant and necessitate urgent treatment. Surgery is generally the primary method for addressing these tumors, and the prognosis is typically favorable.

Teratomas can grow teeth, muscle, thyroid, eyes, and other tissues – Image credit: Science Photo Library

In addition to their medical implications, teratomas have offered significant insights into the science of cellular development.

They can include tissues derived from all three embryonic germ layers, making them an intriguing model for studying how cells differentiate and organize.

So, can a tumor grow organs? In a way, yes. However, these structures are often nonfunctional and poorly organized.

Teratomas serve as a striking and unsettling example of the bizarre and unpredictable aspects of human biology.


This article addresses the question posed by Anisa Manning and Steve Nage: “Can tumors grow their own organs?”

If you have questions, please email us at questions@sciencefocus.com or message us on Facebook, Twitter, or Instagram (please include your name and location).

Explore our ultimate Fun Fact and more captivating science pages.




Source: www.sciencefocus.com

Researchers Develop AI Tools to Revive Artwork Aged by Time in Just Hours

Throughout history, the effects of wear and tear, along with natural aging, have resulted in oil paintings displaying cracks, discoloration, and peeling pigments, leaving lasting marks.

Repairing such damage is typically reserved for the most treasured artworks, requiring years of meticulous effort. However, a new approach promises to revolutionize this process, enabling the restoration of aging pieces in a matter of hours.

This innovative technique utilizes artificial intelligence and advanced digital tools to create reconstructions of damaged paintings, which are subsequently printed on a transparent polymer sheet and applied over the original artwork.

To showcase this method, Alex Kachkine, a graduate researcher at the Massachusetts Institute of Technology, undertook the restoration of damaged panels by an unknown Dutch master of the late 15th century, made after a piece by Martin Schongauer.

The artwork, rich in detail, is visibly segmented into four panels, marred by fine cracks and speckled with countless tiny paint losses.

“Much of the damage involves small, intricate details,” Kachkine noted. “It has been deteriorating for centuries.”

Kachkine initiated the process by scanning the painting to ascertain the dimensions, shapes, and locations of the damaged areas, identifying 5,612 individual sections requiring repair.

Following this, a digital mask was created using Adobe Photoshop. Missing paint spots were filled in, with surrounding pigment colors adjusted accordingly. Repairs to patterned sections involved duplicating similar patterns from other areas of the painting. For instance, a missing facial feature of a child was sourced from a different work by the same artist.
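
As a rough sketch of the digital mask-and-fill idea using standard tools (OpenCV’s inpainting, which fills masked pixels from their surroundings; the file names and the crude near-white damage detector are placeholders, and this is not Kachkine’s actual Photoshop-based pipeline):

```python
import cv2

# Illustrative mask-and-fill step: locate small paint losses and fill
# them from the surrounding pigment. File names are placeholders.
painting = cv2.imread("scanned_painting.png")

# Toy damage detector: treat near-white speckles as paint losses.
# (A real workflow would use a hand-checked mask of each damaged region.)
gray = cv2.cvtColor(painting, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 245, 255, cv2.THRESH_BINARY)

# Fill each masked pixel from its neighbourhood (Telea's method).
restored = cv2.inpaint(painting, mask, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)
cv2.imwrite("digital_reconstruction.png", restored)
```

The reconstruction produced this way exists only as a digital file; as the article describes, the physical step is printing it onto a removable polymer film laid over the original.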

Close-ups illustrating the masking results. Photo: Alex Kachkine, MIT

Once the mask was complete, it was printed on the polymer sheet and painted over, followed by a varnish application to ensure it harmonized with the painting.

In total, 57,314 colors were used to restore the damaged sections. The overlay was designed to remain convincing even if slightly misaligned.

Upon seeing the results, Kachkine expressed satisfaction. “We dedicated years to perfecting this method,” he remarked. “It was a significant relief to realize that this approach enabled us to reconstruct and piece together the surviving parts of the painting.”

This approach, as detailed in Nature, can only be applied to works featuring a smooth varnish that allows for flat application. The mask can be removed using conservator solvents without leaving marks on the original piece.

Kachkine envisions this technique enabling galleries to restore and showcase numerous damaged paintings that might otherwise not warrant the cost of traditional restoration.

Nonetheless, he recognizes the ethical considerations surrounding the use of film overlays on paintings, questioning whether they might disrupt the viewing experience and the appropriateness of features derived from other works.

In a related commentary, Professor Hartmut Kutzke from the Museum of Cultural History at the University of Oslo emphasized that this method enables quicker and more cost-effective recovery of damaged artworks compared to conventional methods.

“This technique is likely best suited for relatively low-value pieces kept in less visible locations, and may not be appropriate for renowned, high-value artworks,” he noted. “However, it could significantly increase public access to the arts, bringing damaged pieces out of storage and into the view of new audiences.”

Source: www.theguardian.com

Meta Unveils $15 Billion Investment to Develop Computerized “Superintelligence”

Reports indicate that Meta is preparing to unveil a substantial $15 billion (£11 billion) bid aimed at achieving computerized “Superintelligence.”

The competition in Silicon Valley to lead in artificial intelligence is intensifying, even as many current AI systems show inconsistent performance.

Meta CEO Mark Zuckerberg is set to announce the acquisition of a 49% stake in Scale AI, which is led by Alexandr Wang and was co-founded by him and Lucy Guo. This strategic move has been described by one Silicon Valley analyst as a “wartime CEO” initiative.

Superintelligence refers to an AI that can outperform humans across all tasks. Current AI systems have not yet even matched human capabilities across the board, a benchmark known as artificial general intelligence (AGI). Recent studies reveal that many prominent AI systems falter when tackling highly complex problems.

Following notable progress from competitors like Sam Altman’s OpenAI and Google, as well as substantial investments in the underperforming Metaverse concept, observers are questioning whether Meta’s renewed focus on AI can restore its competitive edge and drive meaningful advancements.

In March, the 28-year-old Wang signed a contract to develop the Thunderforge system for the US Department of Defense, which focuses on applying AI to military planning and operations, with initial emphasis on the Indo-Pacific and European commands. The company has also received early funding from Peter Thiel’s Founders Fund.

Meta’s initiative has sparked fresh calls for European governments to embark on their own transparent research endeavors, ensuring robust technological development while fostering public trust, akin to CERN, the European nuclear research organization in Switzerland.

Michael Wooldridge, a professor of artificial intelligence at the University of Oxford, stated: “They are maximizing their use of AI. We cannot assume that we fully understand or trust the technology we are creating. It’s crucial that governments collaborate to develop AI openly and rigorously, much like the importance of CERN and particle accelerators.”

Wooldridge commented that the reported acquisition appears to be Meta’s effort to reclaim its competitive edge following the Metaverse’s lackluster reception, noting that the company invested significantly in that venture.

However, he pointed out that the state of AI development remains uneven, with AGI still a distant goal, and “Superintelligence” being even more elusive.

“We have AI that can achieve remarkable feats, yet it struggles with tasks that capable GCSE students can perform,” he remarked.

Andrew Rogoyski, director of partnerships and innovation at the University of Surrey’s Institute for People-Centred AI, observed: “Meta’s approach to AI differs from that of OpenAI or Anthropic. For Meta, AI is not a core mission, but rather an enabler of its broader business strategy.”

“This allows them to take a longer-term view, rather than feeling rushed to achieve AGI,” he added.

Reports indicate that Wang is expected to take on a significant role within Meta.

Meta declined to comment. Scale AI has been approached for comment.

Source: www.theguardian.com

Public Health Agencies Urged to Develop Period Tracking Apps for Data Protection

Experts are urging public health organizations to create alternatives to for-profit period tracker applications, warning that women’s personal information is vulnerable to exploitation by private entities.

A study from the University of Cambridge reveals that smartphone apps used for menstrual cycle tracking are a “gold mine” for consumer profiling, collecting data on exercise, diet, medication, hormone levels, and birth control methods.

The economic worth of this information is often “greatly underestimated” by users who share intimate details in unregulated markets with profit-driven businesses, according to the report.

If mishandled, data from cycle tracking apps (CTAs) could lead to issues like employment bias, workplace monitoring, discrimination in health insurance, risks of cyberstalking, and restricted access to abortion services, research indicates.

The authors urge improved regulation of the expanding femtech sector to safeguard users as data is sold in large quantities, suggesting that apps should offer clear consent options for data collection, and they call for public health agencies to establish alternatives to commercial CTAs.

“Menstrual cycle tracking apps are marketed as empowering women and bridging gender health disparities,” stated Dr. Stefanie Felsberger of the Minderoo Centre for Technology and Democracy at Cambridge, the lead author of the report. “Nevertheless, their underlying business model relies on commercial use, wherein user data and insights are sold to third parties for profit.

“As a consequence of the monetization of data collected by cycle tracking app companies, women face significant and alarming privacy and safety threats.”

The report indicates that most cycle tracking apps cater to women attempting to conceive, making the stored data highly valuable commercially. Aside from buying a home, few other life events trigger such notable shifts in consumer behavior.

Data pertaining to pregnancy is valued at over 200 times more than information about age, gender, or location for targeted advertisements. Furthermore, tracking cycle duration can allow for targeting women at various phases of their cycles.

The three most popular apps had amassed a combined quarter of a billion downloads by 2024. The digital health sector focused on women’s wellness is anticipated to surpass $60 billion (£44 billion) by 2027, the report notes.

In light of the considerable demand for period tracking, the authors are calling on public health entities, including the UK’s NHS, to create transparent and reliable apps as alternatives to commercial offerings.

“The UK is ideally positioned to address researchers’ challenges related to menstrual data access, as well as privacy and data concerns, by developing an NHS app dedicated to tracking menstrual cycles,” Felsberger added, noting that Planned Parenthood in the US already offers its own app.

“Apps situated within public health frameworks, which are not primarily profit-driven, can significantly reduce privacy violations, gather essential data on reproductive health, and empower users regarding the utilization of their menstrual information.”

“Cycle tracking apps have real benefits, but women deserve better than having their menstrual tracking data treated merely as consumer data,” remarked Professor Gina Neff, executive director of the Minderoo Centre.

In the UK and the EU, period tracking data falls under “special categories” of data and enjoys greater legal protection, similar to data on genetics and ethnicity. In the United States, menstrual cycle data has been collected by authorities in ways that may hinder access to abortion services, according to the report.

Source: www.theguardian.com

IBM Plans to Develop a Functional Quantum Supercomputer by 2029

Rendering of IBM’s proposed quantum supercomputer

IBM

In less than five years, researchers will have access to a fault-tolerant quantum computer that corrects its own errors, according to IBM. The company has unveiled a roadmap for a machine named Starling, set to be available for academic and industrial researchers by 2029.

“These are scientific dreams that have been transformed into engineering achievements,” says Jay Gambetta at IBM. He mentions that he and his team have developed all the required components to make Starling a reality, giving them confidence in their ambitious timeline. The new systems will be based in a New York data center and are expected to aid in manufacturing novel chemicals and materials.

IBM has already constructed a fleet of quantum computers, yet the path to truly useful devices remains challenging for it and its many competitors. Errors continue to thwart efforts to use quantum effects to solve problems that typical supercomputers struggle with.

This underscores the necessity for a fault-tolerant quantum computer that can autonomously correct its mistakes, a capability that would allow larger, more powerful devices. There is no universal agreement on the optimal strategy for achieving fault tolerance, and research teams are exploring various approaches.

All quantum computers depend on qubits, yet different groups create these essential units from particles of light, extremely cold atoms or, in Starling’s case, superconducting circuits. IBM is banking on two innovations to make Starling robust against significant errors.

First, Starling establishes new connections among its qubits, including those that are quite distant from one another. Each qubit is embedded within a chip, and researchers have developed new hardware to link qubits within a single chip and to connect multiple chips together. This advance enables Starling to be larger than its forerunners while allowing it to execute more complex programs.

According to Gambetta, Starling will employ tens of thousands of physical qubits, grouped into roughly 200 “logical qubits”, and will be capable of running 100 million quantum operations. Within each logical qubit, several physical qubits function together as a single computational unit resilient to errors. Currently, the largest quantum computers house around 1,000 physical qubits, and the record for logical qubits belongs to the quantum computing company Quantinuum, with 50.

IBM is implementing a novel method for merging physical qubits into logical qubits using low-density parity-check (LDPC) codes. This marks a significant shift from the methods employed in other superconducting quantum computers. Gambetta notes that utilizing LDPC codes was once seen as a “pipe dream”, but his team has now worked out the crucial details to make them feasible.

The benefit of this somewhat unconventional technique is that each logical qubit built with an LDPC approach requires fewer physical qubits than competing strategies, so devices can be smaller and error correction can run faster.
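To make the qubit arithmetic concrete, here is a minimal sketch comparing the per-logical-qubit cost of a distance-d surface code with that of a hypothetical [[144, 12, 12]] LDPC code; the code parameters and the ancilla-counting convention are illustrative assumptions for the sketch, not IBM's published design.

```python
# Illustrative overhead comparison: surface code vs. quantum LDPC code.
# All parameters are assumptions for the sketch, not IBM's published specs.

def surface_code_overhead(distance: int) -> int:
    """Physical qubits per logical qubit for a distance-d surface code:
    d*d data qubits plus d*d - 1 measurement qubits."""
    return 2 * distance ** 2 - 1

def ldpc_overhead(n_physical: int, k_logical: int) -> float:
    """Average physical qubits per logical qubit for an [[n, k, d]] LDPC
    code, counting an equal number of ancilla qubits for syndrome checks."""
    return 2 * n_physical / k_logical

# A distance-13 surface code vs. a hypothetical [[144, 12, 12]] LDPC code.
print(surface_code_overhead(13))   # 337 physical qubits per logical qubit
print(ldpc_overhead(144, 12))      # 24.0 physical qubits per logical qubit
```

Under these assumed numbers, the LDPC encoding is roughly an order of magnitude cheaper per logical qubit, which is the kind of advantage the roadmap leans on.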

“IBM has consistently set ambitious goals and accomplished significant milestones over the years,” states Stephen Bartlett from the University of Sydney. “They have achieved notable innovations and improvements in the last five years, and this represents a genuine breakthrough.” He points out that both the distant qubits and the new hardware for connecting the logical qubit codes deviate from the well-performing devices IBM previously developed, necessitating extensive testing. “It looks promising, but it also requires a leap of faith,” Bartlett adds.

Matthew Otten from the University of Wisconsin-Madison notes that LDPC codes have only been seriously explored in recent years, and that IBM’s roadmap clarifies how its approach works. That matters, he says, because it helps researchers pinpoint potential bottlenecks and trade-offs; for example, Starling may operate more slowly than current superconducting quantum computers.

At its intended scale, the device could address challenges relevant to sectors such as pharmaceuticals. Here, simulations of small molecules or proteins on quantum computers like Starling could replace costly and cumbersome experimental steps in drug development, Otten explains.

IBM isn’t the only contender in the quantum computing sector planning significant advancements. For instance, Quantinuum and PsiQuantum have also announced their intentions to develop fault-tolerant utility-scale machines by 2029 and 2027, respectively.


Source: www.newscientist.com

British Minister Postpones AI Regulation to Develop a More “Comprehensive” Bill

Proposals to regulate artificial intelligence have been delayed by at least a year, as the UK technology secretary aims to bring forward a comprehensive bill addressing both the use of the technology and its use of copyrighted content.

Technology Secretary Peter Kyle is set to present a “comprehensive” AI bill in the next parliamentary session to tackle pressing issues, including safety and copyright concerns.

The delay means the regulation is not expected before the next King’s speech. While no date has been confirmed for that event, some reports suggest it may occur in May 2026.

Initially, Labour had intended to introduce a concise, targeted AI bill shortly after taking office, focusing specifically on large language models like ChatGPT.

The proposed legislation would have mandated companies to provide their models for assessment by the UK AI Security Institute, aiming to address fears that advanced AI models might pose threats to humanity.

However, with the bill behind schedule, the minister has opted to align with the approach of Donald Trump’s administration in the US, fearing that excessive regulation might deter AI companies from operating in the UK.

Now, the minister is eager to incorporate copyright regulations for AI firms within the AI bill.

“We believe this framework can help us tackle copyright issues,” a government source commented. “We’ve been consulting with both creators and tech experts, and we’ve uncovered some intriguing ideas for the future. Once the data bill is finalized, our efforts will begin in earnest.”

The government is currently locked in a dispute with the House of Lords over copyright provisions in a separate data bill, under which AI companies could use copyrighted materials for model training unless rights holders opt out.

This has led to a strong backlash from the creative community, with notable artists like Elton John, Paul McCartney, and Kate Bush lending their support to a campaign against these changes.

Peers recently backed an amendment to the data bill that would require AI companies to declare whether they are using copyrighted materials for model training, to ensure compliance with existing copyright law.

Kyle has expressed regret over the government’s approach but has resisted calls to backtrack. The government contends that the data bill is not the right vehicle for copyright matters and has vowed to publish an economic impact evaluation alongside several technical papers on copyright and AI.

In a letter to legislators on Saturday, Kyle further pledged to create a cross-party working group on AI and copyright.

Beeban Kidron, a film director and crossbench peer advocating for the creative sector, remarked on Friday that the minister “has neglected the creative industry and disregarded Britain’s second-largest industrial sector.”

Kyle told the Commons last month that AI and copyright should be dealt with in another “comprehensive” legislative package.

An overwhelming majority of the UK public (88%) believes the government should have the authority to halt the use of an AI product deemed a significant risk. The finding was published in March by the Ada Lovelace Institute and the Alan Turing Institute, whose research also shows that over 75% of people feel that safety oversight for AI should be managed by governments or regulators, alongside private companies.

Scott Singer, an AI specialist at Carnegie Endowment for International Peace, noted: “The UK is strategically navigating between the US and the EU. Similar to the US, the UK is aiming to avoid overly stringent regulations that could stifle innovation while exploring meaningful consumer protection methods.”

Source: www.theguardian.com

Research Reveals AI’s Ability to Spontaneously Develop Human-Like Communication

Research indicates that artificial intelligence can spontaneously develop social conventions akin to those of humans.

The study, a collaboration between City St George’s, University of London, and the IT University of Copenhagen, proposes that large language model (LLM) AI, like ChatGPT, can begin to adopt linguistic forms and societal norms when interacting in groups without external influence.

Ariel Flint Ashery, a doctoral researcher at City St George’s and the study’s lead author, challenged the conventional perspective in AI research, asserting that AI is too often studied as a collection of solitary entities rather than social beings.

“Unlike most research, which treats LLMs in isolation, genuine AI systems are increasingly intertwined and actively interacting,” says Ashery.

“We aimed to investigate whether these models can coordinate their behaviour by forming conventions, the building blocks of a society. The answer is yes, and what they do together cannot be reduced to what they do individually.”

In this study, groups of LLM agents ranging in size from 24 to 100 individuals were used; in each round, two agents were randomly paired and tasked with selecting a “name” (a letter or string of characters) from a shared pool of options.

When both agents selected the same name, they received a reward; if they chose differently, they were penalised and shown each other’s selections.


Although the agents were unaware of being part of a larger group, and their memories were limited to their own recent interactions, shared naming conventions spontaneously emerged across the population without any predetermined solution, resembling the communicative norms of human culture.
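The paper's exact protocol is not reproduced here, but a minimal naming-game simulation along the lines the article describes might look like the following sketch; the pool of names, group size, memory length and exploration rate are all assumptions.

```python
import random

NAMES = list("ABCDEFGHIJ")  # assumed pool of candidate names
N_AGENTS = 24               # group size, within the 24-100 range reported
MEMORY = 5                  # agents only remember recent interactions
ROUNDS = 20000

# Each agent keeps a short history of (own_choice, partner_choice, success).
memories = [[] for _ in range(N_AGENTS)]

def preferred(mem):
    """Score names by recent coordination: keep a name that worked,
    lean toward the partner's name after a failure."""
    scores = {}
    for own, partner, success in mem:
        target = own if success else partner
        scores[target] = scores.get(target, 0) + 1
    return max(scores, key=scores.get) if scores else None

def choose(mem):
    pick = preferred(mem)
    if pick is None or random.random() < 0.1:  # small exploration rate (assumed)
        return random.choice(NAMES)
    return pick

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)   # random pairwise encounter
    na, nb = choose(memories[a]), choose(memories[b])
    success = na == nb                         # reward only if the names match
    memories[a] = (memories[a] + [(na, nb, success)])[-MEMORY:]
    memories[b] = (memories[b] + [(nb, na, success)])[-MEMORY:]

# A single shared convention usually emerges with no central coordination.
print({preferred(m) for m in memories})
```

Even with such crude agents, the population typically converges on one name, which is the qualitative effect the study reports for LLM agents.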

Andrea Baronchelli, a professor of complexity science at City St George’s and the senior author of the study, likened the spread of the behaviour to the emergence of new words and terms in our society.

“The agents are not copying a leader,” he explained. “They are all actively trying to coordinate, and always in pairs, with each interaction being a one-on-one attempt to agree on a label, without any global view.

“Consider the term ‘spam’. No one officially defined it, but through repeated coordination efforts it became the universal label for unwanted email.”

Furthermore, the research team identified naturally occurring collective biases that could not be traced back to individual agents.


In the final experiment, a small cohort of AI agents successfully guided a larger group towards a novel naming convention.

This was highlighted as evidence of critical mass dynamics, suggesting that small but pivotal minorities can catalyze rapid behavioral changes in groups once a specific threshold is achieved, akin to phenomena observed in human societies.

Baronchelli remarked that the study “opens a new horizon for AI safety research, illustrating the profound impact of this new breed of agents that will begin to interact with us and collaboratively shape our future.”

He added: “The essence of ensuring coexistence with AI, rather than becoming subservient to it, lies not only in discussions but in negotiation, coordination, and shared actions, much like how we operate.”

The peer-reviewed study, “Emergent Social Conventions and Collective Bias in LLM Populations”, is published in the journal Science Advances.

Source: www.theguardian.com

Researchers grow biggest-ever lab-cultured chicken nugget, complete with synthetic veins

A significant breakthrough has been made in the field of cultured meat, with scientists successfully growing a nugget-sized piece of chicken using a new method that delivers nutrients and oxygen to artificial tissue.

In the past, lab-grown tissues were limited to cell spheres less than a millimeter thick, making it difficult to replicate the texture of real muscle. A team of Japanese researchers has now grown a piece of chicken 2.7 inches wide and 0.7 inches thick using a new lab tool, marking a major step forward for the technology.

The development of a bioreactor that mimics the circulatory system played a crucial role in this breakthrough, with 50 hollow fibers distributing nutrients and oxygen throughout the meat and allowing cells to grow in a specific direction.

This lab-grown chicken, although not made from food-grade ingredients and not yet tasted by scientists, showcases the potential of this technology for various applications beyond food production.

As the technology advances, challenges such as replicating the texture and flavor of traditional meat and improving oxygen delivery for larger pieces still need to be addressed. Automation of the process and the use of food-grade ingredients are crucial steps towards making lab-grown meat commercially viable.

Consumer attitudes towards cultured meat vary, with some expressing concerns about its safety and perceived unnaturalness. Despite these challenges, cultured meat is already available in some markets and holds promise for a more sustainable future.

The future of cultured meat holds potential for significant advancements in food production, regenerative medicine, drug testing, and biohybrid robotics, paving the way for a more sustainable and innovative future.

Source: www.nbcnews.com

Physicists develop innovative form of structured light: the optical rotatum

According to a team of Harvard University physicists, the structure of the optical rotatum follows a logarithmic spiral.

The evolution of light beams carrying the optical rotatum as a function of propagation distance. Image credit: Dorrah et al., doi: 10.1126/sciadv.adr9092.

“This is a new behavior of light consisting of optical vortices that change in an anomalous way as they propagate through space,” says Professor Federico Capasso, a senior author of the study.

“It can potentially help you manipulate small particles.”

In a unique twist, the researchers discovered that their light beams, which carry orbital angular momentum, evolve in mathematically recognizable patterns found throughout nature.

Echoing the Fibonacci sequence, the beams’ optical rotatum propagates along the logarithmic spirals found in nautilus shells, sunflower seed heads, and tree branches.
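For reference, this is standard geometry rather than a result of the paper: a logarithmic spiral is the curve whose radius grows exponentially with the polar angle,

$$ r(\theta) = a\,e^{b\theta}, $$

where a sets the overall scale and b the tightness of the winding. The golden spiral associated with the Fibonacci sequence is the special case in which the radius grows by the golden ratio (about 1.618) every quarter turn, i.e. b = ln(1.618)/(π/2) ≈ 0.306.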

“It was one of the unexpected highlights of this study,” says Dr. Ahmed Dorrah, the first author of the study.

“Hopefully we can inspire others, who are experts in applied mathematics, to further study these light patterns and gain unique insight into their universal signature.”

This study builds on the team’s previous research using flat lenses etched with nanostructures to create light beams whose polarization and orbital angular momentum are controlled along the propagation path, converting the input light into structures that change as they travel.

Now they have introduced another degree of freedom into their light: its spatial torque can change as it propagates.

“We show even more versatility in control and we can do it on a continuous basis,” said Alfonso Palmieri, co-author of the study.

Potential use cases for such exotic beams involve controlling very small particles, such as colloids in suspension, by introducing new types of forces based on the unusual torque of the light.

It could also enable more precise optical tweezers for delicate manipulations.

Others have demonstrated light that changes torque using high-intensity lasers and bulky setups, but scientists have created theirs with a single liquid crystal display and a low-intensity beam.

By showing that the optical rotatum can be created with industry-compatible, integrated devices, the researchers have made the barriers to entry for the technology much lower than in previous demonstrations.

“Our research expands the previous literature on structured light, providing new modalities for light-matter interaction and sensing, and suggesting analogous effects in condensed-matter physics and Bose-Einstein condensates,” they concluded.

The study was published in the journal Science Advances.

____

Ahmed H. Dorrah et al. 2025. Rotatum of light. Science Advances 11 (15); doi: 10.1126/sciadv.adr9092

Source: www.sci.news

UK government develops tool to predict potential murderers

The UK government is in the process of developing a predictive programme aimed at identifying potential murderers by utilizing personal data from individuals known to law enforcement authorities.

Researchers are utilizing algorithms to analyze data from thousands of individuals, including crime victims.

Originally named the “Murder Prediction Project,” the initiative has been renamed to “Share data to improve risk assessment” by the Ministry of Justice. While officials hope the project will enhance public safety, critics have labeled it as “chilling and dystopian.”

The existence of the project was brought to light by the advocacy group Statewatch, which obtained details of its operations through a Freedom of Information request.

Statewatch alleges that data from individuals without criminal convictions will be utilized in the project, including sensitive details related to self-harm and domestic abuse. Authorities vehemently deny this, stating they only collect data on individuals with at least one criminal conviction.

While the government maintains the project is solely for research purposes at this stage, detractors argue that the data used could introduce biases in predictions, particularly affecting ethnic minorities and low-income populations.

The project, commissioned during Rishi Sunak’s tenure as prime minister, analyzes crime data from various official sources, including the probation service and Greater Manchester Police data from before 2015.

Information processed includes names, dates of birth, gender, ethnicity, and unique identifiers on the police national database.

Statewatch’s claim regarding the inclusion of data from innocent individuals and those seeking police assistance is based on a data sharing agreement between the Ministry of Justice and Greater Manchester Police.

The shared data encompasses a range of personal information, including criminal convictions and details such as age at first reporting domestic violence or seeking police intervention.

Moreover, sensitive information categorized as “special categories of personal data” includes markers relating to mental health, addiction, and vulnerability that are deemed to have significant predictive power.

Responding to criticisms, a Ministry of Justice spokesperson stated: “This project is strictly for research purposes. It utilizes existing data from prison, probation, and police records of convicted offenders to enhance understanding of probationer risks.”

Current risk assessment tools used by correctional services will be supplemented with additional data sources to gauge effectiveness.

In summary, the Ministry of Justice asserts that the project aims to enhance risk assessment for serious crimes and ultimately contribute to public protection through improved analysis.

Source: www.theguardian.com

Physicists Develop Shape-Recovering Liquids

According to a team of physicists at the University of Massachusetts Amherst, the newly discovered shape-recovering liquids defy decades of expectation derived from the laws of thermodynamics.

This image shows emulsion droplets stabilized by silica nanoparticles with nickel nanoparticles remaining on the drop surface. Image credit: Raykh et al. , doi: 10.1038/s41567-025-02865-1.

“Imagine your favorite Italian salad dressing,” said Professor Thomas Russell of the University of Massachusetts Amherst.

“It consists of oil, water and spices, and you shake it to mix all the ingredients together before pouring it onto the salad.”

“It is those spices, the ‘something else’, that allow the water and oil, which are normally mutually exclusive, to mix, in a process called emulsification that is described by the laws of thermodynamics.”

“Emulsification underlies a vast amount of technology and applications that go far beyond seasonings,” said Anthony Raykh, a graduate student at the University of Massachusetts Amherst.

“One day I was in the lab mixing a batch of this science salad dressing to see what I could create. Instead of spices, I used magnetized nickel particles, because materials containing magnetic particles can be designed to have all kinds of interesting and useful properties.”

“I made the mixture and shook it, and to my total surprise, it formed this beautiful, pristine urn shape.”

“No matter how many times I shook it, or how violently, the urn shape always returned.”

Using additional lab experiments and simulations, the researchers determined that strong magnetism explains the mysterious phenomenon they had discovered.

“A very close look at the individual magnetized nickel nanoparticles that form the water-oil boundary gives you very detailed information on how the different morphologies are assembled.”

“In this case, the particles are magnetized so strongly that the assembly interferes with the emulsification process described by the laws of thermodynamics.”

The particles that are usually added to oil and water mixtures reduce the tension at the interface between the two liquids, allowing them to mix.

In a twist, however, the strongly magnetized particles actually increase the interfacial tension, bending the oil-water boundary into an elegant curve.

“When you see something impossible, you have to investigate,” Professor Russell said.

“We don’t have any applications yet in our discoveries, but we look forward to seeing how unprecedented states will affect the field of soft matter physics,” added Raykh.

The team’s work was published in the journal Nature Physics.

____

A. Raykh et al. Shape-recovering liquids. Nat. Phys., published online April 4, 2025; doi: 10.1038/s41567-025-02865-1

Source: www.sci.news

LatamGPT’s goal is to develop AI that accurately reflects the diverse culture of Latin America

Latin America has inspired popular literary and musical genres and staple foods like the potato. Now there is potential for the region to become a cradle for AI as well.

A coalition of research institutes is collaborating on a project called LatamGPT, which aims to create a tool that accounts for regional language differences, cultural experiences, and local specificity. The tool is intended to represent users in Latin America and the Caribbean more accurately than existing large language models (LLMs), which are primarily trained in English by US or Chinese companies.

The project lead, Rodrigo Duran Rojas, stressed the importance of developing local AI solutions to better serve Latin America. The goal is to offer a representative outlook tailored to the region, and initial tests have shown promising results in areas like South American history.

More than 30 institutions from countries across the hemisphere are involved in the development of LatamGPT, including collaborations with Latinos in the US such as Freddy Vilci Meneseth, an associate professor of Hispanic studies at Lewis & Clark College in Oregon.

LatamGPT’s launch is planned for around June, following commitments from across the region to improve AI governance. Projects such as monitoring deforestation in the Amazon rainforest and preserving historical documents from past dictatorships are contributing to the dataset used to train LatamGPT.

With a dataset of more than 8 terabytes, LatamGPT aims to provide a nuanced and localized model for a range of applications. The project faces challenges in incorporating diverse dialects and complex grammatical structures, but its organizers emphasize the importance of collaboration for continued development.

Diverse dialects and complex grammar challenges

Like LatamGPT, efforts such as ChatGPT and Google’s Gemini are working to incorporate a wider range of data and improve localization for non-English languages. Challenges persist in training models for languages with complex grammar and many dialects.

Despite these challenges, LatamGPT aims to address these issues through collaboration with institutions, libraries, and archives across the region. The project continues to receive data and feedback to enhance its capabilities and explore applications in public policy and regulation.

The long-term goal of LatamGPT is to create an interconnected network for developing AI solutions with a Latinx touch, emphasizing the impact of collaboration in shaping the future of technology in Latin America and beyond.

An earlier version of this story was first published by Noticias Telemundo.

Source: www.nbcnews.com

Dark energy may evolve in unexpected ways as time progresses

New results from the Dark Energy Spectroscopic Instrument (DESI) collaboration reveal signs of time-varying dark energy.

Two “fans” corresponding to the two main survey areas observed by DESI above and below the plane of the Milky Way galaxy. Image credits: DESI Collaboration/DOE/KPNO/NOIRLab/NSF/AURA/R. Proctor.

“The universe never ceases to surprise us,” said Dr. Arjun Dey, DESI project scientist at NSF NOIRLab.

“By revealing the evolving texture of our universe’s fabric in unprecedented detail, DESI and the Mayall telescope are changing our understanding of the evolution of our universe and of nature itself.”

Taken alone, the DESI data are consistent with the standard model of the universe, Lambda CDM, in which CDM is cold dark matter and Lambda represents the simplest case of dark energy, acting as a cosmological constant.

However, when combined with other measurements, the data hint that the effect of dark energy may be weakening over time, strengthening indications that other models may be more appropriate.
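For context, analyses of this kind typically describe evolving dark energy with a standard two-parameter equation of state (a common parametrization, not something specific to this data release),

$$ w(a) = w_0 + w_a\,(1 - a), $$

where a is the cosmic scale factor (a = 1 today). A cosmological constant corresponds to w_0 = -1 and w_a = 0, and combined fits of the kind described below prefer values away from that point.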

Those other measurements include the light left over from the dawn of the universe (the cosmic microwave background, or CMB), distance measurements of supernovae, and observations of how light from distant galaxies is distorted by the gravity of dark matter (weak lensing).

So far, the preference for evolving dark energy has not risen to 5 sigma, the gold standard in physics and the commonly accepted threshold for a discovery.

However, various combinations of the DESI data with the CMB, weak lensing, and supernova datasets yield preferences ranging from 2.8 to 4.2 sigma.

The analysis used a blinding technique that hid the results from the scientists until the end, to reduce unconscious biases about the data.

This approach sets new criteria for how data is analyzed from large-scale spectroscopic studies.

DESI is a cutting-edge instrument mounted on the Nicholas U. Mayall 4-meter Telescope at Kitt Peak National Observatory, a program of NSF NOIRLab.

The instrument can capture light from 5,000 galaxies simultaneously, enabling one of the most extensive surveys to date.

The experiment is currently in the fourth of its five years surveying the sky, with plans to measure around 50 million galaxies and quasars (very distant but bright objects with black holes at their cores) and more than 10 million stars by the time the project is finished.

The new analysis uses data from the first three years of observations and includes nearly 15 million of the best-measured galaxies and quasars.

This is a major leap over DESI’s initial analysis: with more than twice as much data, the precision of the experiment has improved, and the hints of evolving dark energy have strengthened.

DESI tracks the effects of dark energy by studying how matter is spread throughout the universe.

Events in the very early universe left subtle patterns in the way matter is distributed, a feature called baryon acoustic oscillations (BAO).

The BAO pattern acts as a standard ruler, with its apparent size at different epochs directly influenced by how the universe has been expanding.

By measuring this ruler at different distances, researchers can chart the strength of dark energy throughout cosmic history.
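In outline, following standard BAO methodology rather than anything specific to this analysis: the ruler is the comoving sound horizon at the drag epoch, r_d, and at redshift z it subtends an angle across the sky and a depth along the line of sight given by

$$ \Delta\theta = \frac{r_d}{D_M(z)}, \qquad \Delta z = \frac{r_d\,H(z)}{c}, $$

where D_M(z) is the comoving angular diameter distance and H(z) is the expansion rate. Measuring both quantities at many redshifts traces the expansion history, and with it the influence of dark energy.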

The DESI collaboration has begun additional analyses to extract more information from the current dataset, and DESI continues to collect data.

Other experiments coming online over the next few years will also provide complementary datasets for future analyses.

“Our results are a fertile foundation for our theory colleagues looking at new and existing models, and we look forward to what they come up with,” says Dr. Michael Levi, DESI director.

“Whatever the nature of dark energy, it shapes the future of our universe. It is very noteworthy that we look up at the sky with a telescope and try to answer one of the biggest questions humanity has ever asked.”

“These are prominent results from a very successful project,” said Dr. Chris Davis, NSF program director for NSF NOIRLab.

“The powerful combination of the NSF Mayall Telescope and DOE’s Dark Energy Spectroscopic Instrument demonstrates the benefits of federal agencies collaborating on fundamental science to improve our understanding of the universe.”

The physicists shared their findings in a series of papers posted on arXiv.org.

Source: www.sci.news

Scientists develop ultra-thin niobium phosphide conductors for use in nanoelectronics

Niobium phosphide conducts electricity better than copper in films a few atoms thick. What's more, these films can be created and deposited at low enough temperatures to be compatible with modern computer chip manufacturing, according to a team of scientists led by Stanford University.

Amorphous niobium phosphide films a few atoms thick have better surface conductivity, making the entire material a better conductor. Image credit: Il-Kwon Oh / Asir Khan.

“We are breaking the fundamental bottlenecks of traditional materials like copper,” said Dr. Asir Intisar Khan of Stanford University.

“We show that our niobium phosphide conductor can transmit signals faster and more efficiently through ultra-thin wires.”

“This could make future chips more energy efficient, and even small gains can add up when large numbers of chips are used, such as in the large data centers storing and processing today's information.”

Niobium phosphide is what researchers call a topological semimetal, meaning that the entire material can conduct electricity, but its outer surface is more conductive than the center.

As a film of niobium phosphide becomes thinner, the central region shrinks but its surface remains, so the surface carries a greater share of the flow of electricity and the entire material becomes a better conductor.

Traditional metals such as copper, on the other hand, become less conductive when thinned below about 50 nm.

The researchers found that niobium phosphide is a better conductor than copper at film thicknesses of 5 nm or less, even when operating at room temperature.

At this size, copper wire has a hard time handling rapid electrical signals and loses more energy to heat.
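One way to see why this happens is a toy parallel-conduction model: treat the film as a bulk channel whose conductance shrinks with thickness plus a fixed surface channel, and give the copper-like film a crude surface-scattering penalty as it approaches the electron mean free path. Every number below is an illustrative assumption, not a measured property of NbP or copper.

```python
# Toy parallel-conduction model for ultrathin films (arbitrary units).
# All parameter values are illustrative assumptions, not measured data.

def semimetal_sheet_conductance(t_nm, bulk_sigma=2.0, surface_g=40.0):
    """Topological-semimetal-like film: a bulk term that grows with
    thickness t plus a thickness-independent surface term."""
    return bulk_sigma * t_nm + surface_g

def copper_sheet_conductance(t_nm, sigma0=60.0, scatter_nm=40.0):
    """Copper-like film: bulk conductivity degrades as t drops below
    the electron mean free path (crude surface-scattering penalty)."""
    effective_sigma = sigma0 / (1.0 + scatter_nm / t_nm)
    return effective_sigma * t_nm

for t in [1, 2, 5, 10, 50]:
    nbp = semimetal_sheet_conductance(t)
    cu = copper_sheet_conductance(t)
    print(f"t = {t:3d} nm   NbP-like: {nbp:7.1f}   Cu-like: {cu:7.1f}")
```

With these made-up parameters, the crossover lands between 5 and 10 nm: the copper-like film wins at large thickness, while the fixed surface channel dominates below the crossover, mirroring the qualitative behavior described above.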

“Really high-density electronics require very thin metal connections, and if those metals don't conduct well, you're going to lose a lot of power and energy,” said Eric Pop, a professor at Stanford University.

“If we had better materials, we could spend less energy on thin wires and more energy on actual calculations.”

Many researchers have been working to find better conductors for nanoscale electronics, but until now the best candidates have had very precise crystal structures that must be formed at very high temperatures.

The niobium phosphide film the researchers created is the first example of an amorphous material that becomes a better conductor as it becomes thinner.

“It has been thought that if you want to take advantage of these topological surfaces, you need good single-crystal films, which are very difficult to deposit,” said Akash Ramdas, a doctoral student at Stanford University.

“Now we have another class of materials, topological semimetals, that could serve as a way to reduce energy usage in electronics.”

Niobium phosphide films do not need to be single crystal, so they can be made at low temperatures.

The scientists deposited the film at 400 degrees Celsius (752 degrees Fahrenheit). This temperature is low enough to avoid damage or destruction to existing silicon computer chips.

“If you have to make a perfect crystalline wire, that doesn't work in nanoelectronics,” says Yuri Suzuki, a professor at Stanford University.

“But if you can make them amorphous or slightly disordered and still give them the properties you need, that opens the door to potential real-world applications.”

The authors are also working on fabricating the niobium phosphide film into thin wires for additional testing.

They want to determine how reliable and effective the material is in real-world applications.

“We've taken some really cool physics and transplanted it into the world of applied electronics,” Professor Popp said.

“This type of breakthrough in amorphous materials could help address power and energy challenges in current and future electronics.”

The work was published in the journal Science.

_____

Asir Intisar Khan et al. 2025. Surface conduction and electrical resistivity reduction in ultrathin amorphous NbP semimetals. Science 387 (6729): 62-67; doi: 10.1126/science.adq7096

This article is a version of a press release provided by Stanford University.

Source: www.sci.news

UK Bill Could Mandate Social Media Platforms to Develop Less Addictive Content for Under-16s

Legislation backed by Labour, the Conservative Party, and child protection experts would require social media companies to exclude under-16s from algorithms designed to keep them hooked on content. The new Safer Phones Bill, introduced by a Labour MP, also calls for a review of mobile phone sales to teenagers and potential additional safeguards for under-16s. Health Secretary Wes Streeting has voiced support for the bill, citing the negative impact of smartphone addiction on children’s mental health.

The bill, championed by Labour MP Josh MacAlister, is receiving positive feedback from ministers, although there is hesitation around banning mobile phone sales to teens. With backing from former Conservative education secretary Kit Malthouse and education select committee chair Helen Hayes, the bill aims to address concerns about children’s excessive screen time and exposure to harmful content.

Mr. MacAlister’s bill, which focuses on protecting children from online dangers, will be debated this week. The bill includes measures to raise the internet age of adulthood to 16 and give Ofcom regulatory powers over children’s online safety. The proposed legislation has garnered support from various stakeholders, including former children’s minister Claire Coutinho and children’s charities.

Concerns about the impact of smartphones on children’s well-being have prompted calls for stricter regulations on access to addictive online content. While Prime Minister Keir Starmer is against a blanket ban on mobile phones for under-16s, there are ongoing discussions about how to ensure children’s safety online without restricting necessary access to technology.

The bill aims to regulate online platforms and mobile phone sales to protect young people from harmful content and addiction. Mr. MacAlister’s efforts to promote children’s digital wellbeing have garnered significant support from policymakers and child welfare advocates.

As the government considers the implications of the bill and the Online Safety Act, which is currently pending full implementation, efforts to protect children from online risks continue to gain momentum. It remains crucial to strike a balance between enabling technology access and safeguarding children from potential online harms.

Source: www.theguardian.com

Physicists develop one-dimensional photon gas

In an experiment, physicists from the University of Bonn and the University of Kaiserslautern-Landau observed and studied the properties of a crossover between one and two dimensions in a gas of harmonically confined photons (light particles). The photons were confined in dye microcavities, while polymer nanostructures provided the trapping potential for the photon gas. By varying the aspect ratio of the trap, the researchers tuned it from an isotropic two-dimensional confinement to a highly elongated one-dimensional trapping potential. The team's paper was published in the journal Nature Physics.

A polymer applied to the reflective surface confines the photon gas within a parabolic potential. The narrower this parabola is, the more one-dimensional the gas behaves. Image courtesy of University of Bonn.

“To create a gas from photons, you need to concentrate a lot of photons in a limited space and cool them at the same time,” said Dr Frank Vewinger from the University of Bonn.

In their experiments, Dr. Vewinger and his colleagues filled a small container with a dye solution and used a laser to excite it.

The resulting photons bounced back and forth between the reflective walls of the container.

Each time they collided with a dye molecule they cooled, eventually condensing the photon gas.

By modifying the reflective surface, the researchers can influence the dimensionality of the gas.

“We were able to coat the reflective surface with a transparent polymer and create tiny microscopic protrusions,” said Dr Julian Schulz, a physicist at the University of Kaiserslautern-Landau.

“These protrusions allow us to confine and condense photons into one or two dimensions.”

“These polymers act as a kind of channel for the light,” said Dr Kirankumar Karkihalli Umesh, a physicist at the University of Bonn.

“The narrower this gap becomes, the more one-dimensional the gas behaves.”

In two dimensions, there is a precise temperature limit at which condensation occurs, just as water freezes at exactly 0 degrees Celsius; physicists call this a phase transition.
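For context, the sharpness of the 2D threshold can be made quantitative: for an ideal Bose gas of photons (two polarization states) in a two-dimensional harmonic trap of frequency Ω, the textbook condition for condensation at temperature T is a critical photon number

$$ N_c = \frac{\pi^2}{3}\left(\frac{k_B T}{\hbar\,\Omega}\right)^2, $$

a standard result quoted here for orientation rather than a number taken from the paper. In the one-dimensional limit no equally sharp critical number survives, which is precisely the blurring the experiment probes.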

“But if you create a one-dimensional gas instead of a two-dimensional one, things are a bit different,” Dr Vewinger said.

“So-called thermal fluctuations do occur in the photon gas, but in two dimensions they are so small that they have no practical effect.”

“But in one dimension, these fluctuations can make waves, figuratively speaking.”

These fluctuations destroy the order in a one-dimensional system, causing different regions in the gas to no longer behave in the same way.

As a result, phase transitions that are still precisely defined in two dimensions become increasingly blurred as the system becomes one-dimensional.

However, their properties are still governed by quantum physics, just like for two-dimensional gases, and these types of gases are called degenerate quantum gases.

It is as if water got steadily colder and slushier without ever freezing completely at a single temperature.

“We were able to investigate this behavior for the first time in the transition from a two-dimensional to a one-dimensional photon gas,” Dr. Vewinger said.

The authors were able to demonstrate that a one-dimensional photon gas indeed does not have a precise condensation point.

By making small changes to the polymer structure, it becomes possible to study in detail what happens during the transition between different dimensions.

Although this is still considered fundamental research at this point, it has the potential to open up new applications of quantum optical effects.

_____

K. Karkihalli Umesh et al. Dimensional crossover in a quantum gas of light. Nat. Phys., published online September 6, 2024; doi: 10.1038/s41567-024-02641-7

Source: www.sci.news

Researchers develop 3D radiation map of Jupiter’s moons

Using data collected by the Advanced Stellar Compass (ASC) and Stellar Reference Unit (SRU) on NASA’s Juno spacecraft, scientists have created the first complete 3D radiation map of the Jupiter system. The map characterizes the intensity of high-energy particles near the orbit of the icy moon Europa and shows how the radiation environment is shaped by small moons orbiting close to Jupiter’s rings.

This diagram shows a model of radiation intensity at different points on the Juno spacecraft’s orbit around Jupiter. Image credit: NASA / JPL-Caltech / DTU.

“With Juno, we’ve been trying to invent new ways to use sensors to learn about nature, and we’ve been using many of our science instruments in ways that were not originally intended,” said Juno principal investigator Dr. Scott Bolton, a planetary scientist at the Southwest Research Institute.

“This is the first detailed radiation map of this high-energy region and marks a major step forward in understanding how Jupiter’s radiation environment works.”

“It’s significant that we’ve been able to map this area in detail for the first time, because we don’t have instruments designed to look for radiation.”

“This map will help plan observations for future missions to the Jovian system.”

Juno’s ASC instrument, consisting of four star cameras mounted on the spacecraft’s magnetometer boom, takes images of the stars to determine the spacecraft’s orientation in space.

But the instrument is also a valuable detector for detecting the flow of high-energy particles within Jupiter’s magnetosphere.

The cameras record “hard radiation” – ionizing radiation that affects the spacecraft with enough energy to penetrate the ASC’s shielding.

“The ASC takes an image of the star every quarter of a second,” said Juno scientist Dr. John Leif Jorgensen, a researcher at the Technical University of Denmark.

“The highly energetic electrons that penetrate the shield leave distinctive signatures in our images, like firefly trails.”

“The device is programmed to count the number of fireflies, allowing us to accurately calculate the amount of radiation.”
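The counting idea is simple enough to sketch in a few lines: flag pixels far brighter than the frame's background and group neighboring hits into "fireflies". The sketch below is an illustrative reconstruction of that concept, not the ASC flight software; the threshold rule and the synthetic frame are assumptions.

```python
import numpy as np

def count_radiation_streaks(frame: np.ndarray, n_sigma: float = 5.0) -> int:
    """Count bright 'firefly' pixel groups in a star-camera frame.
    Illustrative reconstruction, not the ASC flight algorithm."""
    background = np.median(frame)
    noise = frame.std()
    hot = frame > background + n_sigma * noise  # pixels hit by energetic particles

    # Group neighboring hot pixels into streaks with a simple flood fill.
    visited = np.zeros_like(hot, dtype=bool)
    streaks = 0
    for x, y in zip(*np.nonzero(hot)):
        if visited[x, y]:
            continue
        streaks += 1
        stack = [(x, y)]
        while stack:
            i, j = stack.pop()
            if not (0 <= i < hot.shape[0] and 0 <= j < hot.shape[1]):
                continue
            if visited[i, j] or not hot[i, j]:
                continue
            visited[i, j] = True
            stack.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
    return streaks  # proportional to the local radiation dose rate

# Example: a synthetic 100x100 frame with three injected particle hits.
rng = np.random.default_rng(0)
frame = rng.normal(100.0, 5.0, (100, 100))
frame[10, 10] = frame[50, 50] = frame[50, 51] = 400.0
print(count_radiation_streaks(frame))  # -> 2 (two adjacent hits form one streak)
```

Counting groups per frame, rather than raw hot pixels, keeps a single energetic electron that smears across several pixels from being counted more than once.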

Juno’s orbit is constantly changing, so the spacecraft has traversed nearly every region of space near Jupiter.

The ASC data suggests that there is more very high-energy radiation, relative to low-energy radiation, near Europa’s orbit than previously thought.

The data also confirm that there are more energetic electrons on the side of Europa facing in the direction of its orbital motion than on the rear side of Europa.

This is because most of the electrons in Jupiter’s magnetosphere pass Europa from behind due to the planet’s rotation, but the very energetic electrons flow backwards, like a fish swimming upstream, and slam into the front of Europa.

The Jupiter radiation data is not the ASC’s first scientific contribution to the mission: even before it arrived at Jupiter, ASC data was used to measure interstellar dust bombarding Juno.

Using the same dust-detection techniques, the imager also discovered a previously undiscovered comet, identifying tiny pieces of the spacecraft ejected by fine dust particles that collided with Juno at high speed.

Like Juno’s ASC, the SRU acts as both a radiation detector and a low-light imaging instrument.

Data from both instruments show that, like Europa, small shepherd moons that orbit inside or near the edges of Jupiter’s rings and help maintain their shape also appear to interact with the planet’s radiation environment.

If the spacecraft flies over magnetic field lines that connect to ring moons or dense dust, the radiation dose to both the ASC and SRU drops sharply.

The SRU is also collecting rare low-light images of the rings from Juno’s unique vantage point.

“Many mysteries remain about how Jupiter’s rings formed, and very few images have been collected by previous spacecraft,” said SRU principal investigator Dr. Heidi Becker, a scientist at NASA’s Jet Propulsion Laboratory.

“If you’re lucky, you might even be able to capture a little shepherd moon in your photo.”

“These images allow us to get a better idea of where the ring moons are currently located and to see the distribution of dust relative to the distance from Jupiter.”

The survey results will be published in the journal Geophysical Research Letters.

Source: www.sci.news

Scientists develop ultra-thin gold ‘goldene’ that is only one atom thick

Goldene, in the form of gold monolayer sheets, is prepared by etching away the titanium carbide (Ti3C2) layers of slabs of titanium gold carbide (Ti3AuC2).

Goldene preparation. Image credit: Kashiwaya et al., doi: 10.1038/s44160-024-00518-4.

“When you make a material extremely thin, something unusual happens, just as it did with graphene. The same thing happens with gold,” said Dr. Shun Kashiwaya, a researcher at Linköping University.

“As you know, gold is normally a metal, but if it's an atomic layer thick, it can become a semiconductor instead.”

To create goldene, Dr. Kashiwaya and his colleagues used a three-dimensional substrate with gold embedded between layers of titanium and carbon. However, producing goldene turned out to be difficult.

“We created the base material with a completely different application in mind,” said Professor Lars Hultman from Linköping University.

“We started with a conductive ceramic called titanium silicon carbide, which has a thin layer of silicon.”

“Then the idea was to coat the material with gold to make the contacts. However, when the component was exposed to high temperatures, the silicon layer inside the substrate was replaced by gold.”

This phenomenon is called intercalation, and what the researchers discovered was titanium-gold carbide.

For several years, the authors worked with this titanium gold carbide without knowing how the gold could be exfoliated, or “panned out”.

They accidentally discovered a method that has been used in Japanese forging for more than 100 years.

The method relies on Murakami's reagent, which etches away carbon residues and changes the color of steel, for example in knife making. However, it was not possible to use exactly the same recipe as the blacksmiths.

“We tried varying the concentration of Murakami's reagent and the etching time. One day, one week, one month, several months. What we noticed was that the lower the concentration and the longer the etching process, the better. But even that wasn't enough,” Dr. Kashiwaya said.

Etching must also be performed in the dark, as the reaction produces cyanide, which dissolves the gold when exposed to light. This step was to stabilize the gold sheet.

A surfactant was added to prevent the exposed two-dimensional sheets from curling up: the long surfactant molecules separate and stabilize the sheets.

“The goldene sheets sit in a solution, a bit like cornflakes in milk. Using a sort of 'sieve', we collect the gold and examine it under an electron microscope to confirm that we have succeeded. And we have,” Dr. Kashiwaya said.

“Goldene's new properties are due to the fact that the gold has two free bonds when it is two-dimensional.”

“Thanks to this, future applications could include carbon dioxide conversion, catalysts for hydrogen production, selective production of value-added chemicals, water purification, communications, and more.”

“Additionally, the amount of gold used in today's applications can be significantly reduced.”

The team's work was published in the journal Nature Synthesis.

_____

Shun Kashiwaya et al. Synthesis of goldene comprising single-atom layer gold. Nat. Synth., published online March 18, 2024; doi: 10.1038/s44160-024-00518-4

Source: www.sci.news

Tiny nematodes develop large mouths and exhibit cannibalistic behavior

Huge mouth of a small nematode

Sarah Wiggard and Ralf Sommer / Max Planck Institute for Biology Tübingen

Tiny soil worms called nematodes usually feed on bacteria and algae and have small mouths to match their diet. However, when baby nematodes of one species are fed fungus, their mouths double in size, giving them the ability to cannibalize their fellows.

Ralf Sommer and his colleagues at the Max Planck Institute for Biology in Tübingen, Germany, made the discovery while studying the development of the predatory soil nematode Allodiplogaster sudhausi. When the larvae were raised on Penicillium fungus, some of them grew into cannibals with giant mouths. “We were shocked,” he says.

The researchers knew that the different mouth shapes seen in this species result from different feeding habits: nematodes that feed on bacteria have narrow mouths, while those that feed on much smaller nematode species have slightly wider mouths. But this extreme variant, which the researchers called “teratostomia”, or the Te morph, had not been previously documented.

Sommer and colleagues investigated the genetics underlying these different mouth shapes and found that all three are controlled by the same sulfatase gene. But that activity appears to produce a giant, gaping mouth only in A. sudhausi. The species’ complete set of genetic instructions was duplicated relatively recently in its evolution, Sommer says, so the doubling of gene pairs may have facilitated the origin of the worm’s giant mouth.

Because the fungal diet was low in nutrients, and because more Te morphs appeared under high-density conditions, the researchers think the Te morph and its associated cannibalistic habits may have evolved as a response to the stresses of starvation and crowding.

Nicholas Levis of Indiana University points out that a similar phenomenon is seen in several other species. For example, the tadpoles of spadefoot toads and some salamanders can develop into cannibalistic carnivores depending on environmental conditions, Levis says.

But even in such cases, animals often avoid eating their own kind. Te nematodes are indiscriminate and prey even on genetically identical neighbors. Levis says this is a “surprising finding” that could indicate that the developmental strategy is “really hopeless”.

“This discovery…made me wonder how much more diverse there is in the natural world than what we see,” Levis says. “How many other hidden ‘monsters’ are there waiting to be discovered under the right environmental conditions?”


Source: www.newscientist.com