Have you ever wondered how many times you can fold a delicious crêpe without it flipping over? A new study reveals the fascinating physics behind crepe folding dynamics.
In a quest to uncover the nuances of this culinary art, a physicist from France explored this phenomenon. He discovered that a single key number can explain the folding limits.
Tom Marzin, a research student at Cornell University, was inspired during a trip to his hometown of Brittany, France, a region known for its crêpes. He observed that while simply folding the tip of a crêpe causes it to flip, further folds create a delicate balance of gravity and friction that keeps it stationary. What scientific principles govern this behavior?
Marzin turned his curiosity into a research project, and he plans to present his findings at the upcoming American Physical Society meeting on March 20 in Denver.
Unlike traditional studies focused on permanent origami-style folds, Marzin’s work delves into what he terms “soft creases,” a competition between the element of gravity and material elasticity.
To observe this competition, Marzin conducted an experiment using pancake pieces. By attaching a section to a tabletop, he measured the flex it experienced when the opposite end hung over the edge. He found that all behavior regarding crêpe folding can be predicted from a single value known as the elastic gravity length, which factors in material density, stiffness, and gravitational forces. Marzin speculates that this concept could apply to various flexible materials beyond crêpes, a conjecture supported by his computer simulations.
To test his theories in a practical setting, Marzin experimented with plastic discs, store-bought tortillas, and crêpes. Finding homemade crêpes unreliable for experiments due to thickness variability, he enlisted his mother to procure commercial crêpes that ensure consistent thickness.
Marzin’s experiments confirmed that all aspects of crêpe folding are dictated by this elastic gravity length. For instance, by controlling the folded area’s dimensions, one can determine if there’s sufficient surface area left for subsequent folds.
His equation accurately predicts that a crêpe measuring 26 centimeters in diameter and 0.9 millimeters thick can be folded up to four times. In contrast, a similarly sized tortilla at 1.5 millimeters thick, with an elastic gravity length roughly 3.4 times that of the crêpe, can withstand just two folds. “This length encapsulates the essential physics,” Marzin concludes.
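The article does not give Marzin’s exact formula, but a standard form of the elastic gravity (or “elasto-gravity”) length balances a sheet’s bending stiffness against its weight per unit area. A minimal sketch, with all material numbers assumed for illustration only (the thickness comes from the article; the modulus and density of a cooked crêpe are guesses):

```python
# Sketch of the elastic gravity length l_eg = (B / (rho * g * h))**(1/3),
# where B = E*h**3/12 is the bending stiffness per unit width of a thin sheet.
# E and rho below are ILLUSTRATIVE ASSUMPTIONS, not measured crêpe values.
E = 1e5        # Young's modulus, Pa (assumed for a soft cooked crêpe)
h = 0.9e-3     # thickness, m (from the article)
rho = 500.0    # density, kg/m^3 (assumed)
g = 9.81       # gravitational acceleration, m/s^2

B = E * h**3 / 12.0                    # bending stiffness per unit width, N*m
l_eg = (B / (rho * g * h)) ** (1 / 3)  # elastic gravity length, m
print(f"elastic gravity length ~ {l_eg * 100:.1f} cm")
```

A sheet much wider than its elastic gravity length droops under its own weight; a region much smaller than it stays stiff, which is why each fold, by thickening the stack, changes whether the remaining flap can hold itself flat.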
Starlink’s satellite constellation delivers reliable internet connectivity to nearly every corner of the Earth, enhancing operational capabilities in modern military applications. However, the network is overseen by the controversial billionaire Elon Musk, posing potential challenges for military reliance on external internet services.
Comprising approximately 10,000 satellites, the Starlink network facilitates internet access through small terrestrial dishes, reportedly serving over 10 million paying civilian clients. The system is also essential for military operations, which rely heavily on data, high-definition video feeds, and drone controls around the clock.
In contrast to traditional radio systems that can be easily jammed, Starlink’s signals are sent directly into space from ground stations, making them more resilient. Additionally, the affordable receivers enable deployment by small military units and are compatible with both ground and airborne drones.
Given escalating global tensions and nations vying for control over critical technologies, such as nuclear deterrents, relying on foreign services like Starlink for military communication is increasingly seen as a vulnerability, especially under Musk’s unpredictable stewardship.
During the ongoing conflict between Ukraine and Russia since the 2022 invasion, Starlink has proven invaluable. Reports indicate that Russian drones were guided using Starlink technology; however, access to the service was restricted for Russian military operations in February, significantly impacting their operational coordination. This situation temporarily favored Ukraine, illustrating the risks other nations face in relying on a foreign-controlled satellite network.
The European Union is currently developing an alternative system known as Infrastructure for Resilience, Interconnectivity, and Security through Satellites (IRIS²), which aims to deploy around 300 satellites by 2030. Meanwhile, China is working on a similar project, the Guowang Network, expected to comprise 13,000 satellites, although fewer than 200 are operational at present. The Qianfan Constellation is also in its initial building phase, and Russia’s Sfera Constellation has encountered delays.
Additionally, individual European nations are pursuing independent satellite initiatives apart from the EU umbrella. Germany is in talks to construct its own network, while Britain has invested in Eutelsat OneWeb, a crucial satellite internet provider that the UK government helped rescue from bankruptcy. A British startup, OpenCosmos, is also developing a comparable system, supported by the CIA.
According to Anthony King, a professor at Exeter University in the UK, it’s remarkable that private telecommunications companies wield so much influence in global conflicts, often determining tactical advantages. However, with the rise of superpowers, future secure satellite communications will likely evolve. “Certainly, China is advancing their capabilities,” he remarked, emphasizing that secure satellite communication will become vital in future military scenarios.
Rising Costs
Although Starlink is a private entity, Barry Evans from the University of Surrey highlights the availability of a secure military version known as StarShield, which is partly funded by the U.S. government because of its strategic importance.
“Dependence on private entities raises concerns in Europe,” Evans noted. “Musk has unpredictably cut off service in different regions at different times, and that uncertainty is especially worrisome for the UK, which lacks the funds to develop an independent system.”
Currently, Russia and China lag behind Starlink, which is owned by the rocket company SpaceX, enabling more economical satellite launches on a flexible schedule, according to Evans.
Building expansive satellite networks incurs massive initial costs, but ongoing maintenance and regular satellite launches are essential to replace those that fail or exhaust their fuel reserves, complicating sustainability. The UK lacks independent launch capabilities, implying reliance on external partners for its satellite constellation.
Ian Muirhead at Manchester University, who has extensive military communications experience, explains that militaries transitioned from radios to temporary cell networks for combat communication. However, following the Cold War, building such networks became prohibitively costly, leading military operations to opt for satellite communications instead. Starlink simplifies this process, providing greater capability at lower cost and complexity.
“Moreover, when considering space warfare, there are benefits arising from the multitude of satellites,” Muirhead added. “It’s difficult to neutralize a satellite system since they constantly orbit overhead.”
John Martinis is a leading expert in quantum hardware who emphasizes hands-on physics rather than abstract theory. His pivotal role in quantum computing history makes him indispensable to my book on the subject. As a visionary, he is focused on the next groundbreaking advancements in the field.
Martinis’s journey began in the 1980s with experiments that pushed the limits of quantum effects, earning him a Nobel Prize last year. During his graduate studies at the University of California, Berkeley, he tackled the question of whether quantum mechanics could apply to larger scales, beyond elementary particles.
Collaborating with colleagues, Martinis developed circuits combining superconductors and insulators, demonstrating that multiple charged particles could behave like a single quantum entity. This discovery initiated the macroscopic quantum regime, forming the backbone of modern quantum computers developed by giants like IBM and Google. His work led to the adoption of superconducting qubits, the most common quantum bits in use today.
Martinis made headlines again when he spearheaded a team at Google that built the first quantum computer to achieve quantum supremacy. For nearly five years, the random-circuit sampling task the machine performed remained out of reach for classical computers, until improved classical methods eventually caught up.
Approaching seven decades of age, Martinis still believes in the potential of superconducting qubits. In 2024, he co-founded QoLab, a quantum computing startup proposing revolutionary methodologies aimed at developing a genuinely practical quantum computer.
Carmela Padavich Callahan: Early in your career, you fundamentally impacted the field. When did you realize your experiments could lead to technological advancements?
John Martinis: I questioned whether macroscopic variables obey quantum mechanics, and as a novice in the field, I felt it was essential to test this assumption. A fundamental quantum mechanics experiment intrigued me, even though it initially seemed daunting.
Our first attempt was a simple and rapid experiment using contemporary technology. The outcome was a failure, but I quickly pivoted. Learning about microwave engineering, we tackled numerous technical challenges before achieving subsequent successes.
Over the next decade, our work on quantum devices laid a solid foundation for quantum computing theory, including Shor’s breakthrough algorithm for factoring large numbers, the problem underlying much of modern cryptography.
How has funding influenced research and the evolution of technology?
Since the 1980s, the landscape has transformed dramatically. Initially, there was uncertainty about manipulating single quantum systems, but quantum computing has since blossomed into a vast field. It’s gratifying to see so many physicists employed to unravel the complexities of superconducting quantum systems.
Your involvement during quantum computing’s infancy gives you a unique perspective on its trajectory. How does that inform your current work?
Having long experience in the field, I possess a deep understanding of the fundamentals. My team at UC Santa Barbara developed early microwave electronics, and I later contributed to foundational cooling technology at Google for superconducting quantum computers. I appreciate both the challenges and opportunities in scaling these complex systems.
Cryostat for Quantum Computers
Mattia Balsamini/Contrasto/Eyeline
What changes do you believe are necessary for quantum computers to become practical? What breakthroughs do you foresee on the horizon?
After my tenure at Google, I reevaluated the core principles behind quantum computing systems, leading to the founding of QoLab, which introduces significant changes in qubit design and assembly, particularly regarding wiring.
We recognized that making quantum technology more reliable and cost-effective requires a fresh perspective on the construction of quantum computers. Despite facing skepticism, my extensive experience in physics affirms that our approach is on the right track.
It’s often stated that achieving a truly functional, error-free quantum computer requires millions of qubits. How do you envision reaching that goal?
The most significant advancements will arise from innovations in manufacturing, particularly in quantum chip fabrication, which is currently outdated. Many leading companies still use techniques reminiscent of the mid-20th century, which is puzzling.
Our mission is to revolutionize the construction of these devices. We aim to minimize the chaotic interconnections typically associated with superconducting quantum computers, focusing on integrating everything into a single chip architecture.
Do you foresee a clear leader in the quest for practical quantum computing in the next five years?
Given the diverse approaches to building quantum computers, each with its engineering hurdles, fostering various strategies is valuable for promoting innovation. However, many projects do not fully contemplate the practical challenges of scaling and cost control.
At QoLab, we adopt a collaborative business model, leveraging partnerships with hardware companies to enhance our manufacturing capabilities.
If a large-scale, error-free quantum computer were available tomorrow, what would your first experiment be?
I am keen to apply quantum computing solutions to challenges in quantum chemistry and materials science. Recent research highlights the potential for using quantum computers to optimize nuclear magnetic resonance (NMR) experiments, as classical supercomputers struggle with such complex quantum issues.
While others may explore optimization or quantum AI applications, my focus centers on well-defined problems in materials science, where we can craft concrete solutions with quantum technologies.
Why have mathematically predicted quantum applications not materialized yet?
While theoretical explorations of qubit behavior are promising, real-life qubits face significant noise challenges, making practical implementations far more complex. Theorists grasp the theory comprehensively but often overlook the intricacies of hardware development.
Through my training with John Clarke, I cultivated a strong focus on noise reduction in qubits, which proved beneficial in the experiments showcasing quantum supremacy. Addressing these challenges requires dedication to understanding the intricacies of qubit design.
As we pursue advancements, a dual emphasis on hardware improvements and application innovation remains crucial in the journey to unlock quantum computing’s full potential.
Evidence from ethnohistory and recent archaeology indicates that Easter Island (Rapanui) had a politically decentralized structure, organized into small kin-based communities that operated with a degree of autonomy throughout the island. This raises significant questions regarding the over 1,000 monumental statues (moai). Was the production process at Rano Raraku, the main moai quarry, centrally managed, or did it reflect the decentralized patterns observed on the island? Archaeologists utilized a dataset of more than 11,000 UAV images to create the first comprehensive three-dimensional model of a quarry to examine these competing hypotheses.
3D model of Rano Raraku quarry. Image credit: Lipo et al., doi: 10.1371/journal.pone.0336251.
The monumental moai of Easter Island stand as one of the most remarkable archaeological achievements in Polynesia, with over 1,000 megalithic statues spread across the volcanic island, which covers just 63 square miles.
This significant investment in monumental architecture seems paradoxical when compared to ethnohistorical records that consistently depict Rapa Nui society as composed of relatively small, rival kin-based groups rather than a centralized polity.
Early ethnographers described a sociopolitical environment with numerous matas (clans or tribes) maintaining distinct territorial boundaries, independent ceremonial centers, and autonomous leadership structures.
This leads to the question of whether the construction of the moai was similarly decentralized.
In a recent study, Professor Carl Lipo from Binghamton University and his team compiled over 11,000 images of Rano Raraku, a key moai quarry, and developed a detailed 3D model of the site, which includes hundreds of moai at various stages of completion.
“For archaeologists, quarries are like an archaeological Disneyland,” Professor Lipo stated.
“Everything you can imagine about the making of a moai is represented here, as most of the crafting was performed directly on site.”
“This has always been a goldmine of information and cultural significance, yet it remains greatly under-documented.”
“The rapid advancement in technology is astounding,” noted Dr. Thomas Pingel of Binghamton University.
“The quality of this model surpasses what was achievable just a few years ago, and the ability to share such a detailed model accessible from anyone’s desktop is exceptional.”
In-depth analysis of the model revealed 30 distinct quarrying centers, each exhibiting different carving techniques, indicating multiple independent working zones.
There is also evidence of the moai being transported in various directions from the quarry.
These observations imply that moai construction, like the broader societal structure of Rapa Nui, lacked central organization.
“We are observing individualized workshops that cater specifically to different clan groups, focusing on particular areas,” said Professor Lipo.
“From the construction site, you can visually identify that specific groups created a series of statues together, indicating separate workshops.”
This finding challenges the prevalent assumption that such large-scale monument production necessitates a hierarchical structure.
The similarities among the moai appear to be the result of shared cultural knowledge rather than collaborative efforts in carving the statues.
“Much of the so-called ‘Rapanui mystery’ arises from the scarcity of publicly available detailed evidence that would empower researchers to assess hypotheses and formulate explanations,” stated the researchers.
“We present the first high-resolution 3D model of the Rano Raraku Moai Quarry, the key site for nearly 1,000 statues, offering new perspectives on the organization and manufacturing processes behind these massive megalithic sculptures.”
Findings are detailed in a paper published online on November 26, 2025 in PLoS ONE.
_____
CP Lipo et al. 2025. Production of megalithic statues (moai) at Rapa Nui (Easter Island, Chile). PLoS One 20 (11): e0336251; doi: 10.1371/journal.pone.0336251
Greetings! Welcome to TechScape. I’m your host, Blake Montgomery, reaching out from Barcelona where my culinary adventures have, quite humorously, turned half of me into ham.
Who will lead the self-driving car industry?
The global rollout of self-driving cars is on the horizon. Next year, leading companies from the United States and China plan to expand their operations considerably and introduce robotaxis in major cities worldwide. These firms are akin to male birds strutting to attract a mate, setting the stage for upcoming worldwide rivalries.
On the U.S. front is Waymo, Google’s autonomous vehicle venture, into which the company has poured billions over the last 15 years. After extensive testing, Waymo launched its public robotaxi service in San Francisco in June 2024 and has expanded significantly since. Waymo vehicles are now a common sight across much of Los Angeles, with launches planned for Washington, D.C., New York City, and London next year.
On November 2nd, Chinese tech giant Baidu threw down a challenge to Google: it claimed that its autonomous vehicle division, Apollo Go, now conducts 250,000 rides weekly, matching the milestone Waymo reached this spring.
Most electric vehicles in China are priced significantly lower than their American counterparts, even without self-driving capabilities. Experts estimate that a single Waymo vehicle costs hundreds of thousands to manufacture, though exact figures remain unclear. “The hardware costs for our vehicles are much less than Waymo’s,” declared the CFO of Pony AI, a leading Chinese self-driving firm, to the WSJ.
To recoup its billion-dollar investment in Waymo, Google must persuade potential customers of its superior quality.
Google is highlighting transparency as a distinguishing factor. Far less data is available on Baidu’s vehicles, raising questions about their safety records. Baidu asserts that its vehicles have amassed millions of miles without “a single major accident.” In a statement cited by the Wall Street Journal, Google questioned how much of the Chinese self-driving companies’ claimed success has actually been reported to U.S. transportation authorities.
However, Apollo Go, which has launched robotaxis in Dubai and Abu Dhabi, is not Waymo’s only contender, as Gulf nations pursue diverse tech partnerships. Vehicles from WeRide, another Chinese autonomous vehicle company, have made their way to the UAE and Singapore. All major players in the Chinese market are pursuing expansion into Europe, according to Reuters. Vehicles built by Momenta and deployed by Uber are slated to begin operations in Germany by 2026. WeRide, Baidu, and Pony AI are also gearing up to introduce robotaxi services in various European locations soon, meaning many more people will encounter self-driving cars in their everyday lives.
Initially, the primary question concerning self-driving cars was: can we create a working vehicle? Now, the focus has shifted to: who will dominate the market?
Read more: Driving competition: Chinese automakers race to take over European roads
This Week in AI
Elon Musk’s loyal supporters push his wealth to $1 trillion
Martin Rowson on Elon Musk’s new compensation package. Illustration: Martin Rowson/The Guardian
Tesla’s recent performance has been lackluster. The looming end of the U.S. electric vehicle tax credit has resulted in a surge of buyers at dealerships over the past few months, yet the company reported a 37% drop in profits in late October. This decline adds to a series of challenges facing EV manufacturers.
In spite of Tesla’s struggles, shareholders approved a plan to pay Elon Musk up to $1 trillion over the next decade, contingent on his elevating Tesla’s valuation from $1.4 trillion to $8.5 trillion. Should he meet this and other targets, it would be the largest pay package in corporate history.
The results of the vote were revealed during the company’s annual shareholder meeting in Austin, Texas, where more than 75% of investors backed the proposal. Enthusiastic chants of “Elon” filled the room following the announcement.
The pay package ties Musk to Tesla for the next decade, yet his attention has rarely been confined to a single venture. He has remained deeply involved in politics. My colleague Nick Robins-Early details how Musk has aligned himself with the international far right:
Since his departure from the Trump administration, Musk’s political endeavors have included wielding social media as a platform to influence the New York mayoral election and orchestrating a right-wing, AI-generated alternative to Wikipedia. He has expressed concerns over a “homeless industrial complex” of nonprofits purportedly harming California and declared that “white pride should be acceptable.” On X, he stated that Britain is on the brink of civil war and warned of the collapse of Western civilization.
The social and economic repercussions of Musk’s political stances have not deterred his public support for the far right; he has increasingly showcased these affiliations while insisting, with characteristic obstinacy, that being branded a racist or extremist is of no consequence to him.
Read more: How Tesla shareholders rewarded Elon Musk with a path toward becoming the world’s first trillionaire
Can you take on the data center?
Google data center located in Santiago. Photo: Rodrigo Arangua/AFP/Getty Images
The data centers fueling the AI revolution are truly colossal. Everything about them is enormous: their cost, their physical footprint, and the data they hold. Amid the ongoing boom, halting their construction can seem unthinkable; Silicon Valley’s leading firms are pouring in hundreds of billions at a rapid pace.
Yet, as data centers expand, resistance is mounting in the United States, the UK, and Latin America, where these facilities are rising in some of the most arid regions globally. Local opposition typically centers on the environmental repercussions and resource use of such monumental constructions.
Paz Peña, a researcher and fellow at the Mozilla Foundation, focuses on the social and environmental effects of data center technology in Latin America. She shared insights with the Guardian at the Mozilla Festival in Barcelona on how communities in Latin America are filing lawsuits to extract information from governments and corporations that prefer to keep it hidden. This dialogue has been condensed for brevity and clarity.
Read my Q&A with Paz Peña here.
Read more: “Cities that draw the line”: A community in Arizona fights against massive data centers
A newly released map of WASP-18b, a hot Jupiter exoplanet located approximately 325 light-years from Earth, reveals an atmosphere with distinct temperature zones, the hottest of which is scorching enough to break apart water vapor.
Hot Jupiter WASP-18b. Image credit: NASA’s Goddard Space Flight Center.
The WASP-18b map represents the first implementation of a method known as 3D eclipse mapping, or spectroscopic eclipse mapping.
The study builds on a 2D version of the method, published in 2023 by members of the same research team, which showed how eclipse mapping can leverage the sensitive observations of the NASA/ESA/CSA James Webb Space Telescope.
“This technique is unique in that it can simultaneously survey all three dimensions: latitude, longitude, and altitude,” stated Dr. Megan Weiner Mansfield, an astronomer at the University of Maryland and Arizona State University.
“This enables a greater level of detail than previously possible for studying these celestial objects.”
With this technology, astronomers can now begin to chart the atmospheric variations of many similar exoplanets observable through Webb, resembling how Earth-based telescopes once scrutinized Jupiter’s Great Red Spot and its striped cloud formations.
“Eclipse mapping allows us to capture images of exoplanets whose host stars are too bright for direct observation,” remarked Dr. Ryan Challener, an astronomer at Cornell University and the University of Maryland.
“Thanks to this telescope and groundbreaking technology, we can start to understand exoplanets similarly to the neighboring worlds in our solar system.”
Detecting exoplanets is quite challenging as they typically emit less than 1% of the brightness of their host star.
Eclipse mapping involves measuring small changes in the system’s total brightness as the planet passes behind its star, which gradually hides and then reveals different regions of the planet.
Scientists can link minute changes in light to specific regions, creating brightness maps. These maps can be rendered in various colors and translated into three-dimensional temperature readings based on latitude, longitude, and altitude.
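The core idea of linking light changes to regions can be illustrated with a deliberately simplified one-dimensional toy (not the team’s actual method, which fits many wavelengths and accounts for geometry and noise): if the planet’s dayside is divided into strips that reappear one at a time during egress, the observed flux is a running total of strip brightnesses, and differencing the light curve recovers the map.

```python
import numpy as np

# Toy dayside map: assumed relative brightness of three longitude strips.
strips = np.array([0.2, 0.5, 0.3])

# During egress each strip reappears in turn, so the observed planet flux
# after k strips have emerged is the cumulative sum of their brightnesses.
lightcurve = np.cumsum(strips)

# Inverting the toy model: successive flux differences give back the map.
recovered = np.diff(lightcurve, prepend=0.0)

print("light curve:", lightcurve)   # [0.2, 0.7, 1.0]
print("recovered map:", recovered)  # matches `strips`
```

Real eclipse maps solve a much harder, noisier version of this inversion, but the principle is the same: the timing of brightness changes encodes where on the planet the light comes from.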
“It’s quite difficult because you’re looking for changes where small sections of the planet become obscured and then revealed,” Challener explained.
WASP-18b has a mass approximately 10 times that of Jupiter, completes its orbit in just 23 hours, and achieves temperatures around 2,760 degrees Celsius (5,000 degrees Fahrenheit). Its strong signal makes it an excellent candidate for testing new mapping techniques.
While previous 2D maps relied on a single wavelength or color of light, the 3D map re-evaluated the same observations using Webb’s Near Infrared Imager and Slitless Spectrometer (NIRISS) across multiple wavelengths.
“Each color corresponds to different temperatures and altitudes within WASP-18b’s gaseous atmosphere, allowing them to be combined into a 3D map,” Dr. Challener noted.
“Mapping at wavelengths that water absorbs can indicate the layers of water in the atmosphere, while wavelengths that water doesn’t absorb facilitate deeper probing.”
“When combined, these provide a three-dimensional temperature map of the atmosphere.”
The new perspective uncovered spectroscopically distinct zones (with varying temperatures and potentially different chemical compositions) on the visible dayside of WASP-18b (the side that perpetually faces its star due to its tidally locked orbit).
The planet exhibits a circular “hotspot” that receives the most direct stellar light, with winds insufficient to redistribute the heat.
Surrounding the hotspot is a cooler “ring” located closer to the planet’s visible outer edge.
Interestingly, the measurements indicated that water vapor levels within the hotspot were lower than the average for WASP-18b.
“We believe this suggests that the heat in this area is so intense that water is beginning to decompose,” explained Challener.
“This was anticipated by theory, but it’s exhilarating to confirm it through actual observations.”
“Further observations from Webb could enhance the spatial resolution of this pioneering 3D eclipse map.”
“Already, this technique will aid in refining temperature maps of other hot Jupiters, which comprise hundreds of the more than 6,000 exoplanets discovered to date.”
Dr. Mansfield expressed: “It’s thrilling that we now possess the tools to visualize and map the temperature of another planet in such intricate detail.”
“We can apply this technique to other exoplanet types. For instance, even if a planet lacks an atmosphere, we might be able to use this method to map surface temperatures and discern its composition.”
“While WASP-18b was more predictable, we believe there’s potential to observe phenomena we never anticipated before.”
The map of WASP-18b is detailed in a paper published in the journal Nature Astronomy.
_____
RC Challener et al. 2025. Horizontal and vertical exoplanet thermal structures from JWST spectroscopic eclipse maps. Nat Astron, published online October 28, 2025; doi: 10.1038/s41550-025-02666-9
Complex problem solving can arise sooner in child development than previously believed
PlusOnevector/Alamy
Research reveals that four-year-olds can devise efficient strategies for complex challenges, such as independently creating sorting methods akin to those used by computer scientists. The researchers assert that these abilities appear much earlier than once thought, warranting a reevaluation of developmental psychology.
Past experiments led by Swiss psychologist Jean Piaget, popular in the 1960s, required children to physically arrange sticks by length. His findings indicated that structured strategies didn’t emerge until around age seven, as younger children tended to experiment haphazardly through trial and error.
Contrarily, recent work by Huiwen Alex Yang and his team at the University of California, Berkeley, shows that a notable fraction of four-year-olds can create algorithmic solutions for the same task, with more than a quarter exhibiting these skills by age five.
“Perhaps we haven’t given our children enough credit,” Yang states. “We must delve deeper into their reasoning capabilities.”
In a study involving 123 children aged 4-9, researchers asked them to sort digital images of bunnies by height. Initially, they could view groups of bunnies and directly compare their heights, allowing all children to sort them correctly using straightforward methods.
However, once the heights were obscured, the children had to compare only two bunnies at a time while being informed whether their order was correct. This approach necessitated the development of new strategies, as they couldn’t see the entire group simultaneously.
The researchers examined how the children developed new strategies, looking for evidence of known, systematic solutions. Overall, the children performed well above chance. Remarkably, they independently rediscovered at least two efficient sorting algorithms known from computer science: selection sort and shaker sort.
In 34% of trials, children employed various comparisons, signaling their use of known sorting algorithms for a portion of the time. Out of a total of 667 tests run, the children utilized selection and shaker sorting in 141 instances, with some employing combinations of both strategies. Notably, 67 out of 123 children demonstrated at least one recognizable algorithm, and 30 children used both at different stages in the experiment.
Nonetheless, age strongly influenced how many children used algorithms. Only 2.9% of four-year-olds applied identifiable methods, rising to 25.5% among five-year-olds and 30.7% among six-year-olds. By age nine, over 54% were using identifiable algorithms.
“This has long been a challenge to Piaget,” remarks Andrew Bremner from the University of Birmingham, UK. He acknowledges Piaget’s groundbreaking contributions to developmental psychology in setting out stages of learning, but emphasizes that Piaget often designed experiments without proper controls. “Critics have been eager to illustrate that children can achieve more than Piaget claimed.”
Essentially, while Piaget’s broad picture of child development was right, his estimates of the ages at which children reach certain milestones were overly pessimistic. This latest study strengthens the evidence for earlier development, and, interestingly, it concerns sorting, which Bremner describes as one of the last bastions of Piaget’s work now shown to apply to younger children than once believed.
“Children can successfully navigate this particular problem much sooner than we anticipated,” states Bremner. “They do not approach the world as mere blank slates, but rather implement strategic techniques in problem-solving.”
Sam Wass from the University of East London points out that Piaget contended children needed a comprehensive grasp of complex systems before they could devise strategies to engage with them, a notion this study suggests is unnecessary.
“This research signifies a significant trend in psychology that contests the assumption that intricate thoughts and understanding are prerequisites for executing complex behaviors,” notes Wass. “The study illustrates that complex behaviors may emerge from a far simpler array of rules.”
When the Mendocino earthquake struck off the California coast in 2024, it shook structures to their foundations, triggered a 3-inch tsunami, and sparked an intriguing scientific investigation in the server room of a nearby police station.
More than two years prior to the quake, scientists had installed a distributed acoustic sensing (DAS) interrogator at the Arcata Police Station near the coast. The device sends laser pulses down the fiber optic cable that provides the station’s internet connectivity and detects subtle changes in the light scattered back along the fiber.
Recently, researchers revealed in a study published in the journal Science that data collected from fiber optic cables can be used to “image” the Mendocino earthquake.
This research demonstrates how scientists can convert telecommunication cables into seismometers, providing detailed earthquake data at the speed of light. Experts noted that this rapidly advancing technology has the potential to enhance early earthquake warning systems, extending the time available for individuals to take safety measures, and could be critical for predicting major earthquakes in the future.
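For context, a distributed acoustic sensing (DAS) interrogator does not record shaking directly: it infers strain along the fiber from the phase shift of back-scattered laser light. The sketch below inverts the standard phase-strain relation using typical textbook values for wavelength, refractive index, and photoelastic factor; none of these numbers come from the study.

```python
import math

def strain_from_phase(delta_phi_rad, gauge_length_m, wavelength_m=1550e-9,
                      refractive_index=1.468, photoelastic_factor=0.78):
    """Invert the common DAS phase-strain relation:
        delta_phi = (4 * pi * n * xi / wavelength) * L * strain
    Parameter defaults are typical textbook values, not the study's."""
    return (delta_phi_rad * wavelength_m) / (
        4 * math.pi * refractive_index * photoelastic_factor * gauge_length_m)

# A 1-radian phase shift measured over a 10 m gauge section of fiber:
eps = strain_from_phase(1.0, 10.0)
print(f"strain ~ {eps:.2e}")  # on the order of 1e-8, i.e. tens of nanostrain
```

This sensitivity to tiny strains, repeated at thousands of points along a cable, is what lets a single fiber act like a dense array of seismometers.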
James Atterholt, a research geophysicist for the US Geological Survey and lead author of the study, stated, “This is the first study to image the seismic rupture process from such a significant earthquake. It suggests that early earthquake warning alerts could be improved using telecom fibers.”
The study proposes supplementing sparse seismometer coverage with data gathered from the extensive network of telecommunications cables operated by companies such as Google, Amazon, and AT&T, making the often costly monitoring of submarine earthquakes more affordable.
Emily Brodsky, a professor of geoscience at the University of California, Santa Cruz, asserted that “early earthquake warnings could be dramatically improved tomorrow” if scientists can establish widespread access to existing communication networks.
“There are no technical barriers to overcome, and that’s precisely what Atterholt’s research emphasizes,” Brodsky said in an interview.
In the long term, leveraging this technology through fiber optic cables could enable researchers to explore the possibility of forecasting some of the most devastating earthquakes in advance.
Scientists have observed intriguing patterns in underwater subduction zones prior to significant earthquakes, including Chile’s magnitude 8.1 quake in 2014 and the 2011 Tohoku earthquake and tsunami in Japan.
Both of these major earthquakes were preceded by what are known as “slow slip” events that gradually release energy over weeks or months without causing noticeable shaking.
The scientific community is still uncertain about what this pattern signifies, as high-magnitude earthquakes (8.0 or greater) are rare and seldom monitored in detail.
Effective monitoring of seismic activity using telecommunications networks could enable scientists to accurately document these events and assess whether discernible patterns exist that could help predict future disasters.
Brodsky remarked, “What we want to determine is whether the fault will slip slowly before it gives way entirely. We keep observing these signals from afar, but what we need is an up-close and personal instrument to navigate the obstacles.”
While Brodsky emphasized that it’s still unclear whether earthquakes in these extensive subduction zones can be predicted, she noted that the topic is a major source of scientific discussion, with the new fiber optic technology potentially aiding in resolving this issue.
For nearly 10 years, researchers have been investigating earthquake monitoring through optical fiber cables. Brodsky stated that the study highlights the need for collaboration among the federal government, scientific community, and telecommunications providers to negotiate access.
“There are valid concerns; they worry about people installing instruments on their highly valuable assets and about the security of cables and privacy,” Brodsky explained regarding telecom companies. “However, it is evident that acquiring this data also serves the public’s safety interests, which makes it a regulatory issue that needs to be addressed.”
Atterholt clarified that fiber optic sensing technology is not intended to replace traditional seismometers, but rather to complement existing data and is more cost-effective than placing seismometers on the seabed. Generally, using cables for earthquake monitoring does not interfere with their primary function of data transmission.
Jiaxuan Li, an assistant professor of geophysics and seismology at the University of Houston who was not involved in the study, noted that there are still technical challenges to implementing distributed acoustic sensing (DAS) technology, which currently functions over distances of approximately 90 miles.
Li also pointed out that similar methods are being employed in Iceland to monitor magma movements in volcanoes.
“We utilized DAS to facilitate early warnings for volcanic eruptions,” Li explained. “The Icelandic Meteorological Office is now using this technology for issuing early alerts.”
Additionally, the technique revealed that the Mendocino event was a rare “supershear” earthquake, in which the rupture front advances faster than the seismic shear waves it generates. Atterholt likened it to a fighter jet exceeding the speed of sound.
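The fighter-jet analogy can be made quantitative: a rupture is supershear when its propagation speed exceeds the shear-wave speed of the surrounding rock, that is, when the rupture “Mach number” exceeds 1. A toy Python calculation with illustrative speeds (not measurements from the study):

```python
def rupture_mach(rupture_speed_km_s, shear_wave_speed_km_s=3.5):
    """Ratio of rupture speed to shear-wave speed; > 1 means supershear.
    3.5 km/s is a typical crustal shear-wave speed, assumed for illustration."""
    return rupture_speed_km_s / shear_wave_speed_km_s

for v in (2.8, 4.2):  # a typical sub-shear rupture vs. a supershear one
    mach = rupture_mach(v)
    label = "supershear" if mach > 1 else "sub-shear"
    print(f"{v} km/s -> Mach {mach:.2f} ({label})")
```

Just as a supersonic jet produces a sonic boom, a supershear rupture concentrates its shaking into a shock-like front, which is part of why its hazard implications interest seismologists.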
The study’s serendipitous capture of the Mendocino rupture provides fresh insight into this poorly understood phenomenon.
“We still have not fully grasped why some earthquakes become supershear while others do not,” Atterholt reflected. “This could potentially alter the danger level of an earthquake, but the correlation remains unclear.”
Solar flares pose risks to GPS systems and communication satellites (Image: NASA/SDO/AIA)
AI models developed with NASA satellite imagery are now capable of forecasting the sun’s appearance hours ahead.
“I envision this model as an AI telescope that enables us to observe the sun and grasp its ‘mood’,” states Juan Bernabé-Moreno from IBM Research Europe.
The sun’s state is crucial because bursts of solar activity can bombard Earth with high-energy particles, X-rays, and extreme ultraviolet radiation. These events have the potential to disrupt GPS systems and communication satellites, as well as endanger astronauts and commercial flights. Solar flares may also be accompanied by coronal mass ejections, which can severely impact Earth’s magnetic field, leading to geomagnetic storms that could incapacitate power grids.
Bernabé-Moreno and his team at IBM and NASA created an AI model named Surya, from the Sanskrit word for ‘sun’, using nine years of data from NASA’s Solar Dynamics Observatory, a satellite that captures ultra-high-resolution images of the sun across 13 wavelength channels. The model learned to recognize patterns in this visual data and to forecast how the sun will appear at future points in time.
When tested against historical solar flare data, Surya predicted whether a flare would occur within the next day with 16% better accuracy than traditional machine learning models. The model can also generate visualizations of what a flare will look like up to two hours in advance.
“The strength of AI lies in its capacity to comprehend physics in unconventional ways. It enhances our intuition regarding physical processes,” remarks Lisa Upton at the Southwest Research Institute in Colorado.
Upton is especially eager to explore whether the Surya model can aid in predicting solar activity across the sun and at its poles, regions that NASA instruments cannot directly observe. While Surya does not explicitly aim to model the far side of the sun, it has shown promise in forecasting what the sun will look like several hours ahead as sections rotate into view, according to Bernabé-Moreno.
However, it remains uncertain whether AI models can overcome existing obstacles in accurately predicting how solar activity will influence Earth. Bernard Jackson from the University of California, San Diego, points out that there is currently no means to directly observe the magnetic field composition between the Sun and Earth, a crucial factor determining the direction of high-energy particles emanating from the star.
As stated by Bernabé-Moreno, this model is intended for scientific use now, but future collaborations with other AI systems that could leverage Surya’s capabilities may allow it to support power grid operators and satellite constellation owners as part of early warning frameworks.
Researchers have stabilized ring-shaped carbon molecules by adding “bumpers” to protect the atoms (Image: Harry Anderson)
An exotic variety of all-carbon molecule can now be studied at room temperature, only the second time this has been possible since the synthesis of the spherical buckyball some 35 years ago. The advance may lead to materials offering substantial efficiency gains for emerging electronic and quantum technologies.
Ring-shaped carbon molecules can display unique chemical characteristics and, like buckyballs and carbon nanotubes, can conduct electricity in unexpected ways. But these rings are fragile and often disintegrate before researchers can analyze them.
“Cyclic carbons are fascinating molecules that we’ve been endeavoring to create for quite some time,” said Harry Anderson from the University of Oxford. Until now, the molecules survived too briefly to be studied in detail, but Anderson and his team have discovered a method to stabilize cyclic carbon at room temperature.
This involves modifying the cyclic carbon structure. The researchers achieved it with unprecedented molecular constructs: rings of 48 carbon atoms, known as cyclo[48]carbon, or C48. They augmented C48 with a “bumper” that prevents the 48-atom ring from reacting with itself or with other molecules.
“There are no unnecessary embellishments,” remarked Max von Delius from Ulm University, Germany. “Simplicity possesses an exquisite elegance.”
The new structure, called a cyclo[48]carbon [4]catenane, remains stable for approximately two days, allowing researchers to investigate C48 for the first time. Interestingly, the molecule’s 48 carbons behaved as if they were arranged in an infinite chain, a configuration that allows charge to move between atoms without limit.
This remarkable conduction ability suggests that cyclic carbon could be utilized in a variety of next-generation technologies, including transistors, solar cells, semiconductors, and quantum devices. Nonetheless, further inquiry is necessary to validate this potential.
Innovative techniques for stabilizing cyclic carbon may also inspire other scientists to explore exotic carbon molecules. “I believe there is likely a competitive race happening right now,” said von Delius. “Consider this elongated ring as a stepping stone toward the creation of an infinite chain.”
Von Delius further explained that a solitary chain of carbon atoms could prove to be an even better conductor than rings like C48. “It’s truly remarkable, and it represents the next significant advancement,” he stated.
A British biotech firm, Basecamp Research, has spent recent years gathering extensive genetic data from microorganisms inhabiting extreme environments worldwide, uncovering more than a million previously unknown species and nearly 10 billion new genes. This vast database of planetary biodiversity is intended to help train AI “biology chatbots” to answer questions about life on Earth, although its effectiveness remains uncertain.
Jörg Overmann from the Leibniz Institute DSMZ, which houses one of the world’s most extensive collections of microbial cultures, asserts that while an increase in known genetic sequences is beneficial, it likely won’t lead to significant discoveries in drug development or chemistry without deeper insights into the organisms from which the sequences originated. “In the end, I’m skeptical that a better understanding of unique features will be achieved merely through brute force in the sequencing domain,” he remarks.
Recent years have seen a surge in machine learning models aimed at identifying patterns and predicting relationships within vast biological datasets. The most well-known of these is AlphaFold, which can predict the 3D structure of proteins from genetic data alone and whose developers at Google DeepMind were awarded the 2024 Nobel Prize in Chemistry.
This “genometric biology” approach has grown significantly, but according to Francis Din at the University of California, Berkeley, progress has been limited. One reason for this is the underrepresentation of biodiversity data. “Current biological models are primarily trained with datasets that favor well-studied species (e.g., E. coli, mice, humans), leading to poor prediction capabilities for traits associated with sequences from other branches of the Tree of Life,” she explains.
Basecamp researchers aim to bridge this biodiversity gap. Their expanding database now includes samples from over 120 locations across 26 countries, as detailed in a report by the company. Jonathan Finn, the company’s chief science officer, notes that their sampling efforts target extreme environments that have yet to be thoroughly examined, from the icy depths of the Arctic Ocean to hot springs in warm jungles. “Most of the samples we’re prioritizing are prokaryotic: bacteria, archaea, and their viruses,” Finn states. “We are also aware that some fungi are present.”
Genetic analyses of these samples have illuminated gene variations that are broadly shared across the Tree of Life. Based on this research, the company estimates that its data encompasses genetic information from over a million species not found in the public genomic databases used for training AI models. This includes around 9.8 billion newly identified genes, each potentially encoding useful proteins, increasing the overall known gene count tenfold, according to the researchers.
“By providing these models with richer data, we enhance our understanding of biological mechanisms,” Finn explains. “We aim to create a ChatGPT for Biology.”
It’s estimated that Earth hosts trillions of microorganism species, many of which remain poorly characterized. Thus, it’s not unexpected that the company has identified such a wealth of novel life forms. “As we explore more, discovering diverse gene variants becomes almost inevitable,” notes Leopold Parts at the Wellcome Sanger Institute in the UK.
Nevertheless, Basecamp promotes the notion that all newly discovered materials might hold value. It’s not alone in this sentiment. “This is among the most thrilling advances I’ve encountered in quite some time,” remarks Nathan Frey, a machine learning researcher at Genentech, a US biotech firm. He emphasizes that most AI biology projects focus on algorithm improvement or generating additional lab data rather than venturing out to collect samples directly from nature.
However, skepticism remains over whether the database will yield the meaningful advances the company aspires to. For a start, it is uncertain how much of this newfound protein diversity reflects valuable new functions, such as enzymes that can degrade plastic or proteins useful for gene editing. “They must demonstrate that this novelty has practical utility,” cautions Parts.
Moreover, if the new genes differ significantly from known genes, Overmann doubts how easily existing tools can predict their functionality, or how such data can be used to train new models. “For most of these genes, I can’t discern their functions,” he states. The company may have created a valuable new repository of biological data, but without traditional lab work, even the most advanced AI may struggle to interpret it.
This concept may surprise you, but certain tumors can indeed develop parts of your body, or at least fragments of them.
These peculiar growths, known as teratomas, originate from germ cells, which possess the extraordinary capability to transform into any type of tissue.
Germ cells typically evolve into sperm or eggs; however, when their development is disrupted, they can create a disorganized mass of tissue.
The term “teratoma” is derived from the Greek word teras, meaning “monster,” aptly reflecting its nature.
These tumors feature an astonishing array of components, ranging from hair and teeth to muscle tissues and even organ-like structures such as the thyroid and eyes.
While fully functional organs are exceedingly rare, the intricate nature of these tumors is undeniable.
Teratomas are most frequently observed in the ovaries and testes, but they can also appear in the midline of the body, such as the mediastinum (the chest area that houses the heart) and the base of the spine.
The majority of teratomas are benign and can be easily excised, though a small percentage—particularly those in men—can become malignant and necessitate urgent treatment. Surgery is generally the primary method for addressing these tumors, and the prognosis is typically favorable.
A teratoma can grow teeth, muscle, thyroid, eyes, and other tissues – Image credit: Science Photo Library
In addition to their medical implications, teratomas have offered significant insights into the science of cellular development.
They can include tissues derived from all three embryonic germ layers, making them an intriguing model for studying how cells differentiate and organize.
So, can a tumor grow organs? In a way, yes. However, these structures are often nonfunctional and poorly organized.
Teratomas serve as a striking and unsettling example of the bizarre and unpredictable aspects of human biology.
This article addresses the question posed by Anisa Manning and Steve Nage: “Can tumors grow their own organs?”
If you have questions, please email us at Question@sciencefocus.com or message us on Facebook, Twitter, or Instagram (please include your name and location).
Throughout history, the effects of wear and tear, along with natural aging, have resulted in oil paintings displaying cracks, discoloration, and peeling pigments, leaving lasting marks.
Repairing such damage is typically reserved for the most treasured artworks, requiring years of meticulous effort. However, a new approach promises to revolutionize this process, enabling the restoration of aging pieces in a matter of hours.
This innovative technique utilizes artificial intelligence and advanced digital tools to create reconstructions of damaged paintings, which are subsequently printed on a transparent polymer sheet and applied over the original artwork.
To showcase this method, Alex Kachkine, a graduate researcher at the Massachusetts Institute of Technology, undertook the restoration of a damaged panel painting attributed to an unknown Dutch master of the late 15th century, after a piece by Martin Schongauer.
The artwork, rich in detail, is visibly segmented into four panels, marred by fine cracks and speckled with countless tiny paint losses.
“Much of the damage involves small, intricate details,” Kachkine noted. “It has been deteriorating for centuries.”
Kachkine began the process by scanning the painting to determine the dimensions, shapes, and locations of the damaged areas, identifying 5,612 individual regions requiring repair.
Following this, a digital mask was created using Adobe Photoshop. Missing paint spots were filled in, with surrounding pigment colors adjusted accordingly. Repairs to patterned sections involved duplicating similar patterns from other areas of the painting. For instance, a missing facial feature of a child was sourced from a different work by the same artist.
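The fill-in step described above is, at its core, inpainting: locate the paint losses, then borrow color from intact surroundings. The deliberately simplified Python sketch below applies that logic to a toy image grid; it is a stand-in for the pattern-aware Photoshop workflow described in the article, not the actual pipeline.

```python
def fill_losses(image):
    """image: 2D grid of color values, with None marking paint losses.
    Fill each loss with the nearest intact pixel's color (Manhattan distance),
    a crude stand-in for the study's pattern-aware reconstruction."""
    h, w = len(image), len(image[0])
    intact = [(r, c) for r in range(h) for c in range(w)
              if image[r][c] is not None]
    filled = [row[:] for row in image]  # copy so the input is untouched
    for r in range(h):
        for c in range(w):
            if image[r][c] is None:
                nearest = min(intact, key=lambda p: abs(p[0] - r) + abs(p[1] - c))
                filled[r][c] = image[nearest[0]][nearest[1]]
    return filled

damaged = [
    ["red",  "red",  None  ],
    ["red",  None,   "blue"],
    ["blue", "blue", "blue"],
]
print(fill_losses(damaged))
```

A real conservation workflow would, as the article notes, also copy repeating patterns from elsewhere in the painting rather than relying only on adjacent pixels.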
Close-ups illustrating the masking results. Photo: Alex Kachkine, MIT
Once the mask was complete, it was printed on the polymer sheet and painted over, followed by a varnish application to ensure it harmonized with the painting.
In total, 57,314 colors were used to restore the damaged sections. The mask was designed so that the restoration remains visually convincing even if its alignment is slightly off.
Upon seeing the results, Kachkine expressed satisfaction. “We dedicated years to perfecting this method,” he remarked. “It was a significant relief to realize that this approach enabled us to reconstruct and piece together the surviving parts of the painting.”
This approach, as detailed in Nature, can only be applied to works featuring a smooth varnish that allows for flat application. The mask can be removed using conservator solvents without leaving marks on the original piece.
Kachkine envisions this technique enabling galleries to restore and showcase numerous damaged paintings that might otherwise lack the value warranting traditional restoration efforts.
Nonetheless, he recognizes the ethical considerations surrounding the use of film overlays on paintings, questioning whether they might disrupt the viewing experience and the appropriateness of features derived from other works.
In a related commentary, Professor Hartmut Kutzke from the Museum of Cultural History at the University of Oslo emphasized that this method enables quicker and more cost-effective recovery of damaged artworks compared to conventional methods.
“This technique is likely best suited for relatively low-value pieces kept in less visible locations, and may not be appropriate for renowned, high-value artworks,” he noted. “However, it could significantly increase public access to the arts, bringing damaged pieces out of storage and into the view of new audiences.”
Reports indicate that Meta is preparing to unveil a substantial $15 billion (£11 billion) bid aimed at achieving artificial “superintelligence.”
The competition in Silicon Valley to lead in artificial intelligence is intensifying, even as many current AI systems show inconsistent performance.
Meta CEO Mark Zuckerberg is set to announce the acquisition of a 49% stake in Scale AI, which is led by Alexandr Wang and was co-founded by Lucy Guo. One Silicon Valley analyst has described the strategic move as the act of a “wartime CEO.”
Superintelligence refers to an AI that can outperform humans across all tasks. Current AI systems have not yet even matched human capabilities across the board, a milestone known as artificial general intelligence (AGI), and recent studies reveal that many prominent systems falter when tackling highly complex problems.
Following notable progress from competitors like Sam Altman’s OpenAI and Google, as well as substantial investments in the underperforming Metaverse concept, observers are questioning whether Meta’s renewed focus on AI can restore its competitive edge and drive meaningful advancements.
Meta’s initiative has sparked fresh calls for European governments to embark on their own transparent research endeavors, ensuring robust technological development while fostering public trust, akin to CERN, the European particle physics laboratory in Switzerland.
Michael Wooldridge, a professor specializing in the foundations of artificial intelligence at the University of Oxford, stated, “They are maximizing their use of AI. We cannot assume that we fully understand or trust the technology we are creating. It’s crucial that governments collaborate to develop AI openly and rigorously, much as they did with CERN and its particle accelerators.”
Wooldridge commented that the reported acquisition appears to be Meta’s effort to reclaim its competitive edge following the Metaverse’s lackluster reception, noting that the company invested significantly in that venture.
However, he pointed out that the state of AI development remains uneven, with AGI still a distant goal, and “Superintelligence” being even more elusive.
“We have AI that can achieve remarkable feats, yet it struggles with tasks that capable GCSE students can perform,” he remarked.
Andrew Rogoyski, director of partnerships and innovation at the University of Surrey’s Institute for People-Centred AI, observed, “Meta’s approach to AI differs from that of OpenAI or Anthropic. For Meta, AI is not a core mission, but rather an enabler of its broader business strategy.”
“This allows them to take a longer-term view, rather than feeling rushed to achieve AGI,” he added.
Reports indicate that Wang is expected to take on a significant role within Meta.
Meta has declined to comment. Scale AI has been approached for additional comment.
Experts are urging public health groups to create alternatives to for-profit period-tracking applications, warning that women’s personal information is vulnerable to exploitation by private entities.
A study from the University of Cambridge reveals that smartphone apps used for menstrual cycle tracking are a “gold mine” for consumer profiling, collecting data on exercise, diet, medication, hormone levels, and birth control methods.
The economic worth of this information is often “greatly underestimated” by users who share intimate details in unregulated markets with profit-driven businesses, according to the report.
If mishandled, data from cycle tracking apps (CTAs) could lead to issues like employment bias, workplace monitoring, discrimination in health insurance, risks of cyberstalking, and restricted access to abortion services, research indicates.
The authors urge better regulation of the expanding femtech sector to safeguard users as data is sold in bulk, suggesting that apps should offer clear consent options for data collection, and calling on public health agencies to develop alternatives to commercial CTAs.
“Menstrual cycle tracking apps are marketed as empowering women and bridging the gender health gap,” stated Dr Stefanie Felsberger of the Minderoo Centre for Technology and Democracy at Cambridge, the report’s lead author. “Nevertheless, the underlying business model relies on commercial use, wherein user data and insights are sold to third parties for profit.
“As a consequence of the monetization of data collected by cycle tracking app companies, women face significant and alarming privacy and safety threats.”
The report indicates that many cycle tracking app users are women attempting to conceive, which makes the stored data highly commercially valuable: few other life events, aside from buying a home, trigger such dramatic shifts in consumer behavior.
Data indicating pregnancy is valued at more than 200 times data on age, gender, or location for targeted advertising. Furthermore, tracking cycle length allows advertisers to target women at different phases of their cycles.
The three most popular apps had amassed a combined total of roughly 500 million downloads as of 2024. The digital health sector focused on women’s wellness is anticipated to surpass $60 billion (£44 billion) by 2027, the report notes.
In light of the considerable demand for period tracking, the authors are calling on public health entities, including the UK’s NHS, to create transparent and reliable apps as alternatives to commercial offerings.
“The UK is ideally positioned to address researchers’ challenges related to menstrual data access, as well as privacy and data concerns, by developing an NHS app dedicated to tracking menstrual cycles,” the report adds, noting that Planned Parenthood in the US already offers its own app.
“Apps situated within public health frameworks, which are not primarily profit-driven, can significantly reduce privacy violations, gather essential data on reproductive health, and empower users regarding the utilization of their menstrual information.”
“Utilizing cycle tracking apps is beneficial. Women deserve better than having their menstrual tracking data treated merely as consumer data,” remarked Professor Gina Neff, executive director of the Minderoo Centre for Technology and Democracy.
In the UK and the EU, period tracking data falls under “special categories” of personal data and enjoys greater legal protection, similar to genetic and ethnicity data. In the United States, authorities have collected menstrual cycle data in efforts to restrict access to abortion services, according to the report.
In less than five years, you could have access to an error-free quantum supercomputer, according to IBM. The company has unveiled a roadmap for a machine named Starling, set to be available to academic and industrial researchers by 2029.
“These are scientific dreams that have been transformed into engineering achievements,” says Jay Gambetta at IBM. He mentions that he and his team have developed all the required components to make Starling a reality, giving them confidence in their ambitious timeline. The new systems will be based in a New York data center and are expected to aid in manufacturing novel chemicals and materials.
IBM has already constructed a fleet of quantum computers, yet for it and its many competitors the path to truly useful devices remains challenging. Errors continue to thwart efforts to harness quantum effects for solving problems that conventional supercomputers struggle with.
Hence the need for a fault-tolerant quantum computer that can autonomously correct its own mistakes; error correction is also what makes larger, more powerful devices possible. There is no universal agreement on the best strategy for achieving this, and research teams are exploring various approaches.
All quantum computers depend on qubits, but different groups build these essential units from particles of light, extremely cold atoms or, in Starling’s case, superconducting circuits. IBM is banking on two innovations to make Starling robust against errors.
First, Starling establishes new connections among its qubits, including those that are quite distant from one another. Each qubit is embedded within a chip, and researchers have innovated new hardware to link these components within a single chip and connect multiple chips together. This advancement enables Starling to be larger than its forerunners while allowing it to execute more complex programs.
According to Gambetta, Starling will employ tens of thousands of physical qubits, grouped into roughly 200 “logical qubits”, and will be capable of running 100 million quantum operations. Within each logical qubit, several physical qubits function together as a single computational unit that is resilient to errors. By comparison, today’s largest quantum computers house around 1,000 physical qubits, and the current record for logical qubits belongs to the quantum computing company Quantinuum, with a count of 50.
IBM is implementing a novel method for merging physical qubits into logical qubits via low-density parity-check (LDPC) codes, a significant shift from the error-correction methods used in other superconducting quantum computers. Gambetta notes that using LDPC codes was once seen as a “pipe dream,” but his team has now worked out the crucial details to make it feasible.
The benefit of this somewhat unconventional technique is that each logical qubit built with an LDPC approach requires fewer physical qubits than competing strategies. Consequently, devices can be smaller, and faster error correction becomes achievable.
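The core idea of combining many physical bits into one protected logical bit can be illustrated with a classical toy example. This is a minimal sketch of a classical parity-check code, not IBM's quantum LDPC code: each row of the parity-check matrix tests a subset of bits, and the pattern of failed checks (the syndrome) locates a single flipped bit.

```python
# Classical analogy only (not IBM's quantum LDPC code): a 3-bit
# repetition code protects one logical bit with three physical bits.
# Two parity checks compare neighboring bits; the resulting syndrome
# pinpoints which single bit was flipped.

H = [[1, 1, 0],
     [0, 1, 1]]  # parity-check matrix: each row checks two bits

def syndrome(bits):
    """Evaluate each parity check modulo 2."""
    return tuple(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)

# each possible single-bit-flip syndrome maps to the flipped position
SYNDROME_TO_FLIP = {(1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    """Detect and undo a single bit flip using the syndrome."""
    s = syndrome(bits)
    if s in SYNDROME_TO_FLIP:
        i = SYNDROME_TO_FLIP[s]
        bits = bits[:i] + [1 - bits[i]] + bits[i + 1:]
    return bits

print(correct([1, 0, 1]))  # middle bit flipped -> restored to [1, 1, 1]
```

Quantum LDPC codes generalize this picture: each check involves only a few qubits ("low density"), which is what keeps the physical-qubit overhead per logical qubit small.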
“IBM has consistently set ambitious goals and accomplished significant milestones over the years,” states Stephen Bartlett from the University of Sydney. “They have achieved notable innovations and improvements in the last five years, and this represents a genuine breakthrough.” He points out that both the long-distance qubit connections and the new hardware for linking the logical-qubit codes depart from the well-performing devices IBM previously developed, necessitating extensive testing. “It looks promising, but it also requires a leap of faith,” Bartlett adds.
Matthew Otten from the University of Wisconsin-Madison mentions that LDPC codes have only been seriously explored in recent years, and IBM’s roadmap clarifies how its approach would function. He emphasizes its importance because it helps researchers pinpoint potential bottlenecks and trade-offs; for example, he notes that Starling may operate more slowly than current superconducting quantum computers.
At its intended scale, the device could address challenges relevant to sectors such as pharmaceuticals. Here, simulations of small molecules or proteins on quantum computers like Starling could replace costly and cumbersome experimental steps in drug development, Otten explains.
IBM isn’t the only contender in the quantum computing sector planning significant advancements. Quantinuum and PsiQuantum, for instance, have also announced their intentions to develop fault-tolerant utility-scale machines by 2029 and 2027, respectively.
Proposals for regulating artificial intelligence have been delayed by at least a year, as the UK technology secretary aims to bring forward a comprehensive bill addressing the use of this technology and its associated copyrighted content.
Technology secretary Peter Kyle is set to present a “detailed” AI bill in the next parliamentary session to tackle pressing issues, including safety and copyright concerns.
This delay in regulation raises concerns ahead of the next King’s speech. While no date has been confirmed for this event, some reports suggest it may occur in May 2026.
Initially, Labour had intended to introduce a concise, targeted AI bill shortly after taking office, focusing specifically on large language models such as ChatGPT.
The proposed legislation would have mandated companies to provide their models for assessment by the UK AI Security Institute, aiming to address fears that advanced AI models might pose threats to humanity.
However, with the bill behind schedule, ministers have opted to align with the approach of Donald Trump’s administration in the US, fearing that excessive regulation might deter AI companies from operating in the UK.
Now, the minister is eager to incorporate copyright regulations for AI firms within the AI bill.
“We believe this framework can help us tackle copyright issues,” a government source commented. “We’ve been consulting with both creators and tech experts, and we’ve uncovered some intriguing ideas for the future. Once the data bill is finalized, our efforts will begin in earnest.”
The government is currently locked in a dispute with the House of Lords over copyright provisions in a separate data bill. Under those provisions, AI companies could use copyrighted materials to train their models unless rights holders opt out.
This has led to a strong backlash from the creative community, with notable artists like Elton John, Paul McCartney, and Kate Bush lending their support to a campaign against these changes.
Recently, peers backed an amendment to the data bill that would require AI companies to disclose whether they are using copyrighted materials to train their models, ensuring compliance with existing copyright law.
Kyle has expressed concern over the government’s handling of the issue but has resisted calls to backtrack. The government maintains that the data bill is not the right vehicle for addressing copyright and has vowed to publish an economic impact assessment along with several technical reports on copyright and AI.
In a letter to legislators on Saturday, Kyle further pledged to create a cross-party working group on AI and copyright.
Beeban Kidron, a film director and crossbench peer advocating for the creative sector, remarked on Friday that the minister “has neglected the creative industry and disregarded Britain’s second-largest industrial sector.”
Kyle told the Commons last month that AI and copyright should be dealt with in another “comprehensive” legislative package.
An overwhelming majority of the UK public (88%) believes the government should have the power to halt the use of an AI product if it is deemed a serious risk. The finding comes from research published in March by the Ada Lovelace Institute and the Alan Turing Institute, which also found that more than 75% of people think AI safety should be overseen by governments or regulators, not by private companies alone.
Scott Singer, an AI specialist at Carnegie Endowment for International Peace, noted: “The UK is strategically navigating between the US and the EU. Similar to the US, the UK is aiming to avoid overly stringent regulations that could stifle innovation while exploring meaningful consumer protection methods.”
Research indicates that artificial intelligence can spontaneously develop social conventions in much the same way humans do.
The study, a collaboration between City St George’s, University of London and the IT University of Copenhagen, suggests that large language model (LLM) AIs such as ChatGPT can begin to adopt linguistic forms and social norms when interacting in groups, without outside influence.
Ariel Flint Ashery, a doctoral researcher at City St George’s and the study’s lead author, said the work challenges the conventional perspective in AI research, in which AI is often treated as a set of solitary entities rather than social beings.
“Most research so far has treated LLMs in isolation, but real-world AI systems are increasingly intertwined and actively interacting,” says Ashery.
“We wanted to investigate whether these models could coordinate their behaviour by forming conventions, the building blocks of a society. The answer is yes, and what they do together cannot be reduced to what they do alone.”
In the study, groups of LLM agents ranging in size from 24 to 100 individuals were used; in each round, two agents were randomly paired and asked to select a “name” (a letter or a string of characters) from a shared pool of options.
When both agents selected the same name, they received a reward; when they chose differently, they were penalized and shown each other’s selections.
Although the agents were unaware of being part of a larger group and their memory was limited to recent interactions, shared naming conventions spontaneously emerged across the population without any predetermined solution, mimicking the communication norms of human culture.
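The setup described above can be sketched as a simple "naming game" simulation. This is an illustrative toy, not the paper's code: the agent count, name pool, memory length, and update rule are assumptions chosen to show how a shared convention can emerge from purely pairwise interactions.

```python
import random

def naming_game(n_agents=24, name_pool=("J", "K", "L", "M"),
                rounds=4000, memory=5, seed=0):
    """Toy naming game: paired agents each pick a name; a match is
    reinforced, while on a mismatch each agent adopts what it saw
    its partner choose."""
    rng = random.Random(seed)
    history = [[] for _ in range(n_agents)]  # names each agent leans toward

    def pick(h):
        # choose from the agent's recent memory, or at random if empty
        return rng.choice(h[-memory:]) if h else rng.choice(name_pool)

    for _ in range(rounds):
        a, b = rng.sample(range(n_agents), 2)
        na, nb = pick(history[a]), pick(history[b])
        if na == nb:              # reward: both remember the shared name
            history[a].append(na)
            history[b].append(nb)
        else:                     # penalty: each sees and adopts the other's pick
            history[a].append(nb)
            history[b].append(na)

    # consensus: fraction of agents whose latest name is the most common one
    latest = [h[-1] for h in history if h]
    top = max(set(latest), key=latest.count)
    return latest.count(top) / len(latest)

print(naming_game())  # fraction of agents sharing the most common name
```

With these dynamics a single name typically comes to dominate the population, even though no agent ever sees more than its own one-on-one interactions.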
Andrea Baronchelli, a professor of complexity science at City St George’s and the senior author of the study, likened the spread of behaviour to the emergence of new words and terms in our society.
“The agents don’t follow a leader,” he explained. “They are all actively trying to coordinate, and always in pairs. Each interaction is a one-on-one attempt to agree on a label, without any global view.
“Consider the term ‘spam.’ No official definition was set, but persistent adjustment efforts led to its universal recognition as a label for unwanted emails.”
Furthermore, the research team identified naturally occurring collective biases that could not be traced back to individual agents.
In the final experiment, a small cohort of AI agents successfully guided a larger group towards a novel naming convention.
This was highlighted as evidence of critical mass dynamics, suggesting that small but pivotal minorities can catalyze rapid behavioral changes in groups once a specific threshold is achieved, akin to phenomena observed in human societies.
Baronchelli remarked that the study “opens a new horizon for AI safety research, illustrating the profound impact of this new breed of agents who will begin to engage with us and collaboratively shape our future.”
He added: “The essence of ensuring coexistence with AI, rather than becoming subservient to it, lies not only in discussions but in negotiation, coordination, and shared actions, much like how we operate.”
The peer-reviewed research on emergent social conventions and collective bias in LLM populations is published in the journal Science Advances.
A significant breakthrough has been made in the field of cultured meat: scientists have successfully grown a nugget-sized piece of chicken using a new method that delivers nutrients and oxygen to artificial tissue.
In the past, lab-grown tissues were limited to cell spheres less than a millimeter thick, making it difficult to replicate the texture of real muscle. A team of Japanese researchers has now managed to grow a piece of chicken measuring 2.7 inches wide and 0.7 inches thick using a new lab tool, marking a major step forward for the technology. The work was published in the journal Trends in Biotechnology.
A key to the breakthrough was the development of a bioreactor that mimics the circulatory system, with 50 hollow fibers distributing nutrients and oxygen through the meat and encouraging the cells to grow in a specific direction.
This lab-grown chicken, although not made from food-grade ingredients and not yet tasted by scientists, showcases the potential of this technology for various applications beyond food production.
As the technology advances, challenges such as replicating the texture and flavor of traditional meat and improving oxygen delivery for larger pieces still need to be addressed. Automation of the process and the use of food-grade ingredients are crucial steps towards making lab-grown meat commercially viable.
Consumer attitudes towards cultured meat vary, with some expressing concerns about its safety and perceived unnaturalness. Despite these challenges, cultured meat is already available in some markets and holds promise for a more sustainable future.
The future of cultured meat holds potential for significant advancements in food production, regenerative medicine, drug testing, and biohybrid robotics, paving the way for a more sustainable and innovative future.
According to a team of Harvard physicists, the rotating structure of a light beam can evolve along a logarithmic spiral as it propagates.
The evolution of light beams carrying optical rotatum as a function of propagation distance. Image credit: Dorrah et al., doi: 10.1126/sciadv.adr9092.
“This is a new behavior of light, consisting of optical vortices that propagate through space and change in an anomalous way,” says Professor Federico Capasso, a senior author of the study.
“It can potentially help you manipulate small substances.”
In a surprising twist, the researchers discovered that light beams carrying orbital angular momentum can grow in mathematically recognizable patterns found throughout nature.
Echoing the Fibonacci sequence, their optical rotation traces the logarithmic spirals found in nautilus shells, sunflower seeds, and tree branches.
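The spiral geometry mentioned above has a simple closed form. As a minimal sketch of the underlying mathematics (the parameters here are illustrative, not values from the paper): a logarithmic spiral is r = a·e^(b·θ), and the golden spiral is the special case linked to the Fibonacci sequence, whose radius grows by the golden ratio φ every quarter turn.

```python
import math

def log_spiral(theta, a=1.0, b=0.3):
    """Point (x, y) on the logarithmic spiral r = a * exp(b * theta)."""
    r = a * math.exp(b * theta)
    return r * math.cos(theta), r * math.sin(theta)

# Golden spiral: the growth rate b for which one quarter turn
# scales the radius by the golden ratio phi (the Fibonacci case).
phi = (1 + math.sqrt(5)) / 2
b_golden = math.log(phi) / (math.pi / 2)

r0 = math.hypot(*log_spiral(0.0, b=b_golden))
r1 = math.hypot(*log_spiral(math.pi / 2, b=b_golden))
print(round(r1 / r0, 6))  # 1.618034: a quarter turn scales the radius by phi
```

The same self-similarity holds for any pair of angles a quarter turn apart, which is why the curve looks the same at every scale, in nautilus shells as in the beams described here.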
“It was one of the unexpected highlights of this study,” says Dr. Ahmed Dorrah, the first author of the study.
“Hopefully we can help others, who are experts in applied mathematics, to further study these light patterns and gain unique insight into their universal signature.”
This study builds on previous research in which the team used flat lenses etched with nanostructures to create light beams with controlled polarization and orbital angular momentum along the propagation path, converting an input beam into structures that change as they move.
Now they have introduced another degree of freedom into their light: a twist whose torque changes as the beam propagates.
“We show even more versatility in control and we can do it on a continuous basis,” said Alfonso Palmieri, co-author of the study.
Potential use cases for such exotic beams include controlling very small particles, such as colloids in suspension, by exerting new types of forces through the light’s unusual torque. It could also enable precise optical tweezers for delicate manipulations.
Others have demonstrated light with changing torque using high-intensity lasers and bulky setups, but the scientists created theirs with a single liquid-crystal display and a low-intensity beam.
By showing that this structured light can be created with industry-compatible, integrated devices, the team has lowered the barriers to entry for the technology far below those of previous demonstrations.
“Our research expands the previous literature on structured light, providing new modalities for structured light and sensing, and suggesting analogous effects in condensed-matter physics and Bose-Einstein condensates,” they concluded.
The study was published in the journal Science Advances.
____
Ahmed H. Dorrah et al. 2025. Rotatum of light. Science Advances 11 (15); doi: 10.1126/sciadv.adr9092
The UK government is developing a prediction programme that aims to identify people most likely to commit murder, using personal data on individuals known to law enforcement authorities.
Researchers are utilizing algorithms to analyze data from thousands of individuals, including crime victims.
Originally named the “Murder Prediction Project,” the initiative has been renamed to “Share data to improve risk assessment” by the Ministry of Justice. While officials hope the project will enhance public safety, critics have labeled it as “chilling and dystopian.”
The existence of the project was brought to light by the advocacy group Statewatch, which obtained details of its operation through Freedom of Information requests.
Statewatch alleges that data from individuals without criminal convictions will be utilized in the project, including sensitive details related to self-harm and domestic abuse. Authorities vehemently deny this, stating they only collect data on individuals with at least one criminal conviction.
While the government maintains the project is solely for research purposes at this stage, detractors argue that the data used could introduce biases in predictions, particularly affecting ethnic minorities and low-income populations.
The project, commissioned during Rishi Sunak’s time as prime minister, analyzes crime data from various official sources, including the probation service and data from Greater Manchester Police from before 2015.
Information processed includes names, dates of birth, gender, ethnicity, and unique identifiers on the police national database.
Statewatch’s claim regarding the inclusion of data from innocent individuals and those seeking police assistance is based on a data sharing agreement between the Ministry of Justice and Greater Manchester Police.
The shared data encompasses a range of personal information, including criminal convictions and details such as age at first reporting domestic violence or seeking police intervention.
Moreover, sensitive information categorized as “special categories of personal data” is included, covering health markers deemed to be predictive: data on mental health, addiction, and vulnerability.
Responding to criticisms, a Ministry of Justice spokesperson stated: “This project is strictly for research purposes. It utilizes existing data from prison, probation, and police records of convicted offenders to enhance understanding of probationer risks.”
Current risk-assessment tools used by the probation service will be supplemented with additional data sources to gauge whether this improves their effectiveness.
In summary, the Ministry of Justice asserts that the project aims to enhance risk assessment for serious crimes and ultimately contribute to public protection through improved analysis.
According to a team of physicists at the University of Massachusetts Amherst, newly discovered shape-recovering liquids defy decades of expectation derived from the laws of thermodynamics.
This image shows emulsion droplets stabilized by silica nanoparticles with nickel nanoparticles remaining on the drop surface. Image credit: Raykh et al. , doi: 10.1038/s41567-025-02865-1.
“Imagine your favorite Italian salad dressing,” said Thomas Russell, a professor at the University of Massachusetts Amherst.
“It consists of oil, water, and spices, and you shake it all together before pouring it over your salad.”
“It is those spices, particles that would otherwise remain separate from both liquids, that allow the water and oil to mix through a process called emulsification, in which small droplets of one liquid become dispersed in the other, as described by the laws of thermodynamics.”
“Emulsification underlies a vast amount of technology and applications that go far beyond seasonings,” said Anthony Raykh, a graduate student at the University of Massachusetts Amherst.
“One day I was in the lab mixing a batch of science salad dressing to see what I could create. Instead of spices, I used magnetized nickel particles, because materials containing magnetic particles can be designed to have all kinds of interesting and useful properties.”
“I made the mixture and shook it, and to my total surprise, it formed this beautiful, pristine urn shape.”
“No matter how many times or how violently I shook it, the shape always returned.”
Using additional lab experiments and simulations, the researchers determined that the mysterious phenomenon they had discovered was driven by the particles’ strong magnetism.
“A very close look at the individual magnetized nickel nanoparticles that form the water-oil boundary gives you very detailed information on how the different morphologies are assembled.”
“In this case, the particles are so strongly magnetized that their assembly interferes with the emulsification process described by the laws of thermodynamics.”
The particles that are usually added to oil and water mixtures reduce the tension at the interface between the two liquids, allowing them to be mixed.
The strongly magnetized particles, however, do the opposite: they actually increase the interfacial tension, bending the oil-water boundary into an elegant curve.
“When you see something impossible, you have to investigate,” Professor Russell said.
“We don’t have any applications yet in our discoveries, but we look forward to seeing how unprecedented states will affect the field of soft matter physics,” added Raykh.
The team’s work was published in the journal Nature Physics.
____
A. Raykh et al. Shape-recovering liquids. Nat. Phys. Published online April 4, 2025; doi: 10.1038/s41567-025-02865-1
Latin America has been a source of inspiration in many ways: a popular literary and musical genre, staple foods like the potato, even the famous Happy Meal traces its origins to the region. Now there is potential for Latin America to become a cradle for AI as well.
A coalition of research institutes is collaborating on a project called LatamGPT, which aims to create a tool that accounts for regional language differences, cultural experiences, and local specificity. The tool is intended to represent users in Latin America and the Caribbean more accurately than existing large language models (LLMs), which are primarily trained in English by US or Chinese companies.
The project lead, Rodrigo Duran Rojas, stressed the importance of developing local AI solutions to better serve Latin America. The goal is to offer a representative outlook tailored to the region, and initial tests have shown promising results in areas like South American history.
Over 30 institutions from countries across the hemisphere are involved in developing LatamGPT, including collaborations with Latinos in the US such as Freddy Vilci Meneseth, an associate professor of Hispanic studies at Lewis & Clark College in Oregon.
LatamGPT’s launch is planned for around June, following significant regional commitments to improved AI governance. Projects like monitoring deforestation in the Amazon rainforest and preserving historical documents from past dictatorships are contributing to the dataset used to train LatamGPT.
With a dataset of over 8 terabytes, LatamGPT aims to provide a nuanced, localized model for various applications. The project faces challenges in incorporating diverse dialects and complex grammatical structures, but its organizers emphasize the importance of collaboration for continued development.
Diversified dialects and complex grammar challenges
Efforts like LatamGPT, ChatGPT, and Google’s Gemini are working to incorporate a wider range of data and improve localization for non-English languages, but challenges in training models for languages with complex grammar and many dialects persist.
Despite these challenges, Latamgpt aims to address these issues through collaboration with institutions, libraries, and archives across the region. The project continues to receive data and feedback to enhance its capabilities and explore applications in public policy and regulation.
The long-term goal of LatamGPT is to create an interconnected network for developing AI solutions with a Latin American character, emphasizing the impact of collaboration in shaping the future of technology in Latin America and beyond.
New results from the Dark Energy Spectroscopic Instrument (DESI) collaboration reveal signs of time-varying dark energy.
Two “fans” corresponding to the two main areas observed by DESI, above and below the plane of the Milky Way galaxy. Image credit: DESI Collaboration/DOE/KPNO/NOIRLab/NSF/AURA/R. Proctor.
“The universe never ceases to surprise us,” said Dr. Arjun Dey, DESI project scientist at NSF NOIRLab and associate director for strategic initiatives at its Mid-scale Observatories.
“By revealing the evolving texture of the fabric of our universe in unprecedented detail, DESI and the Mayall Telescope are changing our understanding of the future of our universe and of nature itself.”
Used alone, the DESI data are consistent with Lambda CDM, the standard model of the universe, in which CDM is cold dark matter and Lambda represents the simplest form of dark energy, acting as a cosmological constant.
When combined with other measurements, however, there are growing indications that the effect of dark energy may be weakening over time, and that other models may be more appropriate.
Those other measurements include the light left over from the dawn of the universe (the cosmic microwave background, or CMB), distance measurements from supernovae, and observations of how light from distant galaxies is distorted by the gravity of dark matter (weak lensing).
So far, the preference for evolving dark energy has not risen to 5 sigma, the gold standard in physics that represents the commonly accepted threshold for a discovery.
However, various combinations of DESI data with the CMB, weak-lensing, and supernova datasets yield significances ranging from 2.8 to 4.2 sigma.
The analysis used a blinding technique that hides the results from the scientists until the end, reducing unconscious bias about the data. This approach sets a new standard for how data from large spectroscopic surveys are analyzed.
DESI is a cutting-edge instrument mounted on the Nicholas U. Mayall 4-meter Telescope at Kitt Peak National Observatory, a program of NSF NOIRLab.
It can capture light from 5,000 galaxies simultaneously, allowing it to carry out one of the most extensive surveys to date.
The experiment is currently in the fourth of its five years surveying the sky, with plans to measure around 50 million galaxies and quasars (very distant but bright objects with black holes at their cores) and more than 10 million stars by the time the project is finished.
The new analysis uses data from the first three years of observations and includes nearly 15 million of the best-measured galaxies and quasars.
This is a major leap over DESI’s initial analysis: with more than twice as much data, the experiment’s accuracy improves, strengthening the suggestion of evolving dark energy.
DESI tracks the effects of dark energy by studying how matter is spread throughout the universe.
Events in the very early universe left subtle patterns in how matter is distributed, a feature called baryon acoustic oscillations (BAO).
The BAO pattern acts as a standard ruler: its apparent size at different times is directly affected by how the universe was expanding.
By measuring the ruler at different distances, researchers can chart the strength of dark energy throughout cosmic history.
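The standard-ruler logic can be sketched numerically. This is an illustrative toy, not DESI's analysis pipeline: it computes the comoving distance in a flat universe with a constant dark-energy equation of state w (w = -1 is a cosmological constant), then shows how the apparent angle of the roughly 150 Mpc BAO ruler at a given redshift depends on the dark-energy model. All parameter values are approximate textbook numbers, not DESI measurements.

```python
import math

def comoving_distance(z, omega_m=0.3, w=-1.0, h=0.7, steps=10000):
    """Comoving distance (Mpc) in a flat universe whose dark energy has a
    constant equation of state w; w = -1 is a cosmological constant."""
    c = 299792.458           # speed of light, km/s
    H0 = 100.0 * h           # Hubble constant, km/s/Mpc
    def E(zp):               # H(z) / H0 for flat matter + dark energy
        omega_de = 1.0 - omega_m
        return math.sqrt(omega_m * (1 + zp) ** 3
                         + omega_de * (1 + zp) ** (3 * (1 + w)))
    # midpoint-rule integration of dz' / E(z') from 0 to z
    dz = z / steps
    integral = sum(1.0 / E((i + 0.5) * dz) for i in range(steps)) * dz
    return (c / H0) * integral

# The ~150 Mpc BAO ruler subtends a different apparent angle at the
# same redshift depending on the dark-energy model:
r_bao = 150.0  # Mpc, approximate comoving sound horizon
for w in (-1.0, -0.8):
    d = comoving_distance(1.0, w=w)
    print(f"w = {w}: BAO angle ~ {math.degrees(r_bao / d):.2f} deg")
```

Comparing the measured angle of the BAO feature at many redshifts against curves like these is, in spirit, how surveys constrain whether dark energy behaves as a constant or evolves.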
The DESI collaboration has begun additional analyses to extract more information from the current dataset, and DESI continues to collect data.
Other experiments coming online over the next few years will also provide complementary datasets for future analyses.
“Our results are fertile ground for our theory colleagues as they look at new and existing models, and we look forward to what they come up with,” says Dr. Michael Levi, DESI director and a scientist at Lawrence Berkeley National Laboratory.
“Whatever the nature of dark energy, it shapes the future of our universe. It is very noteworthy that we look up at the sky with a telescope and try to answer one of the biggest questions humanity has ever asked.”
“These are remarkable results from a very successful project,” said Dr. Chris Davis, NSF program director for NSF NOIRLab.
“The powerful combination of the NSF Mayall Telescope and DOE’s Dark Energy Spectroscopic Instrument demonstrates the benefits of federal agencies collaborating on fundamental science to improve our understanding of the universe.”
The physicists shared their findings in a series of papers posted on arXiv.org.
Niobium phosphide conducts electricity better than copper in films a few atoms thick. What's more, these films can be created and deposited at low enough temperatures to be compatible with modern computer chip manufacturing, according to a team of scientists led by Stanford University.
Amorphous niobium phosphide films a few atoms thick have better surface conductivity, making the entire material a better conductor. Image credit: Il-Kwon Oh / Asir Khan.
“We are breaking the fundamental bottlenecks of traditional materials like copper,” said Dr. Asir Intisar Khan of Stanford University.
“We show that our niobium phosphide conductor can transmit signals faster and more efficiently through ultra-thin wires.”
“This could make future chips more energy efficient, and even small gains add up when large numbers of chips are used, such as in the large data centers that store and process today’s information.”
Niobium phosphide is what researchers call a topological semimetal, meaning that while the entire material conducts electricity, its outer surface is more conductive than its interior.
As a film of niobium phosphide becomes thinner, its interior shrinks but its surface stays the same, so the surface carries a greater share of the electric current, making the material as a whole a better conductor.
Traditional metals such as copper, on the other hand, become less conductive when thinned below about 50 nm.
The researchers found that niobium phosphide is a better conductor than copper at film thicknesses of 5 nm or less, even when operating at room temperature.
At this scale, copper wires struggle to handle rapid electrical signals and lose more energy as heat.
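The surface-versus-bulk reasoning above can be captured in a toy model. All numbers here are made-up illustrative assumptions, not measurements from the study: the semimetal gains a fixed surface contribution that dominates as the film thins, while the ordinary metal suffers a size-effect penalty once the film is thinner than the electron mean free path.

```python
# Toy model (illustrative numbers only, not data from the study):
# a topological semimetal film adds a fixed surface sheet conductance
# to its bulk channel, while an ordinary metal loses conductivity
# once the film is thinner than the electron mean free path.

def semimetal_conductivity(t_nm, sigma_bulk=0.5, g_surface=10.0):
    """Effective conductivity: bulk term plus a surface term whose
    relative weight grows as the thickness t_nm shrinks."""
    return sigma_bulk + g_surface / t_nm

def metal_conductivity(t_nm, sigma_bulk=5.0, mfp_nm=40.0):
    """Crude Fuchs-like size effect: surface scattering degrades
    conductivity when the film is thinner than the mean free path."""
    return sigma_bulk * t_nm / (t_nm + mfp_nm)

for t in (50, 20, 5, 2):
    better = semimetal_conductivity(t) > metal_conductivity(t)
    print(f"{t} nm: semimetal beats metal? {better}")
```

With these assumed parameters the ordinary metal wins at 50 nm and 20 nm, but the semimetal wins at 5 nm and below, mirroring the crossover the researchers report for niobium phosphide versus copper.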
“Really high-density electronics require very thin metal connections, and if those metals don’t conduct well, you lose a lot of power and energy,” said Eric Pop, a professor at Stanford University.
“If we had better materials, we could spend less energy on thin wires and more energy on actual calculations.”
Many researchers have been working to find better conductors for nanoscale electronics, but so far the best candidates have had very precise crystal structures that must be formed at very high temperatures.
The niobium phosphide film the researchers created is the first example of an amorphous material that becomes a better conductor as it becomes thinner.
“It has been thought that if you want to take advantage of these topological surfaces, you need good single-crystal films, which are very difficult to deposit,” said Akash Ramdas, a doctoral student at Stanford University.
“Now we have another class of materials, topological semimetals, that could serve as a way to reduce energy usage in electronics.”
Niobium phosphide films do not need to be single crystal, so they can be made at low temperatures.
The scientists deposited the film at 400 degrees Celsius (752 degrees Fahrenheit). This temperature is low enough to avoid damage or destruction to existing silicon computer chips.
“If you have to make a perfect crystalline wire, that doesn't work in nanoelectronics,” says Yuri Suzuki, a professor at Stanford University.
“But if you can make them amorphous or slightly disordered and still give them the properties you need, that opens the door to potential real-world applications.”
The authors are also working on fabricating the niobium phosphide film into thin wires for additional testing.
They want to determine how reliable and effective the material is in real-world applications.
“We’ve taken some really cool physics and transplanted it into the world of applied electronics,” Professor Pop said.
“This type of breakthrough in amorphous materials could help address power and energy challenges in current and future electronics.”
Legislation supported by Labour, the Conservative Party, and child-protection experts would require social media companies to exclude under-16s from algorithms designed to serve up addictive content. The new Safer Phones Bill, introduced by a Labour MP, prioritizes a review of mobile phone sales to teenagers and potentially additional safeguards for under-16s. Health secretary Wes Streeting voiced support for the bill, citing the negative impact of smartphone addiction on children’s mental health.
The bill, championed by Labour MP Josh MacAlister, is receiving positive feedback from ministers, although there is hesitation about banning mobile phone sales to teens. With backing from former Conservative education secretary Kit Malthouse and education select committee chair Helen Hayes, the bill aims to address concerns about children’s excessive screen time and exposure to harmful content.
Mr. MacAlister’s bill, which focuses on protecting children from online dangers, will be debated by MPs this week. It includes measures to raise the internet age of majority to 16 and to give Ofcom regulatory powers over children’s online safety. The proposed legislation has garnered support from various stakeholders, including former children’s minister Claire Coutinho and children’s charities.
Concerns about the impact of smartphones on children’s well-being have prompted calls for stricter regulations on access to addictive online content. While Prime Minister Keir Starmer is against a blanket ban on mobile phones for under-16s, there are ongoing discussions about how to ensure children’s safety online without restricting necessary access to technology.
The bill aims to regulate online platforms and mobile phone sales to protect young people from harmful content and addiction. Mr. MacAlister’s efforts to promote children’s digital well-being have garnered significant support from policymakers and child welfare advocates.
As the government considers the implications of the bill and the Online Safety Act, which is currently pending full implementation, efforts to protect children from online risks continue to gain momentum. It remains crucial to strike a balance between enabling technology access and safeguarding children from potential online harms.
In an experiment, physicists from the University of Bonn and the University of Kaiserslautern-Landau observed and studied the properties of a one- to two-dimensional crossover in a gas of harmonically confined photons (light particles). The photons were confined in dye microcavities, while polymer nanostructures provided the trapping potential for the photon gas. By varying the aspect ratio of the trap, the researchers tuned it from an isotropic two-dimensional confinement to a highly elongated one-dimensional trapping potential. The team’s paper was published in the journal Nature Physics.
A polymer applied to the reflective surface confines the photon gas within a parabolic trapping potential. The narrower this parabola, the more one-dimensional the gas behaves. Image courtesy of University of Bonn.
“To create a gas from photons, you need to concentrate a lot of photons in a limited space and cool them at the same time,” said Dr. Frank Vewinger from the University of Bonn.
In their experiments, Dr. Vewinger and his colleagues filled a small container with a dye solution and used a laser to excite it.
The resulting photons bounced back and forth between the reflective walls of the container.
Each time they collided with a dye molecule they cooled, eventually condensing the photon gas.
By modifying the reflective surface, the researchers could control the dimensionality of the gas.
“We were able to coat the reflective surface with a transparent polymer and create tiny microscopic protrusions,” said Dr Julian Schulz, a physicist at the University of Kaiserslautern-Landau.
“These protrusions allow us to confine and condense photons into one or two dimensions.”
“These polymers act as a kind of channel for the light,” said Dr Kirankumar Kalkihari Umesh, a physicist at the University of Bonn.
“The narrower this gap becomes, the more one-dimensional the gas behaves.”
In two dimensions, there is a precise temperature at which condensation occurs, just as water freezes at exactly 0 degrees Celsius – physicists call this a phase transition.
“But if you create a one-dimensional gas instead of a two-dimensional one, things are a bit different,” Dr. Vewinger said.
“So-called thermal fluctuations do occur in the photon gas, but in two dimensions they are so small that they have no practical effect.”
“But in one dimension, these fluctuations can make waves, figuratively speaking.”
These fluctuations destroy the order in a one-dimensional system, causing different regions in the gas to no longer behave in the same way.
As a result, phase transitions that are still precisely defined in two dimensions become increasingly blurred as the system becomes one-dimensional.
However, their properties are still governed by quantum physics, just like for two-dimensional gases, and these types of gases are called degenerate quantum gases.
It is as if water, instead of freezing abruptly at a sharp temperature, gradually turned to ice as it got colder.
“We were able to investigate this behavior for the first time in the transition from a two-dimensional to a one-dimensional photon gas,” Dr. Vewinger said.
The authors were able to demonstrate that a one-dimensional photon gas indeed does not have a precise condensation point.
By making small changes to the polymer structure, it becomes possible to study in detail what happens during the transition between different dimensions.
Although this is still considered fundamental research at this point, it has the potential to open up new applications of quantum optical effects.
_____
K. Kalkihari Umesh et al. Dimensional crossover in a quantum gas of light. Nature Physics, published online September 6, 2024; doi: 10.1038/s41567-024-02641-7
Using data collected by the Advanced Stellar Compass (ASC) and Stellar Reference Unit (SRU) on NASA’s Juno spacecraft, scientists have created the first complete 3D radiation map of the Jupiter system. The map characterizes the intensity of high-energy particles near the orbit of the icy moon Europa and shows how the radiation environment is shaped by small moons orbiting close to Jupiter’s rings.
This diagram shows a model of radiation intensity at different points on the Juno spacecraft’s orbit around Jupiter. Image credit: NASA / JPL-Caltech / DTU.
“With Juno, we’ve been trying to invent new ways to use sensors to learn about nature, and we’ve been using many of our science instruments in ways that were not originally intended,” said Juno principal investigator Dr. Scott Bolton, a planetary scientist at the Southwest Research Institute.
“This is the first detailed radiation map of this high-energy region and marks a major step forward in understanding how Jupiter’s radiation environment works.”
“It’s significant that we’ve been able to map this area in detail for the first time, because we don’t have instruments designed to look for radiation.”
“This map will help plan observations for future missions to the Jovian system.”
Juno’s ASC instrument, consisting of four star cameras mounted on the spacecraft’s magnetometer boom, takes images of the stars to determine the spacecraft’s orientation in space.
But the instrument is also a valuable detector for detecting the flow of high-energy particles within Jupiter’s magnetosphere.
The cameras record “hard radiation” – ionizing radiation that affects the spacecraft with enough energy to penetrate the ASC’s shielding.
“The ASC takes an image of the stars every quarter of a second,” said Juno scientist Dr. John Leif Jorgensen, a researcher at the Technical University of Denmark.
“The highly energetic electrons that penetrate the shield leave distinctive signatures in our images, like firefly trails.”
“The device is programmed to count the number of fireflies, allowing us to accurately calculate the amount of radiation.”
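The firefly-counting idea can be illustrated with a minimal sketch. This is not the actual flight software; the frame format, threshold, and cluster-counting scheme below are invented for illustration. Counting "fireflies" then amounts to finding connected clusters of anomalously bright pixels:

```python
from collections import deque

def count_radiation_hits(frame, threshold=200):
    """Count connected clusters of bright pixels ("fireflies") in a
    grayscale frame, using a breadth-first flood fill per cluster."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    hits = 0
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                hits += 1                       # new cluster found
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill the cluster
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return hits

# Three separate bright streaks in a mostly dark 4x5 frame:
frame = [
    [0,   255, 255, 0,   0],
    [0,   0,   0,   0,   255],
    [0,   0,   0,   0,   255],
    [255, 0,   0,   0,   0],
]
print(count_radiation_hits(frame))  # 3
```

A real pipeline would also mask known stars so that only radiation hits remain, but the counting step would look similar.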
Juno’s orbit is constantly changing, so the spacecraft has traversed nearly every region of space near Jupiter.
The ASC data suggests that there is more very high-energy radiation, relative to low-energy radiation, near Europa’s orbit than previously thought.
The data also confirm that there are more energetic electrons on the side of Europa facing in the direction of its orbital motion than on the rear side of Europa.
This is because most of the electrons in Jupiter’s magnetosphere pass Europa from behind due to the planet’s rotation, but the very energetic electrons flow backwards, like a fish swimming upstream, and slam into the front of Europa.
The Jupiter radiation data is not the ASC’s first scientific contribution to the mission: even before it arrived at Jupiter, ASC data was used to measure interstellar dust bombarding Juno.
The dust was detected by identifying tiny pieces of the spacecraft ejected when fine dust particles struck Juno at high speed; using the same techniques, the imager also discovered a previously unknown comet.
Like Juno’s ASC, the SRU serves as both a radiation detector and a low-light imaging instrument.
Data from both instruments show that, like Europa, small shepherd moons that orbit inside or near the edges of Jupiter’s rings and help maintain their shape also appear to interact with the planet’s radiation environment.
If the spacecraft flies over magnetic field lines that connect to ring moons or dense dust, the radiation dose to both the ASC and SRU drops sharply.
The SRU is also collecting rare low-light images of the rings from Juno’s unique vantage point.
“Many mysteries remain about how Jupiter’s rings formed, and very few images have been collected by previous spacecraft,” said SRU principal investigator Dr. Heidi Becker, a scientist at NASA’s Jet Propulsion Laboratory.
“If you’re lucky, you might even be able to capture a little shepherd moon in your photo.”
“These images allow us to get a better idea of where the ring moons are currently located and to see the distribution of dust relative to the distance from Jupiter.”
The survey results will be published in the journal Geophysical Research Letters.
Goldene, gold in the form of monolayer sheets, is prepared by etching away the titanium carbide (Ti₃C₂) from slabs of titanium gold carbide (Ti₃AuC₂).
Goldene preparation. Image credit: Kashiwaya et al., doi: 10.1038/s44160-024-00518-4.
“When you make a material extremely thin, something unusual happens, just as it did with graphene. The same thing happens with gold,” said Dr. Shun Kashiwaya, a researcher at Linköping University.
“As you know, gold is normally a metal, but if it's an atomic layer thick, it can become a semiconductor instead.”
To create goldene, Dr. Kashiwaya and his colleagues used a three-dimensional base material with gold embedded between layers of titanium and carbon. However, producing goldene turned out to be difficult.
“We created the base material with a completely different application in mind,” said Professor Lars Hultman from Linköping University.
“We started with a conductive ceramic called titanium silicon carbide, which has a thin layer of silicon.”
“Then the idea was to coat the material with gold to make the contacts. However, when the component was exposed to high temperatures, the silicon layer inside the substrate was replaced by gold.”
This phenomenon is called intercalation, and what the researchers discovered was titanium-gold carbide.
For several years, the authors had been working with titanium gold carbide without knowing how the gold could be exfoliated, or “panned out.”
They accidentally discovered a method that has been used in Japanese forging for more than 100 years.
The method uses Murakami’s reagent, which etches away carbon residue and changes the color of steel, for example in knife making. However, the blacksmiths’ exact recipe could not be used directly.
“We tried varying the concentration of Murakami's reagent and the etching time. One day, one week, one month, several months. What we noticed was that the lower the concentration and the longer the etching process, the better. But even that wasn't enough,” Dr. Kashiwaya said.
Etching must also be performed in the dark, because the reaction produces cyanide, which dissolves gold when exposed to light. The next step was to stabilize the gold sheets: a surfactant, a long molecule that separates and stabilizes the sheets, was added to keep the exposed two-dimensional sheets from curling up.
“The goldene sheets sit in a solution, a bit like cornflakes in milk. We use a sort of ‘sieve’ to collect the gold and examine it under an electron microscope to see if we were successful. And we were,” Dr. Kashiwaya said.
“Goldene’s new properties arise because gold acquires two free bonds when it is two-dimensional.”
“Thanks to this, future applications could include carbon dioxide conversion, hydrogen-producing catalysis, selective production of value-added chemicals, water purification, communications, and more.”
“Additionally, the amount of gold used in today's applications can be significantly reduced.”
The team’s work was published in the journal Nature Synthesis.
_____
Shun Kashiwaya et al. Synthesis of goldene comprising single-atom layer gold. Nat. Synth., published online March 18, 2024; doi: 10.1038/s44160-024-00518-4
Sarah Wiggard and Ralf Sommer / Max Planck Institute for Biology Tübingen
Tiny soil worms called nematodes usually feed on bacteria and algae and have small mouths to match their diet. However, when baby nematodes are fed fungus, their mouths double in size, giving them the ability to cannibalize their companions.
Ralf Sommer and his colleagues at the Max Planck Institute for Biology in Tübingen, Germany, made the discovery while studying the development of the predatory soil nematode Allodiplogaster sudhausi. When larvae were raised on a Penicillium fungus, a mold best known from cheese-making, some of them grew into cannibals with giant mouths. “We were shocked,” he says.
The researchers knew that the different mouth shapes seen in this species result from different feeding habits: nematodes that feed on bacteria have narrow mouths, while nematodes that prey on much smaller nematode species have slightly wider mouths. But this extreme variant, which the researchers called “teratostomia,” or Te morphology, had not been previously documented.
Sommer and colleagues investigated the genetics underlying these different mouth shapes and found that all three were controlled by the same sulfatase gene, but only heightened activity of that gene appears to produce the giant, gaping mouth in A. sudhausi. The species’ complete set of genetic instructions was duplicated relatively recently in its evolution, Sommer says, so the resulting doubled gene pairs may have facilitated the origin of the worm’s giant mouth.
Because the fungal diet was low in nutrients and more Te forms appeared under high-density conditions, the researchers think the Te form and its associated cannibalistic habits may have evolved as a response to the stresses of starvation and crowding.
Nicholas Levis of Indiana University points out that a similar phenomenon is seen in several other species. For example, the tadpoles of spadefoot toads and some salamanders can develop into cannibalistic carnivores depending on environmental conditions, Levis says.
But even in such cases, animals often avoid eating their own kin. Te nematodes are indiscriminate, preying even on genetically identical neighbors. Levis says this is a “surprising finding” that could indicate the developmental strategy is “really desperate.”
“This discovery…made me wonder how much more diverse there is in the natural world than what we see,” Levis says. “How many other hidden ‘monsters’ are there waiting to be discovered under the right environmental conditions?”
In two companion papers published today in the journal Nature and the Proceedings of the National Academy of Sciences, researchers have uncovered patterns in the evolutionary history of birds after the mass extinction event that wiped out the dinosaurs 66 million years ago. The authors observed rapid increases in effective population size, substitution rate, and relative brain size in early birds, shedding light on new adaptive mechanisms that drove bird diversification in the aftermath of this pivotal event. The researchers also took a closer look at one branch of the new family tree and found that flamingos and pigeons are more distantly related than previous genome-wide analyses had indicated.
The latest bird family tree outlining 93 million years of evolutionary relationships among 363 bird species. Image credit: Jon Fjeldså / Josefin Stiller.
“Our goal is to reconstruct the entire evolutionary history of all birds,” said Professor Siavash Mirarab, a researcher at the University of California, San Diego.
This work is part of the Bird 10,000 Genomes (B10K) Project, a multi-institutional effort led by the University of Copenhagen, Zhejiang University, and the University of California, San Diego, which aims to produce draft genome sequences for all of the roughly 10,500 extant bird species.
At the heart of these studies is a suite of algorithms known as ASTRAL, developed by Professor Mirarab and colleagues to infer evolutionary relationships with unprecedented scalability, accuracy, and speed.
By harnessing the power of these algorithms, the researchers integrated genomic data from over 60,000 genomic regions, providing a robust statistical foundation for the analysis.
The researchers then examined the evolutionary history of individual segments across the genome.
From there, they pieced together a mosaic of gene trees and compiled them into a comprehensive species tree.
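As a toy illustration of how many gene trees can be summarized into one species tree: the sketch below is not the actual ASTRAL algorithm (which optimizes quartet support over unrooted trees); it simply tallies, for each trio of taxa, the rooted topology supported by the most gene trees, and the data format is invented for the example.

```python
from collections import Counter
from itertools import combinations

def majority_triplets(gene_trees):
    """For each trio of taxa, report the rooted triplet topology
    backed by the most gene trees (a toy summary method)."""
    taxa = sorted({t for tree in gene_trees for trio in tree for t in trio})
    consensus = {}
    for trio in combinations(taxa, 3):
        votes = Counter(tree[trio] for tree in gene_trees if trio in tree)
        if votes:  # skip trios no gene tree resolves
            consensus[trio] = votes.most_common(1)[0][0]
    return consensus

# Each "gene tree" is pre-digested into its rooted triplet topologies
# (a real pipeline would extract these from Newick-format trees).
gene_trees = [
    {("A", "B", "C"): "((A,B),C)"},
    {("A", "B", "C"): "((A,B),C)"},
    {("A", "B", "C"): "((A,C),B)"},  # a discordant gene tree
]
print(majority_triplets(gene_trees))  # {('A', 'B', 'C'): '((A,B),C)'}
```

The point of such summary methods is that individual gene trees can disagree (as the third one does here) while the majority signal still recovers the species relationships.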
This meticulous approach has allowed the researchers to construct a new and improved bird genealogy that depicts complex divergence events with remarkable accuracy and detail, even in the face of historical uncertainty.
“We found that our method of adding tens of thousands of genes to the analysis is indeed necessary to untangle the evolutionary relationships between bird species,” Professor Mirarab said.
“We really need all the genomic data to reconstruct with a high degree of confidence what happened during this period of time, 65 to 67 million years ago.”
The scientists also looked at the impact of different genome sampling methods on the accuracy of the tree.
They showed that to reconstruct this evolutionary history, it is important to combine two strategies: sequencing many genes in each species, and sequencing many species.
“Because we used both strategies in combination, we were able to test which approach has a stronger impact on phylogenetic reconstructions,” said Professor Josefin Stiller from the University of Copenhagen.
“We found that it is more important to sample many gene sequences from each organism than to sample a wider range of species, although the latter was helpful for determining when the different groups evolved.”
Mirarab et al. took a closer look at one branch of the updated bird family tree and found that groups including flamingos and pigeons are more distantly related than previous genome-wide analyses had indicated, a result they attributed to an anomalous region on chromosome 4. Image credit: Ed Braun / Daniel J. Field / Siavash Mirarab.
With the help of advanced computational techniques, the researchers were also able to shed light on an anomaly uncovered in previous studies: a particular stretch of one chromosome in the bird genome appears to have remained unchanged for millions of years, defying the expected patterns of genetic recombination.
“Ten years ago, we put together a family tree of Neoaves, the group that includes the vast majority of bird species,” said Professor Edward Braun of the University of Florida.
“Based on the genomes of 48 species, we divided Neoaves into two broad categories: pigeons and flamingos in one group, and all the rest in the other.”
“This year, when we repeated the same analysis with 363 species, a different family tree emerged that divided pigeons and flamingos into two distinct groups.”
“Given two mutually exclusive family trees, I looked for an explanation that would allow me to determine which family tree was correct.”
“When we looked at individual genes and which trees they supported, it suddenly dawned on us that all the genes supporting the old tree were located in one place. That’s how it all started,” he explained.
“When we investigated this region, we realized that for millions of years it had not been mixed by sexual reproduction the way the rest of the genome was.”
“Just like humans, birds combine the genes of their father and mother to create the next generation.”
“But in birds and humans alike, when sperm and eggs are created, the genes inherited from both parents are first mixed together.”
“This process, called recombination, maximizes the genetic diversity of a species by ensuring that no two siblings are exactly alike.”
The authors found evidence that parts of bird chromosomes suppressed this recombination process for millions of years after the dinosaurs went extinct.
It is unclear whether the extinction event and the genomic anomaly are related.
They found that flamingos and pigeons resemble each other in this frozen chunk of DNA.
However, when the complete genomes were considered, it became clear that the two groups were more distantly related.
“What is surprising is that this period of recombination suppression could mislead the analysis,” says Professor Braun.
“And because it can mislead the analysis, it was still detectable more than 60 million years later. That’s the cool thing about it.”
“Such mysteries may also be hidden in the genomes of other organisms.”
“We discovered this misleading region of birds because we put a lot of energy into deciphering their genomes.”
“I think there are similar cases in other species that are unknown at this time.”
_____
J. Stiller et al. 2024. Complexity of avian evolution revealed by family-level genomes. Nature, in press.
Using data from ESA’s ExoMars Trace Gas Orbiter and NASA’s Mars Reconnaissance Orbiter, planetary scientists have created a 1:30,000-scale geological map of Oxia Planum, the landing site for ESA’s ExoMars Rosalind Franklin rover mission.
Fawdon et al. created the most detailed geological map yet of Oxia Planum, the Martian landing site of ESA’s Rosalind Franklin rover. Image credit: Fawdon et al., doi: 10.1080/17445647.2024.2302361.
Oxia Planum, located on the northern edge of Arabia Terra, preserves a record of the diverse geological processes that shaped the region.
It is a transitional region between the cratered terrain of Arabia Terra and the younger lowland plains of Chryse Planitia.
“Oxia Planum is located near the Martian equator and contains deposits that are nearly 4 billion years old,” said Open University researcher Dr. Peter Fawdon and colleagues.
“On a geological scale, this would be the oldest landing site ever visited by a spacecraft on Mars.”
“This region is rich in clay minerals, which form in the presence of water. Such rocks are ideal for preserving evidence of the earliest forms of life, making this a great place to look for clues as to whether life once existed on Mars.”
To map Oxia Planum, the authors used data from several instruments, including the Color and Stereo Surface Imaging System (CaSSIS) onboard the ExoMars Trace Gas Orbiter and the HiRISE camera on NASA’s Mars Reconnaissance Orbiter (MRO).
This map details 15 rock units classified into 6 groups and 7 textural and surface units.
“This map includes the main types of rock and structures with unique shapes, such as ridges and craters,” the researchers said.
“It also features overlying materials, for example deposits blown in by the wind or thrown long distances when a meteorite strikes the planet’s surface.”
The result is the highest-resolution map of Oxia Planum to date, at a scale of 1:30,000, where 1 centimeter on the map corresponds to 300 meters on the surface of Mars.
The Rosalind Franklin rover’s average daily drive of 25-50 meters corresponds to just 1-2 millimeters on the map.
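The scale arithmetic is easy to check with a small sketch. This uses the 1:30,000 scale quoted above; the function name is our own:

```python
def map_mm(ground_m, scale=30_000):
    """Convert a ground distance in metres to millimetres on a
    1:`scale` map (1 map unit represents `scale` ground units)."""
    return ground_m * 1000 / scale

# At 1:30,000, 300 m of terrain spans 1 cm on the map,
# and a 25-50 m daily drive spans roughly 0.8-1.7 mm:
print(map_mm(300))             # 10.0  (millimetres, i.e. 1 cm)
print(map_mm(25), map_mm(50))  # roughly 0.8 and 1.7
```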
“This map is really interesting because it’s a guide to where the answers are,” Dr. Fawdon said.
“This serves as a visual hypothesis for what we currently know about the different rocks at the landing site.”
“With the instruments on board the Rosalind Franklin rover, we can put that knowledge to the test when the time comes.”
Peter Fawdon et al. 2024. The high-resolution map of Oxia Planum, Mars: the landing site of the ExoMars Rosalind Franklin rover mission. Journal of Maps 20 (1); doi: 10.1080/17445647.2024.2302361
In the science fiction universe of Dune, the spice melange is commonly referred to as “spice” and is a valuable narcotic substance. It is produced from the excrement of young sandworms found only in the deserts of the planet Arrakis.
This spice has various health benefits, such as increasing lifespan. Due to its highly addictive nature, there is a high demand for it, making it a valuable commodity. The control of Spice leads to control over all other factions in the Dune universe.
This phenomenon may have historical parallels in the real world. In her 2008 book chapter on melange, science writer Dr. Carol Hart mentions how coca leaves in pre-Columbian America were similar to melange and were mostly used by the ancient Inca nobility and priestly class to maintain power through a monopoly on coca leaves.
The spice also possesses mind-altering properties, allowing the post-human species known as Guild Navigators to see across vast distances of space to navigate spaceships on long interstellar journeys. The Navigators reside in tanks where they constantly inhale orange spice gas that mutates their bodies significantly.
Even minimal exposure to the spice causes the user’s eyes to turn a deep navy blue, a characteristic seen among the Fremen of Arrakis due to constant spice exposure. This effect is akin to the persistent pupil dilation associated with recreational drug use globally.
The Bene Gesserit also use spices, which grant them the ability to see the future and enhance their mental abilities. This mirrors the rise of nootropics, or “smart pills,” used by individuals seeking a cognitive edge. While these drugs claim to improve memory, attention, creativity, and motivation, they are sometimes prescribed for conditions like ADHD and dementia.
However, there are concerns about using nootropics without a prescription. A 2020 study by Harvard Medical School revealed that these supplements may contain unapproved pharmaceutical drugs, posing serious health risks, as noted by study author Dr. Peter Cohen.
Nibiru Chain, a general-purpose layer 1 blockchain, has successfully closed its latest funding round, securing $12 million in preparation for an ambitious growth phase. Venture investments include funding from Kraken Ventures, ArkStream, NGC Ventures, Master Ventures, Tribe Capital, and Banter Capital.
“Nibiru has taken the best-in-class technology and research developed over the past few cycles and crammed it into the best new chain we’ve analyzed: a built-in development toolkit, easy-to-use APIs, language SDKs, and native oracles optimized for developers. Master Ventures couldn’t be more excited to partner with Nibiru as it moves to a new level of cryptocurrency adoption.” – Tom Dunleavy, CIO and Partner at Master Ventures Capital
Nibiru’s community sale on CoinList sold out its initial $3 million allocation in 9 minutes, and an additional $3 million extension sold out within 11 minutes. Ultimately, the sale drew 842% oversubscription and attracted 42,713 registrants and over 5,000 buyers of NIBI, the network’s staking and utility token.
Nibiru Chain stands out for its innovative technology and developer- and user-centric focus. Key partners expressed their enthusiasm:
“We look forward to supporting the distinctive Layer 1 model designed by Nibiru, which allows for maximum interoperability while incentivizing developers and users through smart contract loyalty mechanisms, with core primitives being built in parallel with the core foundation to enable this functionality.” – Brandon Gath, Managing Partner, Kraken Ventures
This is an ideal platform for developers who prioritize security and performance. Boasting 40,000 transactions per second (TPS), 1.4-second block times, and robust security, Nibiru Chain’s versatility spans multiple sectors including real-world assets (RWA), gaming, DeFi, and more. Developers can build on Nibiru Chain with confidence, leveraging CosmWasm smart contracts for security and EVM compatibility for ease of use.
Additionally, developers can leverage Nibiru’s “Development Gas” royalty mechanism to ensure a sustainable model for long-term growth. The core of Nibiru Chain is to create the best environment for developers and users.
“Nibiru’s integrated super application, native oracle, and data indexing greatly reduce the difficulty of technical choices for Web3 projects while reducing the likelihood of security incidents. This increases trust, which in turn drives the growth and prosperity of the Nibiru ecosystem.” – Allen Su, General Partner, ArkStream Capital
Looking to the future – Nibiru Chain’s 2024 roadmap
In 2024, Nibiru Chain aims to expand its ecosystem. Key initiatives include gamified engagement airdrops, integration with major liquidity centers, listings on multiple top-tier centralized exchanges, parallel optimistic execution, and full EVM compatibility.
This is the year Nibiru Chain’s flagship dApps are scheduled to launch, including Nibi-Perps, Nibi-Swap, and NUSD. These releases will mark major milestones in Nibiru Chain’s journey toward a user- and developer-centric platform.
About Nibiru Chain
Nibiru Chain is a breakthrough L1 blockchain and smart contract ecosystem with superior throughput and unparalleled security. Nibiru aims to be the most developer-friendly smart contract ecosystem, and by innovating at each layer of the stack (dApp development, infrastructure, consensus, comprehensive development toolkit, and value generation) is leading the way in implementing Web3.
Researchers created tornado-like vortices in superfluid helium
Yoshigin/Shutterstock
Giant quantum vortices could allow researchers to study black holes. These vortices, a special form of vortex in liquid helium that exhibits quantum effects, share some properties with black holes and can act as a kind of simulator.
In the region around a black hole, the laws of gravity and quantum mechanics interact, producing effects that cannot be observed elsewhere in the universe, which makes these regions particularly important to study. “There is interesting physics happening around black holes, but much of it is out of our reach,” says Silke Weinfurtner at the University of Nottingham, UK. “So we can use these quantum simulators to investigate phenomena that occur around black holes.”
To build the quantum simulator, Weinfurtner and her colleagues used superfluid helium, which flows with extremely low viscosity, 500 times lower than that of water. Because it moves without friction, this form of helium exhibits unusual quantum effects and is known as a quantum fluid. The researchers filled a tank with the helium, with a rotating propeller at the bottom; as the propeller spun, a tornado-like vortex formed in the fluid.
“Similar vortices have been created in physical systems other than superfluid helium, but their strength is generally at least several orders of magnitude weaker,” says Patrik Svancara, also at the University of Nottingham and part of the team. The strength and size of the vortex are critical to producing an interaction between the vortex and the remaining fluid in the tank that is significant enough to observe.
The vortex in this work was a few millimeters in diameter, much larger than any stable vortex previously created in a quantum fluid. In quantum fluids, rotation occurs only in tiny “packets” called quanta, essentially miniature vortices, so creating such a large vortex is difficult: when many quanta cluster together, they tend to become unstable. The experimental setup here, however, allowed the researchers to combine about 40,000 rotation quanta into what is called a giant quantum vortex.
“This is an experimental masterpiece,” says Jeff Steinhauer of the Technion-Israel Institute of Technology, a pioneer in laboratory simulations of black holes. “They took a very well-established, old, classic technology, superfluid helium, and did something really new with it, significantly extending its technical capabilities compared to what had been done in the past.”
The researchers observed how small waves in the fluid interacted with the vortex, a process that mimics the way quantum fields interact with a rotating black hole. They detected hints of a black-hole phenomenon called ringdown modes, which occur after two black holes merge and the resulting single black hole rings with the residual energy of the merger.
Now that this type of vortex has been shown to exhibit behavior similar to that seen around black holes, the researchers plan to use quantum vortices to study more elusive phenomena. “This is an excellent starting point for investigating some black hole physics processes, seeking new insights and potentially discovering hidden treasures along the way,” Weinfurtner says.
An innovative “cooling glass” developed by researchers at the University of Maryland provides a groundbreaking, non-electrical way to reduce indoor heat and carbon emissions, marking a significant advance in sustainable building technology.
Applying the new coating to exterior surfaces can reduce air-conditioning usage and help fight climate change.
Researchers at the University of Maryland have developed an innovative “cooling glass” designed to reduce indoor temperatures without using electricity. This revolutionary material works by harnessing the cold of outer space.
The new technology, a microporous glass coating described in a paper published in the journal Science, can lower the temperature of the material beneath it by 3.5 degrees Celsius. According to the research team, led by distinguished professor Liangbing Hu of the university’s School of Materials Science and Engineering, it has the potential to reduce the annual carbon dioxide emissions of a mid-rise apartment building by 10%.
A two-way cooling mechanism
This coating works in two ways. First, it reflects up to 99% of solar radiation, preventing buildings from absorbing heat. More interestingly, it also emits heat as long-wave infrared radiation into the icy depths of space, whose temperature is typically around -270 degrees Celsius, just a few degrees above absolute zero.
In a phenomenon known as “radiative cooling,” space effectively acts as a heat sink for buildings. The new cooling glass design exploits the so-called atmospheric transparency window, the part of the electromagnetic spectrum that passes through the atmosphere without raising its temperature, to dump large amounts of heat into the far colder sky beyond. (The same phenomenon allows the Earth to cool itself, especially on clear nights, although the new glass developed at UMD emits much more strongly.)
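The magnitude of this effect can be sketched with the Stefan-Boltzmann law. The snippet below estimates the net radiative flux from a warm surface to a clear sky; the emissivity and effective sky temperature are illustrative assumptions, and the paper’s actual model treats the atmospheric window in far more detail:

```python
# Simplified radiative-cooling estimate (illustrative only).
# Net power radiated by a surface to the sky, via the Stefan-Boltzmann
# law with an assumed effective sky temperature.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def net_cooling_power(t_surface_c, t_sky_c, emissivity=0.95):
    """Net radiative flux (W/m^2) from surface to sky; temps in Celsius."""
    ts = t_surface_c + 273.15
    tsky = t_sky_c + 273.15
    return emissivity * SIGMA * (ts**4 - tsky**4)

# A 30 C surface under an assumed effective clear-sky temperature of -10 C:
power = net_cooling_power(30.0, -10.0)
print(f"net cooling ~ {power:.0f} W/m^2")
```

Even this crude estimate yields on the order of a couple of hundred watts per square meter, which is why a passive coating can measurably cool the material beneath it.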
State-of-the-art durable materials
“This is an innovative technology that simplifies the way we keep buildings cool and energy efficient,” said research assistant Xinpeng Zhao, lead author of the study. “This could help us change the way we live and take better care of our homes and the planet.”
Unlike previous attempts at cooling coatings, the new glass developed at UMD is environmentally stable, withstanding exposure to water, UV light, dirt, and even flame, and tolerating temperatures up to 1,000 degrees Celsius. Because the glass can be applied to a variety of surfaces, such as tile, brick, and metal, the technology is highly scalable and can be adopted for a wide range of applications.
The research team used finely ground glass particles as a binder, bypassing polymers and increasing long-term durability outdoors, Zhao said. The team then selected a particle size that maximizes the emission of infrared heat while reflecting sunlight.
Climate change solutions and global impacts
The development of cooling glass is in line with global efforts to reduce energy consumption and combat climate change, Hu said, pointing to recent reports that this year’s July 4th may have been the hottest day on Earth in 125,000 years.
“This ‘cooling glass’ is not just a new material, it’s an important part of the solution to climate change,” he said. “By reducing air-conditioner use, we have taken a big step toward cutting energy usage and shrinking our carbon footprint. It shows how new technology can help us build a cooler, greener world.”
In addition to Hu and Zhao, co-authors of the study include Jelena Srebric, a professor of mechanical engineering, and Zongfu Yu of the University of Wisconsin-Madison’s Department of Electrical and Computer Engineering, who contributed expertise in carbon-emissions reduction and structural design, respectively.
The team is now focused on further testing and practical applications of the cooling glass. They are optimistic about its commercialization prospects and have formed a startup company, CeraCool, to scale up and commercialize the technology.
Reference: “Solution-processed radiatively cooled glass” by Xinpeng Zhao, Tangyuan Li, Hua Xie, He Liu, Lingzhe Wang, Yurui Qu, Stephanie C. Li, Shufeng Liu, Alexandra H. Brozena, Zongfu Yu, Jelena Srebric, and Liangbing Hu, 9 November 2023, Science. DOI: 10.1126/science.adi2224
Builders, bakers, and body conditioners may not be the first professions that come to mind when you think of how AI is changing the way we work. But growing interest is driving healthy funding for startups building AI-powered business tools, especially for small businesses and the thousands of other categories that make up the service industry, and today one of them announced a funding round.
Durable, a Vancouver, Canada-based startup that builds an AI website creator and a host of other AI-powered tools to help small business owners plan, create, and run their businesses more easily, has raised a $14 million Series A, which will be used to continue expanding its platform and customer base.
This round is not the largest Series A, but it comes with an interesting list of investors. Spark Capital led the round, with Torch Capital, Altman Capital (the VC founded and managed by Jack Altman, brother of OpenAI’s Sam Altman), Dash Fund, South Park Commons, Infinity Ventures, and Soma Capital (all previous backers) also participating. The startup has now raised a total of $20 million.
Durable’s AI-powered website builder is aimed at people with little or no online presence, and the company says it has already been used to create more than 6 million websites since its launch a year ago.
“We have a lot of traditional companies that have been around for a long time but don’t have an online presence. They don’t have the software, they don’t have the systems. That’s a big part of our customer base,” founder and CEO James Clift said in an interview. “Plumbers, skilled tradespeople, personal trainers. A lot of businesses with one to six people don’t have the time or resources to actually build an online presence or create marketing materials.”
Durable will continue to build on that momentum and leverage advances in the world of AI to build more tools for users.
The end goal, Clift said, is an omniscient assistant that not only answers users’ questions, but also proactively suggests ways to run their business better.
Clift said in an interview that a beta version of its “automated proactive assistant” will be released “soon,” likely within about three months.
The assistant can be trained in areas such as taxes, tailored to the needs of a user’s specific profile (a baker may not want or need the same information as a body conditioner or a builder), he said. “You press a button and your business runs in the background. It texts you once a day, and you have work booked on your calendar, so all you have to do is show up to work.”
Other tools Durable has built to complement its flagship website builder include a CRM platform, an invoicing service, a blog builder, and a precursor to the proactive assistant: an AI bot that lets users ask questions relevant to their business. The assistant uses OpenAI’s LLMs, among others.
The gap in the market that Durable is filling is actually a well-known one in the technology world.
Small businesses and sole proprietors have been an elusive target for startups developing business tools. Despite accounting for more than 99% of all businesses in markets like the U.S. and the U.K., small businesses are more complex users to serve: they spend less individually than larger businesses (making per-customer ROI harder for vendors), and they are a fragmented population when it comes to their technology needs.
Of course, none of the above is news in the world of technology. Dozens of startups and large tech companies already target small and medium-sized businesses, especially those in the service industry, building apps to manage everything from teams and accounting to banking and payroll.
Clift said Durable’s unique selling point is that it applies advances in AI to these problems, bringing small business owners and their employees into the modern era.
In his view, AI has a democratizing role. First, SMBs now have access to affordable tools that were previously out of reach. For example, Durable has built a logo and branding tool for its users; if that service were provided by a consultancy, it would be beyond most customers’ budgets.
Second, the use of AI means that Durable itself can scale out its services more easily, avoiding the problems of selling and distributing services to a fragmented customer base.
“Advances in software will allow us to start delivering a ton of value that even last year would only have been available to enterprise customers,” he said. “We can now provide an even better level of service to independent stores that previously couldn’t afford something like this. It’s a very long tail, but it’s a huge market opportunity.”
Durable turned to OpenAI after gaining access thanks to Altman Capital, which led Durable’s seed round.
“OpenAI has been a great partner from day one,” Clift said. Given the trajectory of OpenAI, which is reportedly working to close a new funding round at a valuation of more than $80 billion, a startup that is a close partner with ties to its CEO is probably one to watch.
“One of the ideas I’m most interested in right now is how we can leverage AI to help founders build products from scratch that are 10x better than anything that exists today, cheaper, faster, and more accurately,” Jack Altman told me. “When I met James, I was not only very impressed with him as a founder, but also excited about the potential of what this product could do for entrepreneurs and small business owners. Seeing how well he and the team have executed since our initial investment only increases my expectations for what Durable will become.”
“At Spark, we have always pursued founders who challenge the status quo. James and the Durable team are not only doing this in a unique way, but are also creating a global platform that helps entrepreneurs do the same with a frictionless user experience powered by AI,” Natalie Sandman, a general partner at Spark Capital, said in a statement.
What a mouse might see when wearing virtual reality goggles
Dom Pinke
Tiny virtual reality goggles for mice create a convincing world in which scientists can study animal brain activity in a variety of scenarios. This technology brings rodent neuroscience even closer to simulations that are indistinguishable from the real world, researchers say.
For about 20 years, Daniel Dombeck and his colleagues at Northwestern University in Illinois have used rudimentary virtual reality to learn more about how the mouse brain works.
The machines used to observe brain patterns are too large to attach to freely moving mice. Instead, the researchers keep a mouse inside such a machine, place it on a treadmill, and surround it with screens displaying a virtual reality world. This lets them have the mouse navigate any environment they design.
“We can run them through a virtual maze and image their brains to see which neurons form memories and remember where they are,” Dombeck says. “[But] what the animal sees is a flat surface, there is no depth perception, and the mouse sees things that are not part of the projection. So there’s a collision of all these cues around them, and we think they’re not fully engaged and immersed in the environment. They are not completely fooled.”
To solve this problem, researchers have now created tiny goggles with a different screen for each eye to cut out everything but the virtual world from the mouse’s field of view and create convincing depth perception. They believe this allows them to perform more accurate experiments because the mice become more convinced of the illusion and behave more naturally.
But designing goggles for mice isn’t as simple as simply miniaturizing technology made for humans. A human’s field of view is just over 200 degrees, while a mouse’s field of view is up to 320 degrees.
This means the screen inside the goggles needs to be curved and almost surround the eyeball. Although each screen can only display 400 by 400 pixels, Dombeck says that’s enough to be convincing, since mouse vision is much less detailed than human vision.
“The first use of the goggles on the first set of mice was quite remarkable,” says Dombeck. “The mice seemed to engage very quickly. When you put the goggles on, it’s pitch black and they can’t see anything, and then the virtual rendering turns on. The first mouse sat up as if to say, ‘Oh, what is this?’ It then started moving pretty naturally, which doesn’t usually happen with flat projection screens.”
Dombeck says the long-term goal is to make the technology for mice comparable to human virtual reality headsets, with additional devices to trick the senses of smell, hearing, and touch.
When Monday.com launched more than ten years ago, it set out to help companies build a highly flexible set of business tools, including CRM, marketing, operations, and HR, customized in ways you wouldn’t find out of the box. Companies loved the flexibility, but they also pushed the boundaries beyond the ability of the underlying database technology to keep handling every use case.
So the company began looking for a successor. With a myriad of off-the-shelf database choices, you might think finding the right one is just a matter of time and testing, but after considering several options and talking to experts, Monday.com came to the conclusion that it needed more than what was available on the market.
One of the main issues was flexibility. Monday.com had no idea how customers would combine its building blocks into applications, which meant it needed a schemaless database to handle whatever the customer decided to build. So it decided to build its own database, with a twist: rather than a single database taking over all future functionality, the new solution overlays other databases that handle specific tasks. It is called MondayDB.
The new database has been in place since July, and even though the company has migrated to it, the old database still exists as another layer in Monday.com’s complex architecture.
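The overlay idea, in which a new schemaless layer serves the records it knows about while everything else falls through to the legacy database, can be sketched in a few lines. This is a minimal illustration under assumed names, not Monday.com’s actual implementation:

```python
# Minimal sketch of an "overlay" database layer: reads try the new
# schemaless store first and fall back to the legacy database.
# All names here are hypothetical, not Monday.com's actual API.
class OverlayStore:
    def __init__(self, new_layer, legacy):
        self.new_layer = new_layer   # dict-like schemaless store
        self.legacy = legacy         # dict-like legacy store

    def get(self, key):
        if key in self.new_layer:
            return self.new_layer[key]
        return self.legacy.get(key)  # fall back to the old layer

    def put(self, key, record):
        # New writes land only in the new layer; records are free-form
        # dicts, so no fixed schema is imposed.
        self.new_layer[key] = record

store = OverlayStore(new_layer={}, legacy={"board:1": {"name": "Sales"}})
store.put("board:2", {"name": "HR", "custom_field": 42})
print(store.get("board:1"))  # served from the legacy layer
print(store.get("board:2"))  # served from the new layer
```

The appeal of this pattern is that migration can happen gradually: the old database keeps answering for records that have not yet moved, while new, arbitrarily shaped records live only in the new layer.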
No matter how careful they are with their technology choices, startups struggling to bring their products to market must accept that there is no way to predict how those products will grow and develop over time. At some point, though, a company may have to pay down its technical debt and start over with a completely new architecture, as Monday.com had to do.
We spoke with Daniel Lereya, chief product and technology officer, to learn how his team decided to build this solution and about the challenges it faced in finding a database technology that met these unique requirements.
Another manic Monday
The process that led to building the database spanned several years. The company began considering the idea of a new database in January 2021 with a completely open mind. Lereya says customers value Monday.com for the flexibility it provides, and the company needed a solution that could sustain that adaptable approach.