Discoveries of Advanced Stone Tool Technology at China’s Xigou Ruins: New Archaeological Evidence

Technological advances in Africa and Western Europe during the late Middle Pleistocene highlight the intricate behaviors of hominin groups. By contrast, East Asian stone tool technology has long been perceived as lacking innovation. Recent archaeological findings at the Xigou site in Henan Province, China, reveal evidence of technological innovation dating to between 160,000 and 72,000 years ago, documenting more than 90,000 years of sophisticated technological behavior through detailed technological, typological, and functional analyses.



Artist’s reconstruction of a hafted stone tool from the Xigou site. Image credit: Hulk Yuan, IVPP.

“For decades, researchers have posited that, while Africa and Western Europe exhibited significant technological growth, East Asian hominins relied on simpler and more traditional stone tool techniques,” noted Dr. Shixia Yang from the Institute of Vertebrate Paleontology and Paleoanthropology.

In the new findings, Dr. Yang and colleagues show that, during a period when several large-brained hominin species coexisted in China, including Homo longi, Homo juluensis, and potentially Homo sapiens, the hominins of this region displayed far greater inventiveness and adaptability than previously assumed.

“The discovery at Xigou challenges the notion that early human populations in China were inherently conservative over time,” emphasized Professor Michael Petraglia from Griffith University.

“In-depth analyses indicate that the early inhabitants utilized advanced stone tool-making techniques to create small flakes and multifunctional tools,” he added.

Notably, the site revealed handled stone tools, marking the earliest known evidence of composite tools in East Asia.

These tools, which integrated stone components with handles and shafts, demonstrate exceptional planning, skilled craftsmanship, and knowledge of how to enhance tool functionality.

“Their existence underscores the behavioral flexibility and ingenuity of the Xigou hominins,” remarked Dr. Jianping Yue, also affiliated with the Institute of Vertebrate Paleontology and Paleoanthropology.

The geological formations at Xigou, spanning 90,000 years, align with accumulating evidence of increasing hominin diversity across China.

Findings from Xujiayao and Lingjing confirm the presence of a large-brained hominin, Homo juluensis, providing a biological foundation for the behavioral complexity observed in the Xigou population.

“The advanced technological strategies evidenced in the stone tools likely played a crucial role in aiding humans to adapt to the fluctuating environments typical of East Asia over 90,000 years,” stated Professor Petraglia.

The discoveries at Xigou have transformed our understanding of human evolution in East Asia, revealing that early populations possessed cognitive and technological competencies comparable to their African and European counterparts.

“Emerging evidence from Xigou and other archaeological sites indicates that early Chinese technology featured prepared core methods, innovative retouching techniques, and substantial cutting tools, suggesting a more intricate and advanced technological landscape than previously acknowledged,” Dr. Yang concluded.

The research team’s paper is published in the latest edition of Nature Communications.

_____

J.P. Yue et al. 2026. Technological Innovation and Patterned Technology in Central China from Approximately 160,000 to 72,000 Years Ago. Nature Communications 17: 615; doi: 10.1038/s41467-025-67601-y

Source: www.sci.news

Simulating the Human Brain with Supercomputers: Exploring Advanced Neuroscience Technology


3D MRI Scan of the Human Brain

K H FUNG/Science Photo Library

Simulating the human brain means using vast computing power to model billions of neurons and their connections, with the aim of replicating the behavior of the real organ. Researchers hope that better brain simulations, paired with an improved understanding of neuronal wiring, will help uncover the secrets of cognition.

Historically, researchers have focused on isolating specific brain regions for simulations to elucidate particular functions. However, a comprehensive model encompassing the entire brain has yet to be achieved. As Markus Diesmann from the Jülich Research Center in Germany notes, “This is now changing.”

This shift is largely due to the emergence of state-of-the-art supercomputers nearing exascale capability, meaning they can perform on the order of a billion billion (10^18) operations per second. Only four such machines currently exist, according to the Top 500 list. Diesmann’s team is set to run extensive brain simulations on one of them, a supercomputer in Germany named JUPITER (Joint Undertaking Pioneer for Innovative and Transformative Exascale Research).

Recently, Diesmann and colleagues demonstrated that a simple model of brain neurons and their synapses, known as a spiking neural network, can be configured to run across JUPITER’s thousands of GPUs. At that scale, a simulation can reach 20 billion neurons and 100 trillion connections, approximating the human cerebral cortex, the hub of higher brain functions.
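
For readers unfamiliar with the term, a spiking neural network represents each neuron as a simple equation: its voltage rises with input and, on crossing a threshold, the neuron emits a discrete “spike” to the neurons it connects to. The sketch below is a minimal leaky integrate-and-fire network in Python. It is purely illustrative, not the Jülich team’s code, and the network size, connectivity, and parameters are arbitrary assumptions chosen so it runs on a laptop rather than an exascale machine.

```python
# Minimal sketch of a spiking neural network (leaky integrate-and-fire neurons).
# Illustrative only: sizes and parameters are arbitrary assumptions, not the
# Jülich team's model, which targets ~20 billion neurons on thousands of GPUs.
import numpy as np

rng = np.random.default_rng(0)

N = 1000                  # toy network size
P_CONNECT = 0.01          # probability that any neuron connects to another
DT = 0.1                  # time step, ms
T_STEPS = 2000            # simulate 200 ms
TAU_M = 20.0              # membrane time constant, ms
V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0   # mV
W_SYN = 0.6               # voltage kick per presynaptic spike, mV
I_EXT = 18.0              # constant external drive, mV equivalent

# Random sparse connectivity: weights[i, j] is the weight from neuron j to neuron i.
weights = (rng.random((N, N)) < P_CONNECT) * W_SYN

v = V_REST + 15.0 * rng.random(N)       # start membrane potentials below threshold
spike_counts = np.zeros(N, dtype=int)

for _ in range(T_STEPS):
    spiked = v >= V_THRESH              # which neurons fire this step
    spike_counts += spiked
    v[spiked] = V_RESET                 # reset the neurons that fired
    syn_input = weights @ spiked.astype(float)   # input from presynaptic spikes
    # Leaky integration toward rest, plus external drive and synaptic input.
    v = v + (-(v - V_REST) + I_EXT) * (DT / TAU_M) + syn_input

rate = spike_counts.mean() / (T_STEPS * DT / 1000.0)
print(f"mean firing rate: {rate:.1f} spikes per second")
```

Scaling this same basic recipe from a thousand neurons to the 20 billion neurons and 100 trillion synapses mentioned above is what demands an exascale machine and careful distribution of the connectivity across thousands of GPUs.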

These simulations promise more than previous models of smaller brains, such as those of fruit flies. Recent experience with large language models suggests that bigger systems can exhibit behaviors unattainable in their smaller counterparts. “We know that large networks demonstrate qualitatively different capabilities than their reduced-size equivalents,” says Diesmann. “Larger networks clearly offer functionality that smaller ones do not.”

Thomas Nowotny at the University of Sussex emphasizes that downscaling risks omitting crucial characteristics entirely. “Conducting full-scale simulations is vital; without them, we can’t truly replicate reality,” Nowotny says.

The model being developed for JUPITER is grounded in empirical data from the limited experiments that have measured human neurons and synapses. As Johanna Senk, a collaborator of Diesmann’s now at Sussex, explains: “We have the constraints of the anatomical data coupled with substantial computational power.”

Comprehensive brain simulations could facilitate tests of foundational theories regarding memory formation—an endeavor impractical with miniature models or actual brains. Testing such theories might involve inputting images to observe neural responses and analyze alterations in memory formation with varying brain sizes. Furthermore, this approach could aid in drug testing, such as assessing impacts on a model of epilepsy characterized by abnormal brain activity.

The extra computational power also makes brain simulations run faster, which should help researchers study gradual processes such as learning, Senk notes. It also allows more intricate biological models that capture how neurons change and fire.

Nonetheless, despite the ability to simulate vast brain networks, Nowotny acknowledges considerable gaps in knowledge. Even simplified whole-brain models of organisms like fruit flies fail to replicate authentic animal behavior.

Simulations run on supercomputers are also fundamentally limited, lacking essential features of real brains, such as input from a real-world environment. “While we can simulate something the size of a brain, we cannot fully replicate a functional brain,” warns Nowotny.


Source: www.newscientist.com

Launch of ‘Knit’ Satellite: Advanced Radar Technology for Earth Surface Monitoring


Artist’s Impression of CarbSAR Satellite Orbiting the Earth

Credit: Oxford Space Systems

Britain’s newest satellite, CarbSAR, is set to launch on Sunday, equipped with cutting-edge knitwear technology. This innovative satellite will deploy a mesh radar antenna crafted using machinery typically found in textile manufacturing.

“We utilize a standard industrial knitting machine for jumpers, enhanced with features tailored to create specialized threads,” says Amur Raina, Director of Production at Oxford Space Systems (OSS) in the UK.

OSS collaborates with Surrey Satellite Technology Limited (SSTL) to install the antenna on a compact, cost-effective spacecraft capable of capturing high-resolution images of the Earth’s surface.

If successful, this unique design could be integrated into the UK Ministry of Defence’s (MoD) surveillance satellite network later this year.

The “wool” used in OSS’s knitting process is ultra-fine tungsten wire coated with gold. The machines produce several meters of fabric at a time, which is then cut into segments and sewn into a 3-meter-wide disc. This disc is tensioned over 48 carbon fiber ribs to form a smooth parabolic dish optimized for radar imaging.

The key innovation lies in the structural design: each rib wraps around a central hub, like the coiled blade of a tape measure, 48 of them in all. This allows the entire assembly to collapse down to just 75 cm in diameter, drastically reducing the volume of the 140-kilogram CarbSAR satellite during launch.

Upon reaching orbit, the stored strain energy in the bent carbon fibers will allow the ribs to return to their original shape, thereby pulling the mesh into a precise parabolic configuration.

“For optimal imaging, we must deploy it accurately to achieve the perfect parabolic shape,” adds Sean Sutcliffe, CEO of OSS. “Our design’s precision is its standout feature.” Testing has shown the mesh sheet remains within 1 millimeter of its ideal shape, ensuring exceptional performance.

The demand for Earth observation via small radar satellites is on the rise, thanks to their ability to image the ground in all weather conditions and even at night—a capability increasingly appreciated by emerging space companies.

This data is particularly sought after by military forces globally and played a crucial role as an intelligence resource during the recent Russian-Ukrainian conflict.

Despite once leading Europe in space radar developments in the 1990s, the UK has fallen significantly behind in the international arena.

With CarbSAR and the upcoming MoD constellation named Oberon, part of the broader ISTARI program, UK aerospace engineers have a chance to re-establish their presence in the industry.

“We’re seeing heightened interest from foreign governments in radar solutions,” states Andrew Cawthorn, Managing Director of SSTL. “Our primary focus is demonstrating that we can successfully deploy this antenna and capture images.”

CarbSAR is engineered to detect objects as small as 50 cm, sufficient for identifying tanks and aircraft.

After deployment, approximately two days after liftoff, UK Space Command, which is overseen by the Royal Air Force, will closely monitor the antenna’s performance.

“CarbSAR symbolizes the innovative spirit and collaboration of one of the UK’s leading space companies,” said Major General Paul Tedman, Commander of UK Space Command. “We eagerly anticipate seeing CarbSAR operational and exploring how its advanced technologies can enhance Oberon and our comprehensive ISTARI satellite initiative.”


Source: www.newscientist.com

Exploring the World’s Most Advanced X-Ray Machine: Journey Before Its Power Boost


Electron Beam in Niobium Cavity: A Core Element of SLAC’s LCLS-II X-ray Laser

Credit: SLAC National Accelerator Laboratory

The Klystron Gallery at SLAC National Accelerator Laboratory is a concrete corridor lined with robust metal columns that stretch well beyond my line of sight. Yet, beneath this unassuming structure lies a marvel of modern science.

Below the gallery, the Linac Coherent Light Source II (LCLS-II) extends over an impressive 3.2 kilometers. This cutting-edge machine produces X-ray pulses that are the strongest in the world. I am here to witness it because a significant record has just been surpassed. However, an upgrade is set to take its most powerful component offline soon. When it reopens—anticipated as early as 2027—it will more than double its X-ray energy output.

“It’s like the difference between a star’s twinkle and the brightness of a light bulb,” says James Cryan at SLAC.

Dismissing LCLS-II as a mere twinkle would be profoundly misleading. In 2024, it achieved the most powerful X-ray pulse ever recorded: although it lasted a mere 440 billionths of a second, it reached a peak power of nearly 1 terawatt, roughly a thousand times the output of a typical nuclear power plant. Then, in 2025, LCLS-II set a record by generating 93,000 X-ray pulses per second, a remarkable feat for an X-ray laser.

According to Cryan, this milestone enables researchers to undertake groundbreaking studies of how particles behave within molecules after absorbing energy. It’s akin to transforming a black-and-white film into a vibrant, colorful cinematic experience. With this breakthrough and forthcoming enhancements, LCLS-II has the capacity to revolutionize our understanding of the subatomic behavior of light-sensitive systems, from photosynthetic organisms to advanced solar cell technologies.

LCLS-II operates by accelerating electrons to close to the speed of light, the ultimate speed limit in physics. The cylindrical devices known as klystrons, which give the Klystron Gallery its name, generate the microwaves needed for this acceleration. Once the electrons are moving fast enough, they pass through arrays of thousands of precisely placed magnets that make them oscillate, producing X-ray pulses. These pulses can be used to image the internal structure of materials, much as medical X-rays do.

During my visit, I had the opportunity to tour one of several experimental halls. Here, the X-ray pulses collide with molecules, enabling a closer look at their interactions. These experimental areas resemble futuristic submarines—with heavy metal exteriors and large glass windows—engineered to exclude stray air molecules that could disrupt their experiments.

Just before my visit, Cryan and his team conducted an experiment to examine proton movements within molecules. Traditional imaging techniques struggle to provide detailed insight into proton dynamics, yet these specifics are vital for advancing solar cell technology, Cryan emphasizes.

What awaits these investigations after the upgrade, when LCLS-II becomes LCLS-II-HE? Cryan says the machine’s ability to probe how particles behave within molecules will be significantly enhanced. The path to the upgrade, however, is a challenging one.


John Schmage from SLAC notes that as the energy of the electron beam increases, the risk of particles straying becomes a significant concern. He recounts witnessing a misbehaving beam damage equipment at another facility, highlighting the necessity for precision. SLAC’s Ding Yuantao emphasizes that all new components installed during the upgrade are designed to endure higher power outputs, but they must increase energy levels gradually to ensure operational integrity. “We’ll activate the beam and closely monitor its performance,” he states.

In 2026, the team plans to engage in a significant engineering initiative to align the components, followed by one to two years of meticulous setup for a staged increase in power output. If all progresses according to plan, the upgraded LCLS-II-HE will be available for global researchers by 2030. Ongoing communication between X-ray users like Cryan, and operators like Schmage and Ding, will be essential. “This tool will evolve, and we will continually enhance its capabilities,” Schmage notes.


Source: www.newscientist.com

Teen Creates Advanced Robotic Hand Using Just Lego Parts

Jared Lepora and a robotic hand crafted from Lego Mindstorms components

Nathan Lepora

A robotic hand constructed by a 16-year-old boy and his father using Lego pieces can effectively grasp and manipulate objects, showcasing functionality akin to natural human hands.

Jared Lepora, a student at Bristol Grammar School in England, began working on the project with his father when he was just 14 years old. His father, Nathan Lepora, is a robotics researcher at the University of Bristol.

The device borrows concepts from research hands such as the Pisa/IIT SoftHand, yet it is built entirely from readily available components from Lego Mindstorms, a popular series of educational kits for building programmable robots.

“My father is a professor of robotics at the University of Bristol, and I was really inspired by the design of robotic hands,” Jared explains. “This motivated me to pursue it in an educational context using Lego.”

The hand is driven by two motors acting through tendon-like cables, and each of its four fingers has three joints. A differential mechanism made of Lego clutch gears couples the fingers, so they close together until each one meets the object and stops, letting the hand conform to an object’s shape much as a human grasp does (a toy model of this behavior is sketched below).
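
To make the adaptive-grasp idea concrete, here is a minimal Python sketch of how a differential-coupled, underactuated hand behaves: a single motor’s travel is shared among the fingers, and each finger simply stops once it touches the object. It is an idealized illustration with made-up contact angles and travel values, not the Leporas’ control code or a mechanical model of Lego clutch gears.

```python
# Toy model of an underactuated, differential-coupled grasp (illustration only;
# the contact angles and travel budget below are invented for the example).
def close_hand(contact_angles, total_travel, step=1.0):
    """Distribute motor travel across fingers; each finger halts at contact.

    contact_angles: angle (degrees) at which each finger meets the object.
    total_travel:   total tendon travel available from the motor (degree-equivalent).
    """
    angles = [0.0] * len(contact_angles)
    remaining = total_travel
    while remaining > 0:
        # Fingers not yet touching the object share the next increment equally.
        free = [i for i, a in enumerate(angles) if a < contact_angles[i]]
        if not free:
            break  # every finger is resting on the object: grasp complete
        delta = min(step, remaining) / len(free)
        for i in free:
            angles[i] = min(angles[i] + delta, contact_angles[i])
        remaining -= min(step, remaining)
    return angles

# An irregular object: each finger meets the surface at a different angle,
# yet all four end up conforming to it without individual finger motors.
print(close_hand(contact_angles=[35.0, 50.0, 20.0, 45.0], total_travel=200.0))
# -> [35.0, 50.0, 20.0, 45.0]
```

The end state, with each finger resting at its own contact angle, is the behavior that lets a two-motor hand wrap around irregular objects.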

Throughout testing, the Lego hand successfully grabbed nine common household items, including plastic cups, bowls, and a stuffed toy weighing 0.8 kilograms.

When one finger is engaged, it fully closes in approximately 0.84 seconds and reopens in about 0.97 seconds. This speed is about half that of the Pisa/IIT SoftHand’s 3D-printed counterpart, which employs metal bearings. In static tests, the Lego hand could withstand loads of 5 Newtons, exert a pushing force of 6 Newtons, and deliver a closing force of 1.8 Newtons. Comparatively, the 3D-printed version can manage loads up to 8 Newtons, push with 7 Newtons, and has a closing force of 2 Newtons.

“You won’t find a better hand,” Nathan says of the 3D-printed alternative, which outperforms the Lego version. The Lego hand is also considerably larger, with each finger measuring 145 millimeters long and 30 millimeters wide.

While Lego Mindstorms was discontinued in 2022, Jared noted that the device can still be easily modified with a variety of Lego creations. “The way I designed the motor, you can simply take it out and replace it with a new one,” he explains.


Source: www.newscientist.com

Trump Sparks Concerns Over Nvidia’s Potential Sale of Advanced AI Chips in China

Donald Trump has indicated that Nvidia can sell more advanced chips in China than is currently allowed.

During a Monday briefing, Trump discussed his agreements with Nvidia and AMD: he has authorized export licenses allowing the sale of previously restricted chips to China, with the US government receiving 15% of the sales revenue. The US president defended the deal after analysts said it resembled a “shakedown” payment or an unconstitutional export tax, and he expressed hope for further negotiations over a more advanced Nvidia chip.

Trump said that Nvidia’s latest chip, Blackwell, would not be on the table, but that he is considering allowing the sale of “a slightly negatively impacted version of Blackwell,” downgraded by 30-50%.

“I believe he’ll be back to discuss it, but it will be a significant yet unenhanced version,” he remarked, referring to Nvidia’s CEO, Jensen Huang, who has held multiple discussions with Trump about US limits on chip exports to China.



Huang has yet to comment on the revenue-sharing agreement covering sales of Nvidia’s H20 chips and AMD’s MI308 chips in China.

The H20 and MI308 chips were barred from sale to China in April, even though the lower-power H20 was specifically designed to comply with restrictions set by the Biden administration. Nvidia said last month that it hoped to receive clearance to resume shipments soon.

Nvidia’s chips are a major driver of the AI boom, making them prized in both China and the US; that has drawn heightened scrutiny from analysts in Washington as well as concern from Chinese officials.

“I’m worried about reports indicating the US government might take revenue from sales of advanced chips such as the H20,” one such critic told the Financial Times.

Trump defended the agreement on Monday: “I said, ‘Listen, I want 20% if I approve this for you.’” He emphasized that he has not received any personal money from the deal and suggested that Huang negotiated the figure down to the 15% in the final agreement.

“I permitted him only for the H20,” Trump clarified.

He referred to the H20 as an “outdated” chip, one that China “already has in a different form.”

However, Harry Cleja, research director at the Washington office of the Carnegie Mellon Institute of Strategic Technology, labeled the H20 as a “second tier” AI chip.

“The H20 is not the premier training chip available, but it is well suited to the kind of computing that dominates AI workloads today, particularly ‘inference’ models and ‘agent’ products,” Cleja told the Guardian, referring to systems that use step-by-step reasoning to solve complex problems autonomously.

“Lifting H20 export restrictions undoubtedly provides Beijing with the necessary tools to compete in the AI realm.”

For several years, the US government has sought to restrict China’s access to advanced chips on national security grounds, particularly over concerns about artificial intelligence development and the supply of technology that could be weaponized.

China’s Foreign Ministry remarked on Monday that the country has consistently articulated its stance on US chip exports, accusing Washington of utilizing technology and trade measures to “maliciously suppress and hinder China.”

Revenue-sharing contracts of this kind are rare in the US. The deal is Trump’s latest intervention in corporate decision-making, after pressuring executives to reinvest in American manufacturing; he has also called for the resignation of Intel’s new CEO, Lip-Bu Tan, over his connections with Chinese companies.

Trump has also suggested imposing 100% tariffs on the global semiconductor market, exempting businesses that commit to investing in the US.

Taiwan’s TSMC, a leading semiconductor manufacturer, announced plans in April to expand its US operations with a $100 billion investment. However, foreign investments of this magnitude require approval from Taiwan’s government.

The Guardian confirmed that TSMC has yet to apply for this approval. The company has not responded to requests for comment.

Source: www.theguardian.com

This Audacious Theory Suggests We Are Not the Planet’s First Advanced Civilization.

For centuries, humanity has been intrigued by the possibility of encountering advanced civilizations beyond our planet. But what if such a society existed on Earth long before humans evolved?

In 2018, physicist Professor Adam Frank and climate modeler Dr. Gavin Schmidt published a paper exploring whether modern science might uncover traces of an extinct industrial civilization from millions of years ago.

Dubbed the “Silurian hypothesis,” after the advanced reptilian Silurians of the long-running BBC science fiction series Doctor Who, the idea led the researchers to conclude that, while such a civilization is unlikely to have existed, any evidence of it would in any case be hard to find.

The study focuses on the timeframe between 400 million and 4 million years ago, investigating what remnants this hypothetical society might have left behind.

Over just a few centuries, our industries have significantly altered the global climate and ecosystems. If humanity were to vanish, however, almost all direct evidence of our society would fade away over millions of years.

Our largest cities could vanish within a geological instant due to erosion and tectonic activity.

Consequently, scientists searching for an ancient civilization should look for geological signatures of their existence.

Advanced civilizations, much like modern humans, would demand substantial energy and food production. As a result, we might anticipate similar indicators in Earth’s geologic layers, such as evidence of extensive carbon emissions, climate change, and rising sea levels.

Even if lost ancient civilizations had built pyramids reminiscent of alien architecture millions of years ago, the Silurian hypothesis suggests that discovering them today would be quite unlikely.

The challenge lies in distinguishing climate change caused by fossil fuel-dependent civilizations from that induced by natural processes in the geological record.

Interestingly, there is a striking resemblance between current climate change and past events on Earth known as “hyperthermal” events. One such instance occurred around 55 million years ago, when global temperatures surged by up to 8°C (14.4°F), accompanied by intense geological upheavals.

Another consideration is that the longer a sophisticated civilization endures, the more evidence it generates. However, for a civilization to have longevity, it must be sustainable, leading to reduced geological traces.

For instance, a civilization relying on wind and solar energy would leave less physical evidence than one powered by fossil fuels. This paradox explains why the traces of such civilizations, if they ever existed, would be scarce.

The Silurian hypothesis encourages us to reflect on the imprints humanity will leave behind. Addressing these questions may also sharpen our search for advanced civilizations on other planets.


This article answers the question posed by Exeter’s Joshua Stucky: “If advanced civilizations lived on Earth millions of years ago, could we recognize their existence?”

To submit your own questions, email us at questions@sciencefocus.com or message us on Facebook, Twitter, or Instagram (please include your name and location).




Source: www.sciencefocus.com

Advanced AI Experiences “Total Accuracy Breakdown” When Confronted with Complex Issues, Research Finds

Researchers at Apple have identified “fundamental limitations” in state-of-the-art artificial intelligence models, prompting concerns about the competitive landscape in the tech industry for developing more robust systems.

In the study, Apple researchers found that advanced AI systems known as large reasoning models (LRMs) suffered a “complete accuracy collapse” when faced with highly complex problems.

Standard AI models outperformed LRMs on low-complexity tasks, yet both types suffered “complete collapse” on highly complex ones. LRMs attempt to tackle complex queries by generating detailed thinking processes that break the problem down into smaller steps.


The research, which evaluated the models’ puzzle-solving abilities, found that LRMs began to reduce their reasoning effort as they approached the point of collapse, something the researchers described as “particularly concerning.”

Gary Marcus, a noted academic voice on AI capabilities, characterized the Apple paper as “quite devastating” and highlighted that these findings raise pivotal concerns regarding the race towards achieving artificial general intelligence (AGI), which would enable systems to emulate human-level cognitive tasks.

Referring to large language models (LLMs), Marcus wrote: “Anybody who thinks LLMs are a direct route to the sort [of] AGI that could fundamentally transform society for the good is kidding themselves.”

The paper also found that, early in the “thinking” process, the reasoning models tended to waste computing power on simpler problems by continuing to explore alternatives after finding the correct solution. As complexity increased, the models explored incorrect answers first before eventually arriving at correct ones.

On the most complex problems, however, the models “collapsed” and failed to produce any correct solutions. In one case, they failed even when given an algorithm that would solve the problem.

The findings showed that as problems approach a critical difficulty threshold, one that closely corresponds to the point of accuracy collapse, the models counterintuitively begin to reduce their reasoning effort.

According to the Apple researchers, these findings point to “fundamental scaling limitations” in the thinking capabilities of current reasoning models.

The study set the models puzzle tasks such as the Tower of Hanoi and river-crossing problems, and the researchers acknowledged that the focus on puzzles is a limitation of the work.
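
For context, the Tower of Hanoi is the classic puzzle in which a stack of disks must be moved between three pegs without ever placing a larger disk on a smaller one. The sketch below is the standard recursive solution in Python, shown only to illustrate the kind of exact, scalable procedure these puzzles demand; it is not code from the Apple study. The minimal solution takes 2^n - 1 moves for n disks, which is why the difficulty ramps up so sharply as disks are added.

```python
# Standard recursive Tower of Hanoi solver, shown for illustration only
# (not code from the Apple study).
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the list of moves that transfers n disks from source to target."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # move n-1 disks out of the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # stack the n-1 disks back on top
    return moves

for n in range(1, 11):
    # The optimal solution always has 2**n - 1 moves, so the move count
    # grows exponentially with the number of disks.
    assert len(hanoi(n)) == 2**n - 1

print(hanoi(3))
# -> [('A', 'C'), ('A', 'B'), ('C', 'B'), ('A', 'C'), ('B', 'A'), ('B', 'C'), ('A', 'C')]
```

A conventional program like this follows the procedure reliably no matter how many disks are involved, which is exactly the consistency the paper found the reasoning models lost once the puzzles grew sufficiently complex.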


The study concluded that current approaches to AI may have hit fundamental limitations. The models tested included OpenAI’s o3, Google’s Gemini Thinking, Anthropic’s Claude 3.7 Sonnet-Thinking, and DeepSeek-R1. Google and DeepSeek were approached for comment; OpenAI, the company behind ChatGPT, declined to comment.

Discussing the models’ capacity for “generalizable reasoning,” that is, for extending their conclusions beyond the tasks they were trained on, the paper suggests that current approaches may be running into fundamental barriers.

Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey said Apple’s findings indicate the industry is still grappling with AGI and that current methods may have reached a “dead end.”

He added: “The finding that large reasoning models lose their way on complex tasks, while performing well on simple and medium-complexity problems, implies that current approaches may be heading into a dead end.”

Source: www.theguardian.com

Bonobo Peeps, Cries and Groans Suggest Advanced Communication, Scientists Find

New research suggests that the peeps, cries, and groans of wild bonobos, a species of great apes living in Africa’s rainforests, can convey complex ideas in ways that resemble elements of human language.

According to a study published in the journal Science, the closest living genetic relatives of humans can combine different calls into phrases in which one call modifies the meaning of another, challenging the notion that only humans possess such an ability.

Simon Townsend, a professor at the University of Zurich and an author of the study, said the findings suggest that key features of language are not unique to humans, since bonobos exhibit them in their own communication system.

Outside experts have found the research persuasive, suggesting that bonobos’ communication abilities may go beyond those of chimpanzees, and that other species could turn out to exhibit similar behaviors as well.

Young male bonobo scratching his head.
Lukas Bierhoff / Kokolopori Bonobo Research Project


Mélissa Berthet, the lead author of the University of Zurich study, spent about six months in the Democratic Republic of the Congo studying wild bonobos at the Kokolopori Bonobo Reserve, documenting their vocalizations and behaviors.

The study mapped over 700 vocal calls in relation to their meanings and highlighted instances where bonobos combined different calls to convey new meanings, demonstrating their complex communication abilities.

Bonobos and chimpanzees share a recent common ancestor with humans, the researchers note, so their communication offers insights into the evolution of language among early humans.

The origin of language

Bonobos, with their sophisticated communication systems, serve as a link to understand the evolution of human language and shed light on how early humans developed complex forms of verbal communication.

The study raises questions about the ancient origins of human language and how bonobos and chimpanzees exhibit building blocks of communication that help in understanding the transition to more advanced languages in humans.

Despite the challenges in studying wild bonobos, researchers see them as a unique opportunity to reflect on human history and evolution, emphasizing the importance of preserving these endangered species.

Source: www.nbcnews.com

Cretaceous Lacewing Larvae Possessed Advanced Eyes

Paleontologists have discovered three lacewing larvae, preserved in 100-million-year-old Kachin amber from Myanmar, with large, forward-facing stemmata (the simple eyes of holometabolan larvae). The specimens show that highly refined simple eyes evolved convergently in at least two additional lineages, pointing to the enormous diversity of Cretaceous larvae.

A lacewing larva preserved in 100-million-year-old Kachin amber. Image credit: Haug et al., doi: 10.1111/1744-7917.13509.

Adult insects are known for their elaborate compound eyes, which allow them to achieve remarkable sensory feats when searching for food and mates.

In many insect larvae, however, these compound eyes have not yet developed. Simple eyes known as stemmata are usually sufficient for larvae, which at this stage are often little more than eating machines.

Some insect larvae, however, are predators, and a few of these have evolved highly efficient imaging systems from these simple stemmata.

“The adults and pupae of beetles, bees, flies, butterflies and their close relatives have compound eyes,” says Dr. Carolin Haug, a researcher at Ludwig-Maximilians-Universität München.

“In contrast, most holometabolan larvae have a small group of up to seven simple eyes, known as stemmata, on either side of the head.”

“Stemmata are characteristic of holometabolans and are usually simple structures, often oriented slightly radially, creating a wide overall field of view.”

“The fields of view of the right and left stemmata rarely overlap, however, which denies the larvae binocular vision.”

“What is more, most stemmata lack the complex internal structures needed to form images.”

“In contrast, several predatory holometabolan larvae evolved enlarged, forward-directed stemmata with overlapping fields of view that enable binocular vision.”

“Examples include the larvae of diving beetles, known as water tigers, as well as tiger beetles, antlions and whirligig beetles.”

“Stemmata have been reported in over 120 fossil larvae, but no image-forming eyes permitting binocular vision had previously been identified.”

In the new study, the authors describe three predatory larvae with unusually large, forward-facing stemmata preserved in Cretaceous Kachin amber.

They found that the size and orientation of the larval eyes are comparable to those of modern antlions, allowing a similar optical resolution.

“These are the first fossil examples of such eyes, and therefore the oldest,” Dr. Haug said.

“The highly refined simple eyes of predatory larvae thus evolved convergently at least twice more, not just in antlions, water tigers and tiger beetles, but also in at least two extinct lineages.”

“Our results reveal greater diversity in morphology, ecology and feeding strategies among Cretaceous larvae than is seen today.”

The results are published in the journal Insect Science.

____

Carolin Haug et al. Cretaceous lacewing larvae with binocular vision show convergent evolution of refined, simple eyes. Insect Science, published online February 18, 2025; doi: 10.1111/1744-7917.13509

Source: www.sci.news

Scientists Create Advanced Nanosensor for Measuring Forces

The newly developed all-optical nanosensors are luminescent nanocrystals that change intensity and color when pushed or pulled. They are probed only with light, allowing fully remote readout, with no wires or connections required. Their force sensitivity is 100 times better than that of existing nanoparticles that use rare earth ions for their optical response, and they operate over a force range spanning more than four orders of magnitude, 10 to 100 times wider than that of conventional optical nanosensors.

Illustration of atomic arrangement within a single lanthanide-doped nanocrystal. Each lanthanide ion can emit light. Image credit: Andrew Mueller / Columbia Engineering.

“Our discovery revolutionizes the sensitivity and dynamic range achievable with optical force sensors, and has implications for applications from robotics to cellular biophysics and medicine to space travel,” said Dr. Jim Schuck, a researcher at Columbia University. “We expect this technology to be immediately disruptive in the field.”

For the first time, a single nanosensor offers both high resolution and multiscale sensing capability.

This is important because it means that this one nanosensor, rather than a series of different classes of sensors, can be used to continuously study forces from the subcellular level to the whole-system level in engineered and biological settings, such as developing embryos, migrating cells, batteries, or integrated NEMS, the highly sensitive nanoelectromechanical systems in which the physical movement of nanometer-scale structures is controlled by electronic circuits, and vice versa.

“Aside from their unparalleled multiscale sensing capabilities, what makes these force sensors unique is that they operate with benign, biocompatible, and deeply penetrating infrared light,” said Dr. Natalie Fardian-Melamed, a postdoctoral fellow at Columbia University.

“This will allow us to peer deeply into various technical and physiological systems and monitor health conditions from a distance.”

“These sensors will enable early detection of system malfunctions and failures, and will have a major impact on sectors ranging from human health to energy and sustainability.”

Researchers were able to construct these nanosensors by exploiting the photon avalanche effect within nanocrystals.

In photon-avalanching nanoparticles, the absorption of a single photon within the material triggers a chain reaction that ultimately leads to the emission of many photons: one photon in, many photons out.

The optically active components within the nanocrystals studied are atomic ions from the lanthanide series of elements of the periodic table, also known as rare earth elements, doped into the nanocrystals. In this study, the scientists used thulium.

They found that the photon avalanche process is very sensitive to several things, such as the spacing between lanthanide ions.

With this in mind, they pressed on a single photon-avalanching nanoparticle (ANP) with the tip of an atomic force microscope (AFM) and found that the avalanching behavior was affected by these gentle forces far more strongly than expected.

“We discovered this almost by accident,” Schuck said.

“We suspected that these nanoparticles were force-sensitive, so we measured their emission while pressing on them.”

“And they turned out to be much more sensitive than expected!”

“In fact, we couldn’t believe it at first either. We thought the tip might be having some other effect.”

The authors knew how sensitive ANPs were, so they designed new nanoparticles that responded to force in different ways.

In one new design, nanoparticles change the color of their emitted light depending on the applied force.

In another design, they created nanoparticles that do not exhibit photon avalanches under ambient conditions, but start avalanching when a force is applied. These turned out to be very sensitive to forces.

They are now applying these force sensors to critical systems with the goal of making a big impact.

“The importance of developing new force sensors was recently highlighted by the work of 2021 Nobel laureate Ardem Patapoutian, which underscored how difficult these biological processes are to investigate,” said Dr. Schuck.

“We are thrilled to be part of these discoveries, which will transform the sensing paradigm and allow us to sensitively and dynamically map important changes in forces and pressures in real-world environments that are unreachable with today’s technology.”

The team’s work is published today in the journal Nature.

_____

Natalie Fardian-Melamed et al. 2025. Infrared nanosensors of piconewton to micronewton forces. Nature, in press; doi: 10.1038/s41586-024-08221-2

This article is a version of a press release provided by Columbia University.

Source: www.sci.news

Meta puts a stop to launching advanced AI models in the EU

Mark Zuckerberg’s Meta announced that it would not release an advanced version of its artificial intelligence model in the EU, citing “unpredictable” behavior of regulators.

The owner of Facebook, Instagram and WhatsApp is preparing to make the Llama model available in a multimodal format, meaning it can work with text, video, images and audio rather than just one format. Llama is an open-source model, meaning users can freely download and adapt it.

But a Meta spokesperson confirmed that the model would not be available in the EU, a decision that highlights tensions between big tech companies and Brussels amid an increasingly tough regulatory environment.

“We plan to release a multi-modal Llama model in the coming months, but it will not be released in the EU due to the unpredictable regulatory environment there,” the spokesperson said.

Brussels is introducing an EU AI law which comes into force next month, while new regulatory requirements for big tech companies are being introduced in the form of the Digital Markets Act (DMA).

However, Meta’s decision regarding its multimodal Llama model also has implications for its compliance with the General Data Protection Regulation (GDPR): the company was ordered to stop training its AI models on posts from Facebook and Instagram users in the EU over potential violations of privacy regulations.

The Irish Data Protection Commission, which oversees Meta’s compliance with GDPR, said it was in discussions with the company about training its models.

However, Meta is concerned that other EU data watchdogs could intervene in the regulatory process and hold up approval. Although a text-based version of Llama is available in the EU, and a new text-only version is due to be released there soon, these models have not been trained on EU Meta user data.

The move comes after Apple announced last month that it would not roll out some new AI features in the EU due to concerns about compliance with the DMA.


Meta had planned to use the multimodal Llama model in products such as Ray-Ban smart glasses and smartphones. Meta’s decision was first reported by Axios.

Meta also announced on Wednesday that it had suspended use of its Generative AI tool in Brazil after the Brazilian government raised privacy concerns about the use of user data to train models. The company said it decided to suspend use of the tool while it consults with Brazil’s data authorities.

Source: www.theguardian.com

Mark Zuckerberg commits to developing advanced AI to address concerns

Mark Zuckerberg has been accused of taking an irresponsible approach to artificial intelligence after committing to develop AI systems as powerful as human intelligence. The Facebook founder has also raised the possibility of making such a system freely available to the public.

Meta’s CEO announced that the company intends to build an artificial general intelligence (AGI) system and plans to open source it, making it accessible to outside developers. He emphasized that the system should be “responsibly made as widely available as possible.”

In a Facebook post, Zuckerberg stated that the next generation of technology services requires the creation of complete general-purpose intelligence.

Although the term AGI is not strictly defined, it generally refers to a theoretical AI system capable of performing a range of tasks at a level of intelligence equal to or exceeding that of humans. The potential emergence of AGI has raised concerns among experts and politicians worldwide that such a system, or a combination of multiple AGI systems, could evade human control and pose a threat to humanity.

Zuckerberg expressed that Meta would consider open sourcing its AGI or making it freely available for developers and the public to use and adapt, similar to the company’s Llama 2 AI model.

Dame Wendy Hall, a professor of computer science at the University of Southampton and a member of the United Nations advisory body on AI, expressed concern about the potential for open source AGI, calling it “really, very scary” and labeling Zuckerberg’s approach as irresponsible.

According to Hall, “Thankfully, I think it will still be many years before those aspirations become a reality.” She stressed the need to establish a regulatory system for AGI to ensure public safety.

Last year, Meta participated in the Global AI Safety Summit in the UK and committed to help governments scrutinize artificial intelligence tools before and after their release.

Another UK-based expert emphasized that decisions about open sourcing AGI systems should not be made by technology companies alone but should involve international consensus.

In an interview with tech news website The Verge, Zuckerberg indicated that Meta would lean toward open sourcing AGI as long as it is safe and responsible.

Meta’s decision to open source Llama 2 last year drew criticism, with some experts likening it to “giving people a template to build a nuclear bomb.”

OpenAI, the developer of ChatGPT, defines AGI as “an AI system that is generally smarter than humans.” Meanwhile, Google DeepMind’s head, Demis Hassabis, suggested that AGI may be further out than some predict.

OpenAI CEO Sam Altman warned at the World Economic Forum in Davos, Switzerland, that further advances in AI will be impossible without energy supply breakthroughs, such as nuclear fusion.

Zuckerberg pointed out that Meta has built an “absolutely huge amount of infrastructure” to develop the new AI system, but did not specify a development timeline. He also mentioned that a successor to Llama 2 is in the works.

Source: www.theguardian.com

Utilizing Webb’s Advanced Optical Techniques to Unravel the Mysteries of the Ring Nebula

New images captured by the James Webb Space Telescope’s MIRI (Mid-Infrared Instrument) reveal intriguing details of the Ring Nebula. These images show approximately 10 concentric arcs located just beyond the outer edge of the main ring, suggesting the presence of a low-mass companion star orbiting the central star at a distance similar to that between Earth and Pluto. Researchers from the Royal Observatory of Belgium, Griet van de Steene and Peter van Hoof, are part of the international team of astronomers who released these breathtaking images. In their research paper, they analyze these features and discuss their implications for the star’s evolution.

The Ring Nebula, located about 2,200 light-years from Earth in the constellation Lyra, is a well-known and visually striking planetary nebula. It displays a donut-shaped structure of glowing gas, shed by a dying star as it reached the end of its life cycle. Webb’s NIRCam (near-infrared camera) and MIRI instruments have captured stunning images of the nebula, giving scientists an opportunity to study and understand its complex structure.

The recent images obtained by the James Webb Space Telescope’s NIRCam reveal intricate details of the filamentary structure of the inner ring of the Ring Nebula. This inner region contains about 20,000 dense globules and is rich in molecular hydrogen. Additionally, the outer region of the nebula contains a thin ring with enhanced emission from carbon-based molecules known as polycyclic aromatic hydrocarbons (PAHs). These details were analyzed and described in a research paper by Griet van de Steene, Peter van Hoof, and their team.

The Webb images also show peculiar spikes extending outward from the central star on the outside of the ring. These spikes, observed in the infrared but faint in the visible spectrum captured by the Hubble Space Telescope, may be caused by molecules forming in the shadow of the densest part of the ring, shielded from direct radiation from the hot central star.

Furthermore, the researchers discovered 10 concentric arcs in a faint halo outside the ring. These arcs indicate the possible presence of a companion star orbiting at a distance similar to that between our Sun and Pluto. The interaction between the central star and this companion star may have shaped the nebula into its distinctive elliptical form.

The detailed images captured by the Webb telescope provide valuable insights into the process of stellar evolution. By studying the Ring Nebula, scientists hope to gain a better understanding of the life cycles of stars and the elements they release into space. Griet van de Steene and Peter van Hoof, along with their team of experts in planetary nebulae and related objects, are actively researching and analyzing the Ring Nebula using imaging and spectroscopy techniques.

Source: scitechdaily.com