In the past 25 years, the field of human evolution has seen remarkable growth, marked by a surge of discoveries. Researchers have unearthed more fossils, species, and artifacts from ever more diverse locations: from the diminutive “hobbits” that inhabited the Indonesian island of Flores to the enigmatic Homo naledi, known solely from a single deep cave system in South Africa. At the same time, advanced analytical techniques have deepened our understanding of these finds, revealing a trove of information about our origins and our extinct relatives.
This whirlwind of discoveries has yielded two major lessons. First, since 2000, the human fossil record has been extended much further back in time. Previously, the oldest known hominin fossil was the 4.4-million-year-old Ardipithecus ramidus, but discoveries in 2000 and 2001 unearthed even older species: Ardipithecus kadabba, Orrorin tugenensis from about 6 million years ago, and Sahelanthropus tchadensis from about 7 million years ago. A possible additional member of the Orrorin lineage, slightly younger than O. tugenensis, was tentatively identified in 2022.
According to Clément Zanolli at the University of Bordeaux, the discovery of these early hominin fossils represents “one of the great revolutions” in our understanding of evolution.
The second major lesson has enriched the narrative of how our species emerged from earlier hominins. By 2000, genetic evidence had established that all non-Africans descend from ancestors who lived in Africa around 60,000 years ago. This indicated that modern humans evolved in Africa and subsequently migrated, replacing other hominin species.
However, by 2010, the sequencing of the first Neanderthal genome opened a new chapter, along with the DNA analysis of several other ancient humans. These studies revealed that our species interbred with Neanderthals, Denisovans, and possibly other groups, creating a complex tapestry of human ancestry.
Skeletal research has long hinted at interbreeding, as many fossils exhibit traits that defy clear species categorization, notes Sheela Athreya at Texas A&M University. In 2003, Eric Trinkaus and colleagues described a jawbone excavated from Peștera cu Oase, Romania, as a human-Neanderthal hybrid, based on its morphology. Genetic testing in 2015 confirmed that the Oase individual had a Neanderthal ancestor only four to six generations back.
This evidence highlights that our species did not merely expand from Africa; rather, our population absorbed genetic contributions from Neanderthals and Denisovans along the way. Genetically, we are a mosaic, a fusion of countless years of diverse human lineages.
We perceive color using input from cone cells in the retina.
Shutterstock/Kytriel
In April, researchers announced that they had developed a device that allows people to see a vibrant blue-green color never before seen by humans. Following the announcement, requests poured in from members of the public eager to experience the color firsthand.
The device could eventually enable people with certain types of color blindness to experience typical vision, while giving those with normal vision an opportunity to perceive a broader spectrum of colors. “Our aim is to enhance the color experience,” says Austin Roorda at the University of California, Berkeley.
The retina at the back of most people’s eyes contains three types of cone cell, known as S, M, and L. Each type is sensitive to a different range of wavelengths of light, and the brain constructs color perceptions from the combination of signals it receives from them.
The M cones’ sensitivity range overlaps with those of the other two types, so the brain never normally receives a signal from M cones alone: any light that stimulates them also stimulates S or L cones.
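To make that overlap concrete, here is a minimal sketch using idealized Gaussian sensitivity curves; the peak wavelengths and widths are rounded, illustrative assumptions rather than the measured human cone fundamentals.

```python
import math

# Idealized cone sensitivities (illustrative assumptions only; real human
# cone fundamentals are asymmetric and differ from these rounded values).
CONE_PEAKS_NM = {"S": 445.0, "M": 540.0, "L": 565.0}
WIDTH_NM = 45.0  # assumed shared spread of each sensitivity curve

def response(cone: str, wavelength_nm: float) -> float:
    """Relative response of one cone type to light of a single wavelength."""
    peak = CONE_PEAKS_NM[cone]
    return math.exp(-((wavelength_nm - peak) ** 2) / (2 * WIDTH_NM ** 2))

# Scan the visible spectrum: at every wavelength where M responds
# appreciably, L and/or S responds too, so no natural light can
# stimulate the M cones in isolation.
for wl in range(400, 701, 25):
    s, m, l = (response(c, wl) for c in ("S", "M", "L"))
    print(f"{wl} nm  S={s:.2f}  M={m:.2f}  L={l:.2f}")
```

Only by aiming light at individual, pre-mapped M cones, as described next, can the brain be sent an M-only signal it has never received from natural light.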
Roorda and his team used a highly accurate laser to selectively stimulate about 300 M cones in a small patch of retina, covering a region of the visual field roughly the size of a fingernail held at arm’s length.
When five team members tested the device, they encountered a vivid blue-green hue more saturated than anything they had ever seen, which they named “olo.” The finding was validated through a color-matching experiment that compared olo against the full visible spectrum.
“It was truly an incredible experience,” remarks Roorda, who has witnessed olo more frequently than anyone else due to his essential role in developing the system. “The most vibrant natural light appeared dull in comparison.”
After their findings attracted media attention, the team received numerous inquiries from various individuals, including artists, interested in seeing olo. However, Roorda explained that they were unable to fulfill these requests, as setting up the device for a new person requires several days.
Instead, they are concentrating on two ongoing experiments. The first experiment aims to determine whether the device can temporarily enable individuals with color blindness to experience typical vision. Certain color blindness types arise from having only two cone types rather than the typical three. “We manipulate the signaling from specific cones within a type to simulate the existence of a third cone type,” Roorda explains. The objective is for people’s brains to interpret these signals as colors they have never experienced before.
The researchers are also exploring whether a similar technique could allow individuals with three cone types to perceive the world as if they had four cone types, potentially expanding their color perception. Results from both studies are anticipated to be available next year, Roorda indicated.
In a room adorned with gray walls in the Dutch city of Nijmegen, peculiar activities unfold beneath your feet. You find yourself seated in a chair, donning a hat covered with sensors, and your bare feet are placed in holes in the platform. Below, a robot equipped with a metal probe begins tickling the soles of your feet. Soon, the air fills with shrieks, laughter, and a certain painful mirth. Here at Radboud University’s Touch and Tickle Laboratory, volunteers are subjected to relentless tickling for the sake of science.
“We can monitor the intensity, speed, and specific areas of stimulation on the legs,” explains Konstantina Kilteni, the lab’s director, of the robotic tickling experiment. Simultaneously, researchers record participants’ brain activity and physiological measures such as heart rate, respiration, and sweating. Armed with these neurological and physiological insights, the researchers aim to tackle age-old questions that have intrigued philosophers from Socrates to René Descartes. Why do we experience ticklishness, what does it reveal about the boundary between pleasure and pain, and does this peculiar behavior serve any real purpose? The findings could illuminate areas such as infant brain development, clinical conditions like schizophrenia, and the structure of conscious experience in our brains.
Though the researchers have yet to publish their findings, Kilteni is willing to share some early insights. On what triggers the tickling sensation, she says, “For us to recognize it as tickling, the contact must be both strong and rapid.” Preliminary analyses also indicate that electroencephalography (EEG) reveals distinct patterns of brain activity during ticklish sensations. To probe which brain regions process these sensations, the researchers plan to use functional MRI, although the robot will need modifications to avoid interfering with the scanner. Scientists at the institute have also begun investigating the intriguing question of whether people actually enjoy being tickled.
“We observe a mix of responses, allowing us to see both those who find it pleasurable and those who find it distressing,” Kilteni notes. People’s reactions may include smiles or laughter, but these do not necessarily correlate with their enjoyment. Perceptions can also shift over time. “Some individuals have reported that though it may be enjoyable initially, prolonged exposure can become uncomfortable and even painful,” she adds.
Tickling Laboratory at Radboud University, Nijmegen, Netherlands
Cohen Verheiden
One of the enduring enigmas Kilteni is eager to unravel is why self-tickling is impossible. This peculiar fact suggests that unpredictability in stimulation is crucial, a notion supported by contemporary studies. Numerous investigations indicate that our brains predict the sensations triggered by our own actions, leading us to perceive our own touch as less intense than that of others. This can go awry in certain mental health conditions: research suggests that individuals experiencing auditory hallucinations or sensations of being controlled by external forces find their own touch more ticklish. “This indicates a possible breakdown in how our brains forecast our feelings based on our movements,” Kilteni says. “We are keen to explore this further in clinical populations, especially those with schizophrenia.”
What Makes Us Ticklish?
Perhaps the biggest unanswered question is why we are ticklish at all. Known mainly in humans and their close relatives, tickling may have evolved from behaviors in our great ape ancestors: chimpanzees and bonobos, for instance, frequently tickle each other during play. In a study published this year, Elisa Demuru and colleagues at the University of Lyon in France observed a bonobo colony for three months. They found a clear relationship between tickling and age: younger bonobos were tickled most often, while older individuals more often did the tickling.
Demuru remarked, “This is intriguing because it aligns closely with human behavior, where tickling is chiefly an interaction involving young children.” The researchers also observed that social bonds significantly shaped the tickling interactions: pairs that tickled each other most often shared the strongest attachments.
For Demuru, this suggests that tickling evolved as a prosocial behavior that strengthens connections between youngsters and their group members. It is closely related to pretend play, she adds, since acts that would appear aggressive and unpleasant coming from strangers can be enjoyable among friends or close acquaintances. In her studies of bonobos at the Lola ya Bonobo sanctuary in the Democratic Republic of the Congo, she observes how orphaned infants respond to tickling by their human surrogate parents, highlighting the importance of familiarity. “It’s a fascinating behavior. It’s always joyful to see them laugh; they’re incredibly adorable!” she shares.
Regardless of one’s mental state or relationship with the person (or machine) doing the tickling, even unwanted tickling can elicit laughter. Some researchers argue this shows that tickling is a physiological reflex, though that does not preclude its evolution having served a social purpose. Another hypothesis is that the behavior helps young individuals learn to protect vulnerable areas of the body during play or combat. “The truth remains that we don’t have definitive answers, because there are valid counterarguments to all these theories,” Kilteni states.
Rats “laugh” when tickled
Shinpei Ishiyama and Michael Brecht
Nevertheless, focusing exclusively on tickling in great apes may miss an important aspect of the behavior. Rodents are not known to tickle one another, yet they appear to enjoy being tickled by humans. Mice, previously thought not to be ticklish, turn out to relish it when they feel comfortable: Marlies Oostland at the University of Amsterdam found that relaxed mice, flipped onto their backs, delight in being tickled, producing high-pitched sounds that resemble laughter.
Intriguingly, these sounds lie beyond the range of human hearing, and what role they play for the mice themselves is not yet clear, adding to the mystery of this rodent laughter. While Oostland’s findings are not yet published, it is already evident that the animals respond positively to tickling. “If given the choice between a safe, scented hutch in their home cage and being tickled, mice will choose the latter,” she asserts.
Oostland has a speculation about why humans and other animals react to tickling the way they do. Our brains are constantly predicting external stimuli, weighing potential threats and survival tactics. Tickling, she proposes, delivers surprises that contradict those predictions; if we feel safe, the unexpected sensations can be exhilarating. “This is more of a hypothesis; it remains unproven,” she admits. “But I believe tickling helps animals, especially young ones, adapt to a changing environment.” Such peculiar behavior may well be an evolutionary quirk worth embracing.
Karo people overlooking the Omo River Valley in Ethiopia
Michael Honegger/Alamy
Here’s a snippet from Our Human Story, a newsletter focusing on advancements in archaeology. Subscribe to receive it monthly in your inbox.
On the eastern shore of Lake Turkana in Kenya lies Namorotukunan, a hill where a river once flowed but has long since dried up. Today the landscape is dry, with only sparse shrubs.
Between 2013 and 2022, a team of researchers led by David Braun at George Washington University excavated clay layers beside the ancient river course. Their finds include 1,290 stone tools crafted by ancient hominins between 2.75 and 2.44 million years ago. They reported the discoveries in Nature Communications last week.
The tools belong to the Oldowan tradition, examples of which are found across Africa and Eurasia. These are among the oldest Oldowan tools ever discovered.
Braun and his team noted a remarkable consistency in the tools’ design: despite spanning 300,000 years, they show a persistent preference for particular rock types, suggesting a habitual, reliable approach to tool-making rather than isolated episodes.
The tools from Namorotukunan represent yet another significant discovery from the Omo Turkana Basin, a key region for understanding human origins.
Basins, Cradles, and Rifts
Since the 1960s, the Omo Turkana Basin has served as a focal point for human evolution research.
It stretches from southern Ethiopia, where the Omo River flows south into Lake Turkana, the world’s largest desert lake, which extends deep into Kenya. The Turkwel and Kerio rivers feed its southern reaches.
Fossil-rich locations pepper the basin. On the lake’s western side is the Nachukui Formation, while the Koobi Fora Formation lies to the east. Other sites include the Usno Formation near the Omo in the north and Kanapoi near the Kerio in the south.
Map of fossil and tool sites in the Omo Turkana Basin
François Marchal et al. 2025
Led by François Marchal, a team from Aix-Marseille University in France has compiled all known hominin fossil finds from the Omo Turkana Basin into a database. They describe the patterns in the Journal of Human Evolution, offering both a snapshot of the history of paleoanthropological research and a wealth of knowledge about human evolution.
Research in the Omo Turkana Basin began with expeditions led by a collaborative French, American, and Kenyan team that included Camille Arambourg, Yves Coppens, F. Clark Howell, and Richard Leakey. Leakey went on to spearhead exploration of Koobi Fora in the east and western sites such as Nachukui.
Richard Leakey was a pivotal figure in the study of human evolution during the 1960s, 70s, and 80s. He was part of a family legacy in paleoanthropology: the son of Louis and Mary Leakey, renowned for their groundbreaking work at Oldupai Gorge in Tanzania, while his daughter Louise continues the family’s exploration of human evolution.
But research on the Omo Turkana Basin transcends any individual’s contributions. Marchal’s team catalogued a substantial 1,231 hominin specimens, representing around 658 individuals, about one-third of all known hominin remains across Africa.
Alongside sites in the Great Rift Valley of East Africa, such as Oldupai Gorge, and the Cradle of Humankind in South Africa, the Omo Turkana Basin ranks among Africa’s richest hominin fossil regions.
Discovery
To the north, near the Omo River, researchers have uncovered some of the earliest Homo sapiens remains on record. At Omo Kibish, two partial skulls were found, along with several other bones and numerous teeth. Successive studies have pushed these remains further back in time: once estimated at 130,000 years old, they were later revised to 195,000 years, and a 2022 analysis indicated they could be at least 233,000 years old. Of all H. sapiens fossils discovered, only those from Morocco’s Jebel Irhoud are older, at about 300,000 years.
The fossils from Omo Kibish and Jebel Irhoud significantly deepen our understanding, suggesting that our species began evolving far earlier than the previously accepted timeline of around 200,000 years.
A similar trend applies to the Homo genus as a whole, which encompasses groups such as Homo erectus and the Neanderthals. Determining which branch of Homo came first remains difficult: the fossil record of Homo is already sparse before 2 million years ago, and it grows sparser still the further back one looks.
By meticulously analyzing fossils from the Omo Turkana Basin, Marchal and his team determined that Homo thrived in the region between 2.7 and 2 million years ago.
The earliest known Homo specimens in the basin come from the Shungura Formation and are estimated to be between 2.74 and 2.58 million years old. Although announced in 2008, they have yet to be examined in detail.
Given that gap, Marchal’s team points out that the unexamined material could bring the number of known early Homo individuals to 75, a substantial and informative dataset: “much more than just a handful of fossils.”
Notably, the Homo genus was well established in the Omo Turkana Basin between 2.7 and 2 million years ago. Yet it was not the dominant hominin: Paranthropus, a genus with smaller brains and larger teeth, was twice as prevalent, and species of Australopithecus were present too, indicating a period when different hominins lived side by side. Some of these Homo individuals likely produced the Oldowan tools found in the basin.
This type of discovery is made possible by decades of dedicated research, and it is anticipated that the Omo Turkana Basin will continue to illuminate our origins for years to come.
Neanderthals, ancient humans, and cave art: France
Accompany New Scientist’s Kate Douglas on an intriguing journey through time, exploring significant Neanderthal and Upper Paleolithic sites across southern France, from Bordeaux to Montpellier.
In the German fairy tale of the fisherman and his wife, an old man catches a peculiar fish: a talking flounder. The creature is an enchanted prince, and it grants the fisherman any wish he desires. His wife, Ilsebill, revels in the newfound fortune, continually asking for more extravagant things. Their humble shack becomes a grand castle, yet it never feels sufficient. Ultimately she wishes to become pope, and finally God. This insatiable greed enrages the elemental power: the ocean darkens, and she is returned to her original impoverished state. The moral of the story: don’t covet what you aren’t entitled to.
Numerous variations of this classic tale exist. Sometimes the wishes are clumsy or contradictory rather than an affront to the divine order, as in Charles Perrault’s “The Ridiculous Wishes.” Similarly, in W. W. Jacobs’s 1902 horror story “The Monkey’s Paw,” the wishes unintentionally harm those closest to the wishers rather than the objects of their desires.
Nowadays, many young people grow up with their own enchanted fish in their pockets. They can wish for homework completion, and the fish fulfills those wishes. They can indulge in countless sexual scenarios, and if they bypass age restrictions using a VPN, those scenarios become visible. Soon, they may wish for movies that match their interests, and those will materialize in seconds. They hope to finish their college essays—only to find them fully written.
This shift in perspective not only alters the consumer relationship with creative arts—literature, music, and visual content—but also redefines the essence of creativity and, thus, being human. In the near future, most individuals may delegate troublesome interactions to AI agents. These agents would negotiate contracts, act as representatives, receive critique, match information, and gather opinions. And the ocean remains undisturbed.
Today, a young Ilsebill sitting in a university auditorium might still be penalized by professors who grew up in a different era if they catch her entrusting the seductive fish with yet another essay. But this won’t last much longer: Ilsebill will soon belong to a confident majority, and most professors will have shared her experiences. Ilsebill wants a boyfriend, a spiritual guide, and a therapist, and soon she will have them. With each of these companions, it feels as if she has known them for years, and in a literal sense, she has.
Just like her fairy-tale counterpart, she aspires to be pope, and soon accomplishes this within her small world. One could object, though, that Ilsebill is making things too easy for herself. If becoming pope is effortless, the title’s allure will dwindle for her generation; after all, the most intriguing and desirable things usually require overcoming significant resistance. Yet Ilsebill learns that this attractive resistance can also be found in encouragement, in learning, and in ever more precise wishes.
Today, young people grow up with enchanting fish in their pockets…the fisherman and his wife. Illustration: Aramie
She dedicates much of her energy to refining the tone of her results. Though she may lack an innate sense of what makes writing compelling, she can gauge the appropriateness of her content through responses from others and from the AI. In this way she develops wishes more reliable than ever before. In the past, Ilsebill rarely met anyone who found her words intriguing or surprising; now, every conversation with her AI is treated as captivating and surprising. At last she feels heard in a way human partners might struggle to offer.
But what happens when the fulfillment of every wish leaves Ilsebill feeling empty? What paths remain open to her?
The first path is the descent into decadence, a pattern familiar from studies of the affluent. In the future, those with ample wealth will hire human therapists and watch films featuring real actors. Recently, someone in an AI forum suggested that AI could mass-produce child sexual abuse imagery, arguing that this would spare real children from harm. But consumers of such material seek not only visual stimulation but also the certainty that real children were involved; they crave the “aura” of their products. With sufficient resources, Ilsebill could tread this path too.
The second path involves forming small, insular communities that deliberately construct challenges and obstacles for one another, perhaps in the cult-like manner of traditional sports or hunting clubs. They might host secret or exclusive underground events whose only purpose is to endure the discomfort of queuing and waiting. (The idea is inspired by Stanisław Lem’s novel The Futurological Congress.) As of 2025, queuing is still an experience available free of charge; future generations may be astonished by that.
The third path is the most likely and the most obvious. Within her fairy-tale existence, Ilsebill discovers the fundamental trick of redefining her wishes, heightening their significance by infusing them with guilt. Guilt is a powerful mechanism for binding a person to a product: a beloved but embarrassing product becomes intertwined with one’s identity, fostering neuroses and alternative realities that amplify the guilt further.
Ilsebill naturally assumes the enormous ecological guilt attached to the immense resource waste of AI, a primary guilt shifted onto her directly from the actions of large corporations and states. She begins to limit and punish herself in her daily life, waking each morning convinced that every small choice and desire inflicts great harm on the “planet,” “society,” or “future.” She flourishes in this martyr-like guilt, assuming a savior’s role. The newfound identity feels like an eternal struggle without resolution, a magical element that preserves her self-sacrificial essence amid her internal contradictions. Rather than protesting the insatiable waste of resources, Ilsebill constrains her personal freedom: her diet, her water consumption, her family size, her mobility. Ultimately she becomes a sort of sacrificial figure, carrying all her transgressions to the grave.
European folklore’s warnings against impulsive and unwise wishes stem from a universal theme: the intricate course of an individual life. They pose questions about personal growth, life’s purpose, and what to pass on to the next generation. In this final scenario, however, Ilsebill cannot answer these fundamental questions freely; they are decided for her.
Anthropic, an artificial intelligence firm, has agreed to a $1.5 billion settlement of a class action lawsuit filed by authors who allege that the company used pirated copies of their books to train its chatbots.
If a judge approves the landmark settlement on Monday, it could signify a significant shift in the ongoing legal conflict between AI companies and writers, visual artists, and other creative professionals who are raising concerns about copyright violations.
The company plans to pay authors approximately $3,000 for each of the estimated 500,000 books covered by the settlement.
“This could be the largest copyright recovery we’ve seen,” said Justin Nelson, an attorney for the authors. “It is a first in the era of AI.”
Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who sued last year, now represent a wider group of writers and publishers whose works were used to train the AI chatbot Claude.
In June, a federal judge issued a mixed ruling, finding that training AI chatbots on copyrighted books is not in itself illegal, but that Anthropic had wrongfully acquired millions of books from copyright-infringing sources.
Experts predict that, had Anthropic not settled, it would likely have lost the lawsuit, which was set to go to trial in December.
“We’re eager to see how this unfolds in the future,” commented William Long, a legal analyst at Wolters Kluwer.
U.S. District Judge William Alsup in San Francisco is scheduled to hear the terms of the settlement on Monday.
Why are books important to AI?
Books are crucial as they provide the critical data sources—essentially billions of words—needed to develop the large language models that power chatbots like Anthropic’s Claude and OpenAI’s ChatGPT.
Judge Alsup’s ruling revealed that Anthropic had downloaded over 7 million digitized books, many of them believed to be pirated. The initial downloads included nearly 200,000 titles from an online library called Books3, assembled by AI researchers outside OpenAI to match the vast collections on which ChatGPT was trained.
Bartz, the lead plaintiff in the case, found that her debut thriller, The Lost Night, was part of the Books3 dataset.
The ruling also found that Anthropic later downloaded at least 5 million copies from the pirate site Library Genesis and around 2 million more from a similar source, Pirate Library Mirror.
The Authors Guild told its thousands of members last month that it expected compensation of at least $750 per work, and potentially much more. A settlement of about $3,000 per work could indicate a smaller pool of affected titles once duplicates and non-copyrighted works are taken into account.
On Friday, Authors Guild CEO Mary Rasenberger said the settlement is “a tremendous victory for authors, publishers, and rights holders,” sending a strong message to the AI industry about the consequences of using pirated works to train AI at the expense of those least able to afford it.
A dramatic reconstruction of early modern Homo sapiens in Africa
BBC/BBC Studios
human is available on BBC iPlayer (UK) and on PBS in the US from September 17.
Based on my observations, science documentaries often fall into two categories, akin to French and Italian cuisines. (Hear me out before you judge that comparison.) The first category employs intricate techniques for a deep experience. The second is more straightforward, allowing the content to shine naturally.
Both documentary styles can yield impressive results in their own ways. human, a five-part BBC series exploring the roots of our genus, Homo, undoubtedly belongs to the second category. It weaves together compelling stories, stunning visuals, and the charismatic presence of paleoanthropologist Ella Al-Shamahi, inviting viewers on a heartfelt journey through six million years of our human history. No flashy add-ons are necessary.
The first episode delves into a complex question: when exactly did our species emerge? Different perspectives yield different answers. Was it 300,000 years ago, when humans began to exhibit features resembling our own? Was it when our skulls, as Al-Shamahi describes, took on softer features and a more spherical shape? Or, more poetically, when we developed remarkable traits like intricate language, abstract thought, and cooperative behavior?
It’s an engaging episode, particularly when the narrative shifts to other, now extinct, human species. Al-Shamahi’s exploration of Indonesia introduces us to Homo floresiensis, a meter-tall human uniquely adapted to life on Flores. The discovery of these “hobbits” in Liang Bua cave two decades ago reshaped our understanding of ancient human biology: despite their small brains they made and used tools, while their long arms and short stature set them apart from other human species.
Episode three turns to the fate of our most famous relatives, the Neanderthals, who had adapted to the colder climates of Europe and Asia but ultimately went extinct as our species spread there.
Throughout the series, Al-Shamahi showcases remarkable paleoanthropological discoveries from recent decades (many of which you may have read about in New Scientist). For instance, the iridescent feathers of birds such as red kites have attracted interest for what they may have meant to Neanderthals. Meanwhile, perikymata, the growth lines in tooth enamel, confirm that H. sapiens experienced extended childhoods, giving our cognitive capacities time to develop.
Over just five episodes, human cannot cover every aspect of our evolutionary story. Yet, it illuminates how H. sapiens has been shaped by climate influences, the flora and fauna that provide for us, other human species, and collaborative nomadic groups that shared skills, knowledge, and DNA, allowing us to thrive and eventually build cities.
Many tellings of the H. sapiens story cast humanity as the ultimate survivor, destined to progress and dominate the Earth. human offers a humbler narrative, placing our species in context alongside our ancient relatives.
Tracking human evolution: go behind the scenes of the new BBC series human with Ella Al-Shamahi at newscientist.com/video
In one captivating and poignant segment, Al-Shamahi addresses how little frontline science is conducted in regions perceived as inhospitable to Western researchers. Consider the Neanderthal skeletons exhibiting severe disabilities unearthed in present-day Iraq, a striking reminder of the discoveries we have overlooked.
Bethan Ackerley is a sub-editor at New Scientist. She has a passion for science fiction, sitcoms, and all things eerie. Follow her on Twitter @inkerley
New Scientist Book Club
Are you a book lover? Join a welcoming community of fellow readers. Every six weeks, we dive into exciting new titles, and members enjoy exclusive access to excerpts, author articles, and video interviews.
From an early age, the inevitability and finality of death profoundly shape our lives. Our capacity to comprehend the sorrow of our eventual end, as well as the loss of connection, is a fundamental aspect of what it means to be human. These understandings have fostered iconic rituals that are deeply embedded in human culture.
Historically, we have presumed that Homo sapiens is the only human species to grasp that death comes to all living things. However, as detailed in “What Ancient Humans Thought When They Began Burying the Dead,” archaeologists are challenging the notion that a significant emotional response to death is uniquely ours.
A particularly provocative claim is that death rituals were pioneered by ancient humans vastly different from us. The evidence points to Homo naledi, an ancient human from southern Africa whose brain was only one-third the size of ours and who lived at least 245,000 years ago. What drove them to develop a culture surrounding death remains unclear; one intriguing, though speculative, idea is that they did so to help younger members of their community cope with loss.
Many controversies surround claims about H. naledi and their burial practices, primarily concerning the quality of the evidence. Nevertheless, since the mid-20th century, researchers have chipped away at the behavioral gap between our species and others, propelled by studies revealing that many animals lead emotionally complex lives; some even have their own rituals when encountering death in their communities. That adds weight to the argument that our ancestors may have developed cultural practices surrounding death as far back as 500,000 years ago, and that H. naledi, too, might have established a burial tradition.
Archaeologists question whether a profound response to death is exclusively our domain.
The poignant idea that H. naledi may have helped their younger generation confront the weight of loss calls into question our understanding of what it means to be human, and whether our species is as unique as we assume in processing loss.
The designer of the iPhone has pledged that his upcoming AI-infused device will be guided by the belief that “humanity deserves better,” acknowledging a sense of “responsibility” for certain adverse effects of contemporary technology.
Sir Jony Ive said that his new collaboration with OpenAI, the organization behind ChatGPT, aims to renew his technological optimism amid growing unease about the repercussions of smartphones and social media.
In an interview with the Financial Times, the London-born designer refrained from disclosing specifics about the devices he is working on at OpenAI but voiced concerns over people’s interactions with certain high-tech products.
“Many people would agree that there is an uncomfortable relationship with technology today,” he stated. He further emphasized that the design of the device is motivated by the notion that “we deserve better; humanity deserves better.”
However, Ive, the former chief design officer at Apple, expressed his feelings of accountability for the adverse effects produced by modern tech products. “Some of the negative outcomes were unintended, but I still feel responsible, and that drives my determination to create something beneficial.”
He added, “Whenever you create something new or innovate, the outcomes will be unpredictable; some will be wonderful, while others may cause harm.”
Just last month, Ive finalized the sale of his hardware startup io to OpenAI in a $6.4 billion (£4.7 billion) deal, and he has taken on creative and design leadership across the merged entity.
In a video announcing the deal, OpenAI CEO Sam Altman referred to the prototype devised by Ive as “the coolest technology the world has ever seen.”
Apple analyst Ming-Chi Kuo has said the device would reportedly be screenless, designed to be worn around the neck, and “compact and elegant like an iPod Shuffle.” Mass production is projected to begin in 2027.
According to The Wall Street Journal, the device will be fully attuned to the user’s environment and life, and has been described as a third core device to sit alongside a MacBook Pro and an iPhone.
Ive, who began his journey at Apple in 1992, expressed that the OpenAI partnership has rekindled his optimism regarding the potential of technology.
“When I first came here, it was a place where people genuinely aimed to serve humanity, inspire individuals, and aid creativity; that was the draw. I don’t sense that spirit here now,” he remarked.
Ive was interviewed alongside Laurene Powell Jobs, the widow of Apple co-founder Steve Jobs.
She remarked: “One need only look at the research on the surge of anxiety and mental health challenges among teenage girls and young people.”
Powell Jobs, whose firm Emerson Collective has invested in Ive’s design company LoveFrom, chose not to comment on whether the new OpenAI devices would rival Apple products.
“I still maintain close ties with Apple’s leadership,” she stated. “They are truly commendable individuals, and I hope for their success.”
A prominent British-Canadian computer scientist often referred to as the “godfather” of artificial intelligence has shortened his odds on AI causing the extinction of humanity in the next 30 years, warning that the pace of technological change is “much faster” than he anticipated.
Professor Geoffrey Hinton, the recipient of this year’s Nobel Prize in Physics for his contributions to AI, suggested that there is a “10% to 20%” probability of AI leading to human extinction within the next three decades.
Hinton had previously said there was a 10% chance of the technology leading to catastrophic outcomes for humanity.
When asked on BBC Radio 4’s Today program if he had revised his assessment of the potential AI doomsday scenario and the one in 10 likelihood of it happening, he replied, “No, it’s between 10% and 20%.”
Hinton’s estimate prompted former chancellor Sajid Javid, who was guest editing Today, to remark, “You’re going up,” to which Hinton replied, “If anything. You know, we’ve never had to confront anything more intelligent than ourselves.”
He further added, “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few. There’s a mother and a baby: evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”
Hinton, a professor emeritus born in London and based at the University of Toronto, emphasized that humans would appear infantile compared to the intelligence of highly advanced AI systems.
“I like to think of it like this: imagine yourself and a 3-year-old. We’ll be the 3-year-olds,” he stated.
AI can broadly be defined as computer systems that can perform tasks typically requiring human intelligence.
Last year, Hinton resigned from his position at Google so he could speak more candidly about the risks of unchecked AI development, citing concerns that “bad actors” could exploit the technology to cause harm. A primary worry of AI safety advocates is that artificial general intelligence, meaning systems that surpass human intellect, could elude human control and pose an existential threat.
Reflecting on where he had expected AI development to be when he first began his research, Hinton remarked: “I didn’t think it would be [where we are] now. I thought at some point in the future we would get here.”
He added: “Because in the situation we’re in now, most of the experts in the field think that sometime within the next 20 years we’re going to develop AIs that are smarter than people. And that’s a very scary thought.”
Hinton remarked that the pace of advancement was “extremely rapid, much quicker than anticipated” and advocated for government oversight of the technology.
“My concern is that the invisible hand is not going to keep us safe. Just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely,” he stated. “The only thing that can force those big companies to do more research on safety is government regulation.”
Hinton is one of three “godfathers of AI” to have won the ACM A.M. Turing Award, the computer science equivalent of the Nobel prize, for their contributions. However, one of the trio, Yann LeCun, chief AI scientist at Mark Zuckerberg’s Meta, has played down the existential threat, suggesting that AI “could actually save humanity from extinction.”
This was one of history’s monumental moments, but had John Glenn not stopped at a supermarket on his way to boarding Friendship 7 to pick up a Contax camera and 35mm film, the visual record might not exist. The photographs an American astronaut took through the window of a capsule in Earth orbit on February 20, 1962, provided unprecedented documentation of Project Mercury’s first orbital mission. The Soviet Union may have beaten the Americans to human spaceflight, but the Americans were taking the first color photographs from space.
German gallerist Daniel Blau points out that these are also “the most expensive photographs ever taken”: billions of dollars were spent to obtain them. Blau brought an original print of Glenn’s first photograph taken in space to Paris Photo this year, along with a cache of rare NASA photographic prints, many never publicly displayed before and most taken by unknown scientists and astronauts.
“At that time, NASA didn’t provide cameras to astronauts,” Blau says. “In a way, this was Glenn’s private photograph.” Despite their scientific purpose, Glenn’s images convey the inescapable mystery of the universe. A warm, glowing ball of light spreads from the center of the frame. Luminescent flashes blaze in the deep darkness of the void, dancing like the “fireflies” Glenn described. It must have been startling to watch; in fact, the sparks turned out to be condensation.
Traveling at 28,000 km/h, humans had reached space, but they had not yet designed photographic equipment powerful enough to keep up with the journey. Short on visual information and detail, Glenn’s photographs perhaps reveal less about the universe than about human ambition, of which they have become totems. Glenn later added a personal caption, warning: “I guarantee you a photo will never be able to recreate the brilliance of a real scene.”
Blau began dealing in vintage NASA prints in the 1990s. “The space race and the cold war were the defining forces of the second half of the 20th century. Of course, my generation remembers all the important moments.” Some of the photos were published at the time, but original prints are hard to come by. “These scientists and the people who worked on the missions passed their personal archives down to their children, and now their grandchildren, so there is still a lot of material on the market. It was natural for me to start searching out and working with these photographs.”
At Paris Photo, a crowd gathered around a series of six silver gelatin photographs from 1948, looking down on the Rio Grande from a V-2 rocket at 73,000 feet. Also on display were humanity’s first close-up photograph of Mars, taken in 1965, and the first panoramic photograph of Earth seen from the moon. The latter was not taken by human hands but transmitted by radio signal from an uncrewed mission in August 1966, then stitched together, piece by piece, into a single image at NASA’s Jet Propulsion Laboratory.
By 1979, the Voyager probes were taking better pictures of the planets: their images of Jupiter and its four largest moons, suspended like marbles in onyx-black space, were particularly startling.
An impressive large-scale mosaic of Mercury’s pockmarked surface, created in 1974, is “the only mosaic of this size I’ve ever seen,” Blau says. “It was probably produced for a NASA presentation, similar to the Voyager photo of Mars.” Showing only part of the solar system’s smallest planet, the mosaic offers another glimpse of what lies beyond our understanding and control.
By the late ’70s, photography had taken on a more central role in missions and in the advancement of space science. “NASA was, and still is, dependent on public funding, and Glenn’s color photographs taken in Earth orbit made it clear that photography was the best and most positive way for NASA to demonstrate its accomplishments,” Blau says. “Of course, the science is the driving force, but photography tells a first-hand story.”
Blau’s exhibition opened the day after the US presidential election; he says he wanted to remind visitors of the “positive common efforts of many countries.” The photographs are certainly humbling. “Perhaps no photograph better embodies the combination of mystical awe and mastery of nature that constitutes the human condition,” Blau muses. “Humans escape from the confines of the Earth to see and record things that have never been seen or recorded before – the impossible.”
It’s been eight years since the launch of Civilization 6, the latest in the long-running strategy series in which you lead a nation from its first prehistoric town through centuries of development to the space age. Since 2016, the game has accumulated a plethora of expansions, scenario packs, new nations, modes, and systems for players to master, and Dennis Shirk, series producer at Firaxis Games, felt it had become too much. “It was getting out of hand,” he says. “It was time to build something new.”
“Even completing the whole game is a struggle,” says designer Ed Beach, citing a key problem Firaxis is trying to solve with the upcoming Civilization 7. While the early turns of Civilization 6’s campaign may be quick, when you’re only deciding what the inhabitants of a single town will do, “after a while you explode with the number of systems, units, and entities you have to manage,” Beach says. From turn one to victory, a single campaign can take more than 20 hours, and as you start to fall behind other nations, you might want to start over long before you see the endgame.
To that end, Civilization 7’s campaign is split into three eras — Ancient, Exploration, and Modern — with each era culminating in a dramatic global crisis. “By dividing the game into chapters, we’re giving people a better sense of history,” Beach says.
Mongolian city in Civilization 7. Photography: Firaxis Games
When you start a new campaign, you choose a leader and civilization to rule, and lead your people to establish their first settlements and encounter other peoples in a largely undeveloped land. Choose which technologies to research, which cities to expand, and who to befriend or conquer. Every turn completed and every scientific, economic, cultural and military milestone passed adds points to a meter running in the background. When the meter reaches 200, you and all other surviving civilizations on the map will move on to the next era.
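As a rough illustration of that mechanic, here is a toy sketch of the shared era meter. The 200-point threshold comes from the description above, but the point values, names, and single-player simplification are invented for illustration; the real game’s rules are certainly more involved.

```python
ERAS = ["Ancient", "Exploration", "Modern"]
ERA_THRESHOLD = 200  # points at which everyone advances, per the description

class Campaign:
    def __init__(self) -> None:
        self.era = 0     # index into ERAS, starting in the Ancient era
        self.meter = 0   # shared progress meter

    def record_milestone(self, points: int) -> None:
        """Add points for a completed turn or milestone; advance the era
        (for all surviving civilizations) once the meter fills."""
        self.meter += points
        if self.meter >= ERA_THRESHOLD and self.era < len(ERAS) - 1:
            self.era += 1
            self.meter = 0
            print(f"Global crisis resolved: entering the {ERAS[self.era]} era")

game = Campaign()
for points in [40, 60, 50, 55]:  # invented milestone values
    game.record_milestone(points)
```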
As you move from Ancient to Exploration and from Exploration to Modern, you choose and lead a new civilization. You keep all the cities you previously controlled, but gain access to different technologies and attributes. This may seem odd, but it’s designed to reflect history: think of London, ruled first by the Romans, then by the Anglo-Saxons. No empire lasts forever, but not everything falls with it.
Dividing Civilization 7 into chapters also gives the campaign a new rhythm. As you approach the end of an era, you start to face global crises. In ancient times, for example, you see a surge of independent factions similar to the tribes that toppled Rome. “We don’t call them barbarians anymore,” Beach says. “It’s a more nuanced way of describing it.” These crises increase and intensify until you reach the next era. “It’s like a sci-fi or fantasy series that has a big, crazy ending, and then the next book is a calm, feel-good beginning,” Beach says. “There’s a moment of relief when you get to the next era.”
Veteran players will recognize much of Civilization 7 from the franchise’s previous offerings, but the new structure is a radical change, introducing more chaotic and dramatic moments to every campaign. Whereas previously victory (or defeat) could feel assured hours in advance, each new era now brings climactic crises and plenty of opportunities for game-changing moments. “Not everyone will survive,” Shirk says. “It’s a lot of fun to play.”
ChatGPT and other large language models (LLMs) consist of billions of parameters, are pre-trained on web-scale corpora, and have been claimed to acquire certain abilities without any special training. These abilities, known as emergent capabilities, have fueled debates about the promise and peril of language models. In their new paper, University of Bath researcher Harish Tayyar Madabushi and his colleagues present a new theory to explain emergent abilities, taking potential confounding factors into account, and rigorously validate it through over 1,000 experiments. Their findings suggest that so-called emergent abilities are not in fact emergent, but result from a combination of in-context learning, model memory, and linguistic knowledge.
The work of Lu and colleagues suggests that large language models like ChatGPT cannot learn independently or acquire new skills.
“The common perception that this type of AI is a threat to humanity is both preventing the widespread adoption and development of this technology and distracting from the real problems that need our attention,” said Dr. Tayyar Madabushi.
Dr. Tayyar Madabushi and his colleagues carried out experiments testing LLMs' ability to complete tasks the models had never encountered before – so-called emergent capabilities.
As an example, LLMs can answer questions about social situations without being explicitly trained or programmed to do so.
While previous research suggested this was a product of the models 'knowing' about social situations, the researchers show that it is in fact the result of a well-known ability of LLMs to complete a task based on a few examples presented to them – so-called 'in-context learning' (ICL).
Across thousands of experiments, the researchers demonstrated that a combination of LLMs' ability to follow instructions, memory, and language abilities explains both the capabilities and limitations they exhibit.
“There is a concern that as models get larger and larger, they will be able to solve new problems that we currently cannot predict, and as a result these large models may gain dangerous capabilities such as reasoning and planning,” Dr. Tayyar Madabushi said.
“This has generated a lot of debate – for example we were asked to comment at last year's AI Safety Summit at Bletchley Park – but our research shows that fears that the models will go off and do something totally unexpected, innovative and potentially dangerous are unfounded.”
“Concerns about the existential threat posed by LLMs are not limited to non-specialists but have been expressed by some of the leading AI researchers around the world.”
However, Dr Tayyar Madabushi and his co-authors argue that this concern is unfounded as tests show that LLMs lack complex reasoning skills.
“While it is important to address existing potential misuse of AI, such as the creation of fake news and increased risk of fraud, it would be premature to enact regulations based on perceived existential threats,” Dr. Tayyar Madabushi said.
“The point is, it is likely a mistake for end users to rely on LLMs to interpret and perform complex tasks that require complex reasoning without explicit instructions.”
“Instead, users are likely to benefit from being explicitly told what they want the model to do, and from providing examples, where possible, for all but the simplest tasks.”
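In practice, that advice amounts to building a few-shot prompt: an explicit instruction followed by worked examples, which is the in-context learning the paper describes. Here is a minimal sketch; the task, examples, and helper function are invented for illustration, and the resulting string would be passed to whichever LLM API you use.

```python
def build_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: instruction, demonstrations, then the query."""
    parts = [instruction.strip(), ""]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}\nOutput: {example_output}\n")
    parts.append(f"Input: {query}\nOutput:")
    return "\n".join(parts)

prompt = build_prompt(
    instruction="Label the sentiment of each review as positive or negative.",
    examples=[
        ("The plot dragged and the acting was wooden.", "negative"),
        ("A sharp, funny, beautifully shot film.", "positive"),
    ],
    query="I walked out halfway through.",
)
print(prompt)  # send this string to the model of your choice
```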
“Our findings do not mean that AI is not a threat at all,” said Professor Iryna Gurevych of the Technical University of Darmstadt.
“Rather, the purported emergence of complex thinking skills associated with specific threats is not supported by the evidence, and we show that the learning process in LLMs can in fact be controlled quite well.”
“Future research should therefore focus on other risks posed by the model, such as the possibility that it could be used to generate fake news.”
_____
Sheng Lu et al. 2024. Are Emergent Abilities in Large Language Models just In-Context Learning? arXiv: 2309.01809
Is it in the way we live, laugh, love? Or is it our aversion to clichés? Deep inside each of us, there must be something that makes us human. The problem is, after centuries of searching, we haven’t found it yet. Maybe it’s because we’ve been looking in the wrong places.
Ever since researchers began unearthing ancient hominin bones and stone tools, their work has held the tantalizing promise of pinpointing the long-ago moment when our ancestors became human. Two of the most important fossil discoveries in this quest reach milestones this year: it is 100 years since the first “near-human” Australopithecus fossil was found in South Africa, upending previous ideas about human origins, and 50 years since the most famous Australopithecus fossil, Lucy, sometimes called humanity’s grandmother, emerged from the dusty hills of Ethiopia. Those two fossils led researchers to believe they could pinpoint humanity’s big bang: the period when a dramatic evolutionary wave led to the emergence of Homo.
But today, the story of human origins is much more complicated. A series of discoveries over the past two decades has shown that the beginning of humanity is harder to pinpoint than we thought. So why did it once seem like we could define humanity and pinpoint its emergence, thanks to Lucy and her peers? Why are we now further away than ever from pinpointing exactly what it means to be human?
“Demographic composition has changed significantly in recent years,” Li Junhua, the U.N. Under-Secretary-General for Economic and Social Affairs, said in a news release.
The report predicts that the world’s population will continue to grow over the coming decades, from 8.2 billion in 2024 to a peak of nearly 10.3 billion in the next 50 to 60 years. But it won’t keep growing forever: by 2100, the world’s population is expected to have declined slightly to 10.2 billion, 6% lower than UN experts predicted a decade ago.
The United Nations’ previous assessment, released in 2022, suggested humanity could reach 10.4 billion people late this century, but falling birth rates in some of the world’s largest countries, including China, are one reason the population peak is now expected to come sooner.
More than half of all countries have fertility rates below 2.1 children per woman, the “replacement rate” needed to keep a population from declining.
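To see why 2.1 acts as a threshold, consider a deliberately simplified sketch: when fertility sits below replacement, each generation shrinks by roughly the ratio of the two numbers. The model below ignores mortality differences, age structure, and migration, so its output is purely illustrative.

```python
REPLACEMENT = 2.1  # births per woman needed to hold a population steady

def project(population_millions: float, fertility: float, generations: int) -> float:
    """Scale a population by fertility/replacement once per generation."""
    for _ in range(generations):
        population_millions *= fertility / REPLACEMENT
    return population_millions

# e.g. a country of 100 million with fertility of 1.6, three generations on:
print(round(project(100.0, 1.6, 3), 1))  # ~44.2 million
```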
An additional 48 countries, including Vietnam, Brazil, Turkey and Iran, are also expected to see their populations peak over the next 30 years.
India’s population, which surpassed China’s in 2022 to exceed 1.4 billion and make it the world’s most populous country, is expected to continue growing until the middle of this century, according to the report.
However, China’s population continues to decline.
“China has experienced a rapid and significant decline in births in recent years,” said Patrick Gerland, chief of the population estimates and projections section at the United Nations Population Division.
“The changes China has undergone in the past generation are among the fastest in the world,” Gerland said.
Without immigration, the United States would also face population decline. Instead, it is one of about 50 countries projected to keep growing because of immigration, with the U.S. population expected to rise from 345 million in 2024 to 421 million by the end of the century.
People pass through a crowded street in Kampala, Uganda. Since 2013, Uganda’s population has grown by 13 million people, or nearly 40 percent, second only to the Democratic Republic of Congo. Badru Katumba/AFP via Getty Images
A growing population is likely to exacerbate problems related to consumption, greenhouse gas emissions, and other drivers of global warming. It also means more people exposed to climate risks such as droughts, heat waves, and other extreme weather events that are intensified by warming.
“Just because a challenge might emerge 60 years from now doesn’t mean it’s pointless to talk about it now,” said Dean Spears, an associate professor of economics at the University of Texas at Austin.
“Decades from now, people will be talking about these new demographic changes with the same level of academic and societal concern that we devote to climate change today,” Spears said.
Countries where population growth is expected to continue through 2054 include India, Indonesia, Pakistan and Nigeria. In parts of Africa, including Angola, the Central African Republic, the Democratic Republic of the Congo, Niger and Somalia, populations are expected to double between 2024 and 2054, according to the United Nations.
But a growing population does not necessarily mean faster climate change. Most of the world’s fastest-growing countries are those that have historically contributed the least to global warming, and they are also typically hit disproportionately hard by its effects.
The report notes that life expectancy has recovered from the impact of the pandemic. Global life expectancy reached 73.2 years in 2023, up from the pandemic low of 70.9 years in 2021 and above the pre-pandemic level of 72.4 years in 2019. It is projected to reach 81.7 years in 2100.
As life expectancy increases and birth rates fall, the world’s population is ageing. Projections show that by 2080, people aged 65 and over will outnumber children under 18; in 2023, by contrast, there were almost three times as many children as people aged 65 and over.
A few weeks ago, it was quietly announced that the Future of Humanity Institute, a famous interdisciplinary research center in Oxford, no longer has a future. It closed without warning on April 16th. Initially, its website contained only a short statement that it had been closed and that research could continue elsewhere within or outside the university.
The institute, dedicated to the study of humanity’s existential risks, was founded in 2005 by the Swedish-born philosopher Nick Bostrom and quickly made a name for itself beyond academia. Tech billionaires, especially in Silicon Valley, praised the institute and provided financial support.
Bostrom is perhaps best known for his 2014 best-selling book Superintelligence, which warned of the existential dangers of artificial intelligence, but he first became widely known for his 2003 academic paper “Are You Living in a Computer Simulation?”. The paper argues that, over time, humans are likely to develop the ability to create simulations indistinguishable from reality, and that if so, such simulations may already exist and we may be living inside one.
I interviewed Bostrom more than a decade ago, and he had one of those elusive, rather abstract personalities that perhaps lends credence to simulation theory. He was pale, had a reputation for working through the night, and seemed like the type of person who didn’t get out much. The institute appears to have been aware of this social shortcoming: its final report, a long post-mortem written by FHI researcher Anders Sandberg, states:
“We have not invested enough in the politics and socialization of the university to build long-term, stable relationships with faculty…When epistemology and communication practices become too disconnected, misunderstandings flourish.”
Nick Bostrom: “Proudly provocative on paper, cautious and defensive in person.” Photo: Washington Post/Getty Images
Like Sandberg, Bostrom is an advocate of transhumanism, the belief in using advanced technology to improve longevity and cognitive abilities, and is said …
Elon Musk is suing OpenAI and its CEO Sam Altman, alleging that the company has prioritized profit over humanity’s interests, contrary to its core mission.
As the wealthiest individual globally and a founding director of the AI company behind ChatGPT, Musk alleges that Altman violated OpenAI’s founding covenant by striking an investment deal with Microsoft.
The lawsuit, filed in San Francisco, accuses OpenAI of prioritizing profit over human well-being by shifting its focus to developing artificial general intelligence (AGI) for commercial gain rather than humanitarian purposes.
Musk claims that OpenAI has essentially become a subsidiary of Microsoft, the world’s largest tech company, under new leadership, diverting from its original principles outlined in the founding agreement.
The lawsuit raises concerns about AGI posing a significant threat to humanity, particularly if it falls into the hands of profit-driven companies like Google.
Originally founded to be a nonprofit, open-source organization working for the greater good, OpenAI’s alleged transition to a profit-centric entity under Microsoft’s influence has prompted Musk to take legal action.
The lawsuit contends that the development of OpenAI’s GPT-4 model, shrouded in secrecy, deviates from the company’s initial mission and breaches contractual obligations.
Musk, who played a significant role in establishing OpenAI but exited in 2018, claims that the company’s recent actions concerning AGI technology are in direct conflict with its intended purpose.
The lawsuit aims to compel OpenAI to adhere to its original mission of developing AGI for humanity’s benefit, not for personal gain or for tech giants like Microsoft.
The deal between OpenAI and Microsoft is now facing scrutiny from competition authorities in various regions, including the US, EU, and UK.
AI researchers predict apocalyptic outcome unlikely
Steven Taylor/Alamy Stock Photo
Although many artificial intelligence researchers see a considerable chance that the future development of superhuman AI will cause the extinction of humanity, there is also widespread disagreement and uncertainty about such risks.
Those findings come from a survey of 2,700 AI researchers who recently presented work at six major AI conferences, the largest study of its kind to date. The survey asked participants to share their thoughts on possible timelines for future AI milestones and the positive or negative social impacts those achievements might have. Almost 58% of researchers said they believe there is at least a 5% chance of human extinction or other extremely bad AI-related outcomes.
“This is an important sign that most AI researchers do not find it highly implausible that advanced AI could destroy humanity,” says Katja Grace, an author of the paper, who is affiliated with the Machine Intelligence Research Institute in California. “I think the general sense that the risk is not trivial says much more than the exact percentage.”
But there is no need to panic just yet, says Émile Torres at Case Western Reserve University in Ohio. Such surveys of AI experts “don’t have a good track record” of predicting future AI developments, they say. A 2012 study showed that, over the long term, AI experts’ predictions are no more accurate than non-expert public opinion. The authors of the new study also acknowledge that AI researchers are not experts at forecasting AI’s future trajectory.
Compared with responses to the same survey in 2022, many AI researchers predicted that AI would reach certain milestones sooner than previously thought. This shift coincides with the November 2022 debut of ChatGPT and Silicon Valley’s rush to broadly deploy similar chatbot services based on large language models.
The researchers surveyed gave a 50 percent or better chance that, within the next 10 years, AI systems will be able to perform most of 39 sample tasks, such as writing a new song indistinguishable from a Taylor Swift hit or coding an entire payment processing site from scratch. Other tasks, such as physically installing electrical wiring in a new home or solving long-standing mathematical puzzles, are expected to take longer.
The survey put a 50 percent chance on AI that can outperform humans at every task being developed by 2047, and a 50 percent chance that all human jobs will be fully automatable by 2116. Those estimates are 13 and 48 years earlier, respectively, than the ones from last year’s survey.
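To see how headline figures like “a 50 percent chance by 2047” and “13 years earlier than last year’s survey” relate, here is a minimal Python sketch. The respondent forecasts below are invented for illustration, and the actual study aggregates full probability distributions rather than taking a simple median of point forecasts, but the intuition is the same: the summary year is the point by which half the forecast weight has accumulated.

# Minimal sketch of reading a "50 percent chance by year X" summary off a
# set of forecasts. These responses are invented, NOT data from the survey.
import statistics

# Hypothetical respondents: the year by which each assigns a 50% chance
# to "AI outperforms humans at every task", in two survey waves.
this_wave = [2035, 2040, 2045, 2047, 2047, 2060, 2090, 2120]
last_wave = [2050, 2055, 2063, 2065, 2065, 2075, 2100, 2130]

median_now = statistics.median(this_wave)     # 2047.0
median_before = statistics.median(last_wave)  # 2065.0

print(f"Median forecast this wave: {median_now:.0f}")
print(f"Shift between waves: {median_before - median_now:.0f} years earlier")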
However, Torres says these rising expectations for AI development could yet be disappointed. “Many of these breakthroughs are quite unpredictable, and it’s entirely possible the AI field will experience another winter,” they say, referring to past periods in which funding and corporate interest in AI dried up.
Even setting aside the risk of superhuman AI, there are more pressing concerns. A majority of AI researchers, over 70%, rated scenarios involving deepfakes, manipulation of public opinion, engineered weapons, authoritarian population control, and worsening economic inequality as matters of serious or extreme concern. Torres also highlighted the danger that AI could contribute to disinformation around existential issues such as climate change and the deterioration of democratic governance.
“We already have technologies, right here, right now, that could seriously harm [US] democracy,” Torres said. “Let’s see what happens in the 2024 election.”
Amid rising geopolitical tensions, many Chinese tech companies are recalibrating their overseas operations, often avoiding mention of their origins. One bold startup, DP Technology, stands out from the crowd. Working on applying artificial intelligence to molecular simulations, DP (short for “Deep Potential”) believes the collective power of “scientific research for humanity” will pave the way for its global expansion.
Founded in 2018 with the renowned mathematician Weinan E as an advisor, DP provides a set of tools for scientific computing, a process in which “computer simulations of mathematical models play an essential role in technology development and scientific research,” according to a definition from the University of Waterloo. Fields that can benefit from scientific computing range from biopharmaceutical research and automobile design to semiconductor development.
While much of the world is focused on using AI to generate text, images, and videos, DP found itself in a less developed field: combining machine learning, which lets computers learn automatically from the data they are given, with molecular simulations, which analyze real-world products and systems via virtual models. Applied together, machine learning can improve the speed and accuracy of the simulations used to solve problems in the physical world.
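For readers wondering what combining machine learning with molecular simulation looks like in practice, here is a minimal, self-contained Python sketch of the general idea behind learned potentials. It is a toy under simplifying assumptions, not DP Technology’s code or API: an expensive physics calculation is sampled once, a cheap surrogate model is fitted to those samples, and the surrogate is then queried in place of the original calculation.

# Toy machine-learned potential: fit a cheap surrogate to an expensive
# energy calculation, then evaluate the surrogate instead. Illustrative
# only; real systems fit neural networks to quantum-mechanical data.
import numpy as np

def expensive_energy(r: np.ndarray) -> np.ndarray:
    """Stand-in for a costly calculation: a Lennard-Jones pair energy."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

# 1. Sample training data from the expensive model.
r_train = np.linspace(0.9, 3.0, 200)
e_train = expensive_energy(r_train)

# 2. Fit a cheap surrogate: a degree-12 polynomial in 1/r, which can
#    represent this particular pair energy exactly.
features = np.vander(1.0 / r_train, 13)
coeffs, *_ = np.linalg.lstsq(features, e_train, rcond=None)

def learned_energy(r: np.ndarray) -> np.ndarray:
    """Fast approximation used in place of the expensive calculation."""
    return np.vander(1.0 / r, 13) @ coeffs

# 3. The surrogate closely reproduces the expensive model at unseen points.
r_test = np.linspace(1.0, 2.5, 5)
print(np.max(np.abs(learned_energy(r_test) - expensive_energy(r_test))))

The speed gain Sun describes comes from step 2: once the surrogate is trained, each evaluation costs a fraction of the original calculation, so far larger design spaces can be searched.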
“Until now, in the absence of good computing or AI platforms, everyone relied on empirical trial and error. The process was often referred to as ‘cooking’ or ‘alchemy’,” DP’s CEO and founder Sun Weijie said in an interview with TechCrunch.
“This approach was relatively effective in the early stages of industrial development, when user expectations for iteration were not very high, but [technology] has progressed,” he continued. “For example, consumers expect increased battery capacity every year and performance improvements with each new generation of vehicles. Traditional R&D models can no longer keep up with these rapid market changes.”
“Meeting the expectations of these rapid iterations will require breakthrough advances in research and development approaches,” he added.
To this end, DP has devised a software suite to help industry players discover and develop new products more efficiently. One part is a scientific computing platform that simulates physical properties such as magnetism, optics, and electricity; running these models allows materials such as semiconductors and batteries to be designed faster and more cheaply. The other is a SaaS platform focused on preclinical research for drug discovery.
DP goes one step further by not only supplying software to industrial researchers and designers, but also selling services tailored to their needs, carrying out R&D processes on behalf of customers who cannot fully exploit the potential of its tools.
This combined SaaS-and-services business model has seen some early success in China. DP is expected to win contracts worth around 100 million yuan ($14 million) in 2023, up from “tens of millions of yuan” the year before. The company is now preparing to bring that strategy to Western markets, where deep-pocketed giants like DeepMind dominate the space.
“There’s an old saying in China: ‘Children from poor families grow up early.’ We’re the poor kids compared to the likes of DeepMind and OpenAI, because we have much less money on hand,” Sun said.
To date, DP has raised $140 million from a lineup of top Chinese VC firms, including Qiming Venture Partners and Hillhouse Ventures. For reference, the 13-year-old DeepMind was acquired by Google in 2014 for over $500 million, and the London-based AI giant reported a profit of £44 million ($60 million) in 2020 after a whopping £477 million ($650 million) loss in 2019.
Sun claimed that despite having its physical headquarters in Beijing, DP was conceived with a global mindset thanks to DeepModeling, the open-source scientific and technical computing community it founded. Its early focus on China was also more accidental than intentional. “Since international exchange stopped because of the COVID-19 pandemic, we decided to stay put and work on monetization [in China] for the first two years,” Sun said.
DP’s international expansion begins in the United States, where it is opening offices and working with partners to market its products and services. As it looks to establish a presence in new markets, the startup hopes to build its reputation by leveraging the open-source community and participating in what Sun describes as a relatively “close-knit” basic research community.
On the other hand, DP’s international ambitions may run into obstacles from the ongoing decoupling between the United States and China in many areas, including scientific research. Back in August, for example, the Biden administration only narrowly extended the scientific cooperation agreement that has underpinned U.S.-China research relations since 1979.
But Sun exuded confidence in science’s resilience in the face of geopolitical complexity. “Basic science and biopharmaceuticals are fields shared by all of humanity, and they are relatively open and inclusive. Relatively speaking, I think these areas are doing okay,” he said.