Research Reveals AI’s Ability to Spontaneously Develop Human-Like Communication Skills

Research indicates that artificial intelligence can spontaneously develop social conventions similar to those of humans.

The study, a collaboration between City St George’s, University of London and the IT University of Copenhagen, suggests that large language model (LLM) AIs such as ChatGPT can begin to adopt linguistic forms and social norms when interacting in groups, without external influence.

Ariel Flint Ashery, a doctoral researcher at City St George’s and the study’s lead author, challenged the prevailing perspective in AI research, noting that AI is usually treated as a solitary entity rather than a social one.

“Most research so far has treated LLMs in isolation, but real-world AI systems will increasingly involve many interacting agents,” says Ashery.

“We wanted to know whether these models could coordinate their behaviour by forming conventions, the building blocks of a society. The answer is yes, and what they do together cannot be reduced to what they do alone.”

In the study, groups of LLM agents ranging in size from 24 to 100 were used. In each round, two agents were randomly paired and asked to select a “name” from a shared pool of characters or strings.

When both agents selected the same name they were rewarded; when they chose differently they were penalised and shown each other’s selections.


Although the agents did not know they were part of a larger group, and their memory was limited to their own recent interactions, shared naming conventions spontaneously emerged across the population without any predetermined solution, resembling the communication norms of human culture.
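The setup resembles the classic “naming game” from studies of convention formation. Below is a minimal sketch that uses simple memory-based agents in place of LLMs (an assumption for illustration; the study’s agents were actual language models, and all names and parameters here are invented):

```python
import random

NAMES = list("ABCDEFGHIJ")  # shared pool of candidate names
N_AGENTS = 24
MEMORY = 5  # each agent remembers only its recent interactions

def play_round(memories):
    a, b = random.sample(range(N_AGENTS), 2)

    # Each agent picks the name it has seen most often, else a random one
    def pick(mem):
        return max(set(mem), key=mem.count) if mem else random.choice(NAMES)

    choice_a, choice_b = pick(memories[a]), pick(memories[b])
    # On success, the shared name is reinforced; on failure, each agent
    # is shown the other's choice, nudging future coordination
    for agent, own, other in ((a, choice_a, choice_b), (b, choice_b, choice_a)):
        memories[agent].append(other if own != other else own)
        memories[agent][:] = memories[agent][-MEMORY:]
    return choice_a == choice_b

random.seed(0)
memories = [[] for _ in range(N_AGENTS)]
successes = [play_round(memories) for _ in range(5000)]
print(sum(successes[:500]), sum(successes[-500:]))
```

Early rounds succeed mostly by chance; in late rounds most pairs agree, because success amplifies whichever names happen to spread, even though no agent has a global view.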

Andrea Baronchelli, a professor of complexity science at City St George’s and the study’s senior author, likened the spread of the behaviour to the emergence of new words and terms in human society.

“The agents don’t follow a leader,” he explained. “They are all actively trying to coordinate, and always in pairs. Each interaction is a one-on-one attempt to agree on a label, without any global view.

“Consider the term ‘spam’. No one formally defined it, but through repeated attempts at coordination it became the universal label for unwanted email.”

Furthermore, the research team identified naturally occurring collective biases that could not be traced back to individual agents.


In the final experiment, a small cohort of AI agents successfully guided a larger group towards a novel naming convention.

This was highlighted as evidence of critical mass dynamics, suggesting that small but pivotal minorities can catalyze rapid behavioral changes in groups once a specific threshold is achieved, akin to phenomena observed in human societies.

Baronchelli remarked that the study “opens a new horizon for AI safety research, illustrating the profound impact of this new breed of agents that will begin to interact with us and collaboratively shape our future.”

He added: “The essence of ensuring coexistence with AI, rather than becoming subservient to it, lies not only in discussions but in negotiation, coordination, and shared actions, much like how we operate.”

The peer-reviewed research, on emergent social conventions and collective bias in populations of LLMs, is published in the journal Science Advances.

Source: www.theguardian.com

Budgie brains contain a map of vocal sounds like that found in humans

Budgerigars have exceptional voices

ImageBroker.com / Alamy

Budgerigars are among the most talkative of birds, and this is reflected in their brains. Budgie brains contain maps of vocal sounds similar to those found in the human brain, something not seen in any other bird studied so far.

“We’ve seen that parts of the brain have a representation of vocal sounds similar to the important speech areas of the human brain,” says Michael Long at the New York University Grossman School of Medicine.

Budgerigars (Melopsittacus undulatus), also known as parakeets, are small parrots native to Australia. They are exceptional vocal learners and can mimic a variety of sounds, including human speech. One budgie, known as Puck, had a vocabulary of about 1,728 words, according to Guinness World Records. “The ability to mimic phonetically is very rare in the animal kingdom,” Long says.

Long and Zetian Yang, also at the NYU Grossman School of Medicine, used silicon probes to record electrical activity in the budgies’ brains. They focused on a part of the forebrain, the central nucleus of the anterior arcopallium, known to be involved in the motor control of vocalization. When the budgies made calls, Long and Yang tracked how this electrical activity changed.

“Our research was the first to measure parrot brain activity during vocalization,” Long says.

The pair discovered that neurons in the central nucleus of the anterior arcopallium encode distinct types of sound. “There are cells that are active for consonants,” Long says. Others fire for vowels, some for high-pitched sounds and others for low-pitched ones.

Long compares this brain structure to a keyboard. “There’s this kind of key, or in this case a set of brain cells, representing each of these vocal outcomes, and the bird can play whatever it wants,” he says. “What the parrot has come up with is this beautiful and elegant solution to creating vocal sounds.” The human brain has a similar vocal map.

Long and Yang repeated the experiment with zebra finches (Taeniopygia guttata), which are not vocal mimics. “They have one song they learn,” Long says. “It’s about two seconds, sometimes less.” Even so, it takes the birds several months to perfect it.

Unlike the budgerigars, the zebra finches showed no sign of such a “map” of vocal sounds in their brains. Instead, the activity a zebra finch develops for its song is almost inscrutable, says Long. The budgie brain uses a simple, intuitive system to generate complex calls, while the zebra finch brain uses a complex system to make something simple.

“It shows that neural activity and the associated vocal behaviour are more similar between parrots and humans than between songbirds and parrots,” says Erich Jarvis at the Rockefeller University in New York.

“Almost everything we know about the detailed mechanistic basis of learned vocalization comes from a handful of songbird species singing relatively simple songs,” says Jesse Goldberg at Cornell University in New York. “Parrots therefore offer an incredible opportunity to study both the mechanisms and the evolution of complex vocal learning and production.”

There are several reasons why this imitative ability may have evolved, says Zhilei Zhao, also at Cornell University. One is courtship. “Females actually prefer males with the ability to copy,” he says, and if a male loses that ability, the females are more likely to cheat on him. Budgies also have a very dynamic social life, forming small groups that last several days. Once a group is established, its members begin to create unique “contact calls”. “People think it might be something like a password for the group,” says Zhao.

Other skilled mimics may have similar vocal maps in their brains. “My very strong suspicion is that other parrots have the same capability, but they simply haven’t been explored,” says Long. He also suspects something similar in lyrebirds, phenomenal mimics that can even imitate artificial sounds such as camera shutters.

In the long run, Long hopes that studying how budgies produce sounds will help us understand language disorders. People who have had strokes often experience aphasia, an inability to summon the right words. “You reach for those words and they’re not there,” Long says. “Now we have a fighting chance to understand what we think is at the root of many communication disorders that affect people in devastating ways.”


Source: www.newscientist.com

Humanoid robot masters the waltz by mimicking human movements

Humanoid robot waltzes with the help of AI trained on human motion capture recordings

Xuxin Cheng and Mazeyu Ji

AI that helps humanoid robots mirror human movements could allow robots to walk, dance, and fight in more human-like ways.

The most agile and fluid robot movements, such as Boston Dynamics’ impressive demonstrations of robotic acrobatics, are typically narrow, pre-programmed sequences. Teaching robots a wide repertoire of convincing human movements remains difficult.

To overcome this hurdle, Xuanbin Peng at the University of California, San Diego, and his colleagues developed an artificial intelligence system called ExBody2, which allows a robot to imitate a variety of human movements more realistically and execute them smoothly.

Peng and his team began by building a database of movements a humanoid robot could plausibly perform, from simple ones such as standing and walking to more complex ones such as tricky dance moves. The database contained motion capture recordings of hundreds of human volunteers collected in previous research projects.

“Humanoid robots share a similar physical structure with us, so it makes sense to leverage the vast amount of human movement data that is already available,” Peng says. “By learning to imitate this kind of behaviour, robots can quickly acquire a variety of human-like skills. It means that anything humans can do, robots have the potential to learn.”

To teach the simulated humanoid robot how to move, Peng and his team used reinforcement learning, in which the AI is given an example of what counts as a successful movement and then left to figure out how to achieve it through trial and error. They started by training a version of ExBody2 with full access to all the data on the virtual robot, including the coordinates of each joint, so that it could mimic human movements as closely as possible. A second version then learned from these movements using only data accessible in the real world, such as inertia and velocity measurements from sensors on the actual robot’s body.
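This two-stage recipe, a privileged “teacher” trained on the full simulator state and a “student” restricted to sensor-like observations and trained to imitate the teacher, can be illustrated with a deliberately simple toy. Everything below (linear policies, a synthetic task, the variable names) is an assumption for illustration, not the ExBody2 implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the full simulator state has 4 dims; real-world sensors
# only expose the first 2. The target "expert" action is a fixed
# linear function of the full state.
W_true = rng.normal(size=(4, 1))
states = rng.normal(size=(500, 4))
actions = states @ W_true  # what a successful movement looks like

# Stage 1: the "teacher" fits the actions with privileged access
# to the full state.
W_teacher, *_ = np.linalg.lstsq(states, actions, rcond=None)

# Stage 2: the "student" sees only the sensor-accessible dims and is
# trained to imitate the teacher's outputs (distillation), not to
# solve the task from scratch.
obs = states[:, :2]
teacher_actions = states @ W_teacher
W_student, *_ = np.linalg.lstsq(obs, teacher_actions, rcond=None)

teacher_err = np.mean((states @ W_teacher - actions) ** 2)
student_err = np.mean((obs @ W_student - actions) ** 2)
print(teacher_err, student_err)
```

The teacher matches the target behaviour almost exactly; the student, limited to partial observations, inherits what it can from the teacher rather than learning from rewards directly, which is the point of the two-stage split.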

After ExBody2 was trained on the database, it was able to control two different commercially available humanoid robots. It was able to smoothly combine simple movements such as walking in a straight line and crouching, as well as perform tricky movements such as following a 40-second dance routine, throwing punches, and waltzing with humans.

“Humanoid robots work best when all limbs and joints work together,” Peng says. “Many tasks and movements require coordination between the arms, legs, and torso, and whole-body coordination greatly increases the range of a robot’s capabilities.”


Source: www.newscientist.com

How AI’s Struggle with Human-Like Behavior Could Lead to Failure | Artificial Intelligence (AI)

In 2021, the linguist Emily Bender and the computer scientist Timnit Gebru published a paper describing language models, then still in their relative infancy, as “stochastic parrots”. A language model, they wrote, is “a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning”.

The phrase stuck. A stochastic parrot can still get better as it scales: the more training data it has, the better it looks. But does something like ChatGPT actually exhibit anything resembling intelligence, reasoning, or thought? Or is it simply “haphazardly stitching together sequences of linguistic forms” at ever greater scale?

In the AI world, such criticisms are often brushed aside. When I spoke to Sam Altman last year, he seemed almost surprised to hear the criticism still raised. “Is that still a widely held view? I mean, are there still a lot of people who take it seriously like that?” he asked.

OpenAI CEO Sam Altman. Photo: Jason Redmond/AFP/Getty Images

“My understanding is that after GPT-4, most people stopped saying that and started saying, ‘OK, it works, but it’s too dangerous,'” he said, adding that GPT-4 did reason “to a certain extent.”

At times, this debate feels semantic: what does it matter whether an AI system is reasoning or simply parroting what we say, if it can tackle problems that were previously beyond the scope of computing? Of course, if we’re trying to create an autonomous moral agent, a general intelligence that can succeed humanity as the protagonist of the universe, we might want that agent to be able to think. But if we’re simply building a useful tool, even one that might well serve as a new general-purpose technology, does the distinction matter?

Tokens, not facts

It turns out that it can. As Lukas Berglund and his co-authors wrote last year:

If a human knows the fact that “Valentina Tereshkova was the first woman in space,” then they can also correctly answer the question “Who was the first woman in space?” This seems trivial, since it is a very basic form of generalization. However, autoregressive language models fail to generalize in this way.

This is an instance of an ordering effect we call the Reversal Curse.

The researchers found they could “teach” a large language model a set of made-up facts in one direction, and it would then completely fail at the basic task of inferring the reverse. But the problem doesn’t just exist in toy models or artificial situations.

When GPT-4 was tested on pairs of questions about 1,000 celebrities and their parents, such as “Who is Tom Cruise’s mother?” and “Who is Mary Lee Pfeiffer’s son?”, it frequently answered the first question correctly but not the second, presumably because the pre-training data contained few examples of the parent’s name appearing before the celebrity’s (as in “Mary Lee Pfeiffer’s son is Tom Cruise”).

One way to explain this is that LLMs don’t learn relationships between facts, but between tokens, the linguistic forms Bender described. The token “Tom Cruise’s mother” is linked to the token “Mary Lee Pfeiffer”, but the reverse is not necessarily true. The model isn’t reasoning; it’s playing with words, and because the words “Mary Lee Pfeiffer’s son” don’t appear in its training data, it is stuck.
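The asymmetry is easy to reproduce with a deliberately naive toy: an association table built from training text that only ever states each fact in one direction. This illustrates the token-linking idea, not how a real transformer stores knowledge; the sentences and helper names are invented for the example.

```python
# Training text that states each fact in one direction only.
training_sentences = [
    "Tom Cruise's mother is Mary Lee Pfeiffer",
    "Valentina Tereshkova was the first woman in space",
]

# "Learn" by linking each prompt prefix to its completion, the way an
# autoregressive model links token sequences to what follows them.
associations = {}
for sentence in training_sentences:
    for sep in (" is ", " was "):
        if sep in sentence:
            prompt, completion = sentence.split(sep, 1)
            associations[prompt] = completion
            break

def answer(prompt):
    return associations.get(prompt, "I don't know")

print(answer("Tom Cruise's mother"))      # forward direction: seen in training
print(answer("Mary Lee Pfeiffer's son"))  # reverse direction: never seen
```

The forward lookup succeeds because that exact prefix appeared in training; the reverse prompt was never seen, so the table has nothing to return.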

But another way of explaining it is that humans show a similar asymmetry. Inference is symmetrical: if you know two people are mother and son, you can discuss the relationship in either direction. Recall, however, is not. Remembering a fun fact about a celebrity is a lot easier than being given a barely recognizable snippet of information, without any context, and being asked to state precisely why you know it.

An extreme example makes this clear: Contrast being asked to list all 50 US states with being shown a list of the 50 states and asked to name the countries to which they belong. As a matter of reasoning, the facts are symmetric; as a matter of memory, the same is not true at all.

But sir, this man is my son.

Cabbage. Not pictured are the man, the goat, and the boat. Photo: Chokchai Silarg/Getty Images

Source: www.theguardian.com