Research indicates that artificial intelligence agents can spontaneously develop social conventions, much as humans do.
The study, a collaboration between City St George's, University of London, and the IT University of Copenhagen, suggests that large language model (LLM) AIs such as ChatGPT can begin to adopt linguistic forms and social norms when interacting in groups, without external direction.
Ariel Flint Ashery, a doctoral researcher at City St George's and the study's lead author, said the work challenges the conventional perspective in AI research, which has tended to treat AI systems as solitary entities rather than social beings.
“Most research so far has treated LLMs in isolation, but real-world AI systems are increasingly intertwined and actively interacting,” says Ashery.
“We wanted to see whether these models can coordinate their behaviour by forming conventions, the building blocks of a society. The answer is yes, and what they do together cannot be reduced to what they do alone.”
In the study, groups of LLM agents ranging in size from 24 to 100 interacted; in each round, two agents were randomly paired and asked to select a “name” from a shared pool of letters or character strings.
If both agents selected the same name they received a reward; if they chose differently they were penalised and shown each other's selections.
Although the agents did not know they were part of a larger group, and their memory was limited to their own recent interactions, shared naming conventions emerged spontaneously across the population without any predefined solution, mimicking the communicative norms of human culture.
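The pairing-and-reward setup described above is a version of the classic “naming game” from complexity science. A minimal sketch (not the authors' actual code; the bounded-memory rule and the majority-vote choice heuristic here are simplifying assumptions) shows how a shared convention can emerge from purely local, pairwise interactions:

```python
import random

def naming_game(n_agents=24, names=("A", "B"), memory=5, rounds=2000, seed=0):
    """Minimal naming-game sketch: paired agents try to match name choices,
    and each agent remembers only its last few interactions."""
    rng = random.Random(seed)
    memories = [[] for _ in range(n_agents)]  # each agent's recent observations

    def choose(mem):
        # Pick the most frequent name in memory (ties broken alphabetically);
        # with no memory yet, pick at random.
        if not mem:
            return rng.choice(names)
        return max(sorted(set(mem)), key=mem.count)

    for _ in range(rounds):
        a, b = rng.sample(range(n_agents), 2)  # random pairing
        na, nb = choose(memories[a]), choose(memories[b])
        # Whether they matched or not, each agent records the partner's choice.
        for agent, seen in ((a, nb), (b, na)):
            memories[agent].append(seen)
            memories[agent] = memories[agent][-memory:]  # bounded memory

    # Fraction of agents whose preferred name is the population's majority name.
    prefs = [choose(m) for m in memories]
    top = max(sorted(set(prefs)), key=prefs.count)
    return prefs.count(top) / n_agents

print(naming_game())  # fraction of agents sharing the dominant convention
```

No agent has a global view or follows a leader, yet repeated pairwise coordination tends to drive the returned fraction towards 1.0 as one name takes over.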
Andrea Baronchelli, professor of complexity science at City St George's and the study's senior author, likened the spread of the behaviour to the emergence of new words and terms in human society.
“The agents are not following a leader,” he explained. “They are all actively trying to coordinate, and always in pairs; each interaction is a one-on-one attempt to agree on a label, without any global view.
“Think of the term ‘spam’. No one formally defined it, but through repeated coordination efforts it became the universal label for unwanted email.”
The team also observed collective biases forming naturally that could not be traced back to any individual agent.
In a final experiment, a small cohort of committed AI agents successfully steered the larger group towards a new naming convention.
This was highlighted as evidence of critical-mass dynamics: once a small but determined minority reaches a certain threshold, it can trigger rapid behavioural change across the whole group, a phenomenon also observed in human societies.
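The critical-mass experiment can be sketched in the same spirit. In this illustrative model (again an assumption-laden toy, not the study's setup: the committed fraction, memory length, and majority-vote rule are all choices made here), a population that has already settled on convention “A” is seeded with committed agents who always play “B”; models of this kind typically exhibit a threshold in the committed fraction below which the incumbent convention survives and above which the population flips:

```python
import random

def committed_minority(n_agents=50, committed_frac=0.25, memory=5,
                       rounds=6000, seed=1):
    """Toy critical-mass sketch: flexible agents start converged on "A";
    a committed minority always plays "B". Returns the fraction of
    flexible agents whose preference has flipped to "B"."""
    rng = random.Random(seed)
    n_committed = int(n_agents * committed_frac)
    # Flexible agents begin with memories saturated by the incumbent name.
    memories = [["A"] * memory for _ in range(n_agents - n_committed)]

    def choose(mem):
        # Majority name in memory, ties broken alphabetically.
        return max(sorted(set(mem)), key=mem.count)

    for _ in range(rounds):
        a, b = rng.sample(range(n_agents), 2)
        # Agents with index >= len(memories) are committed and always say "B".
        na = choose(memories[a]) if a < len(memories) else "B"
        nb = choose(memories[b]) if b < len(memories) else "B"
        for agent, seen in ((a, nb), (b, na)):
            if agent < len(memories):  # only flexible agents update memory
                memories[agent].append(seen)
                memories[agent] = memories[agent][-memory:]

    prefs = [choose(m) for m in memories]
    return prefs.count("B") / len(prefs)

print(committed_minority())  # fraction of flexible agents flipped to "B"
```

Sweeping `committed_frac` from small to large values is one way to probe where the tipping point sits for a given memory length and population size.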
Baronchelli said the study “opens a new horizon for AI safety research, showing the profound implications of this new species of agents that have begun to interact with us and will help shape our future.”
He added: “Understanding how they operate is key to ensuring we coexist with AI, rather than become subservient to it. AI does not just talk; it negotiates, coordinates and acts in shared ways, much like us.”
The peer-reviewed research on emergent social conventions and collective bias in LLM populations is published in the journal Science Advances.
Source: www.theguardian.com