Existential Cosmology: Embracing the Possibility of the Universe’s Disappearance

Billions, perhaps trillions, of years from now, long after the sun has swallowed the Earth, cosmologists predict the universe will end. Some wrestle with whether it will collapse under its own weight in a Big Crunch or keep expanding forever into an infinitely empty Big Freeze. Others believe the end of our universe will be brought about by a mysterious energy that rips it apart.

But there is a more immediate cataclysm that may already be heading towards us at the speed of light. Cosmologists call it the “big slurp”.

The slurp in question begins with a quantum fluctuation that nucleates a bubble, which then rolls through space like a cosmic tsunami, obliterating everything in its path. We should take this possibility seriously, says John Ellis of King's College London. In fact, the question is not so much whether this apocalypse will happen, but when. “It could be happening as we speak,” he says.

Theorists like Ellis are actually surprised that such a catastrophe has not yet occurred in the observable universe. But rather than take our precarious existence for granted, they use the obvious fact that we are still here as a tool. The idea is that some weird physics is protecting us.

This kind of existential cosmology also helps physicists filter through the myriad models of the universe, and could tell us how the universe began in the first place. “Maybe you need something to stabilise it [the universe], and it could be new physics,” says Arttu Rajantie.

Source: www.newscientist.com

Scientists say that large language models do not pose an existential threat to humanity

ChatGPT and other large language models (LLMs) consist of billions of parameters, are pre-trained on web-scale corpora, and are claimed to acquire certain capabilities without any special training. These capabilities, known as emergent abilities, have fuelled debate about the promise and peril of language models. In a new paper, University of Bath researcher Harish Tayyar Madabhushi and his colleagues present a new theory to explain emergent abilities, taking potential confounding factors into account, and rigorously validate it through more than 1,000 experiments. Their findings suggest that so-called emergent abilities are not in fact emergent, but rather result from a combination of in-context learning, model memory, and linguistic knowledge.



This work by Lu et al. suggests that large language models like ChatGPT cannot learn independently or acquire new skills.

“The common perception that this type of AI is a threat to humanity is both preventing the widespread adoption and development of this technology and distracting from the real problems that need our attention,” said Dr Tayyar Madabhushi.

Dr. Tayyar Madabhushi and his colleagues carried out experiments to test LLMs' ability to complete tasks that the models had not encountered before – so-called emergent capabilities.

As an example, LLMs can answer questions about social situations without being explicitly trained or programmed to do so.

While previous research has suggested that this is a product of the model's 'knowing' the social situation, the researchers show that this is actually a result of the model using a well-known ability of LLMs to complete a task based on a few examples that it is presented with – so-called 'in-context learning' (ICL).
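
To make this concrete, the following is a minimal, illustrative sketch of what in-context learning looks like from the user's side: the task is stated explicitly, a couple of worked examples are placed in the prompt itself, and the model simply continues the pattern, with no weight updates or special training involved. The social-situation examples and the query_model helper are hypothetical placeholders, not code or data from the study.

```python
# A minimal sketch of in-context learning (ICL): the "training" happens
# entirely inside the prompt, as a few worked examples the model imitates.

FEW_SHOT_EXAMPLES = [
    ("Alice forgot her friend's birthday and apologised the next day.", "appropriate"),
    ("Bob laughed loudly during a moment of silence at a memorial.", "inappropriate"),
]

def build_prompt(situation: str) -> str:
    """Assemble a few-shot prompt: an instruction, worked examples, then the new case."""
    lines = ["Label the behaviour in each situation as appropriate or inappropriate.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines += [f"Situation: {text}", f"Label: {label}", ""]
    lines += [f"Situation: {situation}", "Label:"]
    return "\n".join(lines)

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call; not part of the study's code."""
    raise NotImplementedError("Send `prompt` to whichever LLM endpoint you use.")

if __name__ == "__main__":
    prompt = build_prompt("Carol interrupted a stranger's phone call to ask for directions.")
    print(prompt)  # The examples above are the only 'training' the model ever sees.
```

This is also, in essence, the practice the authors recommend to end users later in the piece: state the task explicitly and supply examples, rather than assuming the model has abilities beyond what its prompt and training provide.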

Across thousands of experiments, the researchers demonstrated that a combination of LLMs' capacity to follow instructions, their memory, and their linguistic proficiency can account for both the capabilities and the limitations they exhibit.

“There is a concern that as models get larger and larger, they will be able to solve new problems that we currently cannot predict, and as a result these large models may gain dangerous capabilities such as reasoning and planning,” Dr Tayyar Madabhushi said.

“This has generated a lot of debate – for example we were asked to comment at last year's AI Safety Summit at Bletchley Park – but our research shows that fears that the models will go off and do something totally unexpected, innovative and potentially dangerous are unfounded.”

“Concerns about the existential threat posed by LLMs are not limited to non-specialists but have been expressed by some of the leading AI researchers around the world.”

However, Dr Tayyar Madabhushi and his co-authors argue that this concern is unfounded, as their tests show that LLMs lack complex reasoning skills.

“While it is important to address existing potential misuse of AI, such as the creation of fake news and increased risk of fraud, it would be premature to enact regulations based on perceived existential threats,” Dr Tayyar Madabhushi said.

“The point is, it is likely a mistake for end users to rely on LLMs to interpret and perform complex tasks that require complex reasoning without explicit instructions.”

“Instead, users are likely to benefit from explicitly specifying what they want the model to do and, where possible, providing examples, for all but the simplest tasks.”

“Our findings do not mean that AI is not a threat at all,” said Professor Iryna Gurevych of the Technical University of Darmstadt.

“Rather, the supposed emergence of complex thinking skills associated with specific threats is not supported by the evidence, and we show that the learning process of LLMs can ultimately be controlled quite well.”

“Future research should therefore focus on other risks posed by these models, such as the possibility that they could be used to generate fake news.”

_____

Sheng Lu et al. 2024. Are Emergent Abilities in Large Language Models just In-Context Learning? arXiv: 2309.01809

Source: www.sci.news

Research shows that doomscrolling is associated with existential anxiety, skepticism, uncertainty, and hopelessness.

Are you facing an existential crisis from scrolling through your phone? A recent study conducted by an international team of experts aimed to explore this issue. Read the full report in the journal Computers in Human Behavior.

The study surveyed 800 college students in the US and Iran and discovered a connection between doomscrolling – excessive consumption of negative news – and feelings of existential anxiety, distrust of others, and despair.

Researcher Reza Shabahang from Flinders University highlighted that constant exposure to negative news can indirectly cause trauma, affecting even those who have not experienced trauma directly.

The study revealed that continuous exposure to negative news led individuals to believe that life is fragile and limited, humans are inherently lonely, and people have little control over their lives.

In the case of Iranian students, doomscrolling was also linked to misanthropy, a deep disdain and mistrust of humanity.

The researchers suggested that constant exposure to negative news reinforces the idea that humanity is flawed and the world lacks justice, challenging individuals’ beliefs about the fairness and goodness of the world.

However, they acknowledged limitations in their sample selection and size, cautioning against drawing definitive conclusions about the association observed.

Professor Helen Christensen from the University of New South Wales expressed interest in the study but cautioned that biases could exist due to the sample size.

Digital behavior expert Dr. Joanne Orlando emphasized the potential long-term impact of doomscrolling on mental health, likening it to being constantly berated.

Orlando recommended being mindful of how social media and news consumption affect mental well-being, suggesting a delay in checking such platforms upon waking up.

She further emphasized the importance of understanding the impact of media consumption on one’s worldview.

For more insights, check out a joint submission by mental health organizations ReachOut, Beyond Blue, and Black Dog Institute on the impact of social media on young Australians.

Georgie Harman, CEO of Beyond Blue, highlighted the dual nature of social media in affecting young people’s mental health and called for social media platforms to take responsibility for their impact.

He stressed that individuals should have a say in the content they are exposed to and questioned social media platforms on their strategies to address the issue of doomscrolling.

Source: www.theguardian.com

Kenan Malik argues that Elon Musk and OpenAI are fostering existential dread to evade regulation

In 1914, on the eve of the First World War, H.G. Wells published a novel, The World Set Free, about the possibility of an even greater conflagration. Thirty years before the Manhattan Project, it imagined humankind able “[to] carry around in a handbag enough potential energy to destroy half a city”. A global war breaks out, precipitating a nuclear apocalypse, and peace is achieved only through the establishment of a world government.

Wells was concerned not just with the dangers of new technology, but also with the dangers of democracy. His world government is not created by democratic will; it is imposed as a benign dictatorship. “The ruled will show their consent by silence,” King Egbert of England says menacingly. For Wells, the “common man” was a “violent idiot in social issues and public affairs”. Only an educated, scientifically minded elite could “save democracy from itself”.

A century later, another technology inspires similar awe and fear: artificial intelligence. From Silicon Valley boardrooms to the backrooms of Davos, political leaders, technology moguls, and academics exult in the immense benefits of AI while also warning that it could spell the end of humanity should super-intelligent machines come to rule the world. And, as a century ago, questions of democracy and social control are at the heart of the debate.

In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI, the technology company that gained public attention two years ago with the release of ChatGPT, a seemingly human-like chatbot. Fearful of the potential impact of AI, the Silicon Valley moguls founded the company as a nonprofit charitable trust with the goal of developing the technology in an ethical manner to benefit “all of humanity”.

Levy asked Musk and Altman about the future of AI. “There are two schools of thought,” Musk mused. “Do you want a lot of AI or a few? I think more is probably better.”

“Wouldn't that empower a Dr. Evil?” Levy asked. Altman responded that Dr. Evil is more likely to be empowered if only a few people control the technology, saying, “In that case, we'd be in a really bad place.”

In reality, that “bad place” is being built by the technology companies themselves. Musk resigned from OpenAI's board six years ago and is developing his own AI project, but he is now suing his former company for breach of contract, accusing it of prioritizing profit over the public interest and of neglecting to develop AI “for the benefit of humanity”.

In 2019, OpenAI created a commercial subsidiary to raise money from investors, notably Microsoft. When it released ChatGPT in 2022, the inner workings of the model were kept hidden. Ilya Sutskever, one of OpenAI's founders and the company's chief scientist at the time, responded to criticism of this lack of openness by claiming that secrecy would prevent malicious actors from using the model to “cause significant damage”. Fear of the technology became a cover for shielding it from scrutiny.

In response to Musk's lawsuit, OpenAI last week released a series of documents, including emails between Musk and other members of the board. These make clear that the board agreed from the beginning that OpenAI could never actually be open.

As AI develops, Sutskever wrote to Musk, it would make sense to become less open: “The ‘open’ in OpenAI means that everyone should benefit from the results of AI once it is built, but it's totally fine not to share the science.” “Yes,” Musk replied. Whatever the merits of the lawsuit, Musk, like other tech industry moguls, has hardly been a model of openness himself. His legal challenge to OpenAI is more a power struggle within Silicon Valley than an attempt at accountability.

Wells wrote The World Set Free at a time of great political turmoil, when many people were questioning the wisdom of extending the suffrage to the working class.

Was it desirable, or even safe, the Fabian Beatrice Webb wondered, to entrust to them [the masses] “the ballot box that creates and controls the British government, with its vast wealth and far-flung territories”? This was also the question at the heart of Wells's novel: to whom can one entrust the future?

A century later, we are once again engaged in heated debates about the virtues of democracy. For some, the political turmoil of recent years is a product of democratic overreach, the result of allowing irrational and uneducated people to make important decisions. “It's unfair to put the responsibility for making a very complex and sophisticated historical decision on unqualified simpletons,” Richard Dawkins said after the Brexit referendum. Wells would have agreed with that view.

Others argue that it is precisely such contempt for ordinary people that contributes to the flaws of democracy, leaving large sections of the population feeling deprived of a say in how society is run.

It is a disdain that also shapes discussions about technology. As in The World Set Free, the AI debate is not just about the technology itself but about questions of openness and control. Despite the alarm, we are far from creating “superintelligent” machines. Today's AI models, such as ChatGPT or Claude 3, released last week by another AI company, Anthropic, are so good at predicting the next word in a sequence that they can fool us into believing we are having a human-like conversation. But they are not intelligent in the human sense, have a negligible understanding of the real world, and are not about to destroy humanity.

The problems posed by AI are not existential but social: from algorithmic bias to the surveillance society, from disinformation and censorship to copyright theft. Our concern should not be that machines might someday exercise power over humans, but that they already function in ways that reinforce inequalities and injustices, and that they provide those in power with tools to strengthen their own authority.

That is why what we might call “Operation Egbert” – the argument that some technologies are so dangerous that they must be controlled by a select few, insulated from democratic pressure – is so threatening. The problem isn't just Dr. Evil; it's the people who use the fear of Dr. Evil to shield themselves from scrutiny.

Kenan Malik is a columnist for the Observer

Source: www.theguardian.com