The “Godfather” of AI warns that DeepSeek’s advances may heighten safety concerns.

A landmark report by AI experts warns that the risk of artificial intelligence systems being used for malicious purposes is rising, and researchers are concerned that advances by DeepSeek and similar organizations could escalate those safety risks.

Yoshua Bengio, a prominent figure in the AI field, views the progress of China’s DeepSeek startup with apprehension as it challenges the dominance of the United States in the industry.

“This leads to a tighter competition, which is concerning from a safety standpoint,” said Bengio.

He cautioned that the race by American companies and other rivals to overtake DeepSeek and maintain their lead could push safety concerns aside. OpenAI, the maker of ChatGPT, responded by hastening the release of a new virtual assistant to keep pace with DeepSeek’s advances.

In a wide-ranging discussion of AI safety, Bengio stressed the importance of understanding the implications of the latest safety report on AI. The report, compiled by a group of 96 experts and backed by renowned figures such as Geoffrey Hinton, sheds light on the potential misuse of general-purpose AI systems for malicious ends.

One highlighted risk is that AI models can now help generate hazardous substances in ways that exceed the expertise of human specialists. While such advances could bring benefits in medicine, there is also concern about their misuse.

AI systems have also become more adept at identifying software vulnerabilities on their own, and the report cautions that this capability could amplify the escalating cyber threats orchestrated by hackers.

Additionally, the report discusses the risks associated with deepfake technology, which can be exploited for fraudulent activities including financial scams, misinformation, and the creation of explicit content.

Furthermore, the report flags the vulnerability of closed-source AI models to security breaches, highlighting the potential for malicious use if not regulated effectively.

In light of recent advancements such as OpenAI’s o3 model, Bengio underscores the need for thorough risk assessment to understand the evolving landscape of AI capabilities and the risks that come with them.

While AI innovations hold promise for transforming various industries, there is a looming concern about their potential misuse, particularly by malicious actors seeking to exploit autonomous AI for nefarious purposes.

It is essential to address these risks proactively to mitigate the threats posed by AI developments and ensure that the technology is harnessed for beneficial purposes.

As society navigates the uncertainties surrounding AI advancements, there is a collective responsibility to shape the future trajectory of this transformative technology.

Source: www.theguardian.com

“Godfather of AI” Shortens the Odds of the Technology Wiping Out Humanity Within 30 Years

A prominent British-Canadian computer scientist often referred to as the “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity within the next 30 years, warning that the pace of technological change is “much faster” than expected.

Professor Geoffrey Hinton, the recipient of this year’s Nobel Prize in Physics for his contributions to AI, suggested that there is a “10% to 20%” probability of AI leading to human extinction within the next three decades.

Hinton had previously said there was a 10% chance that the technology could result in a catastrophic outcome for humanity.

When asked on BBC Radio 4’s Today program if he had revised his assessment of the potential AI doomsday scenario and the one in 10 likelihood of it happening, he replied, “No, it’s between 10% and 20%.”

In response to Hinton’s estimate, the former chancellor Sajid Javid, who was guest editing Today, remarked, “You’re going up,” to which Hinton replied, “You know, we’ve never had to confront anything more intelligent than ourselves.”

He further added, “And how many instances do you know of something more intelligent being controlled by something less intelligent? There are very few examples. There’s a mother and a baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”

Hinton, a London-born professor emeritus at the University of Toronto, emphasized that humans would be like toddlers compared with the intelligence of highly powerful AI systems.

“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said.

AI can broadly be defined as computer systems that can perform tasks typically requiring human intelligence.

Last year, Hinton resigned from his position at Google so that he could speak more candidly about the risks of unchecked AI development, citing concerns that “bad actors” could exploit the technology to cause harm, a move that drew widespread attention. One of the primary worries of AI safety advocates is that the arrival of artificial general intelligence, systems that surpass human intellect, could allow the technology to escape human control and pose an existential threat.

Reflecting on where he thought the development of AI would have reached when he first began working on the technology, Hinton said: “I didn’t think it would be where we are now. I thought at some point in the future we would get here.”


He added, “Because the situation we’re in now is that most experts in this field believe AI surpassing human intelligence will probably materialize within the next 20 years. And that’s a rather frightening notion.”

Hinton remarked that the pace of advancement was “extremely rapid, much quicker than anticipated” and advocated for government oversight of the technology.

“My concern is that the invisible hand isn’t safeguarding us. In a scenario where we simply rely on the profit motive of large corporations, we cannot ensure secure development. That’s insufficient,” he stated. “The only factor that can compel these major corporations to conduct more safety research is government regulation.”

Hinton is one of three “godfathers of AI” who received the ACM A.M. Turing Award, the computer science equivalent of the Nobel Prize, for their contributions. However, one of the trio, Yann LeCun, the chief AI scientist at Mark Zuckerberg’s Meta, has downplayed the existential threat, suggesting that AI “may actually save humanity from extinction.”

Source: www.theguardian.com

Renowned AI pioneer Geoffrey Hinton honored as “godfather of AI” – an offer too good to refuse

Back in 2011, Marc Andreessen, a venture capitalist with dreams of becoming a public intellectual, published an essay titled "Why Software Is Eating the World," predicting that computer code would take over large swaths of the economy. Now, 13 years later, software seems to be eating its way into academia. At any rate, that is one possible conclusion to draw from the fact that the computer scientist Geoffrey Hinton shares the 2024 Nobel Prize in Physics with John Hopfield, while the computer scientist Demis Hassabis shares half of the Nobel Prize in Chemistry with one of his colleagues at DeepMind, John Jumper.

In some ways, the award to Hassabis and Jumper was to be expected, because they built the machine, AlphaFold 2, that enabled researchers to solve one of the most difficult problems in biochemistry: predicting the structure of proteins, the building blocks of biological life. Their machine was able to predict the structure of virtually all of the 200m proteins that researchers have identified. So it was a big deal for chemistry.

But Hinton is not a physicist. Indeed, he was once introduced at an academic conference as someone who "failed physics, dropped out of psychology, and then joined a field with absolutely no standards: artificial intelligence." (After graduating, he spent a year working as a carpenter.) But he is the man who found a way ("backpropagation") of enabling neural networks to be trained, which was one of the two keys that opened the door to machine learning and sparked the current AI frenzy. (The other was the transformer model, published by Google researchers in 2017.)
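For readers who want a feel for what backpropagation actually does, here is a minimal illustrative sketch, not from the column, with toy numbers chosen purely for demonstration: a tiny two-layer network learns the XOR function by pushing its output error backwards through the chain rule and nudging each weight in the direction that reduces that error.

```python
# A toy demonstration of backpropagation (illustrative only).
# A two-layer network with sigmoid units learns XOR by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the output error back through each layer
    # using the chain rule (the derivative of a sigmoid s is s * (1 - s)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates: move each weight against its gradient.
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
```

Modern deep-learning frameworks automate exactly this bookkeeping, but the idea is the same: errors flow backwards, weights move downhill.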

But where's the physics in this? Mainly, it comes from Hopfield, with whom Hinton shares the award. "Hopfield networks and their further development, called Boltzmann machines, were based on physics," Hinton explained to the New York Times. "Hopfield nets used energy functions and Boltzmann machines used ideas from statistical physics. So that stage of the development of neural networks relied heavily on ideas from physics."
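To make that physics connection a little more concrete: the standard textbook energy function of a Hopfield network over binary units s_i, with symmetric weights w_ij and biases b_i (a general formula, not something quoted in the article), is

```latex
E(\mathbf{s}) = -\tfrac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j \; - \; \sum_i b_i s_i,
\qquad
P(\mathbf{s}) \propto e^{-E(\mathbf{s})}
```

and a Boltzmann machine borrows from statistical physics the idea of turning that energy into a probability, as in the second expression, so that low-energy configurations are the states the network is most likely to settle into.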

Fair enough. But the media often describe Hinton as the "godfather of AI," which has vaguely sinister overtones. In reality, he is the exact opposite: tall, affable, polite, intelligent, and endowed with a dry, sometimes acerbic wit. When Cade Metz asked him how he reacted on hearing the news of the award, he said he was "shocked, surprised, and appalled," which I think is how most people would have put it. But in 2018 he shared the Turing Award, computer science's equivalent of the Nobel Prize, with Yoshua Bengio and Yann LeCun for their work on deep learning, so he was always in the top league. It's just that there is no Nobel Prize in computer science. Given the way software is eating the world, perhaps that should change.

There's an old joke that the secret to winning a Nobel Prize is to outlive your rivals. Hinton, now 77, has clearly taken that to heart. But in fact, what is most admirable about him is his persistence in believing in the potential of neural networks as the key to artificial intelligence, long after the idea had been discredited by the profession. Given the way academia works, that required an extraordinary amount of determination and confidence, especially in a rapidly developing field like computer science. Perhaps what drove him through the dark times was the knowledge that his great-great-grandfather was George Boole, the 19th-century mathematician who invented the logic that underpins all of this digital stuff.

There is also the question of what such awards do to people. When news of Hinton's award broke, I thought of Seamus Heaney, who won the literature prize in 1995 and described the experience as like being hit by a "mostly benign avalanche." Note the "mostly." One consequence of the Nobel Prize is that the recipient instantly becomes public property, and everyone wants a piece of them. "All I'm doing these days is 'going to work,'" Heaney wrote wearily to a friend in June 1996, "and this will continue for weeks and months yet… Whatever the final outcome of the Stockholm effect, its immediate result is the desire to quit and start over, with a different persona (within myself)."

So… a note to Geoff: congratulations. And manage your calendar.

What I've been reading

Talk like this
Is chatting with a bot a conversation? A wonderful New Yorker essay in which the historian Jill Lepore reflects on interacting with GPT-4o's Advanced Voice Mode.

Interesting times…
The October 2, 2024 edition of Heather Cox Richardson's essential Substack blog is a gem.

A real page-turner
The elite college students who can't read books: an interesting report in the Atlantic by Rose Horowitch.

Source: www.theguardian.com

Top Podcast Picks: Pacino, De Niro, and Others Reflect on 50 Years of ‘The Godfather’

This week’s picks

Mo Gilligan: Beginning, Middle, End

Widely available, with weekly episodes
Mo Gilligan is as loveable as ever in his new podcast series, inviting famous guests like George the Poet, Aisling Bea and Joel Dommett to talk about their careers and the legacy they want to leave behind. First up is Jonathan Ross, who’s in full chat mode, sharing great anecdotes like the origins of his legendary star-studded Halloween party and the time he showed Eminem his laundry room. Hannah Verdier

Famous for…Winona

BBC Sounds, all episodes available now

For her first-ever podcast, Maisie Williams has decided to tell the life story, so far, of her idol Winona Ryder. Why now? Because Ryder is returning in the upcoming Beetlejuice sequel, and her life has certainly had its ups and downs, from being Tim Burton’s favorite to that infamous shoplifting arrest to her career resurgence thanks to Stranger Things. That’s plenty of plot for a six-episode series. Holly Richardson


Mo Gilligan, host of “Beginning, Middle & End.” Photo: Paul Hansen/Observer

The Godfather: A movie you can’t refuse

Audible, weekly episodes

What more can be said about The Godfather, a film that has been celebrated worldwide for 50 years? Host Rebecca Keegan discovers much more with the help of Al Pacino, Robert De Niro and Talia Shire, covering a huge range of ground from delicate family dynamics to Francis Ford Coppola’s approach to improvisation. HV

Scamtown

Widely available, with weekly episodes

Do you like stories about book thieves, heavy metal con artists, arson, fraud and deception? Then James Lee Hernandez and Brian Lazarte, hosts of McMillion$ and The Big Conn, are your go-to hosts. This highly entertaining podcast uncovers twists, wild tactics and stories that are almost unbelievable. HV

The Worst Podcast

Widely available, with weekly episodes from September 4

“What’s the worst thing you’ve ever said to your mother?” Filmmaker and “reformed bigot” Alan Zweig asks his guests the nastiest questions in his intentionally somber podcast. Topics include hemorrhoids, terrible mistakes and major regrets, and Zweig doesn’t know who he’s interviewing in advance. HV

There’s a podcast for that


Pacific Crest Trail. Photo: Danita Delimont/Alamy

This week, Ella Braidwood picks the five best podcasts about the great outdoors, from the wild exploits of adventurers to a practical hiking handbook.

Byland Podcast
The Byland Podcast is full of practical advice for getting started in the outdoors, including tips on the best gear. Hosted by Emory Wanger (above), who started the podcast after hiking the Pacific Crest Trail from Mexico to Canada, each episode features guest interviews, many of which are with industry experts who outline the best gear for camping, thru-hiking, mountaineering, and more. There are also interviews with outdoor enthusiasts, like David Daly, who hikes with his three kids, and Bailey Bremner, who takes her dogs on adventures.

Source: www.theguardian.com