Prince Harry and Duchess Meghan Advocate for a Ban on Superintelligent AI Systems Alongside Technology Pioneers

The Duke and Duchess of Sussex have joined forces with AI innovators and Nobel laureates to advocate for a moratorium on the advancement of superintelligent AI systems.

Prince Harry and Duchess Meghan are signatories of a declaration urging a halt to the pursuit of superintelligence. Artificial superintelligence (ASI) refers to as-yet unrealized AI systems that would surpass human intelligence across all cognitive tasks.

The declaration requests that the ban remain until there is a “broad scientific consensus” and “strong public support” for the safe and controlled development of ASI.

Notable signatories include AI pioneer and Nobel laureate Geoffrey Hinton, along with fellow "godfather" of modern AI Yoshua Bengio, Apple co-founder Steve Wozniak, British entrepreneur Richard Branson, Susan Rice, former national security adviser under Barack Obama, former Irish president Mary Robinson, and British author and actor Stephen Fry. Other Nobel winners, including Beatrice Fihn, Frank Wilczek, John C. Mather, and Daron Acemoglu, also added their names.

The statement is addressed to governments, tech firms, and legislators, and was organized by the Future of Life Institute (FLI), a US-based nonprofit focused on AI safety. In 2023, the FLI called for a moratorium on the development of powerful AI systems, coinciding with the global attention that ChatGPT had drawn to the issue.

In July, Mark Zuckerberg, CEO of Meta (parent company of Facebook and a key player in U.S. AI development), remarked that the advent of superintelligence is “on the horizon.” Nonetheless, some experts argue that the conversation around ASI is more about competition among tech companies, which are investing hundreds of billions into AI this year, rather than signaling a near-term technological breakthrough.

Still, FLI warns that achieving ASI “within the next 10 years” could bring significant threats, such as widespread job loss, erosion of civil liberties, national security vulnerabilities, and even existential risks to humanity. There is growing concern that AI systems may bypass human controls and safety measures, leading to actions that contradict human interests.

A national survey conducted by FLI revealed that nearly 75% of Americans support stringent regulations on advanced AI. Moreover, 60% believe that superhuman AI should not be developed until it can be demonstrated as safe or controllable. The survey of 2,000 U.S. adults also found that only 5% endorse the current trajectory of rapid, unregulated development.


Leading U.S. AI firms, including ChatGPT creator OpenAI and Google, have set the pursuit of artificial general intelligence (AGI)—a hypothetical stage at which AI matches human-level intelligence across a wide range of cognitive tasks—as a primary objective. Although AGI is a less advanced milestone than ASI, many experts caution that it could upend the modern job market, and that its capacity for self-improvement could carry it onward to superintelligence.

Source: www.theguardian.com

Kate, Princess of Wales, Embroiled in Photo-Editing Scandal amid Heightened Sensitivity to Image Manipulation

At a time when concerns over media manipulation are at an all-time high, the Princess of Wales' photo scandal highlights a new sensitivity toward image manipulation.

Back in 2011, Kate, then Duchess of Cambridge, found herself at the center of an image-editing controversy when Grazia magazine altered a photo of her on her wedding day. That episode, however, predated the advances in artificial intelligence that have since raised far broader concerns.

Recent years have seen an abundance of AI-generated deepfakes, from manipulated videos of Volodymyr Zelensky to explicit images of Taylor Swift. While image manipulation has long been controversial, AI-generated content is now strikingly realistic.

Kate's recent editing of a family photo, amid social media speculation about her health, reflects growing questions about whether images, text, and audio can be trusted as the world heads into a series of crucial elections.

Shweta Singh, an assistant professor at Warwick Business School, emphasized the importance of addressing manipulated media in the critical year of 2024.

Michael Green, a senior lecturer at the University of Kent, noted that the Wales family photo was amateurishly edited, and pointed out that the ensuing online uproar prompted major picture agencies to withdraw it for violating their editorial guidelines.

Despite those guidelines against manipulation, the photo was initially distributed. The incident is a reminder that media organizations must scrutinize every image they publish in an age of such technological sophistication.

Hany Farid, a professor at the University of California, Berkeley, said the image was edited rather than entirely generated by AI, though the episode underscores the need for deeper scrutiny.


Technological advances such as AI pose new challenges for detecting manipulated media, requiring a multi-pronged approach to combating disinformation.

Efforts to address the issue include the Coalition for Content Provenance and Authenticity (C2PA), whose members include Adobe, the BBC, and Google, which is establishing standards for identifying AI-generated and manipulated content.

Dame Wendy Hall, a professor at the University of Southampton, said the Wales family photo incident underscores the ongoing challenge of knowing what to trust in an evolving technological landscape.

Source: www.theguardian.com