Should politicians prioritize AI that might one day help humanity colonize the galaxy, or should they protect people from the excessive influence of powerful tech companies? The former may sound more appealing, but it is not the more urgent concern.
Among the Silicon Valley elite, the arrival of super-intelligent AI is treated as an imminent reality, with tech CEOs enthusiastically anticipating a golden age of progress in the 2030s. This perspective has permeated both Westminster and Washington, as think tanks urge politicians to prepare to harness the coming AI capabilities. The Trump administration has even backed a $500 billion initiative to build data centers for advanced AI.
While this sounds thrilling, today's so-called "silly intelligence" is already creating problems, well before the lofty aspirations of super intelligence are realized. A pressing question in the AI sector is whether training models on the vast array of online content constitutes copyright infringement.
Arguments exist on both sides. Proponents assert that an AI model does not infringe copyright merely by learning from existing content: you are reading and learning from these words, the argument goes, so an AI should be allowed to learn in the same fashion. Industry giants such as Disney and Universal disagree. They are suing the AI company Midjourney for generating replicas of copyrighted characters, from Darth Vader to the Minions. Ultimately, only the courts can settle the question.
The ongoing war in Ukraine presents another pressing AI-related dilemma. Sam Altman of OpenAI warns about the potential dangers of advanced AI, but lethal, unintelligent AI already exists. The conflict has edged towards a scenario in which machines can kill with minimal human oversight.
Politicians have been slow to respond: the United Nations first convened to discuss the regulation of "killer robots" in 2014, and progress since has been limited. If leaders assume they have time to get to grips with these challenges, they may be gravely mistaken.
Source: www.newscientist.com
