The political landscape of AI regulation became clearer when an influential Labour think tank outlined a framework for addressing the issue in the party’s manifesto.
From our story:
The policy paper, produced by the center-left think tank Labour Together, suggests banning specialized nudification tools that enable users to create explicit content from images of real people.
It also calls for developers of general AI tools and web hosting companies to take measures to prevent the creation of such harmful deepfakes.
While Labour’s proposals are not yet official party policy, they highlight the issues that Westminster technocrats believe they can rally around. (Shadow science and technology secretary Peter Kyle has expressed interest in the proposals.)
For years, technology policy in the UK has been largely apolitical: all parties agree on the importance of supporting British tech for growth and influence, but there have been few attempts to go beyond that consensus.
Even as concerns about technology regulation grew, notably with the legislation that became the Online Safety Act, first proposed under Theresa May’s government, the debate remained technocratic rather than principled or partisan. The Labour party pushed for specific amendments to the bill, which eventually passed without significant opposition.
The most notable opposition to the bill came from within the Conservative party, from a faction angered by attempts to ban speech deemed merely “hurtful”. That was partly a reaction to provisions in the bill aimed at replacing the outdated “malicious communications” offense with more specific crimes.
However, Labour’s current proposals, such as the ban on nudification tools, may face opposition from the Conservatives, underscoring how differently the two parties think about AI: while Rishi Sunak’s Conservative party has focused on the existential risks emphasized by Silicon Valley, Labour is more concerned with the risks of exploitation.
“MrDeepFakes does not represent the tech industry”
When I discussed this story with the paper’s authors, Kirsty Innes and Laurel Boxall, the question was where the expected disagreement would come from. “Analog conservatives have no quick answers in this area. They view AI as a ‘mutant algorithm’ or a Silicon Valley novelty that can be scaled without regard for its impact on workers,” said Innes. “It took seven years to get the Online Safety Act through parliament, and the world has changed since then.”
“We need to move beyond the dichotomy of supporting innovation versus protecting public interest – government versus business,” added Innes. “Most tech companies want their tools used for positive purposes. They recognize the issue, but MrDeepFakes does not represent the tech industry. Therefore, they are likely to support us on this matter.”
The policy document also suggests more flexible regulation of the various technology sectors that support AI. Web hosts, search engines and payment platforms would be required to prevent the creation of “harmful deepfakes”, under threat of fines from Ofcom. Critics may argue that such a policy could stifle innovation, since the safest response for platforms would be to ban deepfake tools outright rather than judge which ones count as “harmful”.
According to a survey by Control AI, the UK public overwhelmingly supports a ban on deepfakes, with 86% expressing their approval – higher than in other countries like Italy (74%).
Deepfakes, “cheapfakes” and AI elections – join us live
Another proposal in the paper is a pledge from the major political parties not to use AI to create misleading content in their campaigns over the next nine months. Whether such a commitment is workable, and whether it could survive the pressures of the UK’s political environment, is another question.
I’ll be hosting a Guardian Live event next month on the impact of AI on elections, where experts including Katie Harbath of Anchor Change and Imran Ahmed of the Center for Countering Digital Hate will discuss what generative AI means for an electoral year involving some 2 billion voters.
While deepfakes and AI-generated misinformation are expected to play a role in campaigns, the extent to which they will be used remains uncertain. Are fake images and videos a significant shift in misinformation, or are they a continuation of existing deceptive practices?
What concerns me more is how new technologies will land in an already fragile public sphere. With social media platforms in flux, the direction of political discourse is unclear: where are conversations headed, and how will campaigning evolve in this changing landscape?
Robotics
I don’t usually share YouTube videos, but Figure’s latest demo is too cool to miss. Watch the video.
Although prediction season is over, here is one anyway: robots will be to 2024 what chatbots were to 2022.
Robotics has historically been a hard and expensive field, but it is being transformed by advances in AI. Training systems in simulated environments, taking commands in natural language and controlling physical bodies could lead to progress as rapid as that seen in large language models in recent years.
It appears that this transformation is already underway.
Subscribe to receive the full newsletter, TechScape, every Tuesday in your inbox.