Britain wants to lead the world in AI regulation. However, AI regulation is a rapidly evolving and contested policy area, with little agreement on what a good outcome looks like, let alone the best way to get there. And being home to the third most important AI research scene in the world does not confer much power when the first two are the United States and China.
How do we cut this Gordian knot? Simple: act quickly, and decisively do nothing.
The UK Government has today taken the next step towards legislation to regulate AI. From our story:
The government will admit on Tuesday that binding measures to oversee cutting-edge AI development will be needed at some point, but not immediately. Instead, ministers will set out "initial thoughts on future binding requirements" for advanced systems and discuss them with technical, legal and civil society experts. The government will also give regulators £10m to help tackle AI risks, and require them to set out their approach to the technology by April 30.
When the first draft of the AI white paper was published in March 2023, the reaction was poor. The proposal landed on the same day as the now-infamous call for a six-month "pause" in AI research to get the risks of an out-of-control system under control, and against that backdrop the white paper looked feeble.
The proposal gave regulators no new powers, nor handed responsibility for overseeing AI development to any single body. Instead, the government planned to coordinate existing regulators, such as the Competition and Markets Authority and the Health and Safety Executive, and set out five principles to guide their thinking when considering AI.
This approach was criticized by the UK's leading AI research group, the Ada Lovelace Institute, as having "significant gaps", and as ignoring the fact that a multi-year legislative process would leave AI unregulated in the interim.
So what has changed? Well, the government has found a whopping £10m to help regulators "upskill", and has set an April 30 deadline for the largest regulators to publish their approach to AI. A Department for Science, Innovation and Technology spokesperson said: "The UK government is in no hurry to legislate, and will not risk introducing 'quick-fix' rules that would soon become outdated or ineffective."
It is a strange definition of "global AI leadership" that boils down to promptly announcing "we're not doing anything yet". To be fair, the government is also "considering" actual regulation, envisioning "future binding requirements that may be introduced for developers building cutting-edge AI systems".
A second, rather larger fund of "almost" £90m will launch "nine new centers of excellence across the UK". The government also announced £2m in funding to support "new research projects that help define what responsible AI looks like".
There is an element of tragedy in reading the government press release triumphantly unveiling £2m in funding just a week after Yoshua Bengio, one of the three "godfathers" of AI, called on Canada to spend $1bn building publicly owned supercomputers to keep up with the big tech companies. It's like bringing a spoon to a knife fight.
You could call this agility in the face of conflicting demands, but after more than 11 months it looks more like an inability to commit. The day before the latest update to the AI white paper was published, the Financial Times broke the news that another pillar of the UK's approach to AI regulation had collapsed.
From that story (£):
The Intellectual Property Office, the UK government agency that oversees copyright law, has been in discussions with AI companies and rights holders to produce guidance on text and data mining, the practice by which AI models are trained on existing materials such as books and music.
But a group of industry executives convened by the IPO to oversee the work was unable to agree on a voluntary code of conduct, handing responsibility back to officials at the Department for Science, Innovation and Technology.
Unlike broader AI regulation, a quagmire of conflicting opinions and vague long-term goals, copyright reform presents a very clear trade-off. On one side are the creative and media companies that own valuable intellectual property; on the other are the technology companies that can use that intellectual property to build valuable AI tools. One group or the other will be frustrated by the outcome, and a perfect compromise simply means that both are.
Last month, the head of Getty Images was one of many to call on the UK to back its creative industries, which make up a tenth of the UK economy, over the theoretical benefits that AI might bring in the future. And, faced with a difficult choice with no right answer, the government chose to do nothing. That way, you cannot lead the world in the wrong direction. And isn't that what leadership is all about?
Completely fake
To be fair to the government, there are obvious problems with moving too quickly, and you can look to social media for some of them. Facebook's rules do not prohibit a deepfake video of Joe Biden, the company's Oversight Board (also known as its "supreme court") has found. But in fairness, it's not clear what the rules do prohibit, and that is becoming an increasing problem. From our story:
Meta's oversight board found that a Facebook video falsely suggesting that US President Joe Biden is a pedophile does not violate the company's current rules, but it said those rules were "incoherent" and focused too narrowly on AI-generated content.
The board, which is funded by Facebook's parent company Meta but operates independently, took up the Biden video case in October in response to user complaints about the doctored seven-second clip of the president.
Facebook rushed out its "manipulated media" policy several years ago, amid growing concern about deepfakes but before ChatGPT made large language models the big trend in AI. The rules prohibited misleading, altered videos created using AI.
The problem, the oversight board said, is that the policy is almost impossible to apply, because it has little clear rationale behind it and no clear theory of the harm it seeks to prevent. How can moderators differentiate between videos created by AI (which are prohibited) and videos created by skilled video editors (which are allowed)? And even if they could, why is only the former problematic enough to be removed from the site?
The oversight board proposed updating the rules to remove the reference to AI altogether, and instead require labels identifying manipulated audio and video content regardless of the method of manipulation. Meta said it would update its policy.
Brianna Ghey's mother is calling for a revolution in how teenagers access social media, after her daughter was murdered by two of her classmates. Under-16s, she says, should be limited to devices built for teens that let parents easily monitor their digital lives, with age restrictions enforced by governments and tech companies.
I spoke to Archie Bland, editor of the daily newsletter First Edition, about her plea:
This lament will resonate with many parents, but in Brianna's case it has particular force. She was "secretly accessing sites on her smartphone that promoted anorexia and self-harm", according to a petition created by Esther Ghey. And prosecutors said her killers used Google to search for poisons, "serial killer facts" and ways to combat anxiety, and searched Amazon for rope.

"We don't need new software to do everything Esther Ghey wants us to do," says Alex Hern. "But there's a broader problem here. Just as this sector has historically moved faster than governments can keep up with, it also moves faster than parents can keep up with. The controls vary from app to app and change regularly, so keeping track of them is a large and difficult job."
You can read Archie's full email here (and sign up here to get First Edition every weekday morning).
Wider TechScape