Getty Images CEO says Sunak must choose between backing the UK’s creative sector and betting on AI

Rishi Sunak needs to decide whether to support Britain’s creative industries or bet everything on the artificial intelligence boom, Getty Images’ chief executive has said.

Craig Peters, who has led the image library since 2019, made the comments amid growing anger in the creative and media sectors over the harvesting of material as “training data” by AI companies. His company is suing a number of AI image generators for copyright infringement in the UK and US.

“If you look at the UK, something like 10% of GDP is made up of creative industries like film, music and television. I think it’s a dangerous trade-off to make, betting on AI, which is a fraction of a per cent of the country’s GDP, much less than the creative industries.”

In 2023, the government, in response to a consultation by the Intellectual Property Office, set a goal to “overcome the barriers currently faced by AI companies and users” when using copyrighted material, and promised to support access to copyrighted works as an “input to the model”.

This was already a step back from previous proposals for broad copyright exceptions for text and data mining. In a response to a House of Commons committee on Thursday, Viscount Camrose, a hereditary peer and under-secretary of state for artificial intelligence and intellectual property, said: “This will help secure the UK’s place as a world leader in AI, while supporting the UK’s thriving creative sector.”

The use of copyrighted material in AI training is coming under increasing pressure. In the US, the New York Times has sued OpenAI, the creator of ChatGPT, and Microsoft for using its news articles as training data for their AI systems. OpenAI has said in a court filing that it is impossible to build such AI systems without using copyrighted material.

Peters disagrees. Getty Images collaborated with Nvidia to create its own image generation AI that is trained using only licensed images.

The tide is changing within the industry as well. Books3, a dataset of pirated ebooks hosted by an AI group whose copyright takedown policy at one point even featured a costumed person pretending to masturbate with an imaginary penis while singing, has since been taken down following copyright complaints. Alongside the Getty and New York Times lawsuits, a number of other legal actions are under way against AI companies over alleged copyright breaches in their training data.

Ultimately, it may fall to the courts, or even governments, to decide how the use of copyrighted material to train AI systems is regulated. Peters is optimistic that the outcome is not a foregone conclusion.

Source: www.theguardian.com

Deepfake Video Ads of Sunak on Facebook Prompt Election AI Warning

More than 100 deepfake video ads impersonating Rishi Sunak were paid to be promoted on Facebook in the past month alone, according to research that warns of the risks AI poses ahead of the general election.

The ads may have reached up to 400,000 people, despite potentially violating several of Facebook’s policies. It is believed to be the first time a prime minister’s image has been systematically doctored on this scale.

Over £12,929 was spent on 143 ads from 23 countries, including the US, Turkey, Malaysia, and the Philippines.

One ad features a fake clip of the BBC newsreader Sarah Campbell appearing to read out a breaking news story that falsely claims a scandal has erupted around Mr. Sunak.

The fake story claims that Elon Musk has launched an application that can “collect” stock market trades and suggests the government should test it; it includes a fabricated clip of Mr. Sunak appearing to say he has made that decision.

The clip leads to a fake BBC news page promoting fraudulent investments.


The research was carried out by Fenimore Harper, a communications company founded by Marcus Beard, a former Downing Street official who headed No 10’s efforts to counter conspiracy theories during the coronavirus crisis. He warned that the ads mark a shift in the quality of fakes, and that this year’s election risks being swamped by large volumes of high-quality, AI-generated falsehoods.

“With the advent of cheap and easy-to-use voice and facial cloning, little knowledge or expertise is required to use a person’s likeness for malicious purposes.”

“Unfortunately, this problem is exacerbated by lax moderation policies for paid ads. These ads violate several of Facebook’s advertising policies, yet very few of the ones we found were removed.”

Meta, the company that owns Facebook, has been contacted for comment.

A UK government spokesperson said: “We work extensively across government, through the Defending Democracy Taskforce and dedicated government teams, to ensure we are ready to respond rapidly to any threats to democratic processes.”

“Our online safety laws go further, placing new requirements on social platforms to act quickly to remove illegal misinformation and disinformation – including where it is generated by AI – as soon as they become aware of it.”

A BBC spokesperson said: “In a world where disinformation is on the rise, we urge everyone to ensure they get their news from trusted sources. We are committed to tackling the growing threat of disinformation, and in 2023 we launched BBC Verify, investing in a highly specialized team with a range of forensic and open source intelligence (OSINT) tools to investigate, fact-check, verify video, counter disinformation, analyze data and explain complex stories.

“We build trust with our viewers by showing them how BBC journalists know the information they report and explaining how to spot fake and deepfake content. When we become aware of fake content, we take swift action.”

Regulators are concerned that time is running out to enact sweeping changes to ensure Britain’s electoral system is ready for advances in artificial intelligence before the next general election, expected to be held in November.

The government continues to consult with regulators, including the Electoral Commission, and under legislation passed in 2022 there will be new requirements for digital campaign material to carry “imprints” showing who has paid for an ad, so that voters know who is seeking to influence them.

Source: www.theguardian.com

Rishi Sunak Commends AI Safety Institute at Bletchley, Though Regulation is Delayed

The Frontier AI Taskforce, set up by the UK in June in preparation for this week’s AI Safety Summit, is expected to become a permanent fixture as the UK aims to take a leading role in future AI policy. UK prime minister Rishi Sunak today formally announced the launch of the AI Safety Institute, a “global hub based in the UK tasked with testing the safety of emerging types of AI”.

The institute was informally announced last week ahead of this week’s summit. Today the government confirmed that it will be led by Ian Hogarth, the investor, founder and engineer who also chaired the taskforce, and that Yoshua Bengio, one of the most prominent figures in the AI field, will lead the production of its first report.

It’s unclear how much money the government will put into the AI Safety Institute, or whether industry players will pick up some of the costs. The institute, which falls under the Department for Science, Innovation and Technology, is described as “supported by major AI companies,” but this may refer to endorsement rather than financial support. We have reached out to DSIT and will update as soon as we know more.

The news coincided with yesterday’s announcement of the Bletchley Declaration, a new agreement signed by all countries participating in the summit, pledging to jointly undertake testing and other commitments related to the risk assessment of “frontier AI” technologies such as large language models.

“Until now, the only people testing the safety of new AI models were the companies developing them,” Sunak said in a meeting with journalists this evening. Citing efforts by other countries, the United Nations and the G7 to address AI, he said the plan is to “collaborate on testing the safety of new AI models before they are released.”

Admittedly, all of this is still in its early stages. The UK has so far resisted moves to regulate AI technologies, both at the platform level and at the level of specific applications, and the effort to quantify safety and risk has stalled; some consider it meaningless.

Mr Sunak argued it was too early to regulate.

“Technology is developing at such a fast pace that the government needs to make sure we can keep up,” Sunak said, speaking in response to accusations that he was focusing too much on big ideas and too little on legislation. “Before we make things mandatory and legislate for them, we need to know exactly what we’re legislating for.”

Transparency appears to be a clear goal of many of the longer-term efforts around this brave new world of technology, but today’s series of meetings at Bletchley, on the second day of the summit, fell well short of that ideal.

In addition to bilateral talks with European Commission President Ursula von der Leyen and United Nations Secretary-General António Guterres, today’s summit focused on two plenary sessions. These were not accessible to journalists, beyond a small pool who could watch attendees gather in the room. Those attendees included the CEOs of DeepMind, OpenAI, Anthropic, Inflection AI, Salesforce and Mistral, as well as the president of Microsoft and the head of AWS. Among those representing governments were Sunak, US Vice President Kamala Harris, Italy’s Giorgia Meloni and France’s Finance Minister Bruno Le Maire.

Remarkably, although China was a much-touted guest on the first day, it did not appear at the closed plenary session on the second day.

Elon Musk, owner of X (formerly Twitter), also appeared to be absent from today’s sessions. Mr. Sunak is scheduled to have a fireside chat with Mr. Musk on the social platform this evening. Interestingly, it is not expected to be broadcast live.

Source: techcrunch.com