“Significant concerns” over the DWP’s use of AI to read welfare claimants’ letters

Deciding how to respond to a daily influx of 25,000 letters and emails is challenging enough. When those messages come from some of the most vulnerable people in the country, overwhelmed and seeking help, any backlog only makes matters worse.

This is the dilemma facing the Department for Work and Pensions (DWP), which receives a flood of communications, including handwritten letters, from more than 20 million people, among them British pensioners and welfare claimants. The DWP is exploring the use of artificial intelligence, in the form of a system called White Mail, to speed up the process of reading and responding to these messages.

Where human reading used to take weeks, White Mail can process the same volume of correspondence in a day, flagging the cases of the most vulnerable individuals for prompt attention. However, concerns remain about the accuracy and fairness of this AI-driven system, especially as it has not been publicly documented in the central government’s AI register.

White Mail has been undergoing trials since at least 2023, when Mel Stride was secretary of state for work and pensions. While the system aims to expedite support for those in need, there are concerns about the lack of transparency and consent in the handling of sensitive personal data.

Organizations like Turn2us have expressed reservations about the processing of highly confidential information without the knowledge or consent of the individuals involved. The DWP claims that data is encrypted and securely stored, but questions remain about the ethical implications of using AI in this context.

The use of AI like White Mail raises questions about accountability, transparency, and the protection of vulnerable claimants’ rights. Regular audits and data transparency are essential to ensure fair and ethical use of such technology.

The DWP’s approach to using AI to handle large volumes of communication requires careful scrutiny to uphold the principles of fairness and integrity. Transparency and accountability should be at the forefront of AI implementation to safeguard the rights of those who rely on welfare support.


Source: www.theguardian.com

TechScape: Is OpenAI’s $5bn chatbot bet worth it? That depends on whether you use it | Artificial intelligence (AI)

What if you build it and no one comes?


It’s fair to say the luster of the AI boom is fading. Skyrocketing valuations are starting to look shaky compared with the massive spending required to keep them going. Over the weekend, tech site The Information reported that OpenAI is expected to spend an astonishing $5bn more than it brings in this year alone:

OpenAI, most recently valued at $80bn, will need to raise more capital over the next 12 months or so, if our projections are correct. Our analysis is based on informed estimates of what OpenAI spends to operate its ChatGPT chatbot and to train future large language models, as well as a “guesstimate” of how much OpenAI will spend on staffing, based on its previous projections and our knowledge of its hiring. Our conclusion shows exactly why so many investors are concerned about the profit prospects of conversational artificial intelligence.

The most pessimistic view is that AI — and especially chatbots, an expensive and competitive sector of an industry that has captured the public’s imagination — isn’t as good as we’ve been told.

This argument suggests that as adoption grows and iteration slows, most people have now had a chance to use cutting-edge AI properly and are beginning to realize that, while impressive, it isn’t actually that useful. The first time you use ChatGPT it feels like a miracle, but by the hundredth time the flaws are obvious and the magic has faded into the background. You decide ChatGPT is bullshit.

In this paper, we argue against the view that ChatGPT and its like are lying or hallucinating when they make false claims, and in favour of the position that what they are doing is bullshitting. … Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.

Get them trained

It is estimated that only a handful of jobs will be completely eliminated by AI. Photo: Bim/Getty Images/iStockphoto

I don’t think it’s that bad. But that’s not because I think the systems are perfect. Rather, the hurdle to AI adoption comes much earlier: you have to try a chatbot in some meaningful way before you can even begin to conclude it’s bullshit and give up. And judging by the tech industry’s response, that hurdle is starting to loom larger. Last Thursday, I reported on how Google is partnering with a network of small businesses and several academy trusts to bring AI into the workplace to enhance, rather than replace, workers’ capabilities. Debbie Weinstein, managing director of Google UK and Ireland, said:

It’s hard for us to talk about this right now because we don’t know exactly what’s going to happen. What we do know is that the first step is to sit down and talk [with the partners] and then really understand the use case. If you have school administrators and students in a classroom, what are the specific tasks these people actually want performed?

For teachers, this could mean a quick email with ideas on how to use Gemini in their lesson plans, formal classroom training, or one-on-one coaching. Various pilot programs will be run with 1,200 participants in total, in groups of around 100.

One way of looking at this is that it’s just another feel-good investment in the upskilling schemes of big companies. Google in particular has been helping to upskill Brits for years with its digital training scheme, formerly branded as the company’s “Digital Garage”. To put it more cynically, teaching people how to use new technology by teaching them how to use your own tools is good business. Brits of a certain age will vividly remember “IT” or “ICT” classes as thinly veiled instructions on how to use Microsoft Office. People older and younger than me learned some basic computer programming. I learned how to use Microsoft Access.

In this case, there is something deeper going on: Google needs to go beyond simply teaching people how to use AI and also run experiments to figure out what exactly to teach them. “This isn’t about a fundamental rethinking of how we understand technology, it’s about the little everyday things that make work a little more productive and a little more enjoyable,” Weinstein says. “Today, we have tools that make work a little easier: those three minutes you save every time you write an email.

“Our goal is to make sure that everyone can benefit from technology, whether it’s Google technology or other companies’ technology. And I think the general idea of working together with tools that help make your life more efficient is something that everyone can benefit from.”

Ever since ChatGPT came out, the underlying assumption has been that the technology speaks for itself, and the fact that it literally does speak helps that assumption along. But chat interfaces are deceptively tricky. Even when you’re dealing with a real human being, getting the best out of them when you need help is a skill, and it is even more of a skill when the only way to communicate with them is through text chat.

AI chatbots are not people. They are so unlike humans that it is hard even to think about how they might fit into common work patterns. So the pessimistic view of this technology isn’t the obvious one, that the hallucinations and nonsense make it worthless; there are real uses despite those flaws. Rather, it’s a much simpler one: what if most people never bother to learn how to use these tools at all?


Maths bot gold

Google DeepMind has trained its new AI system to solve problems from the International Mathematical Olympiad. Photo: Pittinan Piyavatin/Alamy

Meanwhile, elsewhere at Google:

Computers were built to perform calculations faster than humans, yet the highest levels of formal mathematics have remained a uniquely human domain. Now a breakthrough by researchers at Google DeepMind has brought AI systems closer than ever to matching the best human mathematicians in the field.

Two new systems, called AlphaProof and AlphaGeometry 2, worked together to tackle problems from the International Mathematical Olympiad, a worldwide competition for secondary-school students held every year since 1959. Each Olympiad consists of six incredibly difficult problems covering subjects such as algebra, geometry and number theory, and winning a gold medal marks you out as one of the best young mathematicians in the world.

A word of warning: the Google DeepMind systems solved “only” four of the six problems, and one of those was solved by a “neurosymbolic” system, which is less AI-like than you might expect. All the problems were manually translated into a programming language called Lean, which lets the systems read them as formal descriptions of the problems without having to parse human-readable text first. (Google DeepMind also tried using an LLM to do this translation step, but it didn’t work very well.)
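To give a flavour of what that translation involves, here is a minimal, hypothetical Lean 4 sketch; this is a toy statement invented for illustration, not one of the actual Olympiad problems, which are vastly harder to formalise:

```lean
-- A toy example of the kind of formal, machine-checkable statement
-- a prover consumes: addition of natural numbers commutes.
-- Real IMO problems take the same shape but are far more intricate:
-- a precise theorem statement, then a proof the kernel can verify.
theorem toy_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The point of the Lean encoding is that correctness becomes mechanical: if a candidate proof type-checks, the theorem is proved, which is what allows a system like AlphaProof to search over proof attempts without any human grading.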

But this is still a pretty big step. The International Mathematical Olympiad is hard, and an AI system has now performed at medal level on it. What happens when one wins a gold medal? Is there a big difference between solving problems that only the best secondary-school mathematicians can tackle and solving problems that only the best undergraduates, graduate students and doctoral researchers can? What changes when a branch of science is automated?

If you’d like to read the full newsletter, sign up to receive TechScape in your inbox every Tuesday.

Source: www.theguardian.com

Pictures show how AI is being used to reinterpret ancient graffiti

A reinterpretation of an etching

Matthew Attard and Galeria Michela Rizzo

At the 60th Venice Biennale, Maltese artist Matthew Attard addresses his nation’s maritime heritage, along with ideas of faith and progress, through the prism of AI-driven technology. His work centres on images of ships scratched by sailors into the stone facades of Maltese chapels between the 16th and 19th centuries; one is pictured below.

Boat graffiti in the Chapel of the Visitation of Our Lady in Wied Qirda, Malta

Elise Tonna

Attard, pictured below, used his gaze to trace the notches of the ships’ hulls, rigging and billowing sails, a process facilitated by eye trackers and generative algorithms. “The gaze was converted by the technology into data points, which were further interpreted to produce lines and drawings,” he says.

From those data points, a database of digital images was generated, capturing the graffiti from different perspectives; from these, works of art such as 3D scans and videos were created.

Matthew Attard wearing an eye tracking device.

Elise Tonna

Marine graffiti resonates with cultures where the relationship with the sea has been and continues to be important, and ships remain a metaphor for hope and survival. Similarly, the Maltese chapels have long been sanctuaries. Attard said he wanted to explore “parallels to the current 'blind faith' in digital technology.”

His reinterpretation of the etchings gives the impression of ghostly skeletons, as shown in the main image. “One could argue that even the most traditional mediums, such as pencil or charcoal, are a form of drawing technology,” he points out. His exhibition, commissioned by Arts Council Malta, runs at the Malta Pavilion at the Venice Biennale in Italy until 24 November.


Source: www.newscientist.com