If Keir Starmer Doesn’t Feel Robotic Enough, His AI Doubles Are Here to Answer Your Questions

For those rare individuals who dream of conversing with Keir Starmer, a new AI model has arrived.

A former chief of staff to a Conservative minister has built a platform called Nostrada, designed to let users engage with AI representations of all 650 members of the UK Parliament.

Founded by Leon Emirali, who previously served as chief of staff to Steve Barclay, Nostrada lets users converse with a “digital twin” of each MP that replicates their political views and mannerisms.

The service is aimed at diplomats, lobbyists and the general public, helping users explore each MP’s position on various matters and identify relevant colleagues to approach.

“Politicians are never short of opinions, which provide us with ample data sources,” Emirali stated. “They have a viewpoint on everything, and the quality of an AI product relies heavily on the data it is built upon.”

The politicians themselves may be the first to question the chatbots’ reliability.

The Guardian put questions to the digital avatars of cabinet members; most declined to engage, while Health Secretary Wes Streeting’s avatar voted for himself.

The models draw on the vast trove of politicians’ written and spoken material available online. However hard you try to sway them, their stances will not shift: the avatars do not learn from new input, so every interaction starts from the same static position. It is this behaviour that the Guardian set out to shed light on.

Emirali’s idea dates back to 2017, when he tried to persuade the Conservatives to build a chatbot, nicknamed “Maybot”, for then prime minister Theresa May, to provide brief overviews of key issues.

The tool is already being used to research the prime minister and his cabinet, including by accounts registered with Cabinet Office email addresses and by two separate accounts registered with foreign embassy emails. Emirali said several prominent lobbying and communications firms had also used the technology in recent months.


Despite Nostrada’s many applications, Emirali concedes that AI could become a “shortcut” for future voters who rely on it entirely to shape their understanding.

He remarked: “Political nuance is intricate, and AI may not be comprehensive enough for voters to depend on fully. The hope is that, for those already familiar with politics, this tool proves incredibly useful.”

Source: www.theguardian.com

Is Keir Starmer Receiving AI Advice? The UK Government Remains Silent

Keir Starmer, the British Prime Minister, aims to establish the UK as a leader in artificial intelligence.

PA Images/Alamy

Numerous civil servants in the UK government, including those supporting Prime Minister Keir Starmer, are using AI chatbots to assist with their duties, New Scientist has learned. Officials have not recorded, or will not disclose, whether the prime minister is receiving AI-generated advice, how civil servants are addressing the risks of inaccurate or biased AI outputs, or how these tools are used at the top of government. Experts are concerned about this lack of transparency and its implications for the reliability of governmental information.

After obtaining what are thought to be the world’s first ChatGPT logs released under the Freedom of Information (FOI) Act, New Scientist asked 20 government departments for records of their interactions with Redbox. Redbox is a generative AI tool being trialled among UK government employees, enabling users to analyse government documents and generate initial drafts of briefings. According to one of the developers involved, in early tests a civil servant summarised 50 documents in mere seconds, a task that would typically take a day.

Most contacted departments either said they do not use Redbox or declined to provide a record of interactions, with some deeming the request “vexatious”. This is a formal term used in responses to FOI requests, defined by the Information Commissioner’s Office as describing requests likely to cause undue distress, disruption or irritation.

However, two departments did divulge information about Redbox’s usage. The Cabinet Office, which supports the prime minister, reported that 3,000 individuals had engaged in 30,000 chats with Redbox, but said that reviewing and redacting sensitive information from those exchanges would take more than a year before any content could be released under FOI regulations. The Department for Business and Trade acknowledged retaining “over 13,000 prompts and responses”, which would likewise require review before release.

Both departments were asked follow-up questions about Redbox use. The Department for Science, Innovation and Technology (DSIT), which oversees these tools, declined to answer specific questions about whether the prime minister or other ministers had received AI-generated advice.

A DSIT spokesperson told New Scientist that “no one should be wasting time on something AI can do quicker and better than them”. They added that Redbox is deployed across Whitehall to help civil servants use AI safely and effectively, simplifying tasks such as summarising documents and drafting agendas.

Nonetheless, some experts are concerned about the use of generative AI tools. Large language models have well-documented problems with bias and accuracy, making it hard to ensure that Redbox delivers trustworthy information. DSIT did not clarify how Redbox users are expected to mitigate those risks.

“My concern is that the government exists to serve the public, and part of its mandate is providing transparency regarding decision-making processes,” asserts Catherine Flick from Staffordshire University.

Due to the “black box” nature of generative AI tools, Flick emphasizes the difficulty of evaluating or understanding how a specific output is produced, especially if certain aspects of a document are emphasized over others. When governments withhold such information, they diminish transparency further, she argues.

This lack of transparency also extends to the Treasury, a third department that responded. It said, in reply to the FOI request, that its staff cannot access Redbox, but that “GPT tools are available within HM [His Majesty’s] Treasury” and that it maintains no log of interactions. The specific GPT tool referenced remains unidentified: while ChatGPT is the best known, many other large language models also bear the GPT label. The response suggests the Treasury employs AI tools without keeping a comprehensive record of their use; it did not respond when New Scientist sought clarification.

“If prompts aren’t documented, it’s challenging to replicate the decision-making process,” Flick adds.

John Baines, of the UK law firm Mishcon de Reya, finds the absence of records unusual. “It’s surprising that the government claims it cannot retrieve the prompts used in the internal GPT system.” While courts have ruled that public bodies are not required to keep records in the first place, “good data governance implies that retaining records is crucial, particularly when they may influence policy development or communication,” he explains.

However, data protection specialist Tim Turner believes the Treasury is justified in not retaining AI prompts under the FOI Act. “This is permissible unless specific legal or employee regulations determine otherwise,” he states.


Source: www.newscientist.com