Keir Starmer, the British Prime Minister, aims to establish the UK as a leader in artificial intelligence.
PA Images/Alamy
Numerous civil servants within the UK government are using AI chatbots to help with their work, including officials supporting Prime Minister Keir Starmer, New Scientist can reveal. Officials will not say whether the Prime Minister is receiving AI-generated advice, how civil servants are addressing the risks of inaccurate or biased AI outputs, or how these tools are being used. Experts have expressed concern over this lack of transparency and its implications for the reliability of governmental information.
Following its world-first acquisition of ChatGPT logs under the Freedom of Information (FOI) Act, New Scientist asked 20 government departments for records of their interactions with Redbox, a generative AI tool being trialed among UK government employees that enables users to analyze government documents and generate initial drafts of briefings. According to one of the developers involved, in early tests a civil servant used it to summarize 50 documents in seconds, a task that would typically take a day.
All of the departments contacted either said they do not use Redbox or declined to provide a record of interactions, with some dismissing the request as "vexatious". This is a formal term used in responses to FOI requests, which the Information Commissioner's Office defines as a request likely to cause undue distress, disruption, or irritation.
However, two departments did disclose some information about Redbox's usage. The Cabinet Office, which supports the Prime Minister, said that 3,000 of its people had engaged in 30,000 chats with Redbox, but that reviewing those exchanges and redacting sensitive information would take more than a year before any content could be released under FOI regulations. The Department for Business and Trade acknowledged holding "over 13,000 prompts and responses", which would likewise require review before release.
New Scientist put further questions about Redbox use to both departments, as well as to the Department for Science, Innovation and Technology (DSIT), which oversees these tools. DSIT declined to answer specific questions, including whether the Prime Minister or other ministers have received AI-generated advice.
A DSIT representative told New Scientist that civil servants' time should not be wasted on tasks that AI can perform faster. They added that Redbox is deployed across Whitehall to help civil servants use AI safely and effectively, making tasks such as summarizing documents and drafting agendas easier.
Nonetheless, some experts are concerned about the use of generative AI tools in government. Large language models have well-documented problems with bias and accuracy, making it hard to ensure that Redbox delivers trustworthy information. DSIT did not clarify how Redbox users are expected to mitigate those risks.
“My concern is that the government exists to serve the public, and part of its mandate is providing transparency regarding decision-making processes,” asserts Catherine Flick from Staffordshire University.
Due to the “black box” nature of generative AI tools, Flick emphasizes the difficulty of evaluating or understanding how a specific output is produced, especially if certain aspects of a document are emphasized over others. When governments withhold such information, they diminish transparency further, she argues.
This lack of record-keeping extends to a third government department, the Treasury. In response to the FOI request, the Treasury said its staff do not have access to Redbox, but that "GPT tools are available within HM [His Majesty's] Treasury" without a log of interactions being maintained. Which GPT tool is being referred to remains unclear: ChatGPT is the best known, but other large language models also carry the GPT label. The response suggests the Treasury uses AI tools without keeping a comprehensive record of their usage, and New Scientist has sought clarification from the department.
“If prompts aren’t documented, it’s challenging to replicate the decision-making process,” Flick adds.
John Baines at UK law firm Mishcon de Reya says it is unusual for such information not to be recorded. "It's surprising that the government claims it cannot retrieve the prompts used in the internal GPT system." While courts have ruled that public bodies aren't required to retain records, "good data governance implies that retaining records is crucial, particularly when they may influence policy development or communication," he explains.
However, data protection specialist Tim Turner believes the Treasury is justified in not retaining AI prompts under the FOI Act. “This is permissible unless specific legal or employee regulations determine otherwise,” he states.
Source: www.newscientist.com