
The Trump administration wants to use AI to streamline the US government and increase efficiency
Greggory Disalvo/Getty Images
What is artificial intelligence? This is a question scientists have wrestled with since the 1950s, when Alan Turing asked, “Can machines think?” With large language models (LLMs) like ChatGPT now let loose on the world, finding an answer is more pressing than ever.
Although their use is already widespread, the social norms around these new AI tools are still evolving rapidly. Should students use them to write essays? Will they replace your therapist? And can they turbocharge government?
That last question is being asked in both the US and the UK. Under the new Trump administration, Elon Musk’s Department of Government Efficiency (DOGE) task force is eliminating federal workers and deploying a chatbot, GSAi, to those who remain. Meanwhile, British prime minister Keir Starmer has called AI a “golden opportunity” that could help remake the nation.
Certainly, there are government jobs that could benefit from automation, but are LLMs the right tool for the job? Part of the problem is that we don’t agree on what they actually are. This was aptly demonstrated this week when New Scientist used the Freedom of Information (FOI) Act to obtain the ChatGPT interactions of Peter Kyle, the UK’s secretary of state for science, innovation and technology. Politicians, data privacy experts and journalists alike were surprised that the request was granted at all.
The release of the records suggests that the UK government considers conversations with ChatGPT to be akin to a minister’s conversations with civil servants via email or WhatsApp, both of which are subject to the FOI Act. Kyle’s interactions with ChatGPT show no heavy reliance on AI to form serious policy; one of his questions was simply about which podcasts he should appear on. Still, the fact that the FOI request was granted suggests that at least some parts of government believe an AI can converse in the way a human can.
As New Scientist has reported before, LLMs are not intelligent in any meaningful sense and are just as liable to spit out convincing-sounding inaccuracies as they are to offer useful advice. What’s more, their answers reflect the biases inherent in the information they have ingested.
In fact, many AI scientists increasingly take the view that LLMs are not the route to the lofty goal of artificial general intelligence (AGI), meaning machines that can match or surpass anything humans can do. For example, in a recent survey of AI researchers, around 76 per cent of respondents said it is unlikely or very unlikely that current approaches will succeed in achieving AGI.
Instead, perhaps we need to think of these AIs in new ways.
Writing in the journal Science this week, a team of AI researchers argued that LLMs “should not be seen primarily as intelligent agents, but as a new kind of cultural and social technology, allowing humans to access information accumulated by other humans”. The researchers compare LLMs to “past technologies such as writing, printing, markets, bureaucracy, and representative democracy” that changed the way information is accessed and processed.
Seen this way, the answers to many of these questions become clearer. Can governments use LLMs to increase efficiency? Almost certainly, but only when they are used by people who understand their strengths and limitations. Should interactions with chatbots be subject to the FOI Act? Perhaps the existing carve-outs designed to give ministers a “safe space” for internal deliberation should apply. And, as Turing asked, can machines think? No. Not yet.
Source: www.newscientist.com