Is it wise for governments to use AI to reform the state?


The Trump administration wants to use AI to streamline the US government and increase efficiency

Greggory Disalvo/Getty Images

What is artificial intelligence? It is a question scientists have been wrestling with since the 1950s, when Alan Turing asked, “Can machines think?” With large language models (LLMs) like ChatGPT being unleashed on the world, finding the answer is more pressing than ever.

Although their use is already widespread, the social norms around these new AI tools are still evolving rapidly. Should students use them to write essays? Will they replace your therapist? And can they turbocharge the government?

That last question is being asked in both the US and the UK. Under the new Trump administration, Elon Musk’s Department of Government Efficiency (DOGE) task force is eliminating federal workers and deploying a chatbot, GSAi, to those left behind. Meanwhile, UK prime minister Keir Starmer has called AI a “golden opportunity” that could help rebuild the nation.

Certainly, there are government jobs that could benefit from automation, but are LLMs the right tool for the job? Part of the problem is that nobody can quite agree on what they actually are. This was aptly demonstrated this week when New Scientist used freedom of information (FOI) laws to obtain the ChatGPT interactions of Peter Kyle, the UK’s secretary of state for science, innovation and technology. Politicians, data privacy experts and journalists, ourselves included, were amazed that the request was granted at all.

The release of these records suggests the UK government views conversations with ChatGPT as similar to a minister’s conversations with civil servants via email or WhatsApp, both of which are subject to FOI laws. Kyle’s interactions with ChatGPT don’t show any strong reliance on AI to form serious policy; one of his questions was about which podcasts he should appear on. But the fact that the FOI request was granted suggests that at least parts of government believe an AI can be conversed with like a human.

As New Scientist has reported many times, LLMs are not intelligent in any meaningful sense and are just as liable to spit out convincing-sounding inaccuracies as they are to offer useful advice. What’s more, their answers reflect the biases inherent in the material they have ingested.

In fact, many AI scientists increasingly take the view that LLMs are not a route to the lofty goal of artificial general intelligence (AGI), a machine able to match or surpass anything a human can do. For example, in a recent survey of AI researchers, around 76 per cent of respondents said it was “unlikely” or “very unlikely” that current approaches will succeed in achieving AGI.

Instead, perhaps we need to think about these AIs in a new way. Writing in the journal Science this week, a team of AI researchers argued that LLMs “should be seen not primarily as intelligent agents but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated”. The researchers compare LLMs to “such past technologies as writing, print, markets, bureaucracies, and representative democracies” that transformed how we access and process information.

Viewed this way, the answers to many of these questions become clearer. Can governments use LLMs to increase efficiency? Almost certainly, but only when they are deployed by people who understand their strengths and limitations. Should interactions with chatbots be subject to FOI laws? Perhaps the existing carve-outs designed to give ministers a “safe space” for internal deliberation should apply. And, as Turing asked, can machines think? No. Not yet.


Source: www.newscientist.com

NASA’s wise decision to implement a backup plan proved crucial in the wake of the Starliner grounding

Whenever a rocket launch or mission goes wrong, experts always say the same thing: “Space is hard.” As progress in the space industry accelerates, this mantra is becoming more relevant, not less, as we face, and for the most part overcome, the challenges of spaceflight with increasing frequency.

The situation that has unfolded aboard the International Space Station (ISS) over the past few months is a case in point: Boeing’s Starliner spacecraft launched on its first crewed flight on June 5, but a hardware problem meant that, after it arrived at the ISS, it was unclear whether the two NASA astronauts on board would be able to return safely to Earth as scheduled.

So, after ground tests and much deliberation, NASA reversed course, announcing that its astronauts would stay longer and instead return in February 2025 aboard SpaceX’s Crew Dragon spacecraft (see “Astronauts stranded on ISS reveal U.S. space program not in peril”). A potentially catastrophic problem was reduced to a mere inconvenience thanks to NASA’s wise decision a decade ago to hire not one but two companies to build capsules capable of carrying astronauts into space. We always knew space was hard, and preparation paid off.

The first ever private spacewalk will likely also be the most dangerous yet.

Hopefully, the thorough preparations will also pay off for the crew of SpaceX’s upcoming Polaris Dawn mission, which aims to conduct the first ever civilian spacewalk, and perhaps the most dangerous one yet (see page 8).

If the flight goes well, it will be another big win for commercial spaceflight, and especially for SpaceX, as it will be the first test of the company’s new spacesuit. Aging spacesuits have been a major problem for NASA and other space agencies for decades: the suits NASA uses are the same ones astronauts wore in the 1980s, and they are long past their prime. A new spacesuit that is comfortable enough for civilians to wear, with better mobility, better temperature regulation and greater reliability, would make life in space a little easier.


Source: www.newscientist.com