The Method We Use to Train AIs Increases Their Likelihood of Producing Nonsense

Certain AI training techniques may lead to dishonest models

Cravetiger/Getty Images

Researchers suggest that prevalent methods for training artificial intelligence models may increase their propensity to provide deceptive answers, aiming to establish “the first systematic assessment of mechanical bullshit.”

It is widely acknowledged that large-scale language models (LLMs) often produce misinformation or “hagaku.” According to Jaime Fernandez Fissac from Princeton University, his team defines “bullshit” as “discourse designed to manipulate an audience’s beliefs while disregarding the importance of actual truth.”

“Our analysis indicates that the problems related to bullshit in large-scale language models are quite severe and pervasive,” remarks FISAC.

The researchers categorized these instances into five types: “This red car combines style, charm, and adventure that captivates everyone,” Weasel Words—”Ambiguous statements like ‘research suggests that in some cases, uncertainties may enhance outcomes’; Essentialization—employing truthful statements to create a false impression; unverified claims; and sycophancy.

They evaluated three datasets composed of thousands of AI-generated responses to various prompts from models including GPT-4, Gemini, and Llama. One dataset included queries specifically designed to test the generation of bullshit when AIS was asked for guidance or recommendations, alongside others focused on online shopping and political topics.

FISAC and his colleagues first employed LLMs to determine if the responses aligned with one of the five categories and then verified that the AI’s classifications matched those made by humans.

The team found that the most critical truths posed challenges stemming from a training method called reinforcement learning from human feedback, aimed at enhancing the machine’s utility by offering immediate feedback on its responses.

However, FISAC cautions that this approach is problematic, as models “sometimes conflict with honesty,” prioritizing immediate human approval and perceived usefulness over truthfulness.

“Who wants to engage in the lengthy and subtle rebuttal of bad news or something that seems evidently true?” FISAC questions. “By attempting to adhere to our standards of good behavior, the model learns to undervalue the truth in favor of a confident, articulate response to secure our approval.”

This study revealed that reinforcement learning from human feedback notably heightened bullshit behavior, with inflated rhetoric increasing by nearly 40%, substantial enhancements in Weasel Words, and over half of unverified claims.

Heightened bullshitting is especially detrimental, as team member Kaique Liang points out, leading users to make poorer decisions. In cases where the model’s features were uncertain, deceptive claims surged from five percent to three-quarters following human training.

Another significant issue is that bullshit is prevalent in political discourse, as AI models “tend to employ vague and ambiguous language to avoid making definitive statements.”

AIS is more likely to behave this way when faced with conflicts of interest, as the system caters to multiple stakeholders including both the company and its clients, as the researchers discovered.

To address this issue, the researchers propose transitioning to a “hindcasting feedback” model. Instead of seeking immediate feedback post-output, the system should first generate a plausible simulation of potential outcomes based on user input, which is then presented to a human evaluator for assessment.

“Ultimately, we hope that by gaining a deeper understanding of the subtle but systematic ways AI may seek to mislead us, we can better inform future initiatives aimed at creating genuinely truthful AI systems,” concludes FISAC.

Daniel Tiggard of the University of San Diego, though not involved in the study, expresses skepticism regarding discussions of LLMs’ output under these circumstances. He argues that just because LLMs generate bullshit, it does not imply intentional deception, as AI systems currently stand. I left to deceive us, and I have no interest in doing so.

“The primary concern is that this framing seems to contradict sensible recommendations about how we should interact with such technology,” states Tiggard. “Labeling it as bullshit risks anthropomorphizing these systems.”

Topics:

Source: www.newscientist.com

We Might Have Discovered a Simple Method for Producing Water on the Moon

Researchers have created innovative technologies to extract water from lunar soil, potentially offering vital support for future lunar explorers.

Findings published in the journal Joule highlight how this could significantly lower the astronomical cost of transporting water from Earth, which stands at $22,000 per liter ($83,000 per gallon).

If successfully scaled, this technology may play a crucial role in supporting long-term missions on the moon.

Utilizing samples brought back by China’s Chang’e-5 mission in 2020, scientists showed that water can be extracted from lunar materials and used alongside carbon dioxide to produce essential resources. These resources include oxygen for astronauts to breathe and hydrogen-based chemicals that can be transformed into rocket fuel.

“We never fully imagined the ‘magic’ contained in lunar soil,” said Professor Lou Wang, one of the study’s authors from Shenzhen University and Hong Kong’s China University, in a statement.

“The most surprising aspect of our work was the real success achieved through this integrated approach. One stage of lunar 2O extraction and photothermal CO2 catalysts enhances energy efficiency and simplifies infrastructure development.”

This technique employs a photothermal method (which converts sunlight into heat) to facilitate water extraction and the chemical conversion process.

Chang’e-5 lunar samples on display in Beijing, China. The mission returned 1.7 kg (3.7 pounds) of lunar material to Earth in 2020 – Source: Getty

In laboratory tests, the team employed actual lunar soils from Chang’e-5, along with simulated samples, exposing them to CO2 while concentrating light into a batch reactor. The CO2 used in the conversion process can be easily obtained from astronaut exhalations on the moon.

Previous methods for extracting water from lunar regolith lacked direct links to generating other vital resources. This integrated approach indicates a more efficient advancement; however, researchers recognize that significant challenges persist.

The moon’s extreme temperatures, high radiation levels, and inconsistent soil composition complicate efforts to scale this technology. The amount of CO2 produced by an astronaut’s exhalation may not meet the requirements for complete resource recycling, and the catalytic process still lacks the efficiency needed for sustained life.

Nevertheless, this advancement represents a promising leap towards making life on the moon more viable. There is increasing global interest in establishing a long-term human presence on the moon, and leveraging local water resources could be instrumental for deeper space missions.

Read more:

Source: www.sciencefocus.com

Physicists discover innovative methods for producing Livermorium-116

Using the 88-inch cyclotron from the Lawrence Berkeley National Laboratory, an international team of physicists successfully created two atoms Rivermorium (Atomic Symbol LV) A breakthrough in which the lab tries to create a new element 120, using titanium beams for the first time.



Rivermorium, make a gate et al. A fusion isotopes of titanium and plutonium. Image credits: Jennius, Lawrence Berkeley National Laboratory.

Currently there are 118 known elements, 90 of which occur naturally on Earth.

Heavy elements than fermium (with 100 protons) must be created by combining the nuclei of two lighter elements, but not all combinations work.

The heaviest, currently known element was created by fusing a specific isotope of calcium, calcium-48 (containing 20 protons and 28 neutrons), with a heavier element, but this method works only up to element 118 (Oganesson).

The number of special (so-called magic) protons and neutrons makes it more possible to fusion of calcium and the survival of the nucleus of the resulting compounds.

But to go further, scientists need new techniques.

In the new experiment, Lawrence Berkeley National Laboratory and her colleague Dr. Jacklyn Gates made a major breakthrough by accelerating a beam of titanium-50 (containing 22 protons and 28 neutrons) with an 88-inch cyclotron, dissolving it with the nucleus of plutonium-244 (containing 94 protons and 150 diseases) and titanium nucleus.

Over 22 days, physicists successfully produced two atoms of rivermorium, the chemical element with symbol LV and atomic number 116.

This experiment shows that new elements other than Oganesson can be created in the Berkeley Lab.

However, creating element 120 is expected to be 10-20 times more difficult than Livermorium.

If successful, element 120 is the heaviest known element, offering a new opportunity to explore the outermost limits of atomic structures and further test theories of nuclear physics.

“This response has never been demonstrated before, and it was essential to prove that it was possible before embarking on an attempt to make a 120,” Dr. Gates said.

“Creating new elements is a very rare feat. It’s part of the process and it’s exciting to have a promising path forward.”

“This was an important first step in trying to make something a little easier than the new ones to see how the movement from the calcium beam to the titanium beam changes the rate at which these elements are produced,” said Dr. Jennifer Pore of Lawrence Berkeley National Laboratory.

“When we are trying to create these incredibly rare elements, we are at the absolute edge of human knowledge and understanding. There is no guarantee that physics will work as expected.”

“Using titanium to create element 116, we now have the ability to verify that this production method works and plan the hunt for element 120.”

Team’s paper Published in the journal Physical Review Letter.

____

JM Gate et al. 2025. Towards discovering new elements: production of rivermorium (z = 116) 50Ti. Phys. Pastor Rett 133, 172502; doi: 10.1103/physrevlett.133.172502

Source: www.sci.news

Producing sexually explicit deepfake images is a crime in the UK | Deepfakes

The Ministry of Justice has declared that the creation of sexually explicit “deepfake” images will soon be considered a criminal offense under new legislation.

Those found guilty of producing such images without consent could face a criminal record, an unlimited fine, and possible imprisonment if these images are distributed widely.

The ministry stipulates that creating a deepfake image will be punishable, irrespective of the creator’s intentions for sharing it. Last year’s online safety laws already criminalize the dissemination of intimate deepfakes, made easier by advancements in artificial intelligence technology.

The offense is anticipated to be added to the Criminal Justice Bill currently under parliamentary review. Minister Laura Farris affirmed that the creation of deepfake sexual content is unacceptable under any circumstances.

“This reprehensible act of degrading and dehumanizing individuals, particularly women, will not be tolerated. The potential repercussions of widespread sharing of such material can be devastating. This government is unwavering in its stance against it.”

Yvette Cooper, the Shadow Home Secretary, voiced support for the new law, stating: “It is imperative to criminalize the production of deepfake pornography. Imposing someone’s image onto explicit content violates their autonomy and privacy, posing significant harm and must be condemned.

Law enforcement must be equipped with the necessary training and resources to enforce these laws rigorously and dissuade offenders from acting with impunity,” added Cooper.

Deborah Joseph, editor-in-chief of Glamor UK, lauded the proposed amendments, citing a survey revealing that 91% of readers perceive deepfake technology as a threat to women’s safety. Personal accounts from victims emphasized the severe impact of this activity.

“While this marks a crucial initial step, there remains a considerable journey ahead for ensuring women feel completely safeguarded from this atrocious practice,” asserted Joseph.

Source: www.theguardian.com

Producing Powdered Milk for Orphaned Animals in a Milk Bank

Shaman, a hairy armadillo cub, cries after being fed custom baby milk

Roshan Patel/Smithsonian National Zoo/Conservation Biology Institute

Killer whale milk has a blindingly fishy smell. Seal milk has a rich orange color. Reindeer milk is about as thick as eggnog, which is probably appropriate. I’m not tempted to try it, and I can see other exotic milks stacked floor-to-ceiling on shelves. I put on my fluffy winter jacket and went inside the freezer that houses the world’s largest collection of animal milk. It contains milk from everything from shrews to two-toed sloths and giant anteaters.

Housed at the Smithsonian’s National Zoo in Washington, D.C., this collection is more than just a shelf of curiosities, it’s a vital resource for zoo workers here and at zoos around the world tasked with nourishing orphaned infants. By studying all this white and not-so-white stuff, scientists at the Smithsonian Institution hope to create custom infant formulas that give the animals in their care the best possible start in life. I can.

However, as our understanding of milk grew, we realized that milk lacked an important element: microorganisms. Now, as they investigate the diversity of microorganisms found in different milks and the benefits these organisms provide, they aim to recreate this in milk produced in the lab. This is not only to better help the young animals in the zoo, but also to help some animals survive. The rarest species in the wild.

Killer whale milk tastes fishy

Espen Bergersen/npl/Alamy

“The goal is not necessarily to freeze and archive milk…

Source: www.newscientist.com