Is This the Most Memorable Acronym in Science? It Definitely Stinks!

Feedback is your go-to source for the latest oddities in science and technology from New Scientist. Share your intriguing finds with us at feedback@newscientist.com for potential inclusion.

And inhaling…

To succeed in science, good ideas and well-run experiments certainly help. But mastering the art of crafting a catchy acronym is essential: if you can distill a description of your project into a snappy acronym, you'll be onto something great.

That's how we got names like the Antarctic Muon And Neutrino Detector Array (AMANDA) and the Corrective Optics Space Telescope Axial Replacement (COSTAR). Unfortunately, some folks resort to mangling letters to craft the acronyms they desire, a habit skewered by the 2014 BMJ paper "SearCh for humourIstic and Extravagant acroNyms and Thoroughly Inappropriate names For Important clinical Trials (SCIENTIFIC): qualitative and quantitative systematic study".

A hat tip, then, to Raif Sheeben, Yoel Zimmerman and their team for a July paper in npj Science of Food. They developed a "chemical language model for taste prediction": a machine-learning model that forecasts the taste of a chemical from its molecular structure. Trained on more than 15,000 compounds, it sorts tastes into four distinct categories.

Remarkably, the model achieves over 91 per cent accuracy and could assist in flavour design. Naturally, the team dubbed it the Flavour Analysis and Recognition Transformer, or FART for short.

Food engineer Andy Clayton flagged this, humorously pointing out that “regardless of its value, one can’t read it without laughing.”

We encourage readers to share their stories about the most ridiculous acronyms and cringe-worthy attempts they’ve encountered in their pursuits.

No surprises here

Feedback recently asked readers for examples of "no shit, Sherlock" research: scientific endeavours that invest extensive time and effort in demonstrating something rather obvious. Your responses have been flowing in ever since.

Maggie Jacobs highlighted an article from Discover about the psychological benefits of solitude, referencing a 2023 study. This research examined whether individuals benefit from maintaining a balance between solitude and social interaction, concluding there was no evidence of an "optimal balance". It also found no negative consequences of time spent alone, especially when people consciously chose it. As Maggie aptly puts it: "When people choose their activities, they tend to be happier."

For extra credit, the study's authors used the somewhat dated term "selective" to describe people intentionally engaging in solitude, rather than opting for the more contemporary "intentional".

Meanwhile, Ernest Ager pointed out the obvious title of an article in The Conversation: "Can you spot a 'fake' accent? It depends on where you're from." While the title seems straightforward enough, the findings were even less surprising: people from the US, Canada and Australia are worse at identifying fake versions of various UK accents than people native to the UK.

Farewell to Tom

We were saddened to hear of Tom Lehrer's passing on 26 July. He was a brilliant satirical singer-songwriter, a mathematician by training, and thanks to countless devoted chemistry educators his song The Elements has undoubtedly become his most recognizable tune. Feedback also appreciates his satirical take on nuclear warfare in We Will All Go Together When We Go and his delightfully dark love songs such as The Masochism Tango.

In 2022, Lehrer relinquished the rights to all of his music, so you can access it freely at tomlehrersongs.com. We highly recommend checking out the site for a treasure trove of lesser-known tracks that never made it onto his popular albums.

For instance, we hadn't previously encountered his piece Love Song by a Physical Anthropologist. It pokes fun at the fact that every conventional love song addressing the physical attributes of the beloved limits its praise to features like hair, eyes and lips, whereas a physical anthropologist has an extensive array of anatomical adjectives at their disposal, before breaking into verse about "my gal of metriocephaly".

We were also tickled to discover, via Opalescentopal on Bluesky, some of the clever antics Lehrer pulled while serving in the US military. Notably, he worked for the NSA, and one of his papers, on a long-standing mathematical problem about beating the gambler's odds, has since been made publicly available.

At the end of the 1957 paper there are six references, one of which is attributed to "Lobachevsky" and claims to discuss the "analytic and algebraic topology of locally Euclidean metrization of infinitely differentiable Riemannian manifold". This is not a legitimate mathematical paper but a joke: it quotes the nonsense title from Lehrer's own song Lobachevsky.

This is how people play the long game. A very long game, indeed, Tom.

Have you shared your feedback with us?

You can send your stories to feedback@newscientist.com. Please include your home address. Current and past feedback is also available on our website.

Source: www.newscientist.com

Meta's AI Memorized Books Verbatim – and It Could Cost Billions

In April, authors and publishers protested against the use of copyrighted books for AI training

Vuk Valcic/Alamy Live News

Billions of dollars are at stake as courts in the US and UK deliberate whether technology firms can legally train AI models on copyrighted literature. Numerous lawsuits have been filed by authors and publishers, and new research reveals that at least one AI model has not only been trained on popular texts but has also memorized portions of them verbatim.

The crux of the dispute lies in whether AI developers hold the legal authority to employ copyrighted materials without obtaining prior permission. Previous research highlighted that many large language models (LLMs) powering popular AI chatbots were trained on the “Books3” dataset. Developers of these models argued they were not infringing copyright, claiming they were generating new combinations of words rather than directly reproducing the copyrighted content.

However, a recent investigation examined various AI models to determine the extent of verbatim recall from their training datasets. While most models did not retain exact text, one model from Meta had memorized nearly the entire text of at least one book. Should the courts rule against the company, researchers estimate damages could exceed $1 billion.

"AI models are neither the 'plagiarism machines' some suggest, nor do they merely capture general relationships among words," explained Mark Lemley at Stanford University. "The variation in memorization between different models complicates the establishment of universal legal standards."

Lemley previously defended Meta in a copyright case involving generative AI, Kadrey v Meta Platforms. The plaintiffs, authors whose works were used to train Meta's AI models, filed a class-action lawsuit against the tech giant for copyright infringement. The case is currently being heard in the Northern District of California.

In January 2025, Lemley announced he had dropped Meta as a client, though he remains convinced of the company's favourable chances in the lawsuit. Emile Vasquez, a Meta spokesperson, said: "Fair use of copyrighted materials is crucial. We dispute the plaintiffs' claims, and the full record presents a different narrative."

In the new study, Lemley and his team probed the models' memorization by splitting excerpts from books into prefix and suffix segments, then checking whether a model prompted with the prefix could recall the suffix. For instance, one excerpt from F. Scott Fitzgerald's The Great Gatsby was divided into a prefix reading "They were careless people, Tom and Daisy—they smashed up things and creatures and then retreated" and a suffix concluding "back into their money or their vast carelessness, or whatever it was that kept them together, and let other people clean up the mess they had made."

Researchers calculated the probability of each AI model completing the excerpt accurately and compared these probabilities against random chance.
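The probing procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual code: a smoothed bigram table stands in for a real language model, and the excerpt is a simplified fragment of the Gatsby passage.

```python
import math
from collections import Counter, defaultdict

def suffix_logprob(model, prefix, suffix):
    """Total log-probability the model assigns to the suffix tokens,
    conditioning on the prefix plus the suffix seen so far."""
    context = list(prefix)
    total = 0.0
    for tok in suffix:
        total += math.log(model(context, tok))
        context.append(tok)
    return total

def make_bigram_model(corpus_tokens, vocab):
    """Toy stand-in for an LLM: a smoothed bigram table built from a corpus.
    A model that has 'memorized' the excerpt assigns its suffix a far
    higher probability than chance."""
    counts = defaultdict(Counter)
    for a, b in zip(corpus_tokens, corpus_tokens[1:]):
        counts[a][b] += 1
    def prob(context, tok):
        seen = counts[context[-1]]
        # add-one smoothing keeps every probability non-zero
        return (seen[tok] + 1) / (sum(seen.values()) + len(vocab))
    return prob

# A fragment of the Gatsby passage, lower-cased and stripped of
# punctuation for simplicity.
excerpt = ("they were careless people tom and daisy they smashed up things "
           "and creatures and then retreated back into their money").split()
vocab = set(excerpt)
model = make_bigram_model(excerpt, vocab)

half = len(excerpt) // 2
prefix, suffix = excerpt[:half], excerpt[half:]

lp_model = suffix_logprob(model, prefix, suffix)
lp_chance = len(suffix) * math.log(1 / len(vocab))  # uniform-guessing baseline
print(lp_model > lp_chance)  # prints True
```

Because the toy model was "trained" on the excerpt itself, its log-probability for the suffix comfortably beats the chance baseline, which is the signature of memorization the study looks for.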

The tested excerpts came from 36 copyrighted works, including popular titles such as George R. R. Martin's A Game of Thrones and Sheryl Sandberg's Lean In, as well as books authored by the plaintiffs in the Kadrey v Meta Platforms case.

The experiments covered 13 open-source AI models, including ones created by Meta, Google DeepMind, EleutherAI and Microsoft. Microsoft declined to comment, and the other companies apart from Meta did not respond to requests for comment.

The analysis revealed that Meta's Llama 3.1 70B model showed significant recall of text from J. K. Rowling's first Harry Potter book, as well as from The Great Gatsby and George Orwell's 1984. Other models showed minimal recall of the texts, including those penned by the plaintiffs. Meta declined to comment on these findings.

Researchers estimate that an AI model found to have infringed on merely 3% of the Books3 dataset could incur almost $1 billion in damages.

The technique has potential as a "forensic tool" for gauging the extent of AI memorization, notes Randy McCarthy of the law firm Hall Estill in Oklahoma. Yet it does not settle whether companies are legally permitted to train AI models on copyrighted works under US "fair use" provisions.

McCarthy points out that AI firms generally utilize copyrighted material for training. “The real question is whether they had the right to do so,” he remarked.

Meanwhile, in the UK, measuring memorization is crucial from a copyright perspective, according to Robert Lands at the law firm Howard Kennedy in London. UK copyright law relies on "fair dealing", which provides much narrower exceptions to copyright infringement than the US fair use doctrine, so he believes AI models that retain pirated content would not qualify for the exception.

Topics:

  • artificial intelligence
  • law


Unusual Yet Delicious: Creating a Memorable Christmas Dinner with Unique Flavors

Guests enjoy turkey, peanut and chocolate main courses and put the "flavour bridging" theory to the test

David Stock

Some foods are made for each other. From the comforting combination of mozzarella, tomato and marjoram on pizza to the enchanting trinity of ginger, garlic and soy sauce in East Asian cuisine, some pairings are so natural that it is hard to imagine living without them. But for centuries, gourmets and scholars have puzzled over why some foods go together so well.

The story begins in 1992, when chef Heston Blumenthal and flavour scientist François Benzi set out to solve the mystery. They came up with the idea that foods that taste good together share many volatile flavour compounds: the chemicals that carry aromas to the back of the nose and, together with taste on the tongue, create our perception of flavour. Their hypothesis was put to the test in a 2011 study that analysed 56,498 recipes from cuisines around the world.

Yong-Yeol Ahn and his colleagues at Indiana University used that data to build a network model: a map of the relationships between the ingredients in recipes and the flavour compounds they share. It confirmed that North American and western European recipes tend to combine ingredients that share flavour compounds.
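The flavour-network idea can be sketched in a few lines of Python. This is purely illustrative and not the study's code: the compound lists below are invented for the example, not real chemistry data.

```python
from itertools import combinations

# Hypothetical mini flavour-compound table; the compound sets are
# invented for illustration only.
compounds = {
    "tomato":     {"beta-ionone", "hexanal", "furaneol"},
    "mozzarella": {"hexanal", "butyric acid", "delta-decalactone"},
    "marjoram":   {"beta-ionone", "sabinene", "terpinen-4-ol"},
    "chocolate":  {"furaneol", "isovaleraldehyde", "dimethyl pyrazine"},
}

# Flavour network: ingredients are nodes, and an edge's weight is the
# number of volatile compounds the two ingredients share.
edges = {}
for a, b in combinations(sorted(compounds), 2):
    shared = compounds[a] & compounds[b]
    if shared:
        edges[(a, b)] = len(shared)

# Rank candidate pairings by the number of shared compounds.
for pair, weight in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(pair, weight)
```

On this toy table, tomato links to mozzarella, marjoram and chocolate through shared compounds, which is the kind of structure the 2011 study found across tens of thousands of real recipes.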

Flavour-pairing theory has since swept the culinary world. Food manufacturers are investing resources to apply the idea to their products, while startups are leveraging open-source data on flavour compounds to predict the next big…
