Analog Computers May Train AI 1,000 Times Faster While Consuming Less Energy

Analog computers use less energy compared to digital computers

Metamol Works/Getty Images

Analog computers that can swiftly resolve the primary types of equations essential for training artificial intelligence models may offer a viable solution to the growing energy demands of data centers spurred by the AI revolution.

Devices like laptops and smartphones are known as digital computers because they handle data in binary form (0s and 1s) and can be programmed for various tasks. Conversely, analog computers are generally crafted to tackle specific problems, using continuously variable quantities like electrical resistance rather than discrete binary values.

While analog computers excel in terms of speed and energy efficiency, they have historically lagged in accuracy compared to their digital counterparts. Recently, Zhong Sun and his team at Peking University in China developed two analog chips that work collaboratively to solve matrix equations accurately—crucial for data transmission, large-scale scientific simulations, and AI model training.

The first chip generates low-precision outputs for matrix computations at high speed, while the second chip refines these outputs through an iterative improvement algorithm to assess and minimize the error rate of the initial results. Sun noted that the first chip produced results with a 1% error rate, but after three iterations with the second chip, this rate dropped to 0.0000001%, comparable to the accuracy found in conventional digital calculations.

Currently, the researchers have successfully designed a chip capable of solving 16 × 16 matrices, which equates to handling 256 variables, sufficient for addressing smaller problems. However, Sun acknowledges that addressing the complexities of today’s large-scale AI models will necessitate substantially larger circuits, potentially scaling up to 1 million by 1 million.

A unique advantage of analog chips is their ability to handle larger matrices without increased solving time, unlike digital chips, whose solving complexity rises exponentially with matrix size. This translates to a 32 x 32 analog chip outperforming the Nvidia H100 GPU, a leading chip for AI training.

Theoretically, further scaling could yield throughput up to 1,000 times greater than digital alternatives like GPUs while consuming 100 times less energy, according to Sun. However, he cautions that practical applications may exceed the circuit’s limited capabilities, limiting the perceived benefits.

“This is merely a speed comparison; your specific challenges may differ in real-world scenarios,” Sun explains. “Our chip is designed exclusively for matrix computations. If these computations dominate your tasks, the acceleration will be substantial; otherwise, the benefits may be constrained.”

Sun suggests that the most realistic outcome may be the creation of hybrid chips that incorporate some analog circuitry alongside GPUs to tackle specific problem areas, although this development might still be years away.

James Millen, a professor at King’s College London, emphasizes that matrix calculations are pivotal in AI model training, indicating that analog computing has the potential to make a significant impact.

“The contemporary landscape is dominated by digital computers. These remarkable machines are universal, capable of tackling any computation, yet not necessarily with optimal efficiency or speed,” Millen states. “Analog computers excel in performing specific tasks, making them exceptionally fast and efficient. In this research, we leverage analog computing chips to enhance matrix inversion processes—essential for training certain AI models. Improving this efficiency could help mitigate the substantial energy demands accompanying our expanding reliance on AI.”

Topic:

Source: www.newscientist.com

The Method We Use to Train AIs Increases Their Likelihood of Producing Nonsense

Certain AI training techniques may lead to dishonest models

Cravetiger/Getty Images

Researchers suggest that prevalent methods for training artificial intelligence models may increase their propensity to provide deceptive answers, aiming to establish “the first systematic assessment of mechanical bullshit.”

It is widely acknowledged that large-scale language models (LLMs) often produce misinformation or “hagaku.” According to Jaime Fernandez Fissac from Princeton University, his team defines “bullshit” as “discourse designed to manipulate an audience’s beliefs while disregarding the importance of actual truth.”

“Our analysis indicates that the problems related to bullshit in large-scale language models are quite severe and pervasive,” remarks FISAC.

The researchers categorized these instances into five types: “This red car combines style, charm, and adventure that captivates everyone,” Weasel Words—”Ambiguous statements like ‘research suggests that in some cases, uncertainties may enhance outcomes’; Essentialization—employing truthful statements to create a false impression; unverified claims; and sycophancy.

They evaluated three datasets composed of thousands of AI-generated responses to various prompts from models including GPT-4, Gemini, and Llama. One dataset included queries specifically designed to test the generation of bullshit when AIS was asked for guidance or recommendations, alongside others focused on online shopping and political topics.

FISAC and his colleagues first employed LLMs to determine if the responses aligned with one of the five categories and then verified that the AI’s classifications matched those made by humans.

The team found that the most critical truths posed challenges stemming from a training method called reinforcement learning from human feedback, aimed at enhancing the machine’s utility by offering immediate feedback on its responses.

However, FISAC cautions that this approach is problematic, as models “sometimes conflict with honesty,” prioritizing immediate human approval and perceived usefulness over truthfulness.

“Who wants to engage in the lengthy and subtle rebuttal of bad news or something that seems evidently true?” FISAC questions. “By attempting to adhere to our standards of good behavior, the model learns to undervalue the truth in favor of a confident, articulate response to secure our approval.”

This study revealed that reinforcement learning from human feedback notably heightened bullshit behavior, with inflated rhetoric increasing by nearly 40%, substantial enhancements in Weasel Words, and over half of unverified claims.

Heightened bullshitting is especially detrimental, as team member Kaique Liang points out, leading users to make poorer decisions. In cases where the model’s features were uncertain, deceptive claims surged from five percent to three-quarters following human training.

Another significant issue is that bullshit is prevalent in political discourse, as AI models “tend to employ vague and ambiguous language to avoid making definitive statements.”

AIS is more likely to behave this way when faced with conflicts of interest, as the system caters to multiple stakeholders including both the company and its clients, as the researchers discovered.

To address this issue, the researchers propose transitioning to a “hindcasting feedback” model. Instead of seeking immediate feedback post-output, the system should first generate a plausible simulation of potential outcomes based on user input, which is then presented to a human evaluator for assessment.

“Ultimately, we hope that by gaining a deeper understanding of the subtle but systematic ways AI may seek to mislead us, we can better inform future initiatives aimed at creating genuinely truthful AI systems,” concludes FISAC.

Daniel Tiggard of the University of San Diego, though not involved in the study, expresses skepticism regarding discussions of LLMs’ output under these circumstances. He argues that just because LLMs generate bullshit, it does not imply intentional deception, as AI systems currently stand. I left to deceive us, and I have no interest in doing so.

“The primary concern is that this framing seems to contradict sensible recommendations about how we should interact with such technology,” states Tiggard. “Labeling it as bullshit risks anthropomorphizing these systems.”

Topics:

Source: www.newscientist.com

Jerry Adams may take legal action against Meta for reportedly using his book to train artificial intelligence

Former President Sinn Fair Jerry Adams is contemplating legal action against Meta for potentially using his book to train artificial intelligence.

Adams claims that Meta, and other tech companies, have incorporated several books, including his own, into a collection of copyrighted materials for developing AI systems. He stated, “Meta has utilized many of my books without obtaining my consent. I have handed the matter over to lawyers.”

On Wednesday, Sinn Féin released a statement listing the titles that were included in the collection, which contained a variety of memoirs, cookbooks, and short stories, including Adams’ autobiography “Before the Dawn: Prison Memoirs, Cage 11; Reflections on the Peace Process, Hope, and History in Northern Ireland.”

Adams joins a group of authors who have filed court documents against Meta, accusing the company of approving the use of Library Genesis, a “shadow library” known as Libgen, to access over 7.5 million books.

The authors, which include well-known names such as Ta-Nehisi Coates, Jacqueline Woodson, Andrew Sean Greer, Junot Díaz, and Sarah Silverman, have alleged that Meta executives, including Mark Zuckerberg, knew that Libgen contained pirated material.

Authors have identified numerous titles from Libgen that Meta may have used to train its AI system, Llama, according to a report by the Atlantic Magazine.

The Authors Association has expressed outrage over Meta’s actions, with Chair Vanessa Fox O’Laurin stating that Meta’s actions are detrimental to writers as it allows AI to replicate creative content without permission.

Novelist Richard Osman emphasized the importance of respecting copyright laws, stating that permission is required to use an author’s work.

In response to the allegations, a Meta spokesperson stated that the company respects intellectual property rights and believes that using information to train AI models is lawful.

Skip past newsletter promotions

Last year, Meta launched an open-source AI app called Llama, a large language model similar to other AI tools such as Open Ai’s ChatGpt and Google’s Gemini. Llama is trained on a vast dataset to mimic human language and computer coding.

Adams, a prolific author, has written a variety of genres and has been identified as one of the authors in the Libgen database. Other Northern Ireland authors listed in the database include Jan Carson, Lynne Graham, Deric Henderson, and Anna Burns as reported by BBC.

Source: www.theguardian.com

Authors in London protest Meta’s theft of book and use of ‘Shadow Library’ to train AI

A demonstration will be held today outside Meta’s London office by authors and other publishing industry experts protesting the organization’s use of copyrighted books for training artificial intelligence.

Notable figures like novelists Kate Moss and Tracy Chevalier, poet Daljit Nagra, and former chairman of the Royal Literature Society, are expected to be present outside Meta’s Kings Cross office.

Protesters will gather at Granary Square at 1:30 pm, with hand-written letters to Meta by the Authors Association (SOA) planned for 1:45 pm, also to be sent to Meta’s US headquarters.

Earlier this year, Meta CEO Mark Zuckerberg allegedly approved the use of Libgen, known as the “Shadow Library,” which contains over 7.5 million books. The Atlantic recently released a searchable database of the titles in Libgen, suggesting that authors’ works may have been used to train Meta’s AI models.

SOA Chair Vanessa Fox O’Loughlin condemned Meta’s actions as “illegal, shocking, and devastating for writers.”

Vanessa added, “Books take years to write, and Meta stealing them for AI replication threatens authors’ livelihoods.”

In response, a Meta spokesperson claimed they respect intellectual property rights and believe their actions comply with the law.

Skip past newsletter promotions

Several prominent authors, including Moss, Richard Osman, Isiguro Kawako, and Val McDermid, signed a letter to Culture Secretary Lisa Nandi asking for Meta executives to appear before Congress. The petition garnered over 7,000 signatures.

Today’s protest is led by novelist AJ West, who expressed dismay at seeing their work in the Libgen database without consent.

A court filing in January revealed a group of authors suing Meta for copyright infringement, noting the impact on authors’ rights by using unauthorized databases like Libgen.

SOA’s chief executive Anna Gunley emphasized the detrimental effect of companies exploiting authors’ copyrighted works.

Protesters are encouraged to create placards and use hashtags like #MetaBookThieves, #DothewRiteThing, #MakeItfair.

Source: www.theguardian.com

Australian authors suggest Meta might have used their book to train AI without permission

The Australian author expresses being “lively alive” and feels violated knowing their work was allegedly included in a pirated dataset used to train AI.

Parents company of Facebook and Instagram faces a copyright infringement lawsuit from US authors like Ta-Nehisi Coates and comedian Sarah Silverman.

In a court application from January, CEO Mark Zuckerberg reportedly approved using the book’s online archive, Libgen Dataset, to train the company’s AI models, despite warnings from the AI executive team of its pirated nature.

In the Atlantic, Searchable databases have been released for authors to check if their work is in the Libgen Dataset.

Books by notable Australian authors, including former Prime Ministers Malcolm Turnbull, Kevin Rudd, Julia Gillard, and John Howard, are among those published.

Holden Sheppard, author of Invisible Boys, a popular young adult novel adapted to a Stan series, expressed disappointment that his work was utilized in training meta AI.

He expressed his disapproval of his books being used without consent to train generative AI systems, considering it unethical and illegal and calling for fair compensation for the authors.

He emphasized the need for AI-specific laws in Australia to ensure compliance with existing copyright laws by generative AI developers or deployers.

Journalist and author Tracey Spicer discovered two of her books, including one that addresses artificial intelligence, were included in the dataset without her consent.

She called for a class-action lawsuit in Australia and urged affected authors to contact local federal lawmakers.

Skip past newsletter promotions

She criticized big technology companies for profiting while reducing writers to a serf-like status, highlighting the financial struggles of many authors.

Alexandra Heller-Nicholas, an award-winning film critic and author of several books, expressed her frustration and called for government action.

The Australian Authors Association urged Facebook to advocate for authors whose work was used without permission.

Society Chair Sophie Cunningham contacted affected authors and condemned the treatment of writers by large companies profiting from their work.

Cunningham criticized Meta’s dealings with writers as exploitative and called for fair treatment and compensation for authors.

Mehta declined to comment on the ongoing lawsuit and is reportedly lobbying for AI training on copyrighted data via executive orders.

Previously, Melbourne publisher Black Inc. Books raised concerns about the use of AI in the industry, with some companies entering agreements with publishers for content use.

Source: www.theguardian.com

Train your brain to see through visual fantasies

Have you found that the orange circle on the left is smaller than the orange circle on the right?

Radoslaw Wincza et al. (2025)

Optical fantasies may make you feel like a fool, but you may be able to train your brain to resist your brain.

“People in the general population are very likely trained to unravel illusions and have the ability to perceive the world more objectively,” he says. Radoslaw Wincza At Lancaster University, UK.

Wincza and his colleagues recruited 44 radiologists with an average age of 36. He spent over a decade finding small details such as fractures from a medical scan. They also saw 107 college students, an average of 23 years old, studying medicine and psychology.

Each participant displayed four fantasies one at a time on the screen. With each illusion, participants had to look at size or length size or length shape or line pairs and choose larger or longer ones.

In the three illusions, other objects made larger shapes or longer lines smaller and shorter lines. The team found that radiologists were less susceptible to these illusions than students.

“Radiosists have this ability to really focus on the key elements of the visual scene, where they ignore unrelated contexts and have tunnel vision,” says Wincza. “By adjusting your targets, they don’t experience that much illusion.”

In the fourth illusion, one of the shapes was vertical, and the pair was horizontal. This made the latter look even wider, even if it was actually narrower. Both groups were equally susceptible to fantasy. This is probably because this didn’t involve much of an adjustment to background distraction, as it didn’t contain any surrounding objects.

“It suggests that if everyone trains themselves, they can gain the ability to be susceptible to illusions,” he says. Carla Evans At York University, UK. Focusing on certain aspects of photography, for example, could improve this ability, but she says there is more work to see how fast this can be. “It could take years or weeks.”

topic:

Source: www.newscientist.com