Analog computers use less energy than digital computers
Metamol Works/Getty Images
Analog computers that can quickly solve the main types of equations needed to train artificial intelligence models may offer a way to curb the growing energy demands that the AI boom is placing on data centers.
Devices like laptops and smartphones are known as digital computers because they handle data in binary form (0s and 1s) and can be programmed for a wide range of tasks. Analog computers, by contrast, are typically built to solve one specific problem, representing quantities with continuously variable physical values, such as electrical resistance, rather than discrete binary digits.
While analog computers excel in speed and energy efficiency, they have historically lagged behind their digital counterparts in accuracy. Now, Zhong Sun and his team at Peking University in China have developed two analog chips that work in tandem to solve matrix equations accurately, a task crucial for data transmission, large-scale scientific simulations and training AI models.
The first chip produces low-precision solutions to matrix computations at high speed, while the second refines them using an iterative refinement algorithm that estimates and reduces the error of the initial result. Sun says the first chip's output has an error rate of about 1%, but after three refinement cycles on the second chip this drops to 0.0000001%, comparable to the accuracy of conventional digital calculations.
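The two-chip scheme mirrors classical mixed-precision iterative refinement: a fast, inaccurate solve, followed by high-precision residual corrections. The sketch below is a numerical analogy only, not the chips' circuitry; rounding to float16 stands in for the analog chip's low precision, and the function names are illustrative.

```python
import numpy as np

def lowprec_solve(A, b):
    """Stand-in for the fast analog chip: solve the system, then
    round the answer to float16 to mimic low-precision hardware."""
    x = np.linalg.solve(A, b)
    return x.astype(np.float16).astype(np.float64)

def iterative_refinement(A, b, iters=3):
    """Stand-in for the second chip: repeatedly solve for the
    residual error and use it to correct the running answer."""
    x = lowprec_solve(A, b)
    for _ in range(iters):
        r = b - A @ x              # residual, computed in full precision
        s = np.linalg.norm(r)
        if s == 0:
            break
        # Rescale so the low-precision solve works on O(1) numbers,
        # then add the correction back at the right magnitude.
        x = x + lowprec_solve(A, r / s) * s
    return x
```

Each pass shrinks the remaining error by roughly the low-precision error level, which is how a ~1% initial error can fall by many orders of magnitude in just a few iterations.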
Currently, the researchers have successfully designed a chip capable of solving 16 × 16 matrices, which equates to handling 256 variables, sufficient for addressing smaller problems. However, Sun acknowledges that addressing the complexities of today’s large-scale AI models will necessitate substantially larger circuits, potentially scaling up to 1 million by 1 million.
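The 256-variable figure is simply the entry count of a 16 × 16 array: in a matrix equation AX = B with a 16 × 16 unknown X, there are 16 × 16 = 256 scalar unknowns to solve for. A plain numpy illustration (not the chip itself; the matrices here are arbitrary test data):

```python
import numpy as np

np.random.seed(0)
A = np.eye(16) + 0.05 * np.random.rand(16, 16)  # 16 x 16 coefficient matrix
B = np.random.rand(16, 16)                      # 16 x 16 right-hand side

# The unknown matrix X holds 16 * 16 = 256 scalar variables
X = np.linalg.solve(A, B)
print(X.size)  # 256
```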
A key advantage of analog chips is that the time they take to solve a matrix equation does not grow with matrix size, whereas on digital chips the computational cost climbs steeply, roughly with the cube of the matrix dimension. As a result, a 32 × 32 version of the analog chip would outperform the Nvidia H100 GPU, one of the leading chips used for AI training, on such problems.
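For scale, the standard digital approach to a dense linear system is LU factorization, whose cost is on the order of n³ operations, so doubling the matrix size multiplies the work roughly eightfold while the analog solve time stays flat. A back-of-envelope flop count (the formula is the textbook estimate for dense LU, not a measurement of any particular chip):

```python
def lu_solve_flops(n: int) -> float:
    """Approximate flop count for solving an n x n dense linear
    system digitally: ~(2/3) n^3 for the LU factorization plus
    ~2 n^2 for the two triangular solves."""
    return (2 / 3) * n ** 3 + 2 * n ** 2

for n in (16, 32, 64):
    print(n, round(lu_solve_flops(n)))
```

Going from 16 × 16 to 32 × 32 multiplies the digital work by roughly eight; an analog solver's time, in principle, does not change.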
With further scaling, the chips could in theory deliver up to 1000 times the throughput of digital processors such as GPUs while using 100 times less energy, says Sun. But he cautions that real-world workloads may not play to the circuit's narrow strengths, which would limit the benefit in practice.
“This is merely a speed comparison; your specific challenges may differ in real-world scenarios,” Sun explains. “Our chip is designed exclusively for matrix computations. If these computations dominate your tasks, the acceleration will be substantial; otherwise, the benefits may be constrained.”
Sun suggests that the most realistic outcome may be the creation of hybrid chips that incorporate some analog circuitry alongside GPUs to tackle specific problem areas, although this development might still be years away.
James Millen, a professor at King’s College London, emphasizes that matrix calculations are pivotal in AI model training, indicating that analog computing has the potential to make a significant impact.
“The contemporary landscape is dominated by digital computers. These remarkable machines are universal, capable of tackling any computation, yet not necessarily with optimal efficiency or speed,” Millen states. “Analog computers excel in performing specific tasks, making them exceptionally fast and efficient. In this research, we leverage analog computing chips to enhance matrix inversion processes—essential for training certain AI models. Improving this efficiency could help mitigate the substantial energy demands accompanying our expanding reliance on AI.”
Source: www.newscientist.com
