Intel has built the world's largest neuromorphic computer, a device designed to mimic the workings of the human brain. The company hopes it will be able to run more sophisticated AI models than conventional computers can, but experts say there are engineering hurdles to overcome before the device can compete with, let alone surpass, the state of the art.
Expectations for neuromorphic computers are high because they differ fundamentally from conventional machines. While an ordinary computer uses a processor to carry out operations and stores data in separate memory, a neuromorphic device uses artificial neurons for both storage and computation, much as our brains do. This removes the need to shuttle data back and forth between components, which is a major bottleneck in today's computers.
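To make that difference concrete, here is a minimal sketch in Python of a leaky integrate-and-fire neuron, one of the simplest common spiking-neuron models (an illustration, not Intel's design): the membrane potential serves as both the neuron's memory and the quantity it computes with, so there is no separate store to fetch data from.

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9, dt=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    The membrane potential v is storage and computation in one place:
    it decays ("leaks") each step, accumulates input, and triggers a
    spike when it crosses the threshold.
    """
    v = 0.0          # membrane potential, held where it is used
    spikes = []
    for i in input_current:
        v = leak * v + i * dt    # leaky integration of the input
        if v >= threshold:       # fire once the threshold is crossed
            spikes.append(1)
            v = 0.0              # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.4, 0.5, 0.1, 0.9]))  # -> [0, 0, 1, 0, 0]
```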
This architecture could deliver far greater energy efficiency: Intel claims its new Hala Point neuromorphic computer uses 100 times less energy than conventional machines when running optimization problems, which involve finding the best solution to a problem under certain constraints. It could also enable new ways to train and run AI models that use chains of neurons, as real brains process information, rather than mechanically passing inputs through every layer of artificial neurons as current models do.
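Intel has not detailed the exact benchmark here, but constrained optimization tasks are often cast as quadratic unconstrained binary optimization (QUBO) problems, with the constraints folded into the objective as penalties. The toy brute-force solver below is a hypothetical illustration of that form, not Hala Point's method:

```python
from itertools import product
import numpy as np

# Made-up toy instance: choose between two items worth 3 and 4,
# with a penalty of 5 for picking both (the "constraint").
# Item values sit on the diagonal of Q, the penalty off the diagonal.
Q = np.array([[-3.0, 5.0],
              [ 0.0, -4.0]])

def qubo_energy(x, Q):
    """Energy x^T Q x of a binary assignment x; lower is better."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Exhaustive search is only feasible for tiny instances; specialised
# hardware aims to handle vastly larger ones efficiently.
best = min(product([0, 1], repeat=2), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))  # -> (0, 1) -4.0: take only item 2
```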
Hala Point contains 1.15 billion artificial neurons spread across 1152 Loihi 2 chips and is capable of 380 trillion synaptic operations per second. Despite this power, it occupies just six rack units in a standard server case, roughly the space of a microwave oven, says Mike Davies at Intel. Larger machines will be possible, he says. "We built a system of this scale because, honestly, one billion neurons was a good number," he says. "There were no special technical engineering challenges that caused us to stop at this level."
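Taking the article's figures at face value, a quick back-of-envelope calculation puts roughly a million neurons on each Loihi 2 chip:

```python
# Rough arithmetic from the figures quoted above.
neurons = 1.15e9   # artificial neurons in Hala Point
chips = 1152       # Loihi 2 chips
sops = 380e12      # synaptic operations per second

print(f"neurons per chip: {neurons / chips:,.0f}")                    # ~998,000
print(f"synaptic ops per neuron per second: {sops / neurons:,.0f}")   # ~330,000
```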
No other existing machine matches Hala Point's scale, though DeepSouth, a neuromorphic computer due for completion later this year, is slated to be capable of 228 trillion synaptic operations per second.
The Loihi 2 chips are still prototypes that Intel has produced in small numbers, but Davies says the real bottleneck lies in the software layer: the process of taking a real-world problem, translating it into a format that can run on a neuromorphic computer and executing it. That process, like neuromorphic computing in general, is still in its infancy. "Software is a huge limiting factor," he says, which means there is little point yet in building an even larger machine.
Intel has suggested that machines like Hala Point could lead to AI models that learn continuously, rather than having to be trained from scratch for each new task, as current models must be. But James Knight, a researcher at the University of Sussex, UK, dismisses this as "hype".
Knight points out that current models like ChatGPT are trained using graphics cards running in parallel, so many chips can work on training the same model at once. But because neuromorphic computers operate on a single input and cannot be trained in parallel, it could take decades to even initially train something like ChatGPT on such hardware, he says, let alone devise a way for it to keep learning once it is up and running.
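The contrast Knight draws can be sketched in code. In data-parallel training, the dataset is split across many workers that each compute a gradient on their own shard, and the gradients are averaged every step; a neuromorphic machine processing one input stream at a time has no equivalent shortcut. The sketch below uses toy linear regression and an arbitrary worker count purely as an illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data, split into shards the way data parallelism
# splits a dataset across GPUs.
X, true_w = rng.normal(size=(1024, 8)), rng.normal(size=8)
y = X @ true_w
shards = np.array_split(np.arange(len(X)), 4)  # pretend 4 workers

w = np.zeros(8)
for step in range(200):
    # Each worker computes its shard's gradient in parallel; the
    # results are then averaged (an "all-reduce" in real systems).
    grads = [2 * X[s].T @ (X[s] @ w - y[s]) / len(s) for s in shards]
    w -= 0.01 * np.mean(grads, axis=0)

print("remaining error:", np.linalg.norm(w - true_w))  # near zero
```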
Davies acknowledges that current neuromorphic hardware is not suited to training large AI models from scratch, but says he hopes that one day pre-trained models could run on it and learn new tasks over time. "Although this method is still in the research phase, this is the kind of continual-learning problem that large-scale neuromorphic systems like Hala Point may be able to solve in a very efficient way in the future," he says.
Knight is optimistic that, as the tools developers need to write software for this hardware mature, neuromorphic computers could help solve many other computer science problems while also improving efficiency.
They may also offer a better path toward human-level intelligence, known as artificial general intelligence (AGI), which many AI experts believe cannot be reached by the large language models that power tools like ChatGPT. "I think that's becoming a less and less controversial opinion," says Knight. "The dream is that one day neuromorphic computing will allow us to create brain-like models."
Source: www.newscientist.com