Simulating the Human Brain with Supercomputers: Exploring Advanced Neuroscience Technology

3D MRI scan of the human brain. Credit: K H Fung/Science Photo Library

Simulating the human brain means using advanced computing power to model billions of neurons and their connections, with the aim of replicating real brain function. Researchers hope that larger, more detailed simulations, combined with a better understanding of neuronal wiring, will reveal how cognition works.

Historically, researchers have focused on isolating specific brain regions for simulations to elucidate particular functions. However, a comprehensive model encompassing the entire brain has yet to be achieved. As Markus Diesmann from the Jülich Research Center in Germany notes, “This is now changing.”

This shift is largely due to the emergence of state-of-the-art supercomputers with near-exascale capabilities, meaning they can perform on the order of a quintillion (10^18) operations per second. Currently, only four such machines exist, according to the Top500 list. Diesmann's team is set to execute extensive brain simulations on one of them, a supercomputer in Germany named JUPITER (Joint Undertaking Pioneer for Innovative and Transformative Exascale Research).

Recently, Diesmann and colleagues demonstrated that a simple model of brain neurons and their synapses, known as a spiking neural network, can be configured to leverage JUPITER’s thousands of GPUs. This scaling can achieve 20 billion neurons and 100 trillion connections, effectively mimicking the human cerebral cortex, the hub of higher brain functions.
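To make the idea of a spiking neural network concrete, here is a minimal leaky integrate-and-fire sketch in Python with NumPy. It is purely illustrative: the neuron count, connectivity and all parameters are toy values chosen for readability, not the configuration Diesmann's team runs, and it executes on a single CPU rather than thousands of GPUs.

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) network: illustrative parameters only.
rng = np.random.default_rng(0)
n_neurons = 1000          # cortex-scale models use ~20 billion
p_connect = 0.1           # sparse random connectivity
dt, t_sim = 0.1, 100.0    # time step and simulation length (ms)
tau_m, v_rest, v_thresh, v_reset = 20.0, -70.0, -55.0, -70.0  # membrane params (ms, mV)

# Random sparse weight matrix (depolarisation in mV per presynaptic spike).
weights = (rng.random((n_neurons, n_neurons)) < p_connect) * 0.5

v = np.full(n_neurons, v_rest)        # membrane potentials
spike_counts = np.zeros(n_neurons, int)

for step in range(int(t_sim / dt)):
    i_ext = rng.normal(1.0, 0.5, n_neurons)      # small random external drive
    v += dt / tau_m * (v_rest - v) + i_ext * dt  # leaky integration toward rest
    spiked = v >= v_thresh                       # which neurons fire this step
    spike_counts += spiked
    v += weights @ spiked.astype(float)          # spikes propagate to targets
    v[spiked] = v_reset                          # reset neurons that fired

print("mean firing rate (Hz):", spike_counts.mean() / (t_sim / 1000.0))
```

Scaling this basic loop to billions of neurons is what requires distributing the weight matrix and spike traffic across thousands of GPUs.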

These simulations promise more impactful outcomes than previous models of smaller brains, such as those of fruit flies. Recent experience with large language models shows that larger systems exhibit behaviors unattainable in their smaller counterparts. "We recognize that expansive networks demonstrate qualitatively different capabilities than their reduced-size equivalents," asserts Diesmann. "It's evident that larger networks offer unique functionalities."

Thomas Nowotny from the University of Sussex emphasizes that downscaling risks omitting crucial characteristics entirely. "Conducting full-scale simulations is vital; without it, we can't truly replicate reality," Nowotny states.

The model being developed for JUPITER is founded on empirical data from the limited experiments that have measured human neurons and synapses. As Johanna Senk at the University of Sussex, who collaborates with Diesmann, explains, "We have anatomical data constraints coupled with substantial computational power."

Comprehensive brain simulations could facilitate tests of foundational theories regarding memory formation—an endeavor impractical with miniature models or actual brains. Testing such theories might involve inputting images to observe neural responses and analyze alterations in memory formation with varying brain sizes. Furthermore, this approach could aid in drug testing, such as assessing impacts on a model of epilepsy characterized by abnormal brain activity.

The enhanced computational capabilities enable rapid brain simulations, thereby assisting researchers in understanding gradual processes such as learning, as noted by Senk. Additionally, researchers can devise more intricate biological models detailing neuronal changes and firings.

Nonetheless, despite the ability to simulate vast brain networks, Nowotny acknowledges considerable gaps in knowledge. Even simplified whole-brain models for organisms like fruit flies fail to replicate authentic animal behavior.

Simulations run on supercomputers are fundamentally limited, lacking essential features inherent to real brains, such as real-world environmental inputs. "While we can simulate brain size, we cannot fully replicate a functional brain," warns Nowotny.


Source: www.newscientist.com

Why Some Quantum Computers Demand More Power Than Traditional Supercomputers

El Capitan, the National Nuclear Security Administration's leading exascale computer. Credit: LLNL/Garry McLeod

The advancement of large quantum computers offers the potential to solve complex problems beyond the reach of today’s most powerful classical supercomputers. However, this leap in capability may come with increased energy demands.

Currently, most existing quantum computers are small, with fewer than 1,000 qubits. Their fragile qubits are susceptible to errors, hindering their ability to tackle significant problems, such as aiding drug discovery. Experts agree that to reach practical utility, a fault-tolerant quantum computer (FTQC) must emerge, with a much higher qubit count and robust error correction. The engineering hurdles involved are substantial, compounded by the existence of multiple competing designs.
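To illustrate why the jump from today's machines to an FTQC is so large, here is a back-of-the-envelope calculation sketched under the common assumption of surface-code error correction, in which each logical qubit is built from roughly 2*d^2 physical qubits for code distance d. The distances used below are illustrative assumptions, not figures from Ezratty's analysis.

```python
# Rough surface-code overhead: ~2 * d**2 physical qubits per logical qubit
# (d is the code distance; an illustrative assumption, real overheads vary by design).
def physical_qubits(logical_qubits: int, distance: int) -> int:
    return logical_qubits * 2 * distance ** 2

for d in (11, 21, 31):                        # plausible code distances
    n_phys = physical_qubits(4000, d)         # a few thousand logical qubits, the scale discussed below
    print(f"distance {d:>2}: ~{n_phys:,} physical qubits")
# Even modest code distances push the machine far beyond today's ~1,000-qubit devices.
```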

Olivier Ezratty, from the Quantum Energy Initiative (QEI), warns that the energy consumption of utility-scale FTQCs has been largely overlooked. During the Q2B Silicon Valley Conference in Santa Clara, California, on December 9, he presented his preliminary estimates. Notably, some FTQC designs could eclipse the energy requirements of the world’s top supercomputers.

For context, El Capitan, the fastest supercomputer globally, located at Lawrence Livermore National Laboratory, draws approximately 20 megawatts of electricity—three times that of the nearby city of Livermore, which has a population of 88,000. Ezratty forecasts that FTQC designs scaling up to 4,000 logical qubits may demand even more energy. Some of the power-hungry designs could require upwards of 200 megawatts.

Ezratty’s estimates derive from accessible data, proprietary insights from quantum tech firms, and theoretical models. He outlines a wide energy consumption range for future FTQCs, from 100 kilowatts to 200 megawatts. Interestingly, he believes that three forthcoming FTQC designs could ultimately operate below 1 megawatt, aligning with conventional supercomputers utilized in research labs. This variance could significantly steer industry trends, particularly as low-power models become more mainstream.

The discrepancies in projected energy use stem from the different strategies quantum computing companies employ to build and maintain their qubits. Certain qubit technologies require extensive cooling to function effectively: light-based qubits cannot tolerate warm light sources and detectors, so keeping those components cold drives up energy consumption, and superconducting circuits require entire chips to be housed in large refrigeration systems. Designs based on trapped ions or ultracold atoms instead demand substantial energy for the lasers or microwaves that precisely control the qubits.

Oliver Dial from IBM, known for superconducting quantum computers, anticipates that his company’s large-scale FTQC will need approximately 2 to 3 megawatts of power, a fraction of what a hyperscale AI data center could consume. This demand could be lessened through integration with existing supercomputers. Meanwhile, a team from QuEra, specializing in ultracold atomic quantum computing, estimates their FTQC will require around 100 kilowatts, landing on the lower end of Ezratty’s spectrum.

Other companies, including Xanadu, which focuses on light-based quantum technologies, and Google Quantum AI, which works on superconducting qubits, declined to comment. PsiQuantum, another developer of light-based qubits, did not respond to New Scientist's repeated requests for comment.

Ezratty also pointed out that the conventional electronics responsible for directing and monitoring qubit operations add further energy costs, particularly in FTQC systems, where qubits need extra instructions to correct their own errors. Understanding how these control and correction routines contribute to a machine's energy footprint is therefore essential. Runtime adds another layer: energy savings from using fewer qubits can be cancelled out if the computation then has to run for longer.
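The runtime point can be made concrete with simple arithmetic: total energy is average power draw multiplied by run time, so a design that draws less power but runs longer can still consume more energy per problem solved. The figures below are illustrative assumptions spanning the 100-kilowatt to 200-megawatt range Ezratty describes, not estimates for any specific machine.

```python
# Energy = average power draw (W) * run time (h). Illustrative numbers only.
designs = {
    "low-power FTQC, slow (0.1 MW for 1 year)": (0.1e6, 365 * 24),
    "high-power FTQC, fast (200 MW for 2 hours)": (200e6, 2),
    "mid-range FTQC (3 MW for 1 week)": (3e6, 7 * 24),
    "El Capitan-class supercomputer (20 MW for 1 week)": (20e6, 7 * 24),
}

for name, (watts, hours) in designs.items():
    mwh = watts * hours / 1e6          # megawatt-hours for the whole run
    print(f"{name}: {mwh:,.0f} MWh")
# 0.1 MW for a year (~876 MWh) exceeds 200 MW for two hours (~400 MWh):
# saving power per second is not the same as saving energy per problem solved.
```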

To effectively measure and report the energy consumption of machines, the industry must establish robust standards and benchmarks. Ezratty emphasizes that this is an integral element of QEI’s mission, with projects actively progressing in both the United States and the European Union.

As the field of quantum computing continues to mature, Ezratty anticipates that his research will pave the way for insights into FTQC energy consumption. This understanding could be vital for optimizing designs to minimize energy use. “Countless technological options could facilitate reduced energy consumption,” he asserts.


Source: www.newscientist.com

Quantum-Enhanced Supercomputers Are Set to Transform Chemistry

Part of an IBM quantum computer. Credit: Angela Weiss/AFP via Getty Images

Quantum computers and conventional supercomputers working together could serve as powerful tools for analyzing chemical processes. An ongoing collaboration between IBM and Riken, a Japanese scientific institute, is paving the way towards this goal.

Successful chemical analysis often hinges on understanding how molecules, such as drug compounds or industrial catalysts, behave during reactions, behavior that is closely tied to the quantum states of their electrons. Quantum computers can speed up the calculation of these states, yet in their current form they remain prone to errors. Conventional supercomputers can catch these discrepancies before they escalate into larger problems.

In a collective statement to New Scientist, Aoki Sei and Mitsui Sato from Riken noted that quantum computers can augment traditional computing capabilities. Currently, they and their team are modeling two distinct iron-sulfur compounds using IBM’s Heron quantum computer in conjunction with Riken’s Fugaku supercomputer.

The researchers divided the computation of the molecules' quantum states between the two machines, using up to 77 qubits and an algorithm known as sample-based quantum diagonalization (SQD). The quantum computer performs the calculations while the supercomputer verifies and corrects errors. For instance, if Heron generates a mathematical representation implying more electrons than the molecule actually contains, Fugaku discards those results, prompting Heron to adjust and retry the computation.
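The division of labor can be illustrated with a toy version of that filtering step. In SQD-style workflows, the quantum computer returns measured bitstrings whose set bits correspond to occupied orbitals, and a classical machine can cheaply discard any sample whose electron count is physically impossible before the expensive classical post-processing. The sketch below is a simplified illustration of that idea, not IBM's or Riken's actual code, and the bitstrings and counts are made up.

```python
from collections import Counter

def filter_samples(samples: dict, n_electrons: int) -> dict:
    """Keep only measured bitstrings whose number of occupied orbitals
    (set bits) matches the molecule's known electron count."""
    return {bits: count for bits, count in samples.items()
            if bits.count("1") == n_electrons}

# Hypothetical measurement counts from a noisy quantum processor:
# each key is a bitstring over 6 orbitals, each value how often it was seen.
raw_counts = Counter({
    "110100": 412,   # 3 electrons: physically consistent
    "101010": 388,   # 3 electrons: physically consistent
    "111100": 57,    # 4 electrons: noise flipped a bit, reject
    "100000": 43,    # 1 electron: reject
})

clean = filter_samples(raw_counts, n_electrons=3)
print(clean)   # only the samples with exactly 3 occupied orbitals survive
```

In the real workflow, the retained samples feed a classical diagonalization on the supercomputer, and the quantum processor is asked for fresh samples when too few survive.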

This hybrid approach has not yet surpassed the optimal scenarios achievable by standalone supercomputers, but it competes well against some standard methods, according to Jay Gambetta at IBM, who was not involved in the research. “It’s a matter of comparing calculators,” he remarked.

This integration is increasingly seen as the "secret sauce" for addressing the challenges posed by error-prone quantum computers, says Kenneth Meltz at the Cleveland Clinic in Ohio. His team is using another IBM quantum computer, paired with a conventional system, to develop variations of SQD algorithms that model molecules in solution, giving a more accurate depiction of real chemical experiments than past models.

In Meltz's view, advances in the SQD algorithm will enable the combination of quantum and conventional computing to yield substantial benefits over the next year.

“The synergy between quantum and supercomputing is not merely useful; it is an inevitability,” stated Sam Stanwyck from Nvidia. He emphasizes that the future of quantum computing lies in its seamless integration with robust classical and quantum processors from supercomputing centers. Nvidia has already developed a software platform to facilitate such hybrid methodologies.

Aseem Data from Microsoft remarked that his organization is also venturing into groundbreaking possibilities that merge quantum computing, supercomputing, and AI to expedite developments in chemistry and materials science.

Despite these advances, numerous challenges persist in the quantum computing sector. Markus Reiher at ETH Zurich acknowledged that while the outcomes of the Riken experiments look promising, it remains uncertain whether this methodology will become the preferred technique for quantum chemical analyses. The precision of results computed by such quantum-supercomputer partnerships is also still undetermined. In addition, conventional methods for performing these calculations are already well established and highly effective.

Integrating quantum computers into these workflows is attractive because it could allow larger molecules to be modeled, and modeled faster. However, Reiher is cautious about how well the emerging approach will scale.

According to Gambetta, a new iteration of IBM’s Heron Quantum Computer was launched at Riken in June, boasting reduced error rates compared to its predecessors. He anticipates noteworthy hardware advancements in the near future.

Moreover, researchers have fine-tuned the SQD algorithm to bolster how Heron and Fugaku collaborate in parallel, making the process more efficient. Meltz compares the current status to that of traditional supercomputers from the 1980s, highlighting numerous unresolved issues. Nevertheless, the infusion of new technology promises significant returns.


Source: www.newscientist.com

Elon Musk’s xAI Faces Accusations of Polluting Memphis with Its Supercomputer

Controversy surrounds Elon Musk’s artificial intelligence company xAI in Memphis, Tennessee, where the firm is building a massive supercomputer. Local residents and environmental activists are concerned about the significant air pollution the facility has generated since it was switched on last summer. Despite this, some local officials have defended Musk, citing his investments in Memphis.

A hearing with the health department is scheduled for Friday to address the competing perspectives on the issue. xAI has distributed flyers claiming low emissions to residents of the historically Black neighborhood surrounding the site. Meanwhile, environmental groups have gathered data on the pollution produced by AI companies.

Recently, the Southern Environmental Law Center disclosed that xAI had quietly installed 35 portable methane gas turbines, without the necessary air permits, to power the supercomputer. Satellite images of the facility confirmed the installation, raising concerns about its environmental impact.

Memphis Mayor Paul Young stated in a public forum that only 15 of the 35 turbines at xAI’s site were in use, and that the company has permit applications pending for the rest.

Thermal image of the Memphis site. Photo: Steve Jones/Flight by SouthWings for the Southern Environmental Law Center

Recent thermal imaging of xAI’s site revealed significant heat emissions from the turbines, indicating they were operating when the images were taken. Environmental advocates are raising concerns about the lack of oversight and transparency in xAI’s operations.

The Southern Environmental Law Center criticized xAI for operating multiple methane gas turbines without proper permits or public scrutiny. The community surrounding the site is calling for stricter regulation and monitoring of the company’s environmental impact.

Despite community concerns, Musk continues to expand xAI’s infrastructure in Memphis, aiming to double its computing power and energy storage capacity.

The energy-intensive operations of artificial intelligence companies like xAI contribute to air pollution and health concerns in nearby residential areas. The community is demanding greater transparency and accountability from xAI to protect residents’ health and environment.

xAI flyer sent to Memphis residents. Photo: Courtesy of Keshaun Pearson

Local residents are pushing for more transparency and regulation of xAI’s operations, citing health risks from the gas turbines’ emissions. Efforts are underway to challenge misinformation and ensure a clean and safe environment for all community members.

Source: www.theguardian.com

New EU initiative to provide increased support for AI startups using supercomputers for model training

The European Union plans to support its homegrown AI startups by giving them access to processing power for model training on the region’s supercomputers, under a program announced and launched in September. According to the latest information from the EU, France’s Mistral AI is taking part in an early pilot phase.

One early lesson is that the program needs to include dedicated support to teach AI startups how to make the most of the EU’s high-performance computing. “One of the things we’ve seen is that we can’t just provide access to the facility; in particular, the skills, knowledge and experience we have at our hosting centers are needed not only to facilitate this access, but also to develop training algorithms that take full advantage of the architecture and computing power currently available at each supercomputing center,” an EU official said at a press conference today. The plan is to establish a “center of excellence” to support the development of specialized AI algorithms that can run on EU supercomputers.

Rather than relying on supercomputers as a training resource, AI startups are more accustomed to training their models on specialized computing hardware rented from US hyperscalers. Access to high-performance computing for AI training is therefore being wrapped in support services, said EU officials speaking on background ahead of the formal ribbon-cutting for MareNostrum 5, a pre-exascale supercomputer that goes live on Thursday at the Barcelona Supercomputing Center in Spain.

“We are developing a facility to help small and medium-sized enterprises understand how best to use supercomputers, how to access supercomputers and how to parallelize algorithms so that they can develop models, in the case of AI,” said a European Commission official. “In 2024, we expect to see a lot more of this kind of approach than we do today.”

AI is now considered a strategic priority for the EU, they added: “Next to the AI Act, as AI becomes a strategic priority, we are providing innovation capabilities, enabling small businesses and startups to make the most of our machines and this public infrastructure. We want to provide a major window of innovation.”

Another EU official confirmed that an “AI support center” is in the works. “What we need to realize is that the AI community hasn’t used supercomputers in the past decade,” they noted. “They’re not new users of GPUs, but they’re new to how to interact with supercomputers, so we need to help them. A lot of times the AI community comes with a huge amount of knowledge about how many GPUs you can put in a box, and they’ve been very good at it. But what you have is a bunch of boxes with GPUs, and you need additional skill sets and extra help to scale out across the supercomputer and exploit its full potential.”

The bloc has significantly increased its investment in supercomputers over the past five years, expanding its hardware to a regionally distributed network of eight machines interconnected via a terabit network. It also plans to create federated supercomputing resources, accessed in the cloud and available to users across Europe. The EU’s first exascale supercomputers are expected to come online in the next few years, with one in Germany (likely next year) and a second in France (expected in 2025).
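As a concrete example of the “parallelize algorithms to scale out” skill the officials describe, here is a minimal data-parallel training sketch using PyTorch’s DistributedDataParallel, the kind of pattern typically launched with one process per GPU across a cluster’s nodes. It is a generic illustration under common assumptions, not the EU program’s actual tooling or Mistral’s training stack, and the model is a stand-in.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; the launcher (torchrun, srun, ...) sets RANK/WORLD_SIZE/LOCAL_RANK.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model standing in for a real language model.
    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 1024)
    ).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])   # gradients sync across all GPUs
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):                        # stand-in training loop with random data
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                           # gradient all-reduce happens here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=4 train.py` on each node, the same script scales from one machine to many; what supercomputing centers add is the interconnect, the scheduler and the tuning needed to keep that scaling efficient.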
The European Commission also plans to invest in quantum computing, providing hybrid resources co-located with supercomputers and combining both types of hardware so that quantum computers can act as “accelerators”. There are also plans to acquire a quantum simulator which, as the Commission notes, is a classical supercomputer. Applications already being developed on the EU’s high-performance computing hardware include Destination Earth, a project that simulates Earth’s ecosystems to better model climate change and weather systems, and a planned digital twin of the human body, which is expected to advance medicine by supporting drug development and enabling personalized medicine.

Leveraging the bloc’s supercomputing resources to boost its AI startups is emerging as a strategic priority, especially after the EU’s president announced this fall that AI model makers would be given access to its computing for training. The bloc also announced what it calls the “Large-Scale AI Grand Challenge”, a competition for European AI startups “with experience in large-scale AI models” that aims to select up to four promising homegrown startups and give them millions of hours of supercomputing access to support foundation-model development. According to the European Commission, a prize of 1 million euros will be distributed among the winners, who are expected to release the models they develop, or publish their research results, under a non-commercial open-source license.

The EU already had a program that gave industry users access to supercomputing core hours through a project-application process. But the bloc is increasing its focus on commercial AI with dedicated programs and resources, aiming to turn its growing supercomputing network into a strategic power source for scaling up “made in Europe” general-purpose AI.

It therefore seems no coincidence that France’s Mistral, an AI startup that aims to compete with US foundation-model giants like OpenAI and claims to offer “open” assets (if not fully open source), is an early beneficiary of the Commission’s supercomputer access program. (That said, the fact that a company which just raised €385 million in Series A funding from investors including Andreessen Horowitz, General Catalyst and Salesforce is at the front of the line for computing giveaways may raise some eyebrows; but it is another sign of the high-level strategic bets being made on “big AI”.)

The EU’s “supercomputing for AI” program is still in its infancy, so it remains unclear whether dedicated access will deliver enough benefit in model training to be worth the effort. (We reached out to Mistral for comment, but the company had not responded as of press time.) The Commission’s hope, at least, is that by focusing support on AI startups it can leverage its investments in high-performance computing. Supercomputer hardware is increasingly being procured and configured with AI model training in mind, and the Commission argues this could become a competitive advantage for a local AI ecosystem that, lacking hyperscaler-like US AI giants, starts at a disadvantage.

“We don’t have the massive hyperscalers that the Americans have when it comes to training these kinds of foundation models, so we’re using supercomputers, and we intend to develop a new generation of supercomputers that is increasingly attuned to AI,” a Commission official said. “The objective in 2024, not just with the supercomputers that we have now, is to move in this direction so that even more small and medium-sized businesses can use supercomputers to develop these foundation models.” The plan includes acquiring “more dedicated AI supercomputing machines based on accelerators rather than standard CPUs”, they added.

Whether the EU’s AI support strategy will align with or diverge from certain member states’ ambitions to develop national AI champions remains to be seen. That tension surfaced during the recent, difficult negotiations over the EU’s AI rulebook, in which France led the push for regulatory carve-outs for foundation models, drawing criticism from small and medium-sized businesses. But Mistral’s early presence in the EU’s supercomputing access program may suggest a consensus is forming.

Source: techcrunch.com