IBM Introduces Two Quantum Computers with Unmatched Complexity

IBM researchers hold components of the Loon quantum computer (Image: IBM)

In the competitive landscape of developing error-resistant quantum supercomputers, IBM is adopting a unique approach distinct from its primary rivals. The company has recently unveiled two new quantum computing models, dubbed Nighthawk and Loon, which may validate its methodology and deliver the advancements essential for transforming next-gen devices into practical tools.

IBM’s design for quantum supercomputers is modular, and its central innovation is connecting superconducting qubits both within and across different quantum units. When this interconnectivity was first proposed, some researchers doubted it was feasible. Jay Gambetta at IBM says the critics’ message to the team was, in effect, “You exist in a theoretical realm; achieving this is impossible”, a claim the new machines are meant to refute.

Within Loon, every qubit connects to six others, a level of connectivity that lets information move vertically as well as laterally, something no existing superconducting quantum system had demonstrated before. Nighthawk, by contrast, links each qubit to four others.
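To get a feel for why connectivity matters, consider a toy model that treats qubits as nodes in a graph: the fewer hops between any two qubits, the shorter the circuits needed to entangle them. The Python sketch below compares a plain nearest-neighbour grid (degree four, Nighthawk-like) with the same grid plus hypothetical long-range links standing in for Loon’s extra connections; both layouts are illustrative assumptions, not IBM’s actual designs.

```python
# Toy comparison of qubit connectivity: a plain 2D grid (degree <= 4)
# versus the same grid with extra long-range couplers. The layouts are
# assumptions for illustration, not IBM's published chip designs.
from collections import deque
from itertools import product

def grid_edges(n, extra_hops=False):
    """Edges of an n x n nearest-neighbour grid; optionally add a
    long-range link skipping two rows, as a stand-in for Loon-style
    out-of-plane connections."""
    edges = set()
    for r, c in product(range(n), repeat=2):
        if r + 1 < n:
            edges.add(((r, c), (r + 1, c)))
        if c + 1 < n:
            edges.add(((r, c), (r, c + 1)))
        if extra_hops and r + 3 < n:
            edges.add(((r, c), (r + 3, c)))   # hypothetical long-range link
    return edges

def avg_distance(n, edges):
    """Mean shortest-path length between all qubit pairs (BFS per node)."""
    adj = {v: [] for v in product(range(n), repeat=2)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    total = count = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

n = 8
print("degree-4 grid   :", round(avg_distance(n, grid_edges(n)), 2))
print("with extra links:", round(avg_distance(n, grid_edges(n, True)), 2))
```

On an 8 x 8 grid, the extra links noticeably cut the average qubit-to-qubit distance, the kind of saving that translates into shorter, less error-prone circuits.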

This enhanced connectivity may be pivotal in tackling some of the most pressing issues encountered by current quantum computers. The advancements could boost computational capabilities and reduce error rates. Gambetta indicated that initial tests with Nighthawk demonstrated the ability to execute quantum programs that are 30% more complex than those on most other quantum computers in use today. Such an increase in complexity is expected to facilitate further advancements in quantum computing applications, with IBM’s earlier models already finding utility in fields like chemistry.

The industry’s ultimate objective remains the ability to cluster qubits into error-free “logical qubits.” IBM is promoting strategies that necessitate smaller groupings than those pursued by competitors like Google. This could permit IBM to realize error-free computation while sidestepping some of the financial and engineering hurdles associated with creating millions of qubits. Nonetheless, this goal hinges on the connectivity standards achieved with Loon, as stated by Gambetta.
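The intuition behind grouping qubits can be shown with a far simpler code than anything IBM uses. The Python sketch below simulates a classical repetition code, in which a group “outvotes” individual errors; it is a minimal illustration of why larger groups suppress errors, not a model of IBM’s actual scheme.

```python
# Minimal illustration of the idea behind logical qubits: grouping
# noisy physical units so errors can be outvoted. This is a simple
# classical repetition code, NOT IBM's error-correction scheme.
import random

def logical_error_rate(p_phys, group_size, trials=100_000):
    """Fraction of trials where independent bit-flips (each with
    probability p_phys) corrupt a majority of the group."""
    fails = 0
    for _ in range(trials):
        flips = sum(random.random() < p_phys for _ in range(group_size))
        if flips > group_size // 2:
            fails += 1
    return fails / trials

for size in (1, 3, 7, 15):
    print(f"group of {size:2d}: logical error rate ~ "
          f"{logical_error_rate(0.05, size):.4f}")
```

With a 5% physical error rate, the logical error rate falls rapidly as the group grows; the trade-off every scheme faces is the overhead of physical qubits per group, which is exactly what IBM hopes to keep small.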

Stephen Bartlett, a researcher at the University of Sydney in Australia, expressed enthusiasm about the enhanced qubit connectivity but noted that further testing and benchmarking of the new systems are required. “While this is not a panacea for scaling superconducting devices to a size capable of supporting genuinely useful algorithms, it represents a significant advancement,” he remarked.

However, several engineering and physical challenges remain. One crucial task is to identify the most effective way to read out a quantum computer’s results after a calculation, an area where Gambetta says IBM has recently made progress. The team, led by Matthias Steffen, also aims to extend each qubit’s “coherence time”, the length of time its quantum state remains usable for computation; adding new connections can often degrade that state. They are also developing techniques to reset certain qubits while a computation is still running.
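To see why coherence time is so central, here is a back-of-the-envelope Python sketch treating state survival as exponential decay, a standard simplification; the T2 and gate-time values are assumed for illustration and are not measurements from Loon or Nighthawk.

```python
# Toy model of coherence time: the probability that a qubit still holds
# a usable quantum state is often modelled as exponential decay,
# p(t) = exp(-t / T2). The numbers below are illustrative assumptions,
# not measured values from IBM's hardware.
import math

T2_us = 200.0        # assumed coherence time, microseconds
gate_time_us = 0.1   # assumed duration of one two-qubit gate

for n_gates in (100, 1000, 5000):
    t = n_gates * gate_time_us
    p = math.exp(-t / T2_us)
    print(f"{n_gates:5d} gates ({t:7.1f} us): "
          f"state survival probability ~ {p:.3f}")
```

The longer the coherence time relative to gate duration, the more operations fit inside the window where the quantum state is still trustworthy, which is why new couplers that shave coherence are a real cost.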

Plans are in place for IBM to launch a modular quantum computer in 2026 capable of both storing and processing information, with future tests on Loon and Nighthawk expected to provide deeper insights.

Source: www.newscientist.com

NASA and IBM Develop AI to Forecast Solar Flares Before They Reach Earth

Solar flares pose risks to GPS systems and communication satellites (Image: NASA/SDO/AIA)

AI models developed with NASA satellite imagery are now capable of forecasting the sun’s appearance hours ahead.

“I envision this model as an AI telescope that enables us to observe the sun and grasp its ‘mood’,” says Juan Bernabé-Moreno at IBM Research Europe.

The sun’s state is crucial because bursts of solar activity can bombard Earth with high-energy particles, X-rays, and extreme ultraviolet radiation. These events have the potential to disrupt GPS systems and communication satellites, as well as endanger astronauts and commercial flights. Solar flares may also be accompanied by coronal mass ejections, which can severely impact Earth’s magnetic field, leading to geomagnetic storms that could incapacitate power grids.

Bernabé-Moreno and his team at IBM and NASA built an AI model named Surya, after the Sanskrit word for ‘sun’, using nine years of data from NASA’s Solar Dynamics Observatory, a satellite that captures ultra-high-resolution images of the sun across 13 wavelength channels. The model learned to recognize patterns in this visual data and to forecast how the sun will appear in future observations.
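Surya’s architecture is not detailed here, but the underlying data framing can be sketched: forecasting is cast as next-frame prediction over a time series of multi-channel solar images. The toy Python example below uses random arrays in place of real SDO imagery and a trivial linear predictor as a stand-in for the actual model; only the 13-channel structure comes from the article.

```python
# Hypothetical sketch of the data framing behind a model like Surya:
# forecasting as next-frame prediction over multi-channel solar images.
# The 13 channels match the SDO data described in the article; the
# sizes, random data and linear model are stand-ins, not the real system.
import numpy as np

rng = np.random.default_rng(0)
T, C, H, W = 50, 13, 8, 8          # timesteps, channels, toy resolution
frames = rng.random((T, C, H, W))  # placeholder for real SDO imagery

# Build (past frame -> next frame) training pairs.
X = frames[:-1].reshape(T - 1, -1)
Y = frames[1:].reshape(T - 1, -1)

# Fit a linear next-frame predictor by least squares (a trivial
# baseline; Surya itself is a large learned model).
W_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Forecast the frame that would follow the last observation.
pred = frames[-1].reshape(1, -1) @ W_fit
print("forecast shape:", pred.reshape(C, H, W).shape)
```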

When tested against historical solar flare data, Surya predicted whether a flare would occur within the next day with 16% better accuracy than traditional machine learning models. It can also generate visualizations of an emerging flare up to two hours in advance.

“The strength of AI lies in its capacity to comprehend physics in unconventional ways. It enhances our intuition regarding physical processes,” remarks Lisa Upton at the Southwest Research Institute in Colorado.

Upton is especially eager to explore whether Surya can help predict solar activity across the whole sun, including at its poles, which NASA instruments cannot directly observe. While Surya is not explicitly designed to model the far side of the sun, it has shown promise at forecasting what the sun will look like several hours ahead as regions rotate into view, according to Bernabé-Moreno.

However, it remains uncertain whether AI models can overcome existing obstacles to accurately predicting how solar activity will affect Earth. Bernard Jackson at the University of California, San Diego, points out that there is currently no way to directly observe the configuration of the magnetic field between the sun and Earth, a crucial factor determining the direction of high-energy particles emanating from the star.

As stated by Bernabé-Moreno, this model is intended for scientific use now, but future collaborations with other AI systems that could leverage Surya’s capabilities may allow it to support power grid operators and satellite constellation owners as part of early warning frameworks.

Source: www.newscientist.com

IBM Plans to Develop a Functional Quantum Supercomputer by 2029

Rendering of IBM’s proposed quantum supercomputer (Image: IBM)

In less than five years, you could have access to an error-free quantum supercomputer, according to IBM. The company has unveiled a roadmap for a machine named Starling, which it says will be available to academic and industrial researchers by 2029.

“These are scientific dreams that have been transformed into engineering achievements,” says Jay Gambetta at IBM. He mentions that he and his team have developed all the required components to make Starling a reality, giving them confidence in their ambitious timeline. The new systems will be based in a New York data center and are expected to aid in manufacturing novel chemicals and materials.

IBM has already built a fleet of quantum computers, yet the path to truly useful devices remains challenging, and competition in the field is fierce. Errors continue to thwart most efforts to use quantum effects to solve problems that stump conventional supercomputers.

This underscores the need for a fault-tolerant quantum computer that can correct its own mistakes, the capability that would let devices scale into larger, more powerful machines. There is no universal agreement on the best strategy for achieving this, so research teams are exploring several approaches.

All quantum computers depend on qubits, but different groups build these essential units from particles of light, extremely cold atoms or, in Starling’s case, superconducting circuits. IBM is banking on two innovations to make its machine robust against significant errors.

First, Starling establishes new connections among its qubits, including those that are quite distant from one another. Each qubit is embedded within a chip, and researchers have developed new hardware to link these components within a single chip and to connect multiple chips together. This lets Starling be larger than its forerunners while executing more complex programs.

According to Gambetta, Starling will comprise tens of thousands of physical qubits, grouped into roughly 200 “logical qubits”, and will be able to run 100 million quantum operations. Within each logical qubit, several physical qubits work together as a single computational unit that is resilient to errors. For comparison, the largest current quantum computers house around 1,000 physical qubits, and the record for logical qubits is held by the quantum computing company Quantinuum, at 50.

IBM is implementing a novel method for merging physical qubits into logical qubits using low-density parity-check (LDPC) codes, a significant shift from the error-correction methods used in other superconducting quantum computers. Gambetta notes that using LDPC codes was once seen as a “pipe dream”, but his team has now worked out the crucial details to make it feasible.

The benefit of this somewhat unconventional technique is that each logical qubit built with an LDPC approach requires fewer physical qubits than competing strategies do, making the device smaller and its error correction faster.
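For a flavor of how parity-check codes work, the tiny Python example below uses the classical [7,4] Hamming code’s check matrix: a sparse matrix H maps an error pattern to a “syndrome” identifying which checks failed. Quantum LDPC codes are vastly more involved, so treat this purely as an illustration of the sparse-check mechanism, not IBM’s construction.

```python
# Tiny sketch of the parity-check idea underlying LDPC codes: a sparse
# matrix H maps an error pattern to a "syndrome" flagging which checks
# fail, without reading the data bits directly. This classical [7,4]
# Hamming example is far simpler than IBM's quantum LDPC codes.
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],    # each row is one parity check;
              [1, 0, 1, 1, 0, 1, 0],    # "low-density" means the rows
              [0, 1, 1, 1, 0, 0, 1]])   # stay sparse as codes grow

error = np.zeros(7, dtype=int)
error[4] = 1                      # flip a single bit
syndrome = H @ error % 2          # which checks fire
print("syndrome:", syndrome)      # [1 0 0] -> uniquely identifies bit 4
```

Because every single-bit error produces a distinct syndrome (each column of H is unique), the faulty bit can be located and corrected; sparse checks keep this lookup cheap even at scale, which is part of LDPC codes’ appeal.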

“IBM has consistently set ambitious goals and accomplished significant milestones over the years,” states Stephen Bartlett from the University of Sydney. “They have achieved notable innovations and improvements in the last five years, and this represents a genuine breakthrough.” He points out that both the distant qubits and the new hardware for connecting the logical qubit codes deviate from the well-performing devices IBM previously developed, necessitating extensive testing. “It looks promising, but it also requires a leap of faith,” Bartlett adds.

Matthew Otten at the University of Wisconsin-Madison says LDPC codes have only been seriously explored in recent years, and that IBM’s roadmap clarifies how such a machine would work. That matters, he says, because it helps researchers pinpoint potential bottlenecks and trade-offs; for example, Starling may run more slowly than current superconducting quantum computers.

At its intended scale, the device could address challenges relevant to sectors such as pharmaceuticals. Here, simulations of small molecules or proteins on quantum computers like Starling could replace costly and cumbersome experimental steps in drug development, Otten explains.

IBM isn’t the only contender in the quantum computing sector planning significant advances: Quantinuum and PsiQuantum have announced their intentions to build fault-tolerant, utility-scale machines by 2029 and 2027, respectively.

Source: www.newscientist.com