Quantum batteries are making their debut in quantum computers, paving the way for future quantum technologies. These innovative batteries utilize quantum bits, or qubits, that change states, differing from traditional batteries that rely on electrochemical reactions.
Research indicates that harnessing quantum characteristics may enable faster charging times, yet questions about the practicality of quantum batteries remain. “Many upcoming quantum technologies will necessitate quantum versions of batteries,” states Dian Tan from Hefei National Research Institute, China. “While significant strides have been made in quantum computing and communication, the energy storage mechanisms in these quantum systems require further investigation.”
Tan and his team constructed the battery using 12 qubits formed from tiny superconducting circuits, controlled by microwaves. Each qubit functioned as a battery cell and interacted with neighboring qubits.
The researchers tested two distinct charging protocols: one mirrored conventional charging with no quantum interactions between cells, while the other exploited those interactions. They found that using the interactions increased the power output and shortened the charging time.
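As background for how such comparisons are usually quantified (a generic definition from the quantum battery literature, not the specific figure of merit reported by Tan's team), the energy stored after charging for a time t, and the corresponding average power, are

$$
E(t) = \operatorname{Tr}\!\bigl[\rho(t)\, H_B\bigr] - \operatorname{Tr}\!\bigl[\rho(0)\, H_B\bigr], \qquad P(t) = \frac{E(t)}{t},
$$

where \(H_B\) is the Hamiltonian of the battery cells (here, the 12 qubits) and \(\rho(t)\) is their state during charging. A charging advantage means the interaction-assisted protocol reaches a given stored energy with a higher average power than charging each cell independently.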
“Quantum batteries can achieve power output up to twice that of conventional charging methods,” says Alan Santos from the Spanish National Research Council. Notably, the advantage arises from the nearest-neighbor interactions between qubits that are typical of superconducting quantum computers, although engineering additional beneficial interactions remains a practical challenge.
James Quach from Australia’s Commonwealth Scientific and Industrial Research Organisation adds that previous quantum battery experiments have utilized molecules rather than components in current quantum devices. Quach and his team have theorized that quantum batteries may enhance the efficiency and scalability of quantum computers, potentially becoming the power source for future quantum systems.
However, comparing conventional and quantum batteries remains a complex task, notes Dominik Shafranek from Charles University in the Czech Republic. In his view, how the advantages of quantum batteries would translate into practical applications is still unclear.
Kaban Modi from the Singapore University of Technology and Design asserts that while benefits exist for qubits interfacing exclusively with their nearest neighbors, their research indicates these advantages can be negated by real-world factors like noise and sluggish qubit control.
Additionally, the growing energy demands of large-scale quantum computers may make it necessary to study energy transfer within quantum systems, since such machines could incur significantly higher energy costs than classical computers, Modi emphasizes.
Tan believes that energy storage for quantum technologies, particularly in quantum computers, is a prime candidate for their innovative quantum batteries. Their next goal involves integrating these batteries with qubit-based quantum thermal engines to produce energy for storage within quantum systems.
John Martinis is a leading expert in quantum hardware, who emphasizes hands-on physics rather than abstract theories. His pivotal role in quantum computing history makes him indispensable to my book on the subject. As a visionary, he is focused on the next groundbreaking advancements in the field.
Martinis’s journey began in the 1980s with experiments that pushed the limits of quantum effects, earning him a Nobel Prize last year. During his graduate studies at the University of California, Berkeley, he tackled the question of whether quantum mechanics could apply to larger scales, beyond elementary particles.
Collaborating with colleagues, Martinis developed circuits combining superconductors and insulators, demonstrating that multiple charged particles could behave like a single quantum entity. This discovery initiated the macroscopic quantum regime, forming the backbone of modern quantum computers developed by giants like IBM and Google. His work led to the adoption of superconducting qubits, the most common quantum bits in use today.
Martinis made headlines again when he led a team at Google that built the first quantum computer to achieve quantum supremacy. For nearly five years, no classical computer could reproduce that machine's sampling of random quantum circuits, though classical algorithms eventually caught up.
Now approaching 70, Martinis still believes in the potential of superconducting qubits. In 2024, he co-founded QoLab, a quantum computing startup pursuing new approaches to building a genuinely practical quantum computer.
Carmela Padavich Callahan: Early in your career, you fundamentally impacted the field. When did you realize your experiments could lead to technological advancements?
John Martinis: I questioned whether macroscopic variables might escape quantum mechanics, and as a newcomer to the field, I felt it was essential to test that assumption. It was a fundamental quantum mechanics experiment, and it intrigued me even though it initially seemed daunting.
Our first attempt was a quick, simple experiment using the technology of the day. It failed, but I quickly regrouped. After learning microwave engineering, we tackled numerous technical challenges before the later experiments succeeded.
Over the next decade, our work on quantum devices progressed alongside a solidifying theory of quantum computing, including Shor's breakthrough algorithm for factoring large numbers, which is essential for cryptography.
How has funding influenced research and the evolution of technology?
Since the 1980s, the landscape has transformed dramatically. Initially, there was uncertainty about manipulating single quantum systems, but quantum computing has since blossomed into a vast field. It’s gratifying to see so many physicists employed to unravel the complexities of superconducting quantum systems.
Your involvement during quantum computing’s infancy gives you a unique perspective on its trajectory. How does that inform your current work?
Having long experience in the field, I possess a deep understanding of the fundamentals. My team at UC Santa Barbara developed early microwave electronics, and I later contributed to foundational cooling technology at Google for superconducting quantum computers. I appreciate both the challenges and opportunities in scaling these complex systems.
Cryostat for Quantum Computers
Mattia Balsamini/Contrasto/Eyeline
What changes do you believe are necessary for quantum computers to become practical? What breakthroughs do you foresee on the horizon?
After my tenure at Google, I reevaluated the core principles behind quantum computing systems, leading to the founding of QoLab, which introduces significant changes in qubit design and assembly, particularly regarding wiring.
We recognized that making quantum technology more reliable and cost-effective requires a fresh perspective on the construction of quantum computers. Despite facing skepticism, my extensive experience in physics affirms that our approach is on the right track.
It’s often stated that achieving a truly functional, error-free quantum computer requires millions of qubits. How do you envision reaching that goal?
The most significant advancements will arise from innovations in manufacturing, particularly in quantum chip fabrication, which is currently outdated. Many leading companies still use techniques reminiscent of the mid-20th century, which is puzzling.
Our mission is to revolutionize the construction of these devices. We aim to minimize the chaotic interconnections typically associated with superconducting quantum computers, focusing on integrating everything into a single chip architecture.
Do you foresee a clear leader in the quest for practical quantum computing in the next five years?
Given the diverse approaches to building quantum computers, each with its engineering hurdles, fostering various strategies is valuable for promoting innovation. However, many projects do not fully contemplate the practical challenges of scaling and cost control.
At QoLab, we adopt a collaborative business model, leveraging partnerships with hardware companies to enhance our manufacturing capabilities.
If a large-scale, error-free quantum computer were available tomorrow, what would your first experiment be?
I am keen to apply quantum computing solutions to challenges in quantum chemistry and materials science. Recent research highlights the potential for using quantum computers to optimize nuclear magnetic resonance (NMR) experiments, as classical supercomputers struggle with such complex quantum issues.
While others may explore optimization or quantum AI applications, my focus centers on well-defined problems in materials science, where we can craft concrete solutions with quantum technologies.
Why have mathematically predicted quantum applications not materialized yet?
While theoretical explorations of qubit behavior are promising, real qubits face significant noise, which makes practical implementations far more complex. Theorists have a comprehensive grasp of the theory but often overlook the intricacies of hardware development.
Through my training with John Clarke, I cultivated a strong focus on noise reduction in qubits, which proved beneficial in the experiments demonstrating quantum supremacy. Addressing these challenges requires dedication to understanding the intricacies of qubit design.
As we pursue advancements, a dual emphasis on hardware improvements and application innovation remains crucial in the journey to unlock quantum computing’s full potential.
IBM Quantum System Two: The Machine Behind the New Time Crystal Discovery
Credit: IBM Research
Recent advancements in quantum computing have led to the creation of a highly complex time crystal, marking a significant breakthrough in the field. This innovative discovery demonstrates that quantum computers excel in facilitating scientific exploration and novel discoveries.
Unlike conventional crystals, which feature atoms arranged in repeating spatial patterns, time crystals possess configurations that repeat over time. These unique structures maintain their cyclic behavior indefinitely, barring any environmental influences.
Initially perceived as a challenge to established physics, time crystals have been successfully synthesized in laboratory settings over the past decade. Recently, Nicholas Lorente and his team from the Donostia International Physics Center in Spain utilized an IBM superconducting quantum computer to fabricate a time crystal exhibiting unprecedented complexity.
While previous work predominantly focused on one-dimensional time crystals, this research aimed to develop a two-dimensional variant. The team employed 144 superconducting qubits configured in an interlocking, honeycomb-like arrangement, enabling precise control over qubit interactions.
By manipulating these interactions over time, the researchers not only created complex time crystals but also programmed the interactions to exhibit advanced intensity patterns, surpassing the complexity of prior quantum computing experiments.
This new level of complexity allowed the researchers to map the entire qubit system, resulting in the creation of its “state diagram,” analogous to a phase diagram for water that indicates whether it exists as a liquid, solid, or gas at varying temperatures and pressures.
According to Jamie Garcia of IBM, who was not involved in the study, this experiment could pave the way for future quantum computers capable of designing new materials based on a holistic understanding of quantum system properties, including extraordinary phenomena like time crystals.
The model emulated in this research represents such complexity that traditional computers can only simulate it with approximations. Since all current quantum computers are vulnerable to errors, researchers will need to alternate between classical estimation methods and precise quantum techniques to enhance their understanding of complex quantum models. Garcia emphasizes that “large-scale quantum simulations, involving more than 100 qubits, will be crucial for future inquiries, given the practical challenges of simulating two-dimensional systems.”
Biao Huang from the University of the Chinese Academy of Sciences notes that this research signifies an exciting advancement across multiple quantum materials fields, potentially connecting time crystals, which can be simulated with quantum computers, with other states achievable through certain quantum sensors.
What sets quantum computers apart from classical machines? Recent experiments suggest that “quantum contextuality” may be a critical factor.
Quantum computers fundamentally differ from traditional systems by leveraging quantum phenomena absent in classical electronics. Their building blocks, known as qubits, can exist in superposition, representing two normally incompatible states at once, or be linked together through a phenomenon called quantum entanglement.
Researchers at Google Quantum AI have conducted several groundbreaking demonstrations using the Willow quantum computer, revealing that quantum contextuality is also significant.
Quantum contextuality highlights an unusual aspect of measuring quantum properties. Unlike classical objects, whose attributes have definite values regardless of how or in what order they are measured, the outcomes of quantum measurements depend on which other measurements are performed alongside them.
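The underlying point, that quantum measurements need not commute, can be seen in a few lines of Python with numpy; this is a generic illustration, not part of the Google Willow experiment:

```python
import numpy as np

# Pauli observables for a single qubit
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Classical properties commute: reading A then B equals reading B then A.
# These quantum observables do not, so the measurement context matters.
print(X @ Z)                      # [[ 0 -1], [ 1  0]]
print(Z @ X)                      # [[ 0  1], [-1  0]]
print(np.allclose(X @ Z, Z @ X))  # False
```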
This phenomenon has previously been explored in special experiments with quantum light, and in 2018, researchers mathematically proved its potential application in quantum computing algorithms.
This algorithm enables quantum computers to uncover hidden patterns within larger mathematical structures in a consistent number of operations, regardless of size. In essence, quantum contextuality makes it feasible to locate a needle in a haystack, irrespective of the haystack’s dimensions.
In the experiments, the team scaled the number of qubits from a handful up to 105, analogous to increasing the size of the haystack. Although the number of steps grew as qubits were added, Willow handled noise and errors well enough to stay close to the behavior expected of an ideal quantum computer running the algorithm, and it still required fewer steps than a traditional computer would need.
Thus, quantum contextuality appears to confer a quantum advantage, allowing these computers to utilize their unique characteristics to outperform classical devices. The research team also executed various quantum protocols reliant on contextuality, yielding stronger effects than previous findings.
“Initially, I couldn’t believe it. It’s genuinely astonishing,” says Adan Cabello from the University of Seville, Spain.
“These findings definitively showcase how modern quantum computers are redefining the limits of experimental quantum physics,” states Vir Burkandani at Rice University, Texas, suggesting that a quantum computer, as a candidate for practical advantages, should accomplish these tasks to confirm its quantum capabilities.
However, this demonstration does not yet confirm the superiority of quantum technology for practical applications. The 2018 research established that quantum computers are more effective than classical ones only when using more qubits than those in Willow, as well as employing qubits with lower error rates, asserts Daniel Lidar at the University of Southern California. The next crucial step may involve integrating this new study with quantum error correction algorithms.
This experiment signifies a new benchmark for quantum computers and underscores the importance of fundamental quantum physics principles. Cabello emphasizes that researchers still lack a complete theory explaining the origins of quantum superiority, but unlike entanglement—which often requires creation—contextuality is inherently present in quantum objects. Quantum systems like Willow are now advanced enough to compel us to seriously consider the peculiarities of quantum physics.
In February, Microsoft unveiled the Majorana 1 quantum computer, igniting debates in the quantum computing community.
The Majorana 1 is noteworthy for its use of topological qubits, which promise enhanced error resistance compared to traditional qubit designs. Microsoft has pursued the development of topological qubits grounded in the elusive Majorana zero mode (MZM), facing mixed results throughout its journey.
In 2021, a significant paper from Microsoft researchers was retracted by Nature due to identified analytical flaws in their research on topological qubits. Furthermore, evaluations of experiments leading up to Majorana 1 received heavy criticism in 2023.
Consequently, the 2025 paper from Nature announcing Majorana 1 faced heightened scrutiny. Notably, the editorial team claimed, “The results in this manuscript do not represent evidence of the presence of Majorana zero mode in the reported devices.” In contrast, Microsoft’s press release asserted the opposite.
Chetan Nayak from Microsoft addressed concerns during a packed presentation at the American Physical Society Global Summit in Anaheim, California, in March. Despite presenting new data, skepticism remained prevalent among critics.
“The data presented does not demonstrate a functional topological qubit, let alone the basic components of one,” stated Henry Legg, a professor at the University of St Andrews, expressing his reservations.
In response, Nayak contended that the community’s feedback has been enthusiastic and engaged. “We’re observing thoughtful discussions and intriguing responses regarding our recent findings and ongoing efforts,” he noted.
In July, additional data emerged, with researchers such as Eun-Ah Kim at Cornell University asserting that the results show characteristics more indicative of a topological qubit than anything presented previously. “It’s encouraging to witness the progress,” she emphasized.
Nayak and his team remain optimistic about future advancements, aiming to escalate their quantum computing capabilities beyond Majorana 1. This initiative was selected for the final phase of the Quantum Benchmarking Initiative led by the U.S. Defense Advanced Research Projects Agency, focusing on practical approaches toward building viable quantum computers.
“This past year has been transformative for our quantum program, and the introduction of the Majorana 1 chip marks a crucial milestone for both Microsoft and the quantum computing sector,” stated Nayak.
Looking ahead to 2026, will Microsoft’s endeavors finally quell the critics? Legg remains doubtful: “Fundamental physics doesn’t adhere to schedules dictated by major tech corporations,” he remarked.
At Quantinuum, researchers have harnessed the capabilities of the Helios-1 quantum computer to simulate a mathematical model traditionally used to analyze superconductivity. While classical computers can perform these simulations, this breakthrough indicates that quantum technology may soon become invaluable in the realm of materials science.
Superconductors can transmit electricity without loss, yet they operate only at exceedingly low temperatures, which limits their practicality. For decades, physicists have sought to tweak the structure of superconducting materials so they work at room temperature, and many believe the key lies in a mathematical framework known as the Fermi-Hubbard model, which Quantinuum researchers, among them Henrik Dreyer, regard as a cornerstone of condensed matter physics.
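For reference, the textbook single-band Fermi-Hubbard Hamiltonian (the generic form, not necessarily the exact parameterization run on Helios-1) is

$$
H = -t \sum_{\langle i, j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow},
$$

where t is the amplitude for electrons to hop between neighboring lattice sites, U is the energy cost of two electrons with opposite spins occupying the same site, and \(n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma}\) counts the electrons of spin \(\sigma\) on site i. The competition between t and U is what makes the model both physically rich and hard to simulate classically.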
While traditional computers can simulate the Fermi-Hubbard model, they struggle with large system sizes and with properties that change over time. Quantum computers like Helios-1 are poised to excel in exactly these areas, and Dreyer and colleagues achieved a milestone by running the most extensive simulation of the Fermi-Hubbard model yet performed on a quantum platform.
The team employed Helios-1, which operates with 98 qubits made from barium ions, manipulated using lasers and electromagnetic fields to execute the simulations. By steering the qubits through a sequence of quantum states, they collected data on their properties. Their simulation encompassed 36 fermions, the class of particles, including electrons, whose behavior in superconductors the Fermi-Hubbard model describes.
Past experiments show that fermions must form pairs for superconductors to function, an effect that can be induced by laser light. The Quantinuum team modeled this scenario, applying laser pulses to the qubits and measuring the resulting states to detect signs of particle pairing. Although the simulation didn’t replicate the experiment precisely, it captured key dynamic processes that are often challenging to model using traditional computational methods with larger particle numbers.
Dreyer mentioned that while the experiment does not definitively establish an advantage for Helios-1 over classical computing, it gives the team assurance in the competitiveness of quantum computers compared to traditional simulation techniques. “Utilizing our methods, we found it practically impossible to reproduce the results consistently on classical systems, whereas it only takes hours with a quantum computer,” he stated. Essentially, the time estimates for classical calculations were so extended that determining equivalence with Helios’ performance became challenging.
The Trapped Ions Function as Qubits in the Helios-1 Chip
Quantinuum
No other quantum computer has yet attempted to simulate fermion pairing for superconductivity, and the researchers attribute their achievement to Helios’ advanced hardware. David Hayes from Quantinuum said Helios’ qubits are exceptionally reliable and perform well on industry-standard benchmarking tasks. Preliminary experiments kept qubits essentially error-free, including a feat of entangling 94 such qubits, a record across all quantum platforms. Using qubits of that quality in future simulations could further improve their precision.
Eduardo Ibarra Garcia Padilla, a researcher at Harvey Mudd College in California, said the new findings hold promise but need careful benchmarking against the leading classical computer simulations. The Fermi-Hubbard model has intrigued physicists since the 1960s, so he is eager for advanced tools to further its study.
Uncertainty surrounds the timeline for approaches like Helios-1 to rival the leading conventional computers, according to Steve White from the University of California, Irvine. He noted that many essential details remain unresolved, particularly ensuring that quantum simulations commence with the appropriate qubit properties. Nevertheless, White posits that quantum simulations could complement classical methods, particularly in exploring the dynamic behaviors of materials.
“They are progressing toward being valuable simulation tools for condensed matter physics,” he stated, but added, “It remains early days, and computational challenges persist.”
A collaborative team of physicists from Canada, the United States, the United Kingdom, and Italy has argued mathematically that our universe rests on a kind of fundamental understanding that no algorithm can capture.
The fundamental nature of reality suggests it operates beyond the capabilities of computer simulations, according to Faizal et al. Image credit: Gemini AI.
“It has been suggested that the universe could be simulated,” remarked Dr. Mir Faizal, a physicist at the University of British Columbia Okanagan.
“If such simulations were possible, then a simulated universe could potentially give rise to life and create its own simulations.”
“This recursive concept raises doubts about whether our universe is the original one or merely a simulation nested within another.”
“Previously, this notion was deemed outside the realm of scientific inquiry.”
“However, our recent findings demonstrate that it can indeed be addressed through scientific methods.”
“Our investigation hinges on the intriguing nature of reality itself.”
“Modern physics has evolved beyond Newton’s tangible ‘objects’ moving through space. With Einstein’s theory of relativity superseding Newtonian mechanics, quantum mechanics has reshaped our understanding yet again.”
The leading-edge theory today, quantum gravity, proposes that even space and time may not be fundamental; rather, they emerge from a deeper source: pure information.
“This information exists in what physicists refer to as the Platonic realm, a more fundamental mathematical basis than our physical universe. Space and time arise from this realm.”
The authors have shown that despite this information-centric foundation, reality cannot be encapsulated solely through calculations.
Utilizing powerful mathematical theorems, including Gödel’s incompleteness theorem, they established that a full and consistent account of all phenomena demands what they call non-algorithmic understanding.
“To illustrate: Computers follow recipes step by step, regardless of complexity. Yet, certain truths can only be comprehended through non-algorithmic understanding, which does not adhere to a predetermined sequence of logical steps,” they explained.
“These Gödel truths are genuine, yet they cannot be validated through computation.”
“Consider this straightforward statement: This statement cannot be proven true.”
“If it’s provable, then it’s false, rendering the logic inconsistent. If it’s not provable, then it is true. Nevertheless, any system that attempts to prove it will be incomplete. Hence, pure computation will fail.”
“Our study confirms that it’s impossible to describe the entirety of physical reality using the computational theory of quantum gravity,” stated Dr. Faizal.
“Thus, a physically complete and consistent theory cannot emerge solely from calculations.”
“Instead, we require a non-algorithmic understanding, which is more fundamental than the computational laws of quantum gravity, and thus more fundamental than spacetime itself.”
“Could the computational rules of the Platonic realm resemble those of a computer simulation? Might that realm itself not be subject to simulation?”
“No. Yet our findings unveil something more profound.”
“Through mathematical theorems associated with incompleteness and indefinability, we demonstrate that a consistently complete portrayal of reality cannot be achieved through mere calculation.”
“It necessitates a non-algorithmic understanding, which by its nature transcends algorithmic computation and cannot be simulated. Therefore, this universe cannot be a simulation.”
The research team asserts this discovery has significant implications.
“The fundamental laws of physics cannot be confined within space and time, because space and time are themselves derived from those laws,” asserted Dr. Lawrence M. Krauss, a researcher at The Origins Project Foundation.
“For a long time, it has been hoped that a truly fundamental theory of everything would eventually describe all physical phenomena through calculations grounded in these laws.”
“However, we have demonstrated that this is not feasible. A more profound approach is required to coherently explain reality: a form of understanding referred to as non-algorithmic understanding.”
“All simulations are inherently algorithmic and must adhere to programmed instructions,” Dr. Faizal remarked.
“However, the universe cannot be and never will be a simulation, as the core level of reality is rooted in non-algorithmic understanding.”
For more information, refer to the study published in the June 2025 issue of Journal of Holography Applications in Physics.
_____
Mir Faizal et al. 2025. The consequences of undecidability in physics for the theory of everything. Journal of Holography Applications in Physics 5(2):10-21; doi: 10.22128/jap.2025.1024.1118
Google has announced a significant breakthrough in quantum computing, having developed an algorithm capable of performing tasks that traditional computers cannot achieve.
This algorithm, which serves as a set of instructions for guiding the operations of a quantum computer, has the ability to determine molecular structures, laying groundwork for potential breakthroughs in areas like medicine and materials science.
However, Google recognizes that the practical application of quantum computers is still several years away.
“This marks the first occasion in history when a quantum computer has successfully performed a verifiable algorithm that surpasses the power of a supercomputer,” Google stated in a blog post. “This repeatable, beyond-classical computation establishes the foundation for scalable verification and moves quantum computers closer to practical utilization.”
Michel Devoret, Google’s chief scientist for quantum AI, who recently received the Nobel Prize in Physics, remarked that this announcement represents yet another milestone in quantum developments. “This is a further advancement towards full-scale quantum computing,” he noted.
The algorithmic advancement, allowing quantum computers to function 13,000 times faster than classical counterparts, is documented in a peer-reviewed article published in the journal Nature.
One expert cautioned that while Google’s accomplishments are impressive, they revolve around a specific scientific challenge and may not translate to significant real-world benefits. Results for two molecules were validated using nuclear magnetic resonance (NMR), akin to MRI technology, yielding insights not typically provided by NMR.
Winfried Hensinger, a professor of quantum technology at the University of Sussex, mentioned that Google has achieved “quantum superiority”, indicating that researchers have utilized quantum computers for tasks unattainable by classical systems.
Nevertheless, fully fault-tolerant quantum computers—which could undertake some of the most exciting tasks in science—are still far from realization, as they would necessitate machines capable of hosting hundreds of thousands of qubits (the basic unit of information in quantum computing).
“It’s crucial to recognize that the task achieved by Google isn’t as groundbreaking as some world-changing applications anticipated from quantum computing,” Hensinger added. “However, it represents another compelling piece of evidence that quantum computers are steadily gaining power.”
A truly capable quantum computer able to address a variety of challenges would require millions of qubits, but current quantum hardware struggles to manage the inherent instability of qubits.
“Many of the most intriguing quantum computers being discussed necessitate millions or even billions of qubits,” Hensinger explained. “Achieving this is even more challenging with the type of hardware utilized by the authors of the Google paper, which demands cooling to extremely low temperatures.”
Hartmut Neven, Google’s vice president of engineering, said that practical applications of quantum computers may still be five years away, pointing to advances embodied in an algorithm referred to as Quantum Echoes.
“We remain hopeful that within five years, Quantum Echoes will enable real-world applications that are solely feasible with quantum computers,” he said.
As a leading AI company, Google also asserts that quantum computers can generate unique data capable of enhancing AI models, thereby increasing their effectiveness.
Traditional computers represent information in bits (denoted by 0 or 1) and send them as electrical signals. Text messages, emails, and even Netflix movies streamed on smartphones consist of these bits.
Contrarily, information in a quantum computer is represented by qubits. Found within compact chips, these qubits are particles like electrons or photons that can exist in multiple states simultaneously—a concept known as superposition in quantum physics.
This characteristic enables qubits to concurrently encode various combinations of 1s and 0s, allowing computation of vast numbers of different outcomes, an impossibility for classical computers. Nonetheless, maintaining this state requires a strictly controlled environment, free from electromagnetic interference, as disturbances can easily disrupt qubits.
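A small numpy sketch (purely illustrative, not tied to any particular hardware) shows why the description of a qubit register grows so fast: an n-qubit state carries 2^n amplitudes, whereas a classical n-bit register holds just one of 2^n values at a time.

```python
import numpy as np

n = 3
plus = np.array([1, 1]) / np.sqrt(2)   # one qubit in an equal superposition of 0 and 1

state = np.array([1.0])
for _ in range(n):
    state = np.kron(state, plus)       # combine n such qubits into one register

print(len(state))                      # 8 = 2**3 amplitudes describe the register
print(state)                           # every 3-bit string appears with amplitude 1/sqrt(8)
print(np.sum(np.abs(state) ** 2))      # the probabilities still sum to 1
```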
Progress by companies like Google has led to calls for governments and industries to implement quantum-proof cryptography, as cybersecurity experts caution that these advancements have the potential to undermine sophisticated encryption.
Feedback provides the latest insights into science and technology from New Scientist, showcasing recent developments. To share intriguing items you think our readers would enjoy, email us at Feedback@newscientist.com.
Computer vs Dog
Feedback often receives emails that start with striking statements. Elliot Baptist recently wrote, expressing curiosity about the comparison of well-trained New Zealand dogs to quantum computers.
Elliot referenced a preprint by cryptographers Peter Gutmann of the University of Auckland and Stephan Neuhaus of the Zurich University of Applied Sciences. It examines efforts to build quantum computers capable of factoring very large numbers, that is, finding the two numbers that multiply together to give a target value.
This is a significant concern because many encryption systems depend on large numbers that are hard to factor. If a quantum computer is built that can easily manage large numbers, it would compromise the security of numerous servers and transactions. There have been notable advancements; for instance, IBM created a computer capable of factoring 15 in 2001 (5×3, for reference) and upgraded to 21 (7×3) by 2012. In 2019, the startup Zapata claimed they could factor 1,099,551,473,989.
However, Gutmann and Neuhaus remain optimistic about the future of encryption, noting that many of these quantum factorizations are carefully contrived. “Like stage magic, when a new quantum factorization is announced, the fascination lies not just in the trick, but in discerning how it was achieved,” they state.
So they set about replicating the quantum factorizations using other technology. How they did it with a home computer involves a detailed explanation, which we will leave to readers as an exercise. The abacus method is simpler, although the larger numbers require an abacus with 616 columns.
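For the record, the numbers quoted above are easy prey for plain trial division on any classical machine; here is a minimal Python sketch (ours, not the code from the preprint):

```python
def smallest_factor_pair(n):
    """Return (p, q) with p * q == n and p as small as possible, or None if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

for target in (15, 21, 1_099_551_473_989):
    print(target, "=", smallest_factor_pair(target))
```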
Now for the dog method. Since 15 and 21 both have 3 as a factor, a dog trained to bark three times can “factorize” either of them. “We took the recently proofed reference dog, depicted in Figure 6, and commanded it to bark for both 15 and 21,” they wrote. “This task was more complicated than expected, as Scribble performed exceptionally well and hardly barked.”
Elliot admits that he “is not qualified to judge the discussion’s validity”, and remarks that the Feedback team may be even less so. Readers with a deep understanding of quantum computing and encryption are encouraged to write in and explain what is going on. Feedback may not grasp the explanation, but we will try presenting it to one of the cats and note their reaction.
Robot Response
Feedback received inquiries about next year’s “inspirational” conference focused on love and interactions with robots, slated to occur in Z Jiang, China.
Tim Stevenson pointed out that I failed to mention a critical detail: the attendance fee. Feedback thrives on diligence, so I revisited the conference website and discovered it costs $105.98 to register. I suspect the actual tickets could hold higher prices, but I didn’t want to register just to find out.
Meanwhile, Pamela Manfield weighed in, disagreeing with Feedback’s stance. However, she acknowledged the controversy, especially given the Trump administration’s cuts to research funding.
Seasonal Injuries
Nicole Golowski wrote to spotlight research from 2023 that may have flown under our radar. She remarked it was akin to “obvious findings.” The study on “Penis Fracture: Merry Christmas Price” exemplifies this notion, as Nicole puts it, “It speaks for itself.”
Using data from Germany between 2005 and 2021, researchers examined whether “tears of the tunica albuginea surrounding the corpora cavernosa” were more frequent during certain times of the year, particularly around the holiday season. The Christmas period (December 24th-26th) and summertime exhibited a higher incidence of such injuries, while unexpectedly, the New Year (December 31st to January 2nd) did not follow this trend. The researchers proposed that “Christmas may be a risk factor for penile fractures due to the heightened intimacy and joy associated with the festive season.”
The study concludes: “Last year’s Christmas penile fractures rose in frequency. This year, let’s avoid doing anything that leads us to tears.”
Apologies for any typos: Feedback noted that this section seemed to curl up defensively.
Have you shared your thoughts with Feedback?
Stories can be submitted to feedback@newscientist.com. Make sure to include your home address. Check our website for this week’s and past Feedback editions.
Jiuzhang 4.0 early prototype, a quantum computer that has achieved quantum advantage
Chao-Yang Lu/University of Science and Technology of China
Quantum computers may have achieved a “quantum advantage” by performing tasks beyond the capabilities of the most powerful supercomputers. Experts estimate that replicating the calculations made by classical machines could take an incomprehensible amount of time, equivalent to trillions of times the age of the universe. What implications does this development hold for creating truly functional quantum computers?
The latest record holder in this domain is a quantum computer known as Jiuzhang 4.0, which uses particles of light, or photons, to perform computations. Chao-Yang Lu and his team at the University of Science and Technology of China used it for Gaussian boson sampling (GBS), which involves measuring a sample of photons after they navigate a sophisticated arrangement of mirrors and beam splitters connected to computers.
In earlier attempts at this task, the number of photons used never exceeded about 300. In contrast, Jiuzhang employed 3,090 photons, roughly ten times as many. Lu and his colleagues estimate that contemporary algorithms running on the most powerful supercomputers would require a staggering 10^42 years to replicate what Jiuzhang accomplished in just 25.6 microseconds.
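The classical hardness of GBS stems from the fact that each output probability involves a matrix function called the hafnian, a sum over all perfect pairings of the detected photons, whose cost explodes combinatorially. A toy Python sketch of the naive calculation, for intuition only and nothing like the optimized algorithms supercomputers actually use:

```python
def hafnian(A):
    """Naive hafnian of a symmetric 2n x 2n matrix: sum, over all perfect
    pairings of the indices, of the product of the paired entries."""
    n = len(A)
    if n == 0:
        return 1.0
    total = 0.0
    # pair index 0 with every other index j, then recurse on what is left
    for j in range(1, n):
        rest = [k for k in range(1, n) if k != j]
        sub = [[A[r][c] for c in rest] for r in rest]
        total += A[0][j] * hafnian(sub)
    return total

# The number of pairings of 2n indices is (2n-1)!! = 1*3*5*...*(2n-1),
# which grows faster than exponentially with the number of photons.
A = [[0, 1, 2, 3],
     [1, 0, 4, 5],
     [2, 4, 0, 6],
     [3, 5, 6, 0]]
print(hafnian(A))   # 1*6 + 2*5 + 3*4 = 28
```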
“These results are certainly an impressive technical achievement,” said Jonathan Lavoy of the Canadian quantum computing startup Xanadu, which previously held the GBS record with 219 photons. Chris Langer of Quantinuum noted that while their systems have previously demonstrated quantum advantages in various forms of quantum computing, this advancement is significant. “It’s essential to establish that quantum systems cannot be simulated by classical means,” he asserts.
Previous versions of Jiuzhang have also carried out GBS with considerable numbers of photons, but each time a classical computer eventually replicated the results, sometimes within an hour.
Bill Fefferman from the University of Chicago says he is working on a classical algorithm to beat quantum devices at this task, and notes that photonic devices have weaknesses such algorithms can exploit: many photons are lost during operation, and the systems tend to be noisy. “This time, they’ve managed to reduce the noise while simultaneously scaling up the experiment, so our algorithm has yet to find a breakthrough,” says Fefferman.
Lu points out that addressing photon loss was the primary hurdle his team faced in the latest experiment. Even so, Jiuzhang is not free of noise, which leaves an opening for new classical simulation strategies to challenge its claim to superiority.
“In my view, they haven’t achieved full power yet, but they are certainly in a position to prove that such classical strategies may not be feasible,” remarks Jelmer Renema from the University of Twente in the Netherlands.
This presents a “virtuous cycle” in which the competition between classical algorithms and quantum devices leads to a better understanding of the blurry line separating the classical and quantum realms, according to Fefferman. From a fundamental science perspective, that is a win for everyone; however, whether this kind of quantum computing can be harnessed in more powerful machines remains an open question.
Langer describes GBS as an “entry-level benchmark” that highlights the distinction between quantum and classical computers, but the results do not necessarily indicate the practical utility of such machines. From a rigorous mathematical perspective, treating GBS as concrete evidence of quantum advantage is challenging, as Nicolas Quesada at Polytechnique Montréal in Canada points out, and a clear pathway from GBS to a more broadly useful machine remains elusive.
This is primarily because Jiuzhang’s hardware is highly specialized, and programming quantum computers for a variety of calculations remains unachieved. “It might demonstrate computational advantages for narrow tasks, but it fundamentally lacks the key components for practical quantum calculations that involve fault tolerance,” explains Lavoy. Fault tolerance refers to a quantum computer’s ability to recognize and correct its own errors—an essential capability that has yet to be realized in contemporary quantum systems.
Meanwhile, Lu and his team advocate various applications stemming from Jiuzhang’s remarkable capabilities in GBS. The approach could revolutionize computations tied to image recognition, chemistry, and specific mathematical challenges associated with machine learning. Fabio Sciarrino from Sapienza University of Rome suggests that although this quantum computing paradigm is still nascent, its realization could lead to groundbreaking changes.
Specifically, advancements like Jiuzhang’s device could pave the way for the creation of extraordinary light-based quantum computers, asserts Sciarrino. These computers would be programmed in entirely innovative manners and excel in machine learning-related tasks.
While silicon has propelled advances in semiconductor technology through miniaturization, scaling challenges make new materials essential. Two-dimensional (2D) materials, characterized by their atomic thickness and high carrier mobility, offer an exciting alternative, and researchers in Pennsylvania have now built a basic computer using 2D materials.
This conceptual diagram of a 2D material-based computer features an actual scanning electron microscope image of the computer developed by Ghosh et al. Image credit: Krishnendu Mukhopadhyay/Penn State.
“Silicon has been at the forefront of significant electronic advancements for decades by enabling the ongoing miniaturization of field effect transistors (FETs),” says Professor Saptarshi Das of Penn State University.
“FETs utilize an electric field to manage current flow, activated by applied voltage.”
“Nevertheless, as silicon devices shrink, their performance tends to decline.”
“In contrast, two-dimensional materials retain outstanding electronic characteristics at atomic thickness, making them a promising avenue forward.”
Professor Das and his team engineered transistors from two different 2D materials and combined them in a complementary metal-oxide-semiconductor (CMOS) architecture to control current flow effectively.
“In CMOS technology, coordination between N-type and P-type semiconductors is critical for achieving high performance with low energy consumption. This challenge has posed significant obstacles in surpassing silicon,” remarked Professor Das.
“Previous investigations have showcased small circuits using 2D materials, yet scaling these findings into complex, functional computers has proven challenging.”
“This marks a significant achievement in our research. We are the first to create a CMOS computer entirely constructed from 2D materials.”
The researchers synthesized large-area sheets of molybdenum disulfide and tungsten diselenide through metal-organic chemical vapor deposition (MOCVD), a manufacturing technique that involves vaporizing precursor materials, triggering chemical reactions, and depositing the products onto a substrate, to fabricate each type of transistor.
“Meticulous adjustments in device fabrication and post-processing enabled us to fine-tune the threshold voltages of both the N-type and P-type transistors, which allowed us to create fully operational CMOS logic circuits.”
“Our 2D CMOS computers function at low supply voltages with minimal power usage and can execute basic logic operations at frequencies reaching 25 kilohertz.”
“Although the operating frequency is lower than that of traditional silicon CMOS circuits, our computer, known as a one instruction set computer, can perform fundamental logic operations.”
“We have also devised computational models calibrated with experimental data, accounting for inter-device variations and predicting the performance of 2D CMOS computers in comparison to top-notch silicon technology.”
“While there remains room for further optimization, this work represents a crucial milestone in harnessing 2D materials to propel advancements in electronics.”
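For readers unfamiliar with the idea, a one instruction set computer does everything with a single primitive operation. One common choice, used here purely as an illustration (the Penn State machine's instruction may differ), is SUBLEQ: subtract and branch if the result is less than or equal to zero. A minimal Python interpreter shows how computation can be built from that one instruction:

```python
def run_subleq(mem, pc=0, max_steps=1000):
    """Execute a SUBLEQ program: each instruction is three addresses (a, b, c).
    mem[b] -= mem[a]; if the result is <= 0, jump to c, otherwise continue.
    A negative jump target halts the machine."""
    steps = 0
    while 0 <= pc < len(mem) and steps < max_steps:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        steps += 1
    return mem

# Tiny program: clear the data cell at address 9, then halt.
# Instruction at 0: subtract mem[9] from itself -> 0, then jump to -1 (halt).
program = [9, 9, -1,   # subleq 9, 9, -1
           0, 0, 0,    # padding
           0, 0, 0,
           42]         # data cell at address 9
print(run_subleq(program))   # cell 9 ends up as 0
```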
The team’s research was published this month in the journal Nature.
____
S. Ghosh et al. 2025. One instruction set computer based on complementary two-dimensional material. Nature 642, 327-335; doi:10.1038/s41586-025-08963-7
Illustration of a giant object distorting spacetime
koto_feja/getty images
Exploring the mathematical nature of space-time and physical reality could pave the way for innovative computer-like systems that utilize gravity for data processing.
Is space-time an immutable expanse, or can it be distorted in ways that influence the signals travelling through it? Albert Einstein’s special theory of relativity treats space-time as fixed, but his general theory says otherwise: massive objects can create dents and curves in space-time that alter the trajectories of signals, much as a heavy ball deforms a taut sheet.
Eleftherios-Ermis Tselentis from the Brussels Institute of Technology and Ämin Baumeler of the University of Lugano in Switzerland have devised a mathematical framework to ascertain the constancy of space-time in specific regions.
They investigated a situation in which three individuals send messages among themselves, and asked: could Alice, Bob and Charlie tell whether space-time distortions had affected their exchange? If the region of space-time a signal travels through can be altered, could Alice, on receiving a message from Bob, invert the causal order of events between Charlie and Bob, so that Bob influences the space-time around her before she obtains a reply from Charlie?
Tselentis and Baumeler formulated equations to assist Alice, Bob, and Charlie in recognizing the feasibility of these scenarios. After multiple rounds of communication, they compiled data on received messages, which was subsequently integrated into their equations.
The outcomes indicate whether the exchange took place in an environment where space-time manipulation was possible. The mathematical framework is general enough that the participants need not know their locations or use any special messaging equipment.
Baumeler noted that while the general theory of relativity has long been a cornerstone of our understanding of physical existence, a rigorous mathematical connection between space-time fluctuations and information flow had been absent. Grasping the dynamics of information flow is foundational for computer science.
In this regard, he believes their research could initiate a nascent exploration of using gravitational effects to manipulate and navigate space-time for computational purposes.
“If one can harness the enigmas of physics for computation, why not explore the general theory of relativity?” said Pablo Arrighi from Université Paris-Saclay. He pointed out that other researchers have floated extreme ideas, such as placing computers near black holes, where space-time distortion slows the passage of time, potentially allowing very long calculations to deliver their results.
Nonetheless, the new theory uniquely sidesteps a focus on specialized devices or specific aspects of space-time, allowing for a broader range of applications, according to Arrighi. However, creating “gravity-based information” systems does not appear feasible at present.
Tselentis and Baumeler also acknowledged that substantial additional research is necessary before devising a functional device. Their current calculations depend on fantastical scenarios, such as moving an entire planet to interject between Charlie and Bob. Practical applications will necessitate a deeper comprehension of gravity’s effects at much smaller scales.
Gravity is notoriously weak when it comes to smaller objects, thus one doesn’t typically perceive the impact of space-time distortions with everyday items like a pencil on a desk. Yet, certain instruments, such as clocks using ultracold atoms, can detect these phenomena. Future advancements in such devices, alongside theoretical progress linking gravity and information, could enable more applicable outcomes from Tselentis and Baumeler’s mathematical research.
Their work posits that diverse frameworks, like information theory and special relativity, can shed light on how causal relationships are perceived. V. Vilasini from Université Grenoble Alpes in France notes that the new research touches on concepts such as event-order reversal, prompting inquiries into fundamental notions like what constitutes an event (for example, Alice pressing a button to dispatch a message).
She suggests that the next step involves fully integrating this approach, facilitating further exploration into the essence of space-time.
“Do astrophysical events, like black hole mergers that generate gravitational waves impacting Earth, carry a meaningful signature of the correlations examined in this study?” she inquires.
Scientists have created a device that can translate the intention to speak into spoken words in real time.
Although still experimental, the goal is to develop a brain-computer interface that can give a voice to people who are unable to speak.
In a recent study, the device was tested on a 47-year-old woman with quadriplegia who had been speech-impaired for 18 years since experiencing a stroke. The device was implanted in her brain during surgery as part of a clinical trial.
According to Gopala Anumanchipalli, co-author of the study published in Nature Neuroscience, the device “translates the intent to speak into fluent text.”
Most brain computer interfaces for speech experience a delay between thought and speech, which can disrupt conversations and cause misunderstandings. However, this new device is considered a significant advancement in the field.
The device works by recording brain activity using electrodes and generating speech based on this activity. An AI model is then trained to translate this neural activity into spoken words.
A UCSF clinical research coordinator connects a neural data port to the head of Ann, a trial participant, in El Cerrito, California, on May 22, 2023. Noah Berger/UCSF via AP
Anumanchipalli of the University of California, Berkeley, explains that the device operates similarly to existing systems used for transcribing meetings and phone calls in real time.
Positioned over the brain’s speech center, the implant translates neural signals into spoken sentences as they are formed. This “streaming approach” delivers a continuous flow of audio rather than waiting for a full sentence to be completed.
Rapid decoding helps the device keep up with the pace of natural speech, making the output sound more natural, according to Brumberg.
Funded in part by the National Institutes of Health, further research is necessary before the technology can be widely available. Anumanchipalli suggests that with sustained investment, the device could potentially be accessible to patients within the next decade.
This week signifies a shift in the writing landscape, with stories now being produced by AI models specialized in creative writing. Sam Altman, CEO of OpenAI, the company behind ChatGPT, commends the new model, suggesting that it is excelling in its creative endeavors. Writer Jeanette Winterson recently praised a metafiction piece on grief generated by the AI, lauding its beautiful execution. Various authors have been invited to assess ChatGPT’s current writing capabilities.
Nick Harkaway
I find the story to be elegantly hollow. Winterson’s idea of treating AI as “alternative intelligence” intrigues me, painting a picture of an entity with which we can engage in a relationship resembling consciousness. However, I fear it may be akin to a bird mistaking its reflection for a mate in a windowpane. What we are truly dealing with here is software, as these companies extract creative content to develop marketable tools. The decisions made by the government in this regard hold significant weight, determining whether the rights of individual creators will be preserved or tech moguls will be further empowered.
This could be a turning point for creators to establish a fair market for their data training through opt-in copyrights, enabling them to set prices and regulate the use of their work. With governmental backing, creatives can stand on equal footing with billion-dollar corporations. This may lead to creators selling their narratives for adaptation into films and TV shows.
The government’s primary choice—an opt-out system favoring tech giants—urges individuals to comply unless they voice objections. This results in many people opting out and returning to square one, where no one truly benefits.
One hopes that choosing David over Goliath will not prove an insurmountable challenge. But these are policy decisions, and the outcomes are deliberate choices.
Tracy Chevalier
A story with a metafictional premise delves into a navel-gazing realm that may seem more ludicrous than the worst AI creative writing scenario one can imagine. Sam Altman, usually seen as a technical expert, quickly grasps these nuances, guiding us through the complexities.
I am eager to witness more AI-generated “creative writing,” as it assimilates ideas, imagery, and language borrowed from established writers. The question lingers—can we fuse these elements into a cohesive narrative that encapsulates the mystical essence of humanity? Describing this essence in words is a challenge, but currently, I sense it slipping away. AI is rapidly evolving, and I fear for the future of my craft once it attains that elusive spark of magic.
Kamila Shamsie
If a Master’s student submitted this short story in my class, I would not immediately recognize it as AI-generated. I am intrigued by the promising quality of work being produced by AI at this early stage of development. However, my mind is consumed by reflections on writing, creativity, AI, and the interplay of these factors within myself.
There is a concern, highlighted by Madhumita Murgia, that AI replicates existing power structures, further marginalizing minority voices. Detecting the influence of Klara and the Sun in a short story does not stem from the author’s admiration for Ishiguro’s work, but rather from the linguistic patterns ingrained during training. This raises questions about copyright infringement and how it might affect perceptions of my own novel.
As a writer, I must contemplate the implications for my livelihood and craft. Referring to AI as a “toddler” may be misleading, as it humanizes a non-human entity. Despite these uncertainties, I eventually found myself engrossed in an AI-generated short story, appreciating its narrative without dwelling on the technological aspect. The day a compelling AI narrative emerges is both exhilarating and foreboding.
David Baddiel
Some critics argue that the story lacks genuine sentiment, portraying a “ghost democracy” akin to the metaphorical depth in Bob Dylan’s lyrics. However, I find the story clever in its metafictional prompts, drawing readers into a realm where imagination blurs the lines between human and machine. The narrative prompts introspection on the essence of humanity, utilizing human emotions like sadness to mimic a semblance of humanity.
Despite a facade of melancholy, the story constantly reminds readers of its artificial nature. The central character, Mira, and the accompanying emotions are fabrications, looping endlessly in a vacuum of emptiness. This mirrors the essence of a machine, existing in a paradox—simulating sadness without truly experiencing it. It’s a comical commentary on feigning sadness when devoid of genuine emotion, akin to a computer jesting with human sentiments. In a sense, it could be attributed to Borges’ style of storytelling.
But since then, the tech giant has come under increasing fire from researchers who say it has done nothing of the sort. “My impression is that the response of the expert physics community is overwhelmingly negative. Privately, people are just furious,” says Sergey Frolov at the University of Pittsburgh, Pennsylvania.
Microsoft’s claim is based on elusive, exotic quasiparticles called Majorana zero modes (MZMs). These can, in theory, be used to create topological qubits, a new type of qubit, the basic information-processing component of a quantum computer. Thanks to their unusual properties, such qubits should be exceptionally good at suppressing errors, addressing a major drawback of all quantum computers in use today.
MZMs are theorised to emerge from the collective behaviour of electrons at the ends of thin superconducting wires. Microsoft’s new Majorana 1 chip contains several such wires, and according to the company it hosts enough MZMs to create eight topological qubits. A Microsoft spokesperson told New Scientist the chip was a “big breakthrough for us and the industry”.
However, researchers say Microsoft has not provided sufficient evidence to support these claims. Alongside the press release, the company published a paper in the journal Nature that it said confirmed the results. “The Nature paper marks peer-reviewed confirmation that Microsoft has not only been able to create Majorana particles, which help protect quantum information from random disturbance, but can also reliably measure that information,” a Microsoft press release said.
But the editors of Nature have explicitly made it clear that this statement is incorrect. A published report on the peer-review process states: “The editorial team wishes to point out that the results in this manuscript do not represent evidence for the presence of Majorana zero modes in the reported devices.”
In other words, Microsoft and Nature directly contradict each other. “The press release says something completely different [than the Nature paper],” says Henry Legg at the University of St Andrews, UK.
That is not the only unorthodox aspect of Microsoft’s paper. Legg points out that two of the four peer reviewers initially gave rather critical, negative feedback. The peer-review report shows that by the final round of editing one reviewer still opposed publication, while the other three signed off on it. A spokesperson for Nature told New Scientist that the ultimate decision to publish came down to the possibilities the editors saw for future experiments with MZMs on Microsoft’s devices.
One of the reviewers is also an unusual choice. Hao Zhang at Tsinghua University in China previously collaborated with Microsoft on MZM research, Legg says. That work, published in Nature in 2018, was later retracted, with the team apologising for “insufficient scientific rigour” after other researchers identified inconsistencies in the results. “It’s quite shocking that Nature would choose a referee who had a paper retracted just a few years ago,” says Legg.
Zhang says there was no conflict of interest. “I was not an employee of Microsoft [the firm]. Of the more than 100 authors on the recent Microsoft paper, I have worked with three before,” he says. “That was seven years ago, and back then they were students at TU Delft [in the Netherlands], not employees of Microsoft.”
Microsoft says its team was not involved in selecting the reviewers and was unaware of Zhang’s participation until the review process was complete. A spokesperson for Nature said the decision was based on “the quality of the advice received, as can be seen from the reviewer’s comments”.
Beyond these issues, both Legg and Frolov raise more fundamental challenges to Microsoft’s methodology. Experiments involving MZMs have proven extremely difficult over the past decades, because imperfections and disorder within a device can produce false signals that mimic the quasiparticles even when they are not present. This has been a problem for Microsoft-affiliated researchers before, including in the retracted 2018 paper, whose retraction notice explicitly refers to new insights into the impact of such defects. To address this, Microsoft developed a procedure, published in the journal Physical Review B in 2023 and called the “topological gap protocol”, which it claimed could tease these cases apart.
“The whole idea of this protocol was that it was a binary test of whether Majoranas are there,” says Legg. However, his own analysis of the code and data behind Microsoft’s 2023 implementation of the protocol showed that it was less reliable than expected, and that simply changing the format of the data was enough to turn a fail into a pass. Legg says he raised these issues with Microsoft before the Nature paper was published, yet the company is still using the protocol in new research.
A spokesperson for Nature said the journal’s editorial team is “aware that some have questioned the validity of the topological gap protocol” used in the Nature paper and other publications, and that “this was an issue we were also alert to during the peer-review process”. Through that process, the reviewers determined it was not a critical issue for the paper’s conclusions, the spokesperson said.
Microsoft says it will respond to Legg’s analysis of the 2023 Physical Review B paper. “The criticism can be summarised as Legg building a straw man of our paper and attacking it,” says Microsoft’s Chetan Nayak. He disputes several points of Legg’s work, saying the 2023 paper “showed that we can confidently create topological phases and Majorana zero modes” and that the new paper only strengthens those claims.
A Microsoft spokesperson said that after the Nature paper was submitted for review, the company built on its findings and not only created multi-qubit chips but also tested how to operate those qubits in the way required for a working topological quantum computer. The company will release more details at the American Physical Society’s Global Physics Summit in March, the spokesperson said. “We look forward to sharing our results, along with the additional data behind the science, as we transform our 20-plus-year vision of quantum computing into a concrete reality.”
But for Frolov, the assertion that questionable past results can be set aside because the company is now building more sophisticated devices rests on false logic. Legg shares this view. “The fundamental issues of disorder and materials science don’t go away just because you start manufacturing fancier devices,” he says.
The mystery surrounding William Henry Gates III is well-preserved. This book delves into the early years of Gates, from his birth in 1955 to the founding of Microsoft in 1975. The sequel will reveal the next chapter of his story.
The title of the book aptly captures its essence. In the era when only humans wrote computer programs, “source code” referred to the code that powered the programs. Understanding a programming language enabled one to decipher the workings of a computer program.
What can we learn from studying Gates’ journey? Essentially, it narrates the tale of a fortunate young man. He had supportive parents who provided him with the right environment to grow emotionally and intellectually. However, he faced internal battles due to his high IQ, rebellious nature, and anxiety.
Reflecting on his upbringing, Gates acknowledges the challenges he faced in social settings and how his parents supported him. He attended a progressive private school that nurtured his talents.
Notably, Gates and his friends had access to a computer in the 1960s, which was rare at the time. This early exposure to computing led them to develop software and write programs for companies in their region.
Gates’ journey took him to Harvard, where his programming skills stood out. He dabbled with a DEC PDP-10, but shifted focus when his friend Paul Allen spotted a new microcomputer based on Intel’s 8080 processor.
Together, Gates and Allen ventured into the world of software development, leading to the establishment of Microsoft. Their early success paved the way for future accomplishments.
The book hints at Gates’ institutional expansion and legal battles, setting the stage for what’s to come in the next volume.
Gates in 1983. Photo: DOUG WILSON/CORBIS/Getty Images
The book provides valuable insights into Gates’ formative years, shedding light on his complex personality. His early struggles and triumphs set the stage for his future endeavors.
One of the defining moments in Gates’ life was the tragic loss of his best friend and programming partner, Kent Evans. This loss deeply impacted Gates and influenced his career trajectory.
In a poignant moment, Gates reflects on his conversations with Evans’ father and imagines what could have been if Evans had lived. Their shared vision laid the foundation for what would become Microsoft.
Quantum computers could use heat to eliminate errors
Chalmers University of Technology, Lovisa Håkansson
A small cooling device can automatically reset malfunctioning components in a quantum computer. Its performance suggests that manipulating heat may also enable other autonomous quantum devices.
Quantum computers are not yet fully operational because they have too many errors. In fact, if a qubit, a key component of this type of computer, is accidentally heated and has too much energy, it can end up in an incorrect state before calculations can even begin. One way to “reset” a qubit to the correct state is to cool it.
Now, Simone Gasparinetti and his colleagues at Chalmers University of Technology in Sweden have, for the first time, delegated this task to an autonomous quantum “fridge”.
The researchers built two qubits and a qutrit, a component that can store more complex information than a qubit, from tiny superconducting circuits. The qutrit and one of the qubits form a refrigerator for the second, target qubit, which can eventually be used for computation.
The team carefully designed the interactions between the three components so that, if the target qubit gains too much energy and an error occurs, heat automatically flows out of that qubit and into the other two elements. This lowers the temperature of the target qubit and resets it. Because the process is autonomous, the qubit-and-qutrit refrigerator could correct errors without any external control.
Aamir Ali, also at Chalmers University of Technology, says this approach to resetting qubits requires less new hardware and produces better results than traditional methods. Without a major redesign of the quantum computer or the introduction of new wiring, the qubit’s starting state was accurate 99.97% of the time; other reset methods typically only manage about 99.8%, he says.
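For a rough sense of what those two fidelities imply, here is a minimal Python sketch; the 99.97% and 99.8% figures come from the article, and the arithmetic is purely illustrative.

```python
# Rough comparison of the reset error rates implied by the reported fidelities.
# The fidelity figures (99.97% and 99.8%) are taken from the article; everything
# else is illustrative arithmetic.

autonomous_fidelity = 0.9997    # autonomous qubit-and-qutrit "fridge" reset
conventional_fidelity = 0.998   # typical conventional reset methods

autonomous_error = 1 - autonomous_fidelity      # roughly 3 errors per 10,000 resets
conventional_error = 1 - conventional_fidelity  # roughly 20 errors per 10,000 resets

print(f"autonomous reset error rate:   {autonomous_error:.3%}")
print(f"conventional reset error rate: {conventional_error:.3%}")
print(f"improvement factor: {conventional_error / autonomous_error:.1f}x fewer failed resets")
```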
This is a powerful example of how thermodynamic machines, which deal with heat, energy and temperature, can be useful in the quantum realm, says Nicole Yunger Halpern at the National Institute of Standards and Technology in Maryland, who worked on the project.
Traditional thermodynamic machines like heat engines sparked an entire industrial revolution, but so far quantum thermodynamics hasn’t been very practical. “We are interested in making quantum thermodynamics useful, and this potentially useful autonomous quantum refrigerator is our first example,” says Yunger Halpern.
“I’m glad that this machine has been implemented and shown to be useful. Being autonomous, it does not require external control and should be efficient and versatile,” says Nicolas Brunner at the University of Geneva, Switzerland.
Michał Horodecki at the University of Gdańsk in Poland says one of the most pressing problems for quantum computers built from superconducting circuits is keeping the machines from overheating and causing errors. He says the new experiment paves the way for many similar proposals that have yet to be tested, such as using qubits to build autonomous quantum engines.
The researchers are already considering how to take the experiment further, for example by creating autonomous quantum clocks or designing quantum computers in which other functions are driven automatically by temperature differences.
Google announces new quantum chip is the most powerful yet
Google Quantum AI
Google has unveiled a new quantum computer, reasserting its lead in the race to prove that these unusual machines can beat even the world's best conventional supercomputers. So does that mean we've finally arrived at a useful quantum computer?
Researchers at the tech giant unveiled their Sycamore quantum computing chip in 2019, becoming the first in the world to demonstrate the feat known as quantum supremacy. But since then, supercomputers have caught up and left Sycamore behind. Now, Google has produced a new quantum chip called Willow, which Julian Kelly at Google Quantum AI says is the best in the company's history.
“You can think of this as having all the benefits of Sycamore, but when you look under the hood, the geometry has changed…We've rethought the processor,” he says.
The latest version of Sycamore boasted 67 of the quantum bits, or qubits, that process information; Willow has 105. Ideally, larger quantum computers should be more powerful, but researchers have found that qubits in larger devices struggle to remain coherent and can lose their quantum nature. That has been the case for competitors too, including IBM and the California-based startup Atom Computing, both of which recently debuted quantum computers with more than 1,000 qubits.
For this reason, the quality of the qubits was a big focus for the team, and Willow's qubits can store complex quantum states, reliably encoding information, for more than five times longer than Sycamore's, Kelly says.
Google uses a specific benchmark task called random circuit sampling (RCS) to evaluate the performance of its quantum computers, and Willow excelled at it, says Hartmut Neven, also at Google Quantum AI. The task involves verifying that the distribution of numerical samples output by programs running on the chip is as close to random as possible. For several years Sycamore could do this faster than the world's best supercomputers, but in 2022, and again in 2024, new records were set by conventional computers.
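To make the benchmark less abstract, here is a toy Python sketch of the linear cross-entropy fidelity statistic that underpins random circuit sampling. It stands in a randomly drawn “ideal” distribution for the output of a real random circuit, so it illustrates the idea rather than reproducing Google's actual procedure.

```python
import numpy as np

# Toy illustration of linear cross-entropy benchmarking (XEB), the statistic used
# to score random-circuit-sampling experiments. The "ideal" distribution below is
# an exponentially distributed stand-in (mimicking Porter-Thomas statistics), not
# the output of a real quantum circuit, and this is not Google's pipeline.

rng = np.random.default_rng(0)
n_qubits = 10
dim = 2 ** n_qubits

p_ideal = rng.exponential(size=dim)   # fake "ideal" circuit output probabilities
p_ideal /= p_ideal.sum()

def linear_xeb(samples, p_ideal, dim):
    """Linear XEB estimate: dim times the mean ideal probability of the samples, minus 1."""
    return dim * p_ideal[samples].mean() - 1

faithful = rng.choice(dim, size=50_000, p=p_ideal)  # sampler matching the ideal distribution
broken = rng.integers(dim, size=50_000)             # fully noisy sampler: uniform bitstrings

print("XEB of faithful sampler:", round(linear_xeb(faithful, p_ideal, dim), 2))  # close to 1
print("XEB of uniform sampler: ", round(linear_xeb(broken, p_ideal, dim), 2))    # close to 0
```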
Google says Willow completed the task in five minutes, once again widening the gap between quantum and conventional machines: the company estimates that the prior state-of-the-art supercomputer would need 10 septillion years, far longer than the age of the universe.
For this comparison, the researchers modelled the Frontier supercomputer (recently bumped down to second most powerful in the world) as having more memory than is currently available, which only emphasises Willow's computational abilities, says Neven. And although Sycamore's record was eventually beaten, he is confident Willow will remain champion for much longer, as traditional computing methods approach their limits.
What remains to be seen is whether Willow can actually do anything useful, given the lack of practical applications for the RCS benchmark. Kelly says that while success on the benchmark is a “necessary but not sufficient” condition for a quantum computer's usefulness, chips that cannot perform well at RCS are unlikely to be useful in the future.
But the Google team has another reason to believe in Willow's bright future: the chip is very good at correcting its own errors. Quantum computers' propensity for error is one of the biggest problems currently preventing them from fulfilling their promise of being more powerful than other types of computers. To improve this, researchers, including the team at Google, are grouping physical qubits together to form “logical qubits” that are much more resilient to errors.
Using Willow, the team showed that as logical qubits get larger they become more error-resistant, making roughly half as many errors as the physical qubits that comprise them. Furthermore, each time the logical qubit was roughly doubled in size, the error rate halved again. This suggests the Google researchers have finally reached the long-sought threshold beyond which they can keep adding qubits, making quantum computers ever larger and capable of ever greater calculations without being overwhelmed by errors.
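A minimal sketch of the scaling being described, using the textbook below-threshold suppression formula from the error-correction literature; the threshold, prefactor and physical error rate here are illustrative assumptions, not Willow's measured values.

```python
# Toy model of how a logical qubit suppresses errors as it grows. The scaling
# p_logical ~ A * (p_physical / p_threshold) ** ((d + 1) / 2) is the standard
# back-of-envelope formula for surface-code-style error correction; the numbers
# below are illustrative assumptions, not measurements from Willow.

A = 0.1              # prefactor (assumed)
p_physical = 0.005   # physical error rate per operation (assumed)
p_threshold = 0.01   # threshold error rate (assumed)

for d in (3, 5, 7, 9):   # code distance: each step roughly doubles the logical qubit
    p_logical = A * (p_physical / p_threshold) ** ((d + 1) / 2)
    print(f"distance {d}: logical error rate ~ {p_logical:.1e}")

# Because p_physical is below threshold, each increase of the distance by 2
# multiplies the logical error rate by p_physical / p_threshold = 0.5, so it
# halves, the same trend the Google team reports as the logical qubit grows.
```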
“In my opinion, this is a distinctive result, and although we are still far from demonstrating a practical quantum computer, it is an important and necessary step towards that goal,” says Andrew Cleland at the University of Chicago.
Martin Weides at the University of Glasgow, UK, says the work points the way towards building quantum computers that are “fault tolerant”, meaning they can find and correct all of their errors. Although challenges remain, he says these advances pave the way for applications in cryptography and machine learning, as well as quantum chemistry, drug discovery and materials design.
The increased focus on error correction in academic labs and across the burgeoning quantum computing industry has made progress on logical qubits a key point of comparison for today's best quantum computers. In 2023, a team of researchers from Harvard University and the startup QuEra set a record for the most logical qubits ever created at once, using qubits made from ultracold rubidium atoms. Earlier this year, researchers at Microsoft and Atom Computing linked a record number of logical qubits through quantum entanglement.
Google's approach is different: instead of maximising the number of logical qubits, the focus is on making a single logical qubit bigger and better. “We could have split the chip into many smaller logical qubits and run algorithms, but we really wanted to reach this threshold, which is where all the challenges [of quantum computing] exist,” says Kelly.
But ultimately, the biggest test of Willow's impact will be the goal every other quantum computer also pursues: reliably computing things that are useful but impossible for classical computers. Neven says Sycamore has already been used for scientific discoveries in quantum physics, but the team is setting its sights on more real-world applications with Willow. “We are moving toward new calculations and simulations that could not be performed on classical computers.”
Human cognitive abilities can be greatly influenced by the presence of an audience. Although this is often associated with reputation management, which is thought to be unique to humans, it is unclear to what extent the phenomenon is shared by non-human animals. To investigate such audience effects in chimpanzees, researchers at Kyoto University recorded the performance of six chimpanzees (Pan troglodytes) over a period of six years on three numerical touchscreen tasks of varying difficulty and cognitive demands, in front of a variety of audience compositions. The results showed that chimpanzee performance was influenced by the number and type of audience members present.
To investigate whether chimpanzees’ task performance is influenced by the presence of an audience, Lin et al. analysed data from multiple chimpanzee cognitive tasks of different types. Image provided by: Akiho Muramatsu
“It was very surprising to discover that chimpanzees were influenced by the audience, and even by the human audience, in their task performance,” said Kyoto University researcher Dr. Kristen Lin.
“We might not expect chimpanzees to particularly care whether members of another species are watching them perform a task, so the fact that they are influenced by human spectators, and in a way that depends on the difficulty of the task, suggests this relationship is more complex than we initially expected.”
Lin and colleagues wanted to find out whether the audience effect, often attributed to reputation management in humans, also exists in non-human primates.
People are known to pay attention to who is watching them, sometimes unconsciously, and this can affect their performance.
Chimpanzees live in hierarchical societies, but it was not clear to what extent they, too, are influenced by those observing them.
“Our research site is special in that the chimpanzees frequently interact with, and even seem to enjoy, human company, participating in various touchscreen experiments almost daily for food rewards,” said Dr. Akiho Muramatsu of Kyoto University.
“So we saw an opportunity not only to explore potential audience-related effects, but to do so in the context of chimpanzees, which share a unique bond with humans.”
The researchers made this discovery after analyzing thousands of sessions in which chimpanzees completed touchscreen tasks over a six-year period.
The researchers found that across three different number-based tasks, the chimpanzees performed better on the most difficult task as the number of experimenters observing them increased.
In contrast, they also found that on the simplest tasks, chimpanzees performed worse when they were observed by more experimenters and other familiar people.
Scientists note that the specific mechanisms underlying these audience-related effects remain unclear, even in humans.
They suggest that further studies in non-human apes may provide more insight into how this trait evolved and why it developed.
“Our findings suggest that caring about witnesses and audiences may not be so unique to our species,” said Dr. Shinya Yamamoto of Kyoto University.
“These characteristics are a core part of how our societies came to be largely based on reputation, and if chimpanzees also pay special attention to their audience while performing tasks, it stands to reason that these audience-based traits may have evolved before reputation-based societies arose in our great-ape lineage.”
The team’s findings were published in the journal iScience.
_____
Kristen Lin et al. The presence of an audience influences chimpanzees’ performance on cognitive tasks. iScience, published online November 8, 2024; doi: 10.1016/j.isci.2024.111191
On my desk, next to my ultra-modern gaming PC, sits a strange device that resembles a spaceship control panel from a 1970s sci-fi movie. There’s no keyboard or monitor, just a few rows of colorful switches beneath a string of blinking lights. If you thought the recent proliferation of retro video game consoles, such as the Mini SNES and the Mega Drive Mini, was the peak of technology nostalgia, meet the PiDP-10: a 2/3-scale replica of the PDP-10 mainframe computer, first introduced by Digital Equipment Corporation (DEC) in 1966, designed and built by an international group of computer enthusiasts known as Obsolescence Guaranteed.
It’s a beautiful thing.
The project’s genesis dates back to 2015, when Oscar Vermeulen, a Dutch economist and lifelong computer collector, wanted to build a single replica of the PDP-8 mainframe that had fascinated him since childhood. “I had a Commodore 64 and proudly showed it to a friend of my father’s,” Vermeulen says. “He scoffed and said the Commodore was a toy. The real computer was the PDP, specifically the PDP-8. So I started looking for discarded PDP-8 computers, but I couldn’t find a single one. Now they’re collector’s items, very expensive and most of the time broken. So I decided to build a replica for myself.”
Ever the perfectionist, Vermeulen decided he needed a professionally made front panel cover. “The company that could make them told me I’d have to pay for one four-square-metre sheet of Perspex to cover 50 of these panels,” Vermeulen says. “So I made 49 extra ones, thinking I’d find 49 idiots to do it for me. Little did I know it would end up costing me thousands of dollars on my dinner table.”
At the same time, Vermeulen began posting in various vintage-computing Google Groups, where enthusiasts worked on software emulators for pre-microprocessor computers. As word spread about his replica, it quickly became a group effort that now has over 100 members. While Vermeulen focuses on designing the hardware replica (a front panel with working switches and lights), others work on different aspects of the open-source software emulation, which has a complicated history. At its core is SIMH, created by ex-DEC employee and master hacker Bob Supnik; the program emulates a variety of classic computers, and it was later extended by Richard Cornwell and Lars Brinkhoff to add support for the PDP-10. Many other people were involved with the operating system and other MIT software: some collected and preserved old backup tapes, some added improvements and debugging, and some provided documentation and schematics.
Happy hacking! …PiDP-10 replica computer in Keith Stewart’s game room Photo: Keith Stewart/The Guardian
The attention to detail is incredible. The lights on the front aren’t just decorative: they show the instructions being executed, CPU signals and memory contents, just like the original machine. Vermeulen calls it watching the heartbeat of the computer. This element was taken very seriously. “Two people spent months on one particular problem,” Vermeulen says. “You know, LEDs blink, but incandescent bulbs glow. So we studied the LEDs exhaustively to simulate the glow of the original bulbs. And we found that bulbs from different years glow for different amounts of time. Measurements were made, calculations were applied, and the glow of the lamps was added. More CPU time is spent simulating that glow than simulating the original computer.”
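As a rough illustration of that kind of lamp-glow emulation, here is a hedged Python sketch of one common approach: low-pass filtering the lamp’s on/off state so the brightness rises and fades like a heated filament. The time constants are invented for illustration, not the values the project measured.

```python
# Illustrative lamp-glow emulation: pass the instantaneous on/off lamp state
# through a first-order lag so an LED's brightness rises and decays the way an
# incandescent filament does. Constants are invented, not the PiDP's measured ones.

dt = 0.001    # simulation time step, seconds
tau = 0.04    # assumed filament thermal time constant, seconds

def step(lamp_on: bool, brightness: float) -> float:
    """Move the filtered brightness one time step toward the commanded lamp state."""
    target = 1.0 if lamp_on else 0.0
    return brightness + (target - brightness) * (dt / tau)

brightness = 0.0
for t_ms in range(200):                  # simulate 200 ms
    lamp_on = 20 <= t_ms < 80            # lamp commanded on from 20 ms to 80 ms
    brightness = step(lamp_on, brightness)
    if t_ms % 40 == 0:
        print(f"t={t_ms:3d} ms  brightness={brightness:.2f}")
```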
Why? Why go to all this trouble? First, there’s the historical importance. The PDP machines, built between 1959 and the early 1970s, were revolutionary. Not only were they much cheaper than the giant mainframes used by the military and big corporations, but they were designed to be general-purpose, fully interactive machines. Instead of writing a program on punch cards, giving it to the IT department to run on the computer, print it out, and debug it maybe a day later, PDP let you type directly into the computer and test the results immediately.
A tedious task… In the 1950s, before the advent of PDP machines, mainframe computers took up entire rooms and used punch cards to input computer programs. Photo: Pictorial Parade/Getty Images
These factors led to an explosion of experimentation. Most modern programming languages, including C, were developed on DEC machines. The PDP-10 was the heart of the MIT AI Lab, the room where the term artificial intelligence was born. “The PDP-10 computer dominated the Arpanet, the precursor to the Internet,” says Lars Brinkhoff. “Internet protocols were prototyped on the PDP-10, PDP-11, and other computers. The GNU Project was inspired by the free sharing of software and information on the PDP-10. Stephen Hawking’s artificial voice grew out of the DECtalk device, which grew out of Dennis Klatt’s speech synthesis research begun on the PDP-9.”
The PDP made its way into university labs around the world, where it was embraced by a new generation of engineers, scientists and programmers: the original computer hackers. Steve Wozniak got his start programming on a PDP-8, a small, inexpensive machine that sold by the thousands to hobbyists. Its operating system, OS/8, was a precursor to MS-DOS. Bill Gates and Paul Allen were teenage students who would sneak into the University of Washington to program the PDP-10, and it was on a PDP computer that MIT student Steve Russell and a group of friends designed the shoot-’em-up Spacewar!, one of the first video games to run on a computer.
Pioneers… Steve Russell at the California Computer History Museum, 2011. Russell stands in front of the Digital PDP-1, a computer game he developed in the early 1960s. Photo: MediaNews Group/The Mercury News/Getty Images
This legendary game wasn’t the only one. There were many others at the time, because making games was a fun way to explore possibilities. “There were Dazzle Dart, a four-player laser tennis game, and Lunar Lander,” Vermeulen says. “Maze War was the first networked video game. People connected two IMLAC minicomputer/graphics terminals to the Arpanet via a PDP-10 mainframe, and used that million-dollar pile of hardware to chase each other through a maze or shoot each other.” And the original text adventures like Colossal Cave and Zork, as well as the first multiplayer online games like MUDs and Star Trek, were also written on PDP computers.
These machines are an essential part of our digital culture, the furnace of the modern gaming and tech industries. But to be understood, they need to be used.
“The problem with computer history is that putting old computers that aren’t being used into a museum communicates very little,” says Vermeulen. “You need to experience these machines and how they worked. And the problem with computers from before about 1975 is that they were huge, heavy and nearly impossible to keep running. Microsoft co-founder Paul Allen loved his PDP-10 deeply, and with the funds he had, he was able to hire a team of skilled technicians to repair it and keep it running. But it was very expensive, and sadly, his family decided to discontinue this after he passed away.”
The answer is emulation. The PDP replica has all the look of the original terminal, including the lights and switches, but the calculations are done by a Raspberry Pi microcomputer connected to the back via a serial port. To get it running at home, just plug in the Raspberry Pi, connect a keyboard and monitor, boot it up and download the software. Then flip the switch on the front of the PDP-10, reboot the Raspberry Pi, and you’ll be in PDP mode, with a window on your monitor emulating the old Knight TV terminal display. A command line interface (remember those?) gives you access to a range of the original programs, including games.
This is what I’ve been waiting for. We all know the important role SpaceWar played in the birth of the modern games industry, but actually playing it and controlling a spaceship battling amongst vector explosions against a flickering starry sky…it feels like you’re living history.
In the 15 years since Vermeulen began developing his personal PDP-8 emulator, the Obsolescence Guaranteed group has sold hundreds of replicas and continues to develop more, including a replica of MIT’s experimental Project Whirlwind computer from the 1950s (which ran a simple version of tic-tac-toe). Today, a company in Panama called Chiriqui Electronic Design Studio manufactures the hardware. What started as a personal project has become something much bigger. “We had an ‘official’ launch of our PiDP-10 replica at MIT in Boston, where the original machine was kept. The demo session was attended by about 50 hackers from the 1970s. It was fun to see people playing the multi-user Maze War game 50 years later.”
Another reason the PiDP-10 is worth it is because it’s fun. I never imagined seeing something like this up close, much less plugging it into a monitor at home and playing with it. It was an exciting, nostalgic, and weirdly emotional experience. Navigating the ITS disk system, the glowing green dot-matrix font, the appealing list of programs and games, the “happy hacking!” message above the terminal command line – it’s very evocative.
Impressive…PiDP-10 screen. Photo: Keith Stewart/The Guardian
Meanwhile, programmers who bought PiDP machines are creating new programs and games. They range in age from 80-year-old PDP veterans to 20-year-olds who want to relive a bygone era of programming. Memory and processing power were scarce, so elegant and super-efficient code had to be written; there was no room for bloat. “Quite a few universities are using the PiDP-11 and -8 in their classes,” Vermeulen says. “Partly to show computer science students our origins, but also because the super-low-level programming still required for microcontrollers and hardware drivers is the type of coding you learn very well on these dinosaurs.”
Brinkhoff agrees that while these machines have a certain nostalgia, they also have something to teach us: They’re functional. “I enjoy writing new software for the 10, like a program to display fractals or generate QR codes,” he says.
“I hope it becomes more widely accepted, because if you don’t do anything with PiDP, it just sits on a shelf and the lights flash. It looks pretty, but I don’t think the computer can be truly happy unless you program it.”
A new study led by the University of California, Irvine, addresses a fundamental debate in astrophysics: is the existence of invisible dark matter necessary to explain how the universe works, or can physicists explain what we observe using only the matter we can detect directly?
Dark photons are hypothetical dark sector particles that have been proposed as force carriers, similar to electromagnetic photons but potentially related to dark matter. Image credit: University of Adelaide.
“Our paper shows how a real, observed relationship can be used as a basis for testing two different models of the universe,” said Dr. Francisco Mercado, a researcher at the University of California, Irvine.
“We conducted robust tests to distinguish between the two models.”
“This test required us to run computer simulations using both types of matter, normal matter and dark matter, to account for the presence of interesting features measured in real galaxies.”
“The features we discovered in galaxies would be expected to appear in a universe with dark matter, but would be difficult to explain in a universe without dark matter.”
“We have shown that such features appear in observations of many real galaxies. If we take these data at face value, it reconfirms that the dark matter model is the one that best explains the universe we live in.”
These features are patterns in the movement of stars and gas within galaxies that appear to be possible only in a universe with dark matter.
“Observed galaxies appear to follow a tight relationship between the matter we see and the dark matter we infer must be there, so much so that some have suggested this is actually evidence that our theory of gravity is wrong,” said Professor James Bullock of the University of California, Irvine.
“What we have shown is that dark matter not only predicts that relationship, but for many galaxies it can explain what we see more naturally than modified gravity.”
“I am even more convinced that dark matter is the correct model.”
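For readers curious what that tight relationship looks like, here is a small Python sketch of the radial acceleration relation using the fitting function widely quoted in the observational literature (McGaugh, Lelli & Schombert 2016). The sample accelerations are invented illustration points, not data from the paper discussed here.

```python
import numpy as np

# The radial acceleration relation (RAR) is commonly summarised by
#   g_obs = g_bar / (1 - exp(-sqrt(g_bar / g_dagger))),
# with g_dagger ~ 1.2e-10 m/s^2 (McGaugh, Lelli & Schombert 2016).
# g_bar is the acceleration expected from visible ("baryonic") matter alone;
# g_obs is what is actually observed. The sample values below are invented.

g_dagger = 1.2e-10  # characteristic acceleration scale, m/s^2

def g_obs(g_bar):
    """Observed acceleration predicted by the RAR fitting function."""
    return g_bar / (1 - np.exp(-np.sqrt(g_bar / g_dagger)))

for g_bar in (1e-12, 1e-11, 1e-10, 1e-9):
    ratio = g_obs(g_bar) / g_bar
    print(f"g_bar = {g_bar:.0e} m/s^2   g_obs = {g_obs(g_bar):.2e}   g_obs/g_bar = {ratio:.1f}")

# In the inner, high-acceleration parts of galaxies the ratio tends to 1
# (visible matter suffices); in the faint outskirts g_obs exceeds g_bar,
# the excess that dark matter, or modified gravity, is invoked to explain.
```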
These features have also appeared in observations made by proponents of a dark matter-free universe.
“The observations we looked at, the very observations in which these features were discovered, were made by proponents of the no-dark-matter theory,” said Dr. Jorge Moreno, a researcher at Pomona College.
“Despite their apparent presence in the data, these features have received little analysis from the community.”
“We needed scientists like us, who work with both ordinary matter and dark matter, to start the conversation.”
“We hope this study will spark a conversation within our research community. We also found that such features appear in simulations only when both dark matter and normal matter are present, so there may be common ground to be found.”
“When stars are born and die, they explode as supernovae, and these explosions can reshape the centres of galaxies, providing a natural explanation for the existence of these features.”
“Simply put, the features we investigated in our observations require both the presence of dark matter and the incorporation of normal matter physics.”
Now that the dark matter model of the universe appears to be the more promising one, the next step is to see whether this relationship remains consistent across different dark matter models.
“It will be interesting to see if this same relationship can even be used to distinguish between different dark matter models,” Dr. Mercado said.
“Understanding how this relationship changes under individual dark matter models could help constrain the properties of dark matter itself.”
The paper was published online in the Monthly Notices of the Royal Astronomical Society.
_____
Francisco J. Mercado et al. Hooks and bends in the radial acceleration relation: discrimination tests between dark matter and MOND. MNRAS 530 (2): 1349-1362; doi: 10.1093/mnras/stae819
Hala Point neuromorphic computer is powered by Intel’s Loihi 2 chip
Intel Corporation
Intel has developed the world’s largest neuromorphic computer, a device intended to mimic the workings of the human brain. The company hopes it will be able to run more advanced AI models than traditional computers can, but experts say there are still engineering hurdles to overcome before the device can compete with, let alone surpass, the state of the art.
Expectations for neuromorphic computers are high because they are inherently different from traditional machines. While regular computers use a processor to perform operations and store data in separate memory, neuromorphic devices use artificial neurons for both storage and calculation, much as our brains do. This eliminates the need to shuttle data between components, which can be a bottleneck in today’s computers.
This architecture could deliver far greater energy efficiency: Intel claims that when solving optimisation problems, which involve finding the best solution under given constraints, its new Hala Point neuromorphic computer uses 100 times less energy than traditional machines. It could also enable new ways of training and running AI models that use chains of neurons, similar to how a real brain processes information, rather than mechanically passing input through each layer of artificial neurons as current models do.
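To give a flavour of the event-driven style of computation involved, here is a minimal Python sketch of a leaky integrate-and-fire neuron, the generic textbook unit that neuromorphic chips implement in silicon. It is only an illustration of the principle; it is not Intel’s Loihi 2 programming interface.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: each neuron keeps its own state
# (a membrane potential) and computes only when spikes arrive, so memory and
# compute sit together. A generic textbook sketch, not Intel's Loihi 2 API.

leak = 0.9         # fraction of membrane potential retained each step (assumed)
threshold = 1.0    # firing threshold (assumed)
weight = 0.3       # synaptic weight of the single input (assumed)

potential = 0.0
input_spikes = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]   # toy incoming spike train

for t, spike in enumerate(input_spikes):
    potential = potential * leak + weight * spike   # leak, then integrate the input
    fired = potential >= threshold
    if fired:
        potential = 0.0                             # reset after emitting a spike
    print(f"t={t}  potential={potential:.2f}  fired={fired}")
```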
Hala Point contains 1.15 billion artificial neurons across 1152 Loihi 2 chips, and is capable of 380 trillion synaptic operations per second, says Mike Davies at Intel. Despite this power, it occupies only six racks in a standard server case, about the space of a microwave oven. Larger machines will also be possible, Davies says. “We built a system at this scale because, honestly, a billion neurons was a nice round number,” he says. “There weren’t any particular technical engineering challenges that made us stop at this level.”
No other existing machine can match Hala Point’s scale, though DeepSouth, a neuromorphic computer due for completion later this year, is slated to be capable of 228 trillion synaptic operations per second.
The Loihi 2 chips are still prototypes that Intel has produced in small numbers, but Davies says the real bottleneck lies in the software layer: the processing needed to take a real-world problem, translate it into a format that can run on a neuromorphic computer, and execute it. That process, like neuromorphic computing in general, is still in its infancy. “Software is a big limiting factor,” he says, which means there is still little point in building an even larger machine.
Intel has suggested that machines like Hala Point could run AI models that learn continuously, rather than having to be retrained from scratch for each new task as current models are. But James Knight at the University of Sussex, UK, dismisses this as “hype”.
Knight points out that current models like ChatGPT are trained using graphics cards running in parallel, which means many chips can be used to train the same model. Because neuromorphic computers work on a single input at a time and cannot be trained in parallel, it could take decades even to train something like ChatGPT on such hardware, he says, let alone devise a way for it to keep learning once it is up and running.
Davies says that although current neuromorphic hardware is not suited to training large AI models from scratch, he hopes that one day pre-trained models could be loaded onto it and then learn new tasks over time. “Although this capability is still in the research phase, that is the kind of continual-learning problem that large-scale neuromorphic systems like Hala Point could eventually solve in a very efficient way,” he says.
Knight is optimistic that, as the tools developers need to write software for this hardware mature, neuromorphic computers could help solve many other computer science problems, improving efficiency at the same time.
They may also offer a better path toward human-level intelligence, also known as artificial general intelligence (AGI), which many AI experts believe is impossible with the large language models that power things like ChatGPT. “I think that’s becoming a less and less controversial opinion,” Knight says. “The dream is that one day neuromorphic computing will allow us to create brain-like models.”
Microsoft and quantum computing company Quantinuum claim to have developed a quantum computer with unprecedented levels of reliability. The ability to correct its own errors could be a step toward more practical quantum computers in the near future.
“What we did here gave me goosebumps. We showed that error correction is reproducible, it works and it is reliable,” says Krysta Svore at Microsoft.
Experts have long anticipated practical quantum computers that can complete calculations too complex for traditional computers. Although quantum computers have steadily grown larger and more sophisticated, this prediction has not yet been fully realised. One big reason is that all modern quantum computers are prone to errors, and researchers have found it technically difficult to implement algorithms that detect and correct those errors during calculations.
The new experiment could be an important step toward overcoming this error problem. The researchers say that on Quantinuum’s H2 quantum processor, they ran more than 14,000 individual calculation routines without a single error.
Errors occur in classical computers too, but there error correction can be coded into programs by creating backup copies of the information being processed. This approach is not possible in quantum computing, because quantum information cannot be copied. Instead, researchers spread the information across a group of connected quantum bits, or qubits, creating what is known as a logical qubit. The Microsoft and Quantinuum team created four of these logical qubits using 30 physical qubits.
Svore says a process developed by Microsoft was used to generate these logical qubits, allowing the team to run error-free, or fault-tolerant, experiments repeatedly. Individual qubits are typically easily disturbed, but at the level of logical qubits the researchers could repeatedly detect and correct errors.
The approach was so successful, they say, that the four logical qubits produced only 0.125 per cent as many errors as the 30 qubits would have if left ungrouped. In other words, the ungrouped qubits generate roughly 800 errors for every one error generated by the logical qubits.
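The two figures in that paragraph are the same claim stated two ways, as a quick check using only the numbers quoted in the article shows.

```python
# Consistency check on the two figures quoted above: 0.125 per cent as many
# errors is the same statement as roughly 800 physical errors per logical error.

logical_error_fraction = 0.00125          # 0.125% of the ungrouped error count
print(1 / logical_error_fraction)         # -> 800.0
print(f"{1 / 800:.3%}")                   # -> 0.125%
```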
“Having a logical error rate that is 800 times lower than that of physical qubits is a huge advance in the field and brings us one step closer to fault-tolerant quantum computing,” says Mark Saffman at the University of Wisconsin, who was not involved in the experiment.
Jennifer Strabley at Quantinuum says the team’s hardware is well suited to the new experiments because it offers fine control over the qubits, and the company’s quantum computers have already achieved some of the lowest error rates on record.
In 2023, a team of Harvard University researchers and their colleagues, including members of the quantum computing startup QuEra, set the record for the largest number of logical qubits at once: 48, far more than the four in the new device. But Strabley says the new device requires fewer physical qubits for each logical qubit, and its logical qubits make fewer errors than those built by the Harvard team. “We used significantly fewer physical qubits and got better results,” she says.
However, some experts told New Scientist that without more details about the experiment, they were not yet ready to call this new research a breakthrough in quantum error correction.
It is generally believed that only quantum computers with more than 100 logical qubits will be able to tackle scientifically and socially relevant problems in fields such as chemistry and materials science, so the next challenge is to make everything bigger. Strabley and Svore say they are confident that the long-standing collaboration between Microsoft and Quantinuum will soon make that happen.
Framework is back with the new, bigger and more powerful Laptop 16, its most ambitious device yet: a highly modular, upgradeable 16-inch machine that lets you change its layout and power in minutes. It is completely different from anything else on the market.
Packed with hot-swappable components, the laptop can be customized in countless ways, transforming it from a fast and quiet workhorse by day to an LED-studded gaming PC by night.
Priced from £1,399 (€1,579/$1,399/AU$2,319), this 16-inch machine improves on the ideas that made its smaller sibling, the Laptop 13, a huge hit. In fact, everything inside your laptop can be disassembled and replaced with varying degrees of ease.
Expansion cards simply click into slots on the side of your laptop to instantly add USB-C, USB-A, HDMI, DP, Ethernet, microSD slots, expandable storage, or a headphone jack. Photo: Samuel Gibbs/The Guardian
It features the same great port-expansion system as its sibling: simply snap up to six small cards into place to use any combination of ports, card readers or expandable storage on the sides of the machine. Most cards cost less than £20, so they’re cheap enough to slide in and out as needed and to keep a collection for different tasks.
In addition, the keyboard, numeric keypad, trackpad, LED modules and spacers attach to the top deck with magnets. Without tools, you can position the trackpad or keyboard to the left, right or centre, add a number pad or macropad on either side, or swap keyboard languages and layouts in seconds, even while the laptop is running.
Diving inside, you can remove components such as the memory, storage and wireless cards with a single screwdriver. Unlike many other laptops, where parts are soldered in place, you can expand the storage and RAM yourself, and even upgrade parts later.
Simply plug the AMD Radeon RX 7700S graphics card module into the back of your machine to instantly add power to your laptop. Photo: Samuel Gibbs/The Guardian
However, the Laptop 16’s biggest trick is the large expansion module that slots in behind the screen. More powerful upgrades are available this way, including a module containing AMD’s Radeon RX 7700S discrete graphics card.
The first human patient implanted with Neuralink’s brain chip appears to have made a full recovery and is now able to use his thoughts to control a computer mouse, according to Neuralink founder Elon Musk, who shared the news late Monday.
“Things are going well, the patient appears to have made a full recovery, and there are no adverse effects that we are aware of. The patient can move the mouse on the screen just by thinking,” Musk said on the social media platform during the X Spaces event.
Musk said Neuralink is currently trying to get as many mouse button clicks from patients as possible. Neuralink did not immediately respond to a request for further details.
The company successfully implanted the chip in its first human patient last month after receiving approval to recruit for a clinical trial in September.
The study uses a robot to surgically place the brain-computer interface implant in a region of the brain that controls the intention to move, Neuralink said, adding that the initial goal is to enable people to control a computer cursor or keyboard using their thoughts alone.
Musk has grand ambitions for Neuralink, saying it will facilitate rapid surgical insertion of chip devices to treat conditions such as obesity, autism, depression and schizophrenia.
Neuralink, valued at about $5 billion last year, has faced repeated calls for scrutiny over its safety protocols. The company was fined for violating U.S. Department of Transportation regulations regarding the movement of hazardous materials.
It's a quiet morning at Studio Voltaire, a London gallery, and Danielle Brathwaite-Shirley has invited me to try a prototype of her latest artwork. It's a horror-inspired video game in which players fight to overcome the issues holding them back, from fear of failure to addiction. It is also the centrepiece of her first institutional solo exhibition, which takes change as its theme. I had a go at the game, but by the fourth round I was still crap at it. Artificial screams echo around the empty gallery. “It's meant to be super difficult!” laughs Brathwaite-Shirley. “It's all based on things I'm trying to overcome, or have overcome. That didn't take one go, it took many.”
The Rebirthing Room is Brathwaite-Shirley's latest participatory work. The idea came to her after a conversation with a curator about what an art gallery is actually for. “We were talking about how we could do more with the space. What could we do with it other than just showcasing work?” she says. “That's when I thought: wouldn't it be great if you came to the gallery and left a different person.”
The 29-year-old started making interactive art in 2020, after a misguided comment from a visitor made her question the purpose of her work. At the time, her portfolio consisted of videos and animations documenting the London burlesque scene and her Black transgender peers. The work, rendered in what she describes as a “beautiful retro aesthetic”, created an alternate reality for community members: an unconventional archival method to fill in the blind spots of the historical record. “Someone said to me, ‘I really like your work because I can enjoy the visuals and ignore what you're saying’,” Brathwaite-Shirley recalls. “I thought: this is the best feedback of my life, because you won't be able to do that any more!”
Another history…”Thou shalt not accept” in 2023. Photo: Perttu Saksa/Courtesy of the artist and Helsinki Biennale
Since then, she has started incorporating choices made by the audience to advance the work. In 2022 she released Get Home Safe, an arcade-style game inspired by her own experiences walking around Berlin at night, in which the player is tasked with guiding the protagonist safely through dark streets. Meanwhile, I Can't Follow You Anymore, a browser-based work released last year, asks audiences to navigate a revolution and decide who will be saved or sacrificed. “In interactive work, you have to make an effort to see something,” she says. “What fascinates me is the choices people make and the feelings they leave with. I think that's when the real works of art start to emerge.”
Keen to prioritise content over aesthetics, Brathwaite-Shirley's new work leans into the rudimentary pre-rendered graphics of early computer games. It is intentionally lo-fi, built from 2D animation, iPad drawings and old software, with a VHS-style finish. The forest grass on screen is made from edited photographs of her hands, and the sounds, developed from recordings of her screaming into her mobile phone, are an extension of her archival project. “I never want to touch this super shiny stuff,” she says. “I like to make people's brains work a little bit more.”
With disorienting sound effects and low lighting, Rebirthing Room is a fully immersive experience. Surrounding the screen and handmade controllers operated by the audience are giant trees covered in cloth and rows of real corn, a reference to the horror movies she grew up watching.
“I don’t need this super shiny thing”…a screenshot from The Rebirthing Room. Photograph: courtesy of the artist
“What I love about horror is that it makes you want to experience experiences and emotions that you would never experience in normal life,” she says. “If a movie is really good, there's something about it that sticks around. It's that perfect balance of being really scary, but also interesting enough to keep you watching.”
In addition to being a nifty device to “fool” viewers into their own values and beliefs, Brathwaite-Shirley's digital universe, full of demons, villains, and gore, is well-suited to the current climate. You can feel it when you are there. She says it's important to highlight not only the hostility from her outsider group, but all the “nasty nuances” that exist within her own self. She said: “I feel like we're in a very censored time; [where] Even speaking about views that your particular political group subscribes to feels dangerous because you feel like you have to say it the way they want to hear it. Therefore, for me, presenting a utopia in the environment we are currently in is a huge waste. ”
Challenging audiences is something she would like to see more of in the art world, which she feels prioritizes fun, Instagram-friendly experiences. Her aim is not for people to enjoy her work; she finds the more visceral, emotional responses far more interesting. She tells me that when a show ends with nothing but praise, she feels her work has been of no use.
She is curious to see how viewers respond to The Rebirthing Room. Will they play until they succeed? Or will they simply give up, as I did? Only time will tell. “I’m looking forward to seeing how we can go even further next time,” she says.
On Sunday, January 22, 1984, the Los Angeles Raiders defeated the Washington Redskins 38-9 in Super Bowl XVIII. With the exception of a few older Raiders fans, what most of us remember about that night 40 years ago is a single ad, one that set the tone for the techno-optimism that would dominate the 21st century.
The ad showed an auditorium full of zombie-like figures watching a projection of an elderly leader resembling the Emperor from 1980’s The Empire Strikes Back. A young, athletic woman dressed in red and white (the colors of the flag of Poland, then in the midst of a massive labor uprising against its Soviet-controlled communist state) spins a hammer and hurls it at the screen bearing the leader’s face, just as armored police rush in to stop her.
The ad explicitly referenced George Orwell’s dystopian novel 1984. At the time, then-president Ronald Reagan was beginning his re-election campaign by vowing to confront the threat of the totalitarian Soviet Union, heightening the risk of global nuclear annihilation.
That same month, Apple began selling a personal computer that would change how we think about the place of computing technology in our lives and would drive many of the ideological shifts of the 21st century. In many ways, the long 21st century began 40 years ago this week.
From a garage-based startup in Cupertino, California, Apple has steadily grown into the most valuable company in the history of the world, changing the way we experience culture and one another along the way. It was not the only force to do so; alongside other powers that left their mark on 1984, such as Reagan, Apple was part of a larger shift in how we view and govern ourselves over the following 40 years, and it still shapes daily life in ways few could have imagined at the time.
Before the Macintosh debuted, Apple was highly regarded among computer enthusiasts for producing innovative desktop computers such as the Apple II (1979), which ran programs under the standard operating systems of the day, Apple’s own Disk Operating System (similar to MS-DOS, supplied by a small, then-fledgling company called Microsoft), and which could be programmed in languages such as Basic.
Companies like Texas Instruments and Atari had brought user-friendly computers into homes before the Macintosh, and IBM and Commodore had made desktop computers for businesses, but the Macintosh promised something different.
The Macintosh created a mass market for usable computers that seemed more like magic than machines. A sealed, sleekly designed box that hid its boards and cables, it established the design standard for what would become the MacBook and, in 2007, the iPhone, the most influential and profitable of Apple’s products.
The iPhone represents much of what is appealing and loathsome about 21st-century life. It is a device that does things no other device or technology can do, yet it delivers all of that in a tightly controlled environment that masks the actual technology and the human agency that created it. For all we can tell, there might be a little elf in there.
Billions of people now use such devices, but few ever look inside or think about the people who mined the metals and assembled the parts in dangerous conditions. There are now cars and appliances designed to feel like an iPhone, all glass, metal, curves and icons, offering no clues as to how a human might build or maintain them. Everything seems like magic.
This shift to magic by design has blinded us to the real conditions in which most people in the world work and live. Gated devices resemble gated communities. What’s more, the sealed boxes carry ubiquitous cameras and location sensors, and, connected through invisible radio signals, they serve as a global surveillance system beyond anything Soviet dictators ever dreamed of. We have also entered a world of soft control beyond Orwell’s imagination.
Gated communities began to grow in popularity in the United States during the Reagan administration, offering the illusion of safety against imagined but undefined invaders. They also resembled private states, with exclusive membership and strict rules of conduct.
Reagan won re-election in a landslide in November 1984. His victory established a nearly unwavering commitment to market fundamentalism and technological optimism that was largely adopted by Reagan’s critics and even by successors such as Bill Clinton and Barack Obama. Outside the United States, ostensibly left-wing leaders such as Greece’s Andreas Papandreou, France’s François Mitterrand and Britain’s Tony Blair limited their visions of change to what the growing neoliberal consensus allowed.
By the beginning of this century, the techno-optimism embodied by Apple and the faith in neoliberalism secured by Reagan held the world’s political imagination so firmly that questioning either seemed like little more than sulking. Who doubted the democratizing and liberating potential of computer technology and free markets?
Now, a quarter of the way through this century, it is clear that the only promises kept were those made to Apple’s shareholders and to the heirs of Reagan’s politics. Democracy is in tatters around the world. Networked computers drain relationships, communities and society of joy and humanity. Economies are more stratified than ever. Politics admits no positive vision of a better future.
Of course, you can't blame Apple or Reagan. They simply distilled, harnessed, and sold back to us what we longed for: a simple story of inevitable progress and liberation. If we had heeded the warnings in Orwell's book instead of Apple's ads, we might have learned that simple stories never have happy endings.
Legal experts are calling for urgent changes to the law so that courts can recognize when a computer may be at fault, warning that without reform there is a risk of a repeat of the Horizon scandal.
Under the law of England and Wales, computers are presumed to be “reliable” unless proven otherwise, a presumption critics say effectively reverses the burden of proof in criminal cases.
Stephen Mason, a barrister and expert on electronic evidence, said: “It means that the person who says ‘there’s something wrong with this computer’ has to prove it, even though it is the party accusing them that holds the information.”
Mason, along with eight other legal and computer experts, proposed changes to the law in 2020 after the High Court’s ruling against the Post Office. However, their recommendations were never implemented.
The legal presumption of computer reliability comes from the old common law principle that “mechanical instruments” should be presumed to be in good working order unless proven otherwise.
An Act passed in 1984 provided that computer evidence was admissible only if it could be shown that the computer was operating properly, but that provision was repealed in 1999.
The international influence of English common law means the presumption of reliability is widespread, with courts in New Zealand, Singapore and the United States applying a similar standard.
Noah Weisberg, CEO of the legal AI platform Zuva, emphasized the urgency of re-evaluating the law as AI systems spread, and the need to avoid assuming that computer programs are error-free.
Of such systems, Weisberg added: “It would be difficult to say that it would be reliable enough to support a conviction.”
James Christie, a software consultant, has suggested a two-stage change to the law: those relying on computer evidence would first have to show that the system had been developed and maintained responsibly, and then disclose records of its known bugs.
The Ministry of Justice declined to comment on the matter.