2025: Mathematicians Discover Cutting-Edge Advancements in Mathematics

Things Get Weird When Numbers Get Big.

Jezper / Alamy

In 2025, the Busy Beaver Challenge community offered an unprecedented glimpse into a cutting-edge corner of mathematics, where enormous numbers strain the very foundations of logical reasoning.

This exploration centers on the next number in the “Busy Beaver” sequence, a family of rapidly growing values that arise from a fundamental question: how can we determine whether a computer program will run forever?

To answer this, researchers draw upon the seminal work of mathematician Alan Turing, who demonstrated that any computer algorithm could be modeled using a simplified mechanism called a Turing machine. More intricate algorithms correspond to Turing machines with expanded instruction sets or a greater number of states.

Each Busy Beaver number, denoted BB(n), is the greatest number of steps an n-state Turing machine can execute before halting. For instance, BB(1) equals 1 and BB(2) equals 6: adding a single state already lets the longest-running machine go six times further. The growth then explodes; BB(5) reaches an astounding 47,176,870.
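To make the definition concrete, here is a minimal Python sketch (my own illustration, not from the article) that simulates what is commonly cited as the champion 2-state, 2-symbol Turing machine and confirms it halts after exactly 6 steps:

```python
# Simulate a Turing machine given as a transition table and count its
# steps until it halts. The tape is a dict defaulting to 0 everywhere.
def run(machine, state="A"):
    tape, pos, steps = {}, 0, 0
    while state != "H":                     # "H" is the halt state
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())

# (state, symbol read) -> (symbol written, head move, next state):
# the standard 2-state Busy Beaver champion.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}

print(run(bb2))  # (6, 4): halts after 6 steps, leaving four 1s on the tape
```

Determining BB(n) means proving, for every n-state machine, either that it halts or that it never will; the simulator above only helps with the ones that do halt.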

In 2024, members of the Busy Beaver Challenge succeeded in determining the exact value of BB(5), culminating a 40-year study into every Turing machine comprising five states. Consequently, 2025 became a year dedicated to pursuing BB(6).

In July, a contributor known as mxdys established a new lower bound for BB(6), showing that its value is not only vastly larger than BB(5) but dwarfs the number of atoms in the observable universe.

Because writing out all of its digits is impossible, mathematicians describe it using a notation called tetration, or repeated exponentiation. Tetrating 2 to a height of 2 means 2², which is 4; tetrating 2 to a height of 3 means raising 2 to that result, 2⁴, yielding 16, and each extra level adds another storey to the tower of exponents. BB(6) is at least as large as 2 tetrated to a height of 2 tetrated to a height of 9: a tower of exponents whose height is itself given by a tower of exponents.
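To give a feel for how fast tetration grows, here is a short Python sketch (an illustrative helper of my own, not standard notation) that evaluates such towers from the top down:

```python
# Tetration: a tower of n copies of b, e.g. tetrate(2, 3) = 2**(2**2).
def tetrate(b, n):
    result = 1
    for _ in range(n):
        result = b ** result
    return result

print(tetrate(2, 2))            # 4
print(tetrate(2, 3))            # 16
print(tetrate(2, 4))            # 65536
print(len(str(tetrate(2, 5))))  # 19729: 2**65536 already has ~20,000 digits
```

A tower of height 6 is already far too large for any computer to evaluate, which is why bounds like the one for BB(6) are stated in this notation rather than in digits.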

Discovering BB(6) transcends mere record-setting; it holds significant implications for the field of mathematics. Turing’s work implies that some Turing machines have behavior that cannot be predicted within Zermelo–Fraenkel set theory with the axiom of choice (ZFC), the framework that underpins contemporary mathematics.

Researchers have previously shown that the value of BB(745) is independent of ZFC, but whether such unprovability already strikes machines with far fewer states remains uncertain, positioning the Busy Beaver Challenge as a vital contributor to advancing our understanding.

As of July, there were 2,728 six-state Turing machines still awaiting a proof of whether they halt. By October, that number had fallen to 1,618. “The community is currently very engaged,” comments computer scientist Tristan Stérin, who launched the Busy Beaver Challenge in 2022.

Among the remaining machines lies the key to pinning down BB(6) exactly. Any one of them could prove undecidable, possibly revealing substantial limitations of the ZFC framework and of contemporary mathematics. In the coming year, math enthusiasts worldwide are poised to delve deeply into these machines.

Source: www.newscientist.com

Computers Could Resolve Mathematics’ Biggest Controversy

Computers can verify mathematical proofs

Monsisi/Getty Images

A major clash in the world of mathematics may see resolution thanks to computers, potentially bringing an end to a decade-long dispute surrounding a complex proof.

It all began in 2012 when Shinichi Mochizuki, a mathematician at Kyoto University in Japan, shocked the mathematical community with a roughly 500-page proof of the ABC conjecture, a major unsolved problem at the heart of number theory. Mochizuki’s proof relied on an intricate and obscure framework of his own devising, known as inter-universal Teichmüller (IUT) theory, which even seasoned mathematicians found difficult to grasp.

The ABC conjecture, posed more than 40 years ago, concerns a deceptively simple equation in three integers, a + b = c, and the relationships among the prime factors of those values. It offers profound insight into how addition and multiplication interact, with ramifications for other renowned problems, including Fermat’s Last Theorem.

Given these potential consequences, mathematicians initially set about verifying the proof with excitement. Early attempts stalled, however, with Mochizuki maintaining that reviewers simply needed to study his framework more closely. In 2018, two distinguished German mathematicians, Peter Scholze at the University of Bonn and Jakob Stix at Goethe University Frankfurt, announced that they had found what they believed to be a serious flaw in the proof.

Mochizuki, however, dismissed their critique. With no central authority to arbitrate the debate, the credibility of IUT theory has split the mathematical community into opposing factions, one of them a small group of researchers aligned with Mochizuki and the Research Institute for Mathematical Sciences (RIMS) at Kyoto University, where he works.

Now, Mochizuki has suggested a way out of the deadlock. He proposes translating proofs from ordinary mathematical notation, written for human readers, into a programming language called Lean, whose statements can be checked mechanically by a computer.

This approach, known as formalization, represents a promising area of research that could revolutionize the practice of mathematics. Although there have been earlier suggestions for Mochizuki to formalize his proof, this marks the first time he has publicly indicated plans to advance this initiative.
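To give a flavor of what formalization means, here is a toy Lean 4 snippet, entirely unrelated to IUT theory: once a statement is phrased this way, the Lean checker verifies every logical step mechanically, with no room for ambiguity.

```lean
-- A toy illustration of formalization (nothing to do with IUT):
-- the checker either accepts each proof term or rejects the file.
theorem two_add_two : 2 + 2 = 4 := rfl

-- A statement with variables, proved by citing a library lemma.
theorem add_comm_nat (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

Real formalization projects build thousands of such statements on top of each other; a disputed step like the one Scholze and Stix identified would have to be written out and accepted by the checker, or the proof would not compile.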

Mochizuki was unavailable for comment on this article. However, in recent reports, he asserted that Lean would be an excellent tool for clarifying certain disputes among mathematicians that have hindered acceptance of his proof. He stated, “This represents the best, and perhaps only, way to achieve significant progress in liberating mathematical truth from social and political constraints.”

Mochizuki became convinced of the advantages of formalization after attending a conference on Lean in Tokyo last July, particularly impressed by its capacity to manage the mathematical structures essential to his IUT theory.

This could be a vital step in overcoming the current stalemate, noted Kevin Buzzard from Imperial College London. “If it’s articulated using Lean, that’s not strange at all. Much of what’s found in papers is written in unusual terms, so being able to express it in Lean means that this unusual language has become universally defined,” he explains.

“We want to understand the why [of IUT], and we’ve been awaiting clarity for over a decade,” remarked Johan Commelin at Utrecht University in the Netherlands. “Lean will aid in uncovering those answers.”

However, both Buzzard and Commelin acknowledge that formalizing IUT theory is an immense challenge, requiring the translation of a vast body of mathematics that currently exists only in human-readable form. It is expected to be among the largest formalization efforts ever attempted; projects of this kind often require teams of specialists and take months or even years.

This daunting reality may dissuade the limited number of mathematicians capable of undertaking this project. “Individuals will need to decide whether they are willing to invest significant time in a project that may ultimately lead to failure,” Buzzard remarked.

Even if mathematicians complete the project and the Lean code confirms that Mochizuki’s theorem is consistent, disputes about its interpretation could still arise among mathematicians, including Mochizuki himself, according to Commelin.

“Lean has the potential to make a significant impact and resolve the controversy, but this hinges on Mochizuki’s genuine commitment to formalizing his work,” he adds. “If he abandons it after four months, claiming ‘I’ve tried this, but Lean is too limited to grasp my proof,’ it would just add another chapter to the long saga of social issues persisting.”

Despite his enthusiasm for Lean, Mochizuki concedes to his critics that interpreting the meaning of the code might lead to ongoing disputes, writing that Lean “does not appear to be a ‘magic cure’ for completely resolving social and political issues at this stage.”

Nevertheless, Buzzard remains optimistic that the formalization project, especially if successful, could propel the decade-old saga forward. “You can’t contest software,” he concludes.

Source: www.newscientist.com

Why Zero is the Most Essential Number in Mathematics

Bakhshali manuscripts contain the first example of zero in written records

PA Image/Alamy

What’s the most significant number in mathematics? It seems like an absurd question—how do you choose from an infinite range? While prominent candidates like 2 or 10 might stand a better chance than a random option among trillions, the choice is still somewhat arbitrary. However, I contend that the most critical number is zero. Allow me to explain.

The rise of zero to the pinnacle of the mathematical hierarchy resembles a classic hero’s narrative, originating from modest beginnings. When it emerged around 5000 years ago, it wasn’t even considered a number. The ancient Babylonians wrote numbers in cuneiform, a system of lines and wedges. These worked like tally marks: one kind of symbol, repeated, denoted the values 1 to 9, while another marked the tens, from 10 up to 50.

Babylonian numerals

Sugarfish

Counting could extend to 59 with these symbols, but what came after 60? The Babylonians simply restarted, using the same symbol for both 1 and 60. This base-60 system was advantageous because 60 could be divided by many other numbers, simplifying calculations. This is partly why we still use this system for time today. Yet, the inability to differentiate between 1 and 60 represented a significant limitation.

Thus emerged zero, or something like it. The Babylonians eventually adopted two diagonal wedges to signify the absence of a digit, allowing the other digits to keep their correct places.

For instance, in our modern decimal notation, 3601 means 3 thousands, 6 hundreds, 0 tens and 1 unit. In base 60, the same number is one 3600 (that is, 60²), zero 60s and one 1. Without a zero marking the empty middle position, the notation would look identical to one 60 and one 1, which is 61. Notably, though, the Babylonians didn’t treat zero as a number in its own right; their wedges functioned more like punctuation, indicating that a position should be skipped.
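The positional idea can be sketched in a few lines of Python (an illustrative helper of my own, not anything historical): converting to base 60 makes it clear why the zero placeholder matters.

```python
# Convert a non-negative integer to its list of base-60 digits,
# most significant first.
def to_base60(n):
    digits = []
    while n > 0:
        digits.insert(0, n % 60)
        n //= 60
    return digits or [0]

print(to_base60(3601))  # [1, 0, 1]: one 3600, zero 60s, one 1
print(to_base60(61))    # [1, 1]: drop the middle zero and 3601 collides with 61
```

The 0 in the middle position carries no quantity of its own; it only keeps the outer digits in their correct places, which is exactly the placeholder role the Babylonian wedges played.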

This placeholder concept was used by various ancient cultures for millennia, although not all of them adopted it. Roman numerals, for instance, lack a zero because they are non-positional: X signifies 10 regardless of where it sits. Zero’s evolution continued into roughly the 3rd century AD, as evidenced by the Bakhshali manuscript from present-day Pakistan. It features numerous dot symbols marking an empty position, and that dot eventually developed into the numeral 0 we recognize today.

Yet we had to wait a few more centuries before zero was regarded as a number in its own right, rather than merely a placeholder. Its first documented appearance in that role is in the Brāhmasphuṭasiddhānta, authored by the Indian mathematician Brahmagupta around 628 AD. While many had previously noticed the oddity of computations like subtracting 3 from 2, such explorations were frequently dismissed as nonsensical. Brahmagupta was the first to treat the idea with due seriousness, articulating arithmetic involving both negative numbers and zero. His rules for zero closely resemble our contemporary understanding, with one key exception: dividing by zero. Brahmagupta posited that 0/0 = 0, and was ambiguous about other cases of division by zero.

The dot in the Bakhshali manuscript means zero

Zoom History / Alamy

We would have to wait another millennium before arriving at a satisfactory resolution to this issue. This period ushered in one of the most potent tools in mathematics: calculus. Independently formulated by Isaac Newton and Gottfried Wilhelm Leibniz in the 17th century, calculus engages with infinitesimals—numbers that aren’t precisely zero but approach it closely. Infinitesimals allow us to navigate the concept of division by zero without crossing that threshold, proving exceptionally practical.

For a clearer illustration, consider a hypothetical scenario where you’re accelerating your car rapidly. The equation v = t² describes this speed change, where t denotes time. For instance, after 4 seconds, the velocity shifts from 0 to 16 meters/second. But how far did the car travel during this interval?

Distance is speed multiplied by time, which would suggest 16 multiplied by 4, or 64 meters. But that is an overestimate, because the car only reaches its top speed at the very end of the interval. We can do better by splitting the journey into segments and, within each segment, multiplying its top speed by its duration; the answer is still too big, but less so.

To refine the estimate further, we shrink the time windows, taking the speed at a given instant multiplied by the ever-shorter time spent at it. Here is where zero becomes significant. Graphing v = t² shows each refinement closing the gap between our estimate and reality. For perfect precision, one would have to split the journey into intervals of 0 seconds and sum them all, which amounts to dividing by zero; an impossibility until the advent of calculus.

Newton and Leibniz devised methods that approach division by zero without ever actually performing it. While a full explanation of calculus exceeds the scope of this article (consider exploring our online course for more details), their techniques reveal the true answer, derived from the integral of t², namely t³/3: evaluated at t = 4 this gives 64/3, or 21⅓ meters. The idea is often illustrated graphically as the area beneath a curve.
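The shrinking-intervals idea can be sketched numerically in Python (an illustration of the argument above, not of Newton’s or Leibniz’s actual methods): each slice overestimates using its top speed, and the total falls toward 64/3 as the slices shrink.

```python
# Approximate the distance travelled under v = t**2 over 0..4 seconds,
# using n equal slices and each slice's top (right-endpoint) speed.
def approx_distance(n):
    dt = 4 / n
    return sum(((i + 1) * dt) ** 2 * dt for i in range(n))

for n in (4, 40, 400, 4000):
    print(n, approx_distance(n))
# the estimates decrease toward the exact answer 4**3 / 3 = 21.333...
```

Calculus is what lets us leap from these ever-better approximations to the exact limit without ever dividing by zero.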

Calculus serves purposes beyond simply calculating a car’s distance. In fact, it’s utilized across numerous disciplines that require comprehension of shifting quantities, from physics to chemistry to economics. None of these advancements would have been possible without zero and our understanding of its profound capabilities.

However, for me, the true legacy of zero shines in the late 19th and early 20th centuries. For centuries, mathematics faced a crisis of identity. Mathematicians and logicians rigorously examined the foundations of their fields, uncovering alarming inconsistencies. In a bid to reinforce their disciplines, they began to define mathematical objects—numbers included—more explicitly than ever before.

What exactly constitutes a number? It can’t simply be a word like “three” or a symbol like “3”, as these are mere arbitrary labels we attach to the concept of three objects. We might point to a collection of fruits—apples, pears, and bananas—and say, “There are three pieces of fruit in this bowl,” yet we haven’t captured what threeness itself is. What’s needed is an abstract object we can identify as “3”. Modern mathematics achieves this through zero.

Mathematicians operate with sets, rather than loose collections. For instance, a fruit collection would be represented as {apple, pear, banana}, with curly braces indicating a set. Set theory forms the bedrock of contemporary mathematics, akin to “computer code” for this discipline. To guarantee logical consistency and prevent the fundamental gaps discovered by mathematicians, every mathematical object must ultimately be articulated in terms of sets.

To define numbers, mathematicians start with the “empty set”, the set with zero elements. This can be written {}, but for clarity it is often denoted ∅. Zero is defined as the empty set itself. With that established, the remaining numbers follow. One is a set containing one object, and the only object available is the empty set: {∅}. Two needs a set with two objects; the first can again be ∅, and the second can be the set we just built, giving {∅, {∅}}. Three becomes {∅, {∅}, {∅, {∅}}}, and so on indefinitely: each number is the set of all the numbers before it.
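This construction can be mimicked directly with Python’s immutable sets (a toy sketch of the set-theoretic definition, nothing more):

```python
# Build the number n out of nothing but the empty set:
# 0 = {}, and each successor is n + 1 = n ∪ {n}.
def number(n):
    s = frozenset()            # 0 is the empty set
    for _ in range(n):
        s = s | {s}            # add the set built so far as a new element
    return s

print(number(0))         # frozenset(): zero, the empty set
print(len(number(3)))    # 3: the set for 3 contains the sets for 0, 1 and 2
```

Conveniently, the set for n always has exactly n elements, so counting really does fall out of the empty set alone.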

In summary, zero is not merely the most vital number; it can be regarded as the only number in a certain light. Within any given number, zero is always present at its core. Quite an accomplishment for something once dismissed as a mere placeholder.

Source: www.newscientist.com

The Unsung Genius of Mathematics You Likely Don’t Know

Alexander Grothendieck was a towering figure in mathematics

ihes

Ask someone to name the greatest physicist of the 20th century and Albert Einstein will likely be the first name that comes to mind. Ask the same question about mathematics, however, and you may well be met with silence. Let me introduce you to Alexander Grothendieck.

Einstein, known for formulating the theory of relativity and playing a pivotal role in the advancement of quantum mechanics, became not only an influential physicist but a cultural icon. Grothendieck, too, revolutionized mathematics in profound ways, but he withdrew from public and academic life before his passing, leaving behind a legacy characterized solely by his groundbreaking contributions.

Part of the difference is narrative charm: Einstein’s ideas came with vivid stories, such as the twin paradox, that made them accessible, while Grothendieck’s work veers into intricate and abstract territory. I will endeavor to shed light on some of these profound ideas, even if my coverage is necessarily superficial.

To begin, Grothendieck is primarily renowned among mathematicians for revolutionizing the foundations of algebraic geometry, a domain examining the interplay between algebraic equations and geometric shapes. For instance, the equation x² + y² = 1 creates a circle of radius one when graphed.
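A quick numerical sketch (illustrative only) confirms the correspondence: every point of the form (cos t, sin t) lies on the shape defined by x² + y² = 1, whatever the angle t.

```python
import math

# Points parametrised by an angle t all satisfy the circle equation.
for t in (0, 0.5, 1.0, 2.5, math.pi):
    x, y = math.cos(t), math.sin(t)
    print(round(x**2 + y**2, 10))  # 1.0 each time
```

This two-way dictionary, equations on one side and shapes on the other, is the starting point of algebraic geometry.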

René Descartes, the 17th-century philosopher and mathematician, was among the first to formalize the relationship between algebra and geometry. That intersection, nevertheless, is far more intricate than it first appears. Mathematicians are keen on generalizing, allowing them to form connections that were not previously evident. Grothendieck excelled at this—his life was recounted in a book describing “the search for the greatest generality,” a hallmark of his mathematical ethos.

Taking our previous example, the points satisfying the equation and forming the circle are referred to as “algebraic varieties.” These varieties may reside not only on a Cartesian plane but also in three-dimensional space (like a sphere) or even in higher dimensions.

This foundational idea was merely the beginning for Grothendieck. As an illustration, consider the equations x² = 0 and x = 0. Each has a single solution where x equals 0, meaning the set of points (algebraic varieties) is identical. However, these equations are distinct. In 1960, during his quest for broader generality, Grothendieck introduced the notion of “schemes.”

What does this entail? It involves another concept, the “ring.” Confusingly, this term has no relation to circles. In mathematics, “rings” represent collections of objects that remain within that set when added or multiplied. In many respects, a ring is self-contained, akin to its namesake.

The simplest example of a ring is the integers: all negative whole numbers, all positive ones, and zero. However you combine integers by addition or multiplication, you stay within the integers. A defining feature of a ring is also the presence of a “multiplicative identity”; for the integers this is 1, since multiplying any integer by 1 leaves it unchanged. The full definition further requires every element to have an additive inverse, which also tells us what is not a ring: the positive integers alone fail, because they contain no negatives.
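The closure and identity properties described above can be checked by brute force for a small finite example, arithmetic modulo 6 (a sketch of my own, assuming nothing beyond the definition in the text):

```python
# The integers modulo 6 form a finite ring: verify closure under
# addition and multiplication, and that 1 acts as the identity.
elements = range(6)

closed_add = all((a + b) % 6 in elements for a in elements for b in elements)
closed_mul = all((a * b) % 6 in elements for a in elements for b in elements)
has_identity = all((a * 1) % 6 == a for a in elements)

print(closed_add, closed_mul, has_identity)  # True True True
```

Finite rings like this one are exactly the kind of "number system" Weil's conjectures, discussed below, count solutions over.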

Through the introduction of schemes, Grothendieck effectively combined the notion of algebraic varieties with that of rings, addressing the missing elements for equations such as x² = 0 and x = 0 while utilizing geometric tools.

Handwritten notes by Alexander Grothendieck in 1982

University of Montpellier, Grothendieck Archives

This leads to two significant challenges that became pivotal for mathematicians. The first concerns four conjectures proposed by the mathematician André Weil in 1949 about counting the solutions of certain kinds of algebraic varieties. In the circle example, infinitely many values satisfy x² + y² = 1 (a circle contains infinitely many points). Weil, however, focused on varieties defined over finite number systems, which permit only a finite number of solutions, and conjectured that a tool called the zeta function could be used to count them.

Using schemes, Grothendieck and his colleagues had proved three of Weil’s conjectures by 1965. The fourth was proved by his former student Pierre Deligne in 1974 and is viewed as one of the most significant results of 20th-century mathematics, settling a question that had puzzled mathematicians for 25 years. The success underscored the power of Grothendieck’s schemes in linking geometry with number theory.

Schemes also played a crucial role in the resolution of the infamous Fermat’s Last Theorem, a problem that confounded mathematicians for over 350 years until Andrew Wiles solved it in 1995. The theorem states that no three positive integers a, b and c satisfy the equation aⁿ + bⁿ = cⁿ for any integer n greater than 2. Fermat famously claimed a proof too large to fit in the margin of his book, though he most likely had no valid proof at all. Wiles’s solution drew on methods developed in Grothendieck’s wake, using algebraic geometry to reformulate the problem in terms of elliptic curves, a particularly important class of algebraic varieties studied through the lens of schemes.

There remains a wealth of Grothendieck’s work that I have not explored, which forms the foundational tools many mathematicians rely on today. For instance, he generalized the concept of “space” to encompass “topoi,” introducing not only points within a space but also additional nuanced information, enriching problem-solving approaches. Alongside his collaborators, he authored two extensive texts on algebraic geometry which now serve as the essential reference works for the discipline.

Despite the magnitude of his influence, why does Grothendieck remain somewhat obscure? His work is undeniably complex, demanding considerable effort to understand. But he also receded from view for other reasons. A committed pacifist, he protested against the actions of the Soviet government by declining to travel to the 1966 ceremony at which he was awarded the Fields Medal, and he famously stated that “fruitfulness is measured not by honors, but by offspring,” preferring to let his mathematical contributions stand on their own merit.

In 1970, Grothendieck withdrew from academia, resigning from the Institut des Hautes Études Scientifiques (IHÉS) near Paris in protest at its partial funding by the military. Though he initially continued his mathematics outside formal institutions, he grew increasingly isolated. In 1986, he wrote an autobiographical work, Récoltes et Semailles (Harvest and Sowing), detailing his mathematical journey and his disillusionment with the field. The following year he produced a philosophical manuscript, La Clef des Songes (The Key to Dreams), recounting how a dream shaped his outlook. While both texts circulated among mathematicians, they went unpublished for many years.

Over the ensuing decades, Grothendieck distanced himself further from society, settling in a secluded French village and severing ties with the mathematical community. At one point he attempted to subsist almost entirely on dandelion soup until locals intervened. He is believed to have kept producing extensive writings on mathematics and philosophy, though none were released to the public. In 2010 he sent a letter to mathematicians demanding that his works no longer be published or distributed. Despite the myriad connections his mathematics had forged, he ultimately chose to sever his own. Grothendieck died in 2014, leaving behind an immeasurable mathematical legacy.

Source: www.newscientist.com

“Mozart of Mathematics” Stays Silent on Politics—Until Funding Cuts Spark Change.

Terence Tao, widely recognized as one of the world’s leading mathematicians—often dubbed the “Mozart of Mathematics”—tends to avoid discussions on politics.

As Tao stated, “I’m focused on scientific research. I participate in voting and sign petitions, but I don’t view myself as an activist.”

After the federal government froze $584 million in grants to UCLA in July, Tao expressed concern about the potential impact on scientists, warning that if the current trend persists it could lead to indiscriminate cuts affecting many researchers, himself included.

“This administration has exhibited extreme radicalism, particularly in its alteration of scientific landscapes in ways even the first Trump administration did not,” Tao commented. “This is not normal, and I believe many people are unaware of the damage occurring.”

Tao is among a small group of prominent mathematicians openly challenging the administration’s actions, calling them an “existential threat” to his field and to academic science more broadly. For now, he has prioritized public advocacy over his research.

“The U.S. is the leading global funder of scientific research, and the administration is focused on consolidating America’s innovative edge. However, federal research funding isn’t a constitutional guarantee,” remarked White House spokesperson Kush Desai. “The administration’s duty is to ensure taxpayer-funded research aligns with the priorities of American citizens.”

The Trump administration suspended UCLA’s federal grants over allegations that the university had tolerated racism and antisemitism and failed to maintain an unbiased research environment, citing its own investigations.

Having emigrated to the United States from Australia as a teenager, Tao was recognized as a mathematical prodigy early on. He has built his career at UCLA and was awarded the 2006 Fields Medal, often regarded as the mathematics equivalent of a Nobel prize, along with a MacArthur Fellowship and other prestigious honors.

Amid the broader federal action against UCLA, the National Science Foundation suspended two grants connected to Tao: one directly supported his research at UCLA, and the other backed his work with the university’s Institute for Pure and Applied Mathematics (IPAM).

On August 12th, U.S. District Judge Rita F. Lin ordered the reinstatement of the university’s NSF grants, enforcing earlier preliminary injunctions while the legal disputes continue. The ruling applies only to NSF grants at UCLA, including Tao’s. Federal grants from other agencies, such as the National Institutes of Health and the Department of Energy, remain suspended.

An NSF spokesperson confirmed, “The National Science Foundation has reinstated the awards that were suspended at the University of California, Los Angeles,” while withholding any further comment on Tao’s remarks.

Looking ahead, funding for IPAM—established in 2000 to foster collaboration among mathematicians, industry professionals and engineers—remains at risk. Its current grant expires next year and awaits renewal at a time when the Trump administration has proposed cutting the NSF budget by 57% for 2026.

Tao’s NSF-funded research delves into advanced mathematical concepts, particularly patterns in long sequences of numbers. Although the work is basic research with no immediate practical application, Tao notes that such findings can eventually influence encryption methods used for security.

On the other hand, IPAM’s research has yielded substantial public benefits. Two decades ago, Tao collaborated with other scientists to address signal processing challenges in medical imaging.

“An algorithm we developed with IPAM is routinely used in modern MRI machines, sometimes enhancing scanning speed by tenfold,” Tao noted.

The Trump administration has employed funding cuts or suspensions as leverage to push for reforms on university campuses, employing a multifaceted strategy. Initially, they sought to slash funding for scientific endeavors by reducing federal reimbursements for indirect costs like equipment and maintenance.

Subsequently, they focused on specific types of grants, including those addressing diversity, equity, inclusion, and gender identity.

The administration also singled out institutions like Harvard University, Columbia University, and, more recently, UCLA, over allegations of racism and anti-Semitism.

Each of these funding actions has triggered lawsuits, and the ensuing legal battles have seen several grants cancelled and later restored.

Tao expressed that the recent disruption in financing for his project has compelled him to defer part of his own salary to maintain support for graduate students. His recent activities have shifted from mathematics to attending urgent meetings with university authorities, seeking donor contributions, and writing an opinion opposing the funding cuts.

“This is typically when I focus on my research, but this has become a top priority,” Tao emphasized.

He grows increasingly anxious about the bigger picture, believing that the administration’s actions could dissuade young scientists from remaining in the U.S., asserting that if this pattern continues, he himself may have to reconsider his position.

Tao has observed from his vantage point at UCLA that graduate and postdoctoral students are increasingly inclined to seek opportunities outside the U.S. as funding uncertainty looms.

“In past eras, other countries with distinguished scientific heritages faced turmoil and conflict, prompting many to flee to the U.S. as a safe haven,” Tao remarked. “It’s paradoxical that we are now witnessing an inverse trend where other countries might begin to attract skilled talent currently based in the U.S.”

Just a year ago, Tao hadn’t considered leaving UCLA or the U.S., but he has received a handful of recruitment inquiries and is beginning to contemplate his future in America if the current situation continues.

“I’ve established my roots here. I raised my family here, so it would take significant incentives to uproot me. Nonetheless, these days, predicting the future is increasingly challenging,” Tao concluded. “I never envisioned moving at all; it was never on my radar. Yet now, whether for better or worse, all possibilities must be taken into account.”

Source: www.nbcnews.com

DeepMind and OpenAI AIs Score Gold at the International Mathematical Olympiad

AIs are improving at solving mathematics challenges

Andresr/ Getty Images

AI models developed by Google DeepMind and OpenAI have achieved gold-medal-level performance at the International Mathematical Olympiad (IMO).

While the companies herald this as a significant advance toward AIs that might one day tackle complex scientific or mathematical challenges, mathematicians urge caution, as details of the models and their methodology remain confidential.

The IMO is one of the most respected contests for young mathematicians, often viewed by AI researchers as a critical test of mathematical reasoning, an area where AI traditionally struggles.

Following last year’s competition in Bath, UK, Google announced that its AI systems, AlphaProof and AlphaGeometry, had achieved silver-medal-level performance, though their submissions were not evaluated by the official competition judges.

Various companies, including Google, Huawei, and TikTok’s parent company, approached the IMO organizers requesting formal evaluation of their AI models during this year’s contest, as stated by Gregor Dolinar, the president of the IMO. The IMO consented, stipulating that results be revealed only after the full closing ceremony on July 28th.

OpenAI also expressed interest in participating in the competition but did not respond or register upon being informed of the official procedures, according to Dolinar.

On July 19th, OpenAI announced, separately from the official competition, that it had developed a new AI that achieved a gold-medal score as graded by three former IMO medalists. OpenAI stated the AI correctly answered five out of six questions within the same 4.5-hour time limit as human competitors.

Two days later, Google DeepMind revealed that its AI system, Gemini Deep Think, had also achieved gold-level performance within the same constraints. Dolinar confirmed that this result was validated by the official IMO judges.

Unlike Google’s AlphaProof and AlphaGeometry, which answered last year’s questions in Lean, a formal programming language, both Gemini Deep Think and OpenAI’s new model worked entirely in natural language.

Using Lean allows answers to be verified quickly and mechanically, although the output is challenging for non-experts to interpret. Thang Luong from Google indicated that a natural-language approach yields more comprehensible results while remaining applicable to broadly useful AI systems.

Luong noted that advancements in reinforcement learning—a training technique designed to guide AI through success and failure—have enabled large language models to validate solutions efficiently, a method essential to Google’s earlier achievements with gameplay AIs, such as AlphaZero.

Google’s model employs a technique known as parallel thinking, considering multiple solutions simultaneously. The training data comprises mathematical problems particularly relevant to the IMO.

OpenAI has disclosed few specifics regarding its system, mentioning only that it incorporates reinforcement learning and “experimental research methods.”

“While progress appears promising, it lacks rigorous scientific validation, making it difficult to assess at this point,” remarked Terence Tao from UCLA. “We anticipate that the participating companies will publish papers featuring more comprehensive data, allowing others to access the model and replicate its findings. However, for now, we must rely on the companies’ claims regarding their results.”

Geordie Williamson from the University of Sydney shared this sentiment, stating, “It’s remarkable to see advancements in this area, yet it’s frustrating how little in-depth information is available from inside these companies.”

Natural language systems might be beneficial for individuals without a mathematical background, but they also risk presenting complications if models produce lengthy proofs that are hard to verify, warned Joseph Myers, a co-organizer of this year’s IMO. “If AIs generate solutions to significant unsolved questions that seem plausible yet contain subtle, critical errors, we must be cautious before putting confidence in lengthy AI outputs.”

The companies plan to initially provide these systems for testing by mathematicians in the coming months before making broader public releases. The companies say the models could offer rapid solutions to challenging problems in scientific research, as stated by June Hyuk Jeong from Google, who contributed to Gemini Deep Think. “There are numerous unresolved challenges within reach,” he noted.


Source: www.newscientist.com

Mathematicians Pursue Numbers That Might Uncover the Boundaries of Mathematics

What’s lurking at the edge?

Kertlis/Getty Images

Amateur mathematicians find themselves ensnared in a vast numerical puzzle.

This conundrum stems from a deceptively simple query: how can one determine whether a computer program will run forever? The question traces back to mathematician Alan Turing, who in the 1930s demonstrated that any computer algorithm could be represented by a hypothetical “Turing machine” that reads and writes 0s and 1s on an infinitely long tape according to a set of instructions; more intricate algorithms require machines with more states.

<p>For any given number of states, such as 5 or 100, there are only finitely many possible Turing machines, but it is far from obvious how long each will run before halting, if it halts at all. The longest finite run time among machines with n states is termed the busy beaver number, BB(n), and this sequence grows exceedingly rapidly. For instance, BB(1) equals 1, BB(2) is 6, and the fifth busy beaver number is already 47,176,870.</p>
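To make the definition concrete, here is a minimal sketch of a Turing machine simulator in Python. The code is an illustration of the idea, not taken from the Busy Beaver Challenge; the transition table is the classic 2-state busy beaver champion, which halts after exactly BB(2) = 6 steps.

```python
# A minimal Turing machine simulator (illustrative sketch).
def run(machine, max_steps=10**6):
    tape = {}                    # sparse tape: unwritten cells read as 0
    pos, state, steps = 0, "A", 0
    while state != "H" and steps < max_steps:
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())

# (state, symbol read) -> (symbol to write, direction, next state); "H" halts.
# This table is the 2-state busy beaver champion.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run(bb2))  # (6, 4): halts after 6 steps with four 1s on the tape
```

Finding BB(n) means running every n-state table like this one and proving which of the non-halting machines truly never stop, which is where the difficulty lies.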
<p>The exact value of the next busy beaver number, the sixth, has not yet been determined, but the online community known as the Busy Beaver Challenge, which pinned down BB(5) in 2024 and thereby concluded a 40-year search, is <a href="https://bbchallenge.org/story">on the verge of discovery</a>. A lower bound found by a participant called "mxdys" shows that <a href="https://bbchallenge.org/1RB1RA_1RC---_1LD0RF_1RA0LE_0LD1RC_1RA0RE">BB(6) must be at least as vast as a number so large that even explaining it is a challenge</a>.</p>
<p>"This number surpasses the realm of physical comprehension," states <a href="https://www.sligocki.com/about/">Shawn Ligocki</a>, a software engineer and contributor to the Busy Beaver Challenge, who likens the search for Turing machines to fishing in uncharted mathematical oceans filled with strange and elusive entities lurking in the darkness.</p>
<p>The lower bound for BB(6) is so immense that expressing it requires mathematical notation that goes beyond exponents. Exponentiation, written n<sup>x</sup>, repeats multiplication: 2<sup>3</sup> = 2×2×2 = 8. Tetration, sometimes written <sup>x</sup>n, repeats exponentiation instead: <sup>3</sup>2 means 2 raised to the power of 2 raised to the power of 2, which works out to 2<sup>4</sup> = 16.</p>
<p>mxdys has shown that BB(6) is at least 2 tetrated to the power of 2 tetrated to 9, a tower of exponents far too tall to write out. Next to it, the estimated number of particles in the entire universe seems diminutive, according to Ligocki.</p>
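The growth of tetration can be sketched in a few lines of Python. This helper is my own illustration, assuming the standard definition of tetration as an iterated power tower:

```python
# Tetration repeats exponentiation the way exponentiation repeats multiplication.
def tet(n, height):
    """Return a tower of `height` copies of n: n ** (n ** (... ** n))."""
    result = 1
    for _ in range(height):
        result = n ** result
    return result

print(tet(2, 2))   # 4, since 2**2
print(tet(2, 3))   # 16, since 2**(2**2) = 2**4
print(tet(2, 4))   # 65536, since 2**16
# tet(2, 5) = 2**65536 already has 19,729 decimal digits; the lower
# bound for BB(6) towers far beyond that.
```

Each extra level of the tower turns the previous value into an exponent, which is why even a tower of modest height leaves ordinary physical quantities far behind.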

<p>However, the significance of the busy beaver numbers extends beyond their sheer size. Turing established that there must exist Turing machines whose behavior cannot be predicted using ZFC, the standard axioms of mathematics (Zermelo-Fraenkel set theory with the axiom of choice). This is related to mathematician Kurt Gödel's incompleteness theorems, which showed that the rules of a system like ZFC cannot be used to prove that the system itself is free of contradictions.</p>
<p>"The exploration of busy beaver numbers provides a concrete, quantitative representation of a phenomenon identified by Gödel and Turing almost a century ago," remarks <a href="https://www.cs.utexas.edu/people/faculty-researchers/scott-aaronson">Scott Aaronson</a> from the University of Texas at Austin. "It is not merely that some Turing machine must exceed ZFC's ability to determine its behavior after a finite number of steps; the question is whether that already happens for machines with six states, or only for machines with 600 states." Research has shown that the value of BB(643) cannot be pinned down within ZFC, though many smaller machines remain to be investigated.</p>
<p>"The busy beaver problem offers a concrete scale for navigating the forefront of mathematical understanding," states Tristan Stérin, a computer scientist who launched the Busy Beaver Challenge in 2022.</p>
<p>In 2020, <a href="https://scottaaronson.blog/?p=4916">Aaronson wrote</a> that the busy beaver function "encapsulates most intriguing mathematical truths within its first 100 values", and BB(6) is no exception. It appears to be related to the Collatz conjecture, an esteemed unsolved problem in which simple arithmetic operations are applied repeatedly to a number to determine whether it eventually reaches 1. Determining whether one particular six-state machine halts would amount to a computational proof of a version of that conjecture.</p>

<p>The numbers researchers encounter here are astonishing in scale, yet the busy beaver framework serves as a tangible measuring stick for what would otherwise be a nebulous expanse of mathematics. In Stérin's view, this is what keeps contributors captivated; he estimates that many people are presently dedicated to the hunt for BB(6).</p>
<p>Thousands of "holdout" Turing machines remain unexamined for halting behavior, he notes. "There might exist a machine, unbeknownst to you, lurking just around the corner," Ligocki says: one whose behavior is independent of ZFC and lies beyond the boundaries of contemporary mathematics.</p>
<p>Is the precise value of BB(6) also lurking nearby? Ligocki and Stérin are reluctant to forecast the future of the busy beavers, yet recent progress on its bounds gives Ligocki a "sense that it's getting closer".</p>


Source: www.newscientist.com

AI Could Revolutionize Our Approach to Mathematics

AI is Improving in Mathematical Research

lucadp/getty images

Is AI poised to revolutionize mathematics? Many prominent mathematicians think so, as automated tools become increasingly capable of contributing to significant advancements, fundamentally altering the landscape of mathematical research.

In June, around 100 leading mathematicians convened at the University of Cambridge to discuss how computers could help them settle enduring questions about the validity of their proofs, a process called formalization. AI didn’t feature prominently at a similar conference held in Cambridge back in 2017.

Yet, eight years later, AI has made a significant impact. Particularly notable are advancements in the large language models powering tools like ChatGPT, which have renewed interest in the role of AI in mathematics. These advancements range from translating human-written proofs into machine-checkable formats to verifying their correctness automatically.

“It’s a bit overwhelming,” said Jeremy Avigad of Carnegie Mellon University, who helped organize the conference. “It’s fantastic. I’ve been at this for a long time, and it used to be considered niche. Suddenly, it’s in the spotlight.”

Google DeepMind presented two lectures, highlighting the achievement of its AI system AlphaProof, which earned a silver medal at the International Mathematics Olympiad (IMO), a prestigious competition for young mathematicians. “If you’d asked a mathematician about [AlphaProof] after the IMO, their response might differ. Some might view these as challenging high school problems, while others might consider them relatively trivial,” remarked Thomas Hubert, a research engineer at DeepMind.

Hubert and his team demonstrated that AlphaProof could assist in formalizing aspects of key theorems beyond the IMO competition, contributing to a significant result in number theory. The mathematics had previously been translated into Lean, a programming language, and AlphaProof was able to help verify the theorem’s correctness. “We aimed to showcase how AlphaProof can be applied in real-world scenarios,” Hubert stated.

Morph Labs, a US-based AI startup, also introduced an AI tool named Trinity, designed to automatically translate human-written mathematics into fully formalized, verified proofs in Lean. Bhavik Mehta at Imperial College London, collaborating with Morph Labs, demonstrated Trinity’s ability to prove a theorem related to the abc conjecture.

This proof represented only a fraction of the full body of work around the abc conjecture, and while Trinity needed a slightly more elaborate version of the human-written proof than the one initially published, the accuracy of the mathematical code produced by the tool surprised many.

“The difference between what Morph did and previous attempts is that they took an entire math paper. [Then] they broke the argument down into manageable segments, allowing the machine to translate everything into Lean,” noted Kevin Buzzard from Imperial College London. “I don’t think anything like this has been seen before.”

Nevertheless, it remains uncertain how effective this approach will be in other mathematical domains, Mehta acknowledged. “It was essentially the first attempt, and it was successful. I might just be lucky.”

Christian Szegedy from Morph Labs asserted that once the tool is fully operational, it would expand rapidly. “A feedback loop establishes itself, reducing the necessity for detailed theorem guidance. Essentially, it triggers a chain reaction facilitating extensive mathematical work,” he indicated.

Individuals like Timothy Gowers at the University of Cambridge believe that tools such as these can already offer significant benefits to mathematicians. “It requires considerable effort to develop them, and there are many eager participants willing to contribute. I anticipate significant strides in the next few years across standardized mathematical notation, arXiv [an online research paper platform], and Google,” he remarked.

Nonetheless, not all mathematicians are convinced by Morph Labs’ findings. Rodrigo Ochigame from Leiden University in the Netherlands expressed skepticism, stating there was insufficient information about the methodology involved. “They only shared the output from one of the systems, which raises concerns about possible selective reporting. There was no documentation published or details on testing with other theorems,” he commented. “When the audience inquired about the computational load the model requires, they repeatedly declined to elaborate, making it challenging to evaluate the significance of the outcomes.”

There remains skepticism regarding the utility of AI tools in mathematics. Many mathematicians continue to operate without automated tools, and it’s unclear whether opinions will shift as these tools become more advanced, noted Minhyong Kim at the International Centre for Mathematical Sciences in the UK. “Mathematics and mathematicians exhibit diverse perspectives. Some will employ AI tools inventively and effectively, while others may prefer to keep their distance.”

“People often underestimate the sophistication, creativity, and nuance involved in mathematical research,” observes Ochigame. This is why much research continues to be conducted using traditional methods—pen, paper, and deep contemplation. “There exists a substantial gap between high school mathematics competitions such as IMO and cutting-edge research,” he concludes.


Source: www.newscientist.com

Mathematics Reveals the Ideal Strategy for Winning the Lottery

How can mathematics help you win the lottery?

Brandon Bell/Getty Images

I’ve got a foolproof method that guarantees you’ll win the lottery you desire. Just follow my simple technique and you’ll capture the biggest jackpot imaginable. The only caveat? You need either millions yourself or a circle of wealthy friends.

Let’s use the US Powerball as an illustration. To participate, you must select five unique “white” numbers from 1 to 69, along with a sixth “red” number from 1 to 26. Notably, this last number is drawn from its own separate pool, so it can repeat one of the white ones. How many unique lottery tickets can you create? To find out, we turn to a branch of mathematics known as combinatorics, which helps calculate the number of potential combinations of items.

This situation is analogous to the “n choose k” problem in which n signifies the total number of objects available for selection (69 for the white Powerball numbers) and k refers to the number of objects you wish to pick. It’s essential to note that these selections occur without replacement—each winning number drawn removes it from the pool of available choices.

For this, mathematicians employ a useful formula for solving n choose k problems: n! / (k! × (n − k)!). If this notation is unfamiliar, don’t worry: the exclamation mark denotes a factorial, the product of all whole numbers up to a given integer. For instance, 3! = 3×2×1 = 6.

Applying 69 for n and 5 for k results in a total of 11,238,513 combinations. That sounds substantial, but it isn’t the whole story: enter the red Powerball. If it were merely a sixth white ball, the combinations would rise to 119,877,472 in total. But because the red ball is drawn separately from its own pool of 26 numbers, you instead multiply the white-ball combinations by 26, yielding a grand total of 292,201,338 potential outcomes.
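A quick sketch in Python (my own illustration, not the article's code) reproduces these counts from the factorial formula:

```python
from math import comb, factorial

# "n choose k" computed from the factorial formula: n! / (k! * (n - k)!)
def choose(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

white = choose(69, 5)        # ways to pick the five white balls
print(white)                 # 11238513
print(choose(69, 6))         # 119877472, if the red ball were a sixth white one
print(white * 26)            # 292201338 tickets once the 26 red balls are counted
assert white == comb(69, 5)  # agrees with Python's built-in comb
```

Multiplying by 26 rather than choosing a sixth ball from the same pool is what gives the 292 million figure.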

Now we’re talking about over 292 million possible Powerball tickets. The ultimate trick to guaranteed victory? Simply purchase every possible ticket. Of course, the logistics involved complicate this idea. Most importantly, you’d need over $584 million on hand, as each ticket costs $2.

Is that enough to ensure a significant payout? It’s complicated. The Powerball jackpot rolls over when unclaimed, so the prize varies from draw to draw, and only around 15 jackpots have ever exceeded the $584 million cost of buying every ticket. Profits are further diminished by the prospect of other winners sharing the jackpot and by the roughly 30 per cent of winnings deducted in taxes.

It’s not surprising, really. If winning the lottery and making a profit were guaranteed, people would be doing this all the time, driving lottery operators bankrupt. Yet poorly designed lotteries do occasionally appear, giving savvy investors an opening.

One of the earliest noted incidents of this kind involved the writer and philosopher Voltaire, who collaborated with mathematician Charles Marie de La Condamine to create a syndicate aimed at buying all the tickets in a lottery tied to French government debt. While the exact methods remain vague, there are suggestions of devious tactics that allowed them to avoid paying full price for the tickets, and the syndicate won repeatedly before authorities shut the lottery down in 1730. In a letter to a colleague, Voltaire remarked that the winning group, by purchasing all the tickets, had “triumphed over a million players.”

Modern lotteries have faced similar fates. A notable instance is the Irish National Lottery, which was targeted in 1992 by a syndicate. At the time, players had to select six numbers from 1 to 36, and the n choose k formula gives 1,947,792 possible tickets. With each ticket costing 50 Irish pence (the currency at the time), the conspirators raised £973,896 and began buying up tickets ahead of an estimated £1.7 million prize pool.

Lottery organizers caught wind of the scheme and began restricting the number of tickets any one vendor could sell, which meant the syndicate could only purchase roughly 80 per cent of the possible combinations. The outcome was a jackpot shared with two other winners, giving the syndicate just £568,682. Thankfully, the lottery had introduced a £100 guaranteed prize for matching four numbers, bringing its total winnings to £1,166,000.

In response to the incident, the Irish National Lottery quickly revised its rules. Players now must select six numbers from 47, raising the total number of possible tickets to 10,737,573. With the jackpot capped at 18.9 million euros and each ticket costing 2 euros, buying every combination is a losing proposition.
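The before-and-after arithmetic for the Irish lottery can be checked in a few lines of Python (a sketch reproducing the figures in the text; the euro ticket price is as stated above):

```python
from math import comb

# Ticket counts before and after the Irish National Lottery rule change
old_tickets = comb(36, 6)   # six numbers from 36, as in 1992
new_tickets = comb(47, 6)   # six numbers from 47, under the revised rules
print(old_tickets)          # 1947792
print(old_tickets * 0.50)   # 973896.0 -> cost in pounds of every 50p ticket
print(new_tickets)          # 10737573
print(new_tickets * 2)      # 21475146 -> euros, more than the capped jackpot
```

Raising n from 36 to 47 multiplied the ticket count more than fivefold, pushing the cost of a buy-everything strategy past any possible payout.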

Despite ample awareness of the pitfalls of poorly structured lotteries, such opportunities still arise. One extraordinary instance emerged in 2023, when a syndicate won a $95 million jackpot in the Texas lottery. Texas lottery tickets involved selecting six numbers from 54, allowing 25,827,165 possibilities at $1 per ticket, which made the jackpot a worthwhile target. However, there was speculation that the syndicate had support from the lottery organizers themselves, and fallout from the controversy is ongoing, raising questions about legality. The syndicate may have worked through local retailers and acquired ticket-printing terminals from the Texas lottery, simplifying the logistics. The organizers deny any involvement in unlawful activity, and no criminal charges have been filed. As a lawyer representing the syndicate stated, “All applicable laws, rules, and regulations were adhered to.”

So there you have it. If you can secure an ample amount of upfront cash and the organizers fail to implement the n choose k formula effectively, you might just make a decent profit. Good luck!


Source: www.newscientist.com

Were you able to solve it? Thinking like an engineer in mathematics

Today, we have two questions about fascinating objects that we will share with you along with their answers.

1. Pythagoras’ Cup

Pythagoras, a Greek mathematician and mystic, created a cup with interesting properties:

1) When filled to a certain point, it acts like a regular cup.

2) If you pour above that level, the liquid drains out through a hole in the bottom of the cup.

Can you illustrate how this cup works?

The cup has a simple internal mechanism with no moving parts. It’s a clever metaphor for moderation in life – overflow even slightly, and you lose it all.

Solution:




Cross-section of a Pythagorean cup filled with water. At B, the liquid in the cup can be drunk, but at C, the liquid flows down due to the siphon effect. Illustration: Nevit Dilmen

The cup has a central chamber that fills from the bottom, and when it overflows, a siphon is formed to empty the water. This mechanism is similar to flushing toilets and fabric softener trays in washing machines.

2. A Backwards Toy Car

Design a simple mechanism for a toy car with four wheels that moves forward when a string is pulled backward.

Solution:

To achieve this, you need a pulley system as shown in the video. A string is wrapped around a shaft, and when it unwinds, it moves a belt connected to the wheel axle.

We hope you enjoyed today’s puzzles, and we’ll be back in 2 weeks!

Since 2015, we’ve been sharing puzzles every other Monday. If you have any suggestions, feel free to email us!

Source: www.theguardian.com

Michel Talagrand awarded 2024 Abel Prize for breakthroughs in understanding randomness in mathematics

Michel Talagrand: “Life is horribly random.”

Peter Budge/Typos1/Abel Prize 2024

Michel Talagrand won the 2024 Abel Prize, often called the Nobel prize of mathematics, for his work on probability theory and the description of randomness. The news came as a surprise to Talagrand, who received it during what he had thought was a routine Zoom call within his department. He said: “My brain completely shut down for five seconds. It was an amazing experience. I never expected anything like this.”

Talagrand, based at the French National Center for Scientific Research (CNRS), has spent much of his 40-year career characterizing the extremes of random, or stochastic, systems. These problems are common in the real world. For example, a bridge builder may need to know the maximum wind strength expected from the local weather.

Such random systems are often very complex and may contain many random variables, but Talagrand’s method of converting them into geometric problems allows useful values to be extracted. “He is a master at getting accurate estimates, and he knows exactly what to add or subtract to get one,” says Helge Holden, chair of the Abel Prize committee.

Talagrand also developed mathematical tools and equations for systems that are random but exhibit some degree of predictability within that randomness, a statistical principle called concentration of measure. His equations, known as Talagrand’s inequalities, can be applied to many systems that exhibit concentration of measure, including famous algorithmic puzzles such as the traveling salesman problem, says Asaf Naor at Princeton University. “Not only is he a great discoverer in his own right, but he is also an influence. He has provided the world with an amazing collection of insights and tools,” Naor says.

Perhaps inspired by his own work, Talagrand says he views his career as a random process. “It’s really scary when you look at your life and the important things that happened. They were determined by small random influences and there was no plan at all,” he says.

Although much of his work was general, he also had a particular interest in the mathematical basis of spin glasses. A spin glass is an unusual magnetic arrangement in which the atoms of a material act like tiny magnets pointing in random directions, exhibiting no apparent order, much as the atoms in ordinary glass lack a repeating crystal structure.

“This award is definitely well deserved,” says Giorgio Parisi of Sapienza University in Rome, Italy, who won the 2021 Nobel Prize in Physics for his work on spin glasses. Parisi and his colleagues first proposed a formula, now named after him, to describe these materials, but it was not proven mathematically until the work of Talagrand and the Italian physicist Francesco Guerra. “It’s one thing to believe that a guess is correct, but it’s another to prove it. I believed it was a very difficult problem to prove,” Parisi says.

The proof also helped draw the field to the attention of other mathematicians, Parisi said. “It was a great proof and completely changed the game, because it was the starting point for a deeper understanding of the theory.”

For Talagrand, one of the keys to success was persistence. “You can’t learn mathematics easily. You have to work. It takes a lot of time, and I have a bad memory, so I forget things. Despite these handicaps, I have to work. My way of working has always been to try to understand simple things really well.”


Source: www.newscientist.com

What is the reason behind science’s heavy reliance on mathematics?

The following is an excerpt from the Lost in Space-Time newsletter. Every month, we give a keyboard to a physicist or mathematician and let them talk about some fascinating ideas from their corner of the universe. You can sign up for Lost in Space-Time for free here.

“Science is written in the language of mathematics,” Galileo declared in 1623. And over the past few centuries, science has become increasingly mathematical. Mathematics now seems to have complete supremacy, especially in the fields of quantum physics and relativity. Modern physics education seems to include deriving theories such as…

Source: www.newscientist.com