AI Is Making Inroads Into Mathematical Research
Is AI poised to revolutionize mathematics? Many prominent mathematicians think so, as automated tools that can help construct and check proofs begin to reshape the landscape of mathematical research.
In June, around 100 leading mathematicians convened at the University of Cambridge to discuss how computers could help settle long-standing questions about the validity of their proofs. This process, called formalization, barely featured AI at a similar conference held in Cambridge back in 2017.
Yet, eight years later, AI has made a significant impact. In particular, advances in the large language models that power tools like ChatGPT have renewed interest in AI's role in mathematics, from translating human-written proofs into machine-checkable formats to verifying their correctness automatically.
“It’s a bit overwhelming,” said Jeremy Avigad of Carnegie Mellon University, who helped organize the conference. “It’s fantastic. I’ve been at this for a long time, and it used to be considered niche. Suddenly, it’s in the spotlight.”
Google DeepMind presented two lectures highlighting its AI system AlphaProof, which earned a silver medal at the International Mathematical Olympiad (IMO), a prestigious competition for young mathematicians. “If you’d asked a mathematician about [AlphaProof] after the IMO, their response might differ. Some might view these as challenging high-school problems, while others might consider them relatively trivial,” remarked Thomas Hubert, a research engineer at DeepMind.
Hubert and his team demonstrated that AlphaProof could also assist beyond the IMO, helping to formalize parts of a key theorem in number theory. The mathematics had previously been translated into Lean, a programming language for writing machine-checkable proofs, and AlphaProof was able to verify the theorem’s correctness. “We aimed to showcase how AlphaProof can be applied in real-world scenarios,” Hubert stated.
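To give a flavour of what formalization in Lean looks like (a toy illustration, not drawn from AlphaProof's actual output): both a theorem's statement and its proof are written as code, so the compiler can check every step mechanically.

```lean
-- A trivially simple example of a machine-checkable proof in Lean 4.
-- The statement says addition of natural numbers is commutative;
-- the term after `:=` is the proof, which Lean verifies at compile time.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Real formalization projects chain thousands of such statements together, which is why translating an entire research paper into this form has historically taken years of human effort.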
Morph Labs, a US-based AI startup, also introduced an AI tool named Trinity, designed to automatically translate handwritten mathematical proofs into fully formalized, verified proofs in Lean. Bhavik Mehta, a mathematician at Imperial College London who collaborates with Morph Labs, demonstrated Trinity’s ability to prove theorems related to the ABC conjecture.
This proof represents only a fraction of what would be needed to establish the ABC conjecture in full, and Trinity required a slightly more detailed version of the handwritten proof than the one originally published. Even so, the accuracy of the mathematical code the tool produced surprised many.
“The difference between what Morph did and previous attempts is that they took an entire math paper. [Then] they broke the argument down into manageable segments, allowing the machine to translate everything into Lean,” noted Kevin Buzzard from Imperial College London. “I don’t think anything like this has been seen before.”
Nevertheless, it remains unclear how well this approach will transfer to other mathematical domains, Mehta acknowledged. “It was essentially the first attempt, and it was successful. It might just be luck.”
Christian Szegedy of Morph Labs argued that once the tool is fully operational, its reach will expand rapidly. “A feedback loop establishes itself, reducing the need for detailed guidance on each theorem. Essentially, it triggers a chain reaction facilitating extensive mathematical work,” he said.
Others, such as Timothy Gowers at the University of Cambridge, believe that tools like these can already be of significant benefit to mathematicians. “It requires considerable effort to develop them, and there are many eager participants willing to contribute. I anticipate significant strides in the next few years in standardized mathematical notation, arXiv [an online research paper repository] and Google,” he remarked.
Nonetheless, not all mathematicians are convinced by Morph Labs’ findings. Rodrigo Ochigame at Leiden University in the Netherlands expressed skepticism, saying too little information about the methodology had been shared. “They only shared the output from one of the systems, which raises concerns about possible selective reporting. There was no documentation published or details on testing with other theorems,” he commented. “When the audience inquired about the computational load the model requires, they repeatedly declined to elaborate, making it challenging to evaluate the significance of the outcomes.”
More broadly, skepticism remains about the utility of AI tools in mathematics. Many mathematicians continue to work without automated tools, and it is unclear whether opinions will shift as these tools become more advanced, noted Minhyong Kim at the International Centre for Mathematical Sciences in the UK. “Mathematics and mathematicians exhibit diverse perspectives. Some will employ AI tools inventively and effectively, while others may prefer to keep their distance.”
“People often underestimate the sophistication, creativity, and nuance involved in mathematical research,” observes Ochigame. This is why much research continues to be conducted using traditional methods: pen, paper, and deep contemplation. “There exists a substantial gap between high school mathematics competitions such as the IMO and cutting-edge research,” he concludes.
Source: www.newscientist.com