Can plants count from 1 to 10 using their root tendrils? No. However, researchers have discovered that some plants possess the fascinating ability to detect insect intruders and monitor their own food supply, allowing them to perform basic counting and mathematics.
Take, for instance, the Venus flytrap, renowned for snapping shut when it detects the movement of an insect or other prey. Intriguingly, the trap only closes if the object moves twice within a window of approximately 15-20 seconds.
These movements are captured by delicate “trigger” hairs on the leaves, which convert the sensory input into electrical signals that travel through the plant as waves of charged atoms (ions). The leaves then close upon receiving two triggering electrical signals.
Additionally, a group of international scientists reported in a 2016 study that Venus flytraps can tally multiple counts before reacting.
They wait to receive a minimum of three electrical signals before producing the necessary enzymes to digest their prey, potentially to avoid wasting energy on false alarms.
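The counting behaviour described above can be caricatured as a tiny state machine. This is a toy sketch only: the class name, time window and thresholds simply follow the article’s description, not real plant physiology.

```python
class Flytrap:
    """Toy model of flytrap counting; thresholds follow the article, not biology."""
    WINDOW = 20.0    # seconds within which movements are counted together
    CLOSE_AT = 2     # signals needed to snap the trap shut
    DIGEST_AT = 3    # signals needed before digestive enzymes are made

    def __init__(self):
        self.signals = []      # timestamps of recent trigger-hair signals
        self.closed = False
        self.digesting = False

    def touch(self, t):
        """Register a trigger-hair signal at time t (in seconds)."""
        # Forget signals that fell outside the counting window.
        self.signals = [s for s in self.signals if t - s <= self.WINDOW]
        self.signals.append(t)
        if len(self.signals) >= self.CLOSE_AT:
            self.closed = True
        if self.closed and len(self.signals) >= self.DIGEST_AT:
            self.digesting = True
```

A touch that arrives too long after the previous one is forgotten, so isolated false alarms never trip the trap.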
Even prior to this finding, scientists had proposed that the mustard plant (Arabidopsis), a common research subject, exhibits behaviors akin to division.
During daylight hours, plants harness sunlight to accumulate food stores (starches) through photosynthesis.
To sustain themselves overnight, they must establish a balanced starch consumption rate (starch divided by time) by gauging the starch stored in their leaves alongside their circadian rhythms.
Experts caution against labeling these unique counting abilities as “intelligent” or indicative of a primitive brain structure; instead, they are vital survival mechanisms that demonstrate remarkable sophistication.
This article answers a query posed by Llewi Evans from Monmouthshire.
“When you search for stock market prices, you may see patterns…”
Muhla1/Getty Images
Glance at the front page of a newspaper and you are greeted by a myriad of numbers: metrics about populations, lengths, areas and more. If you were to extract these figures and compile them into a list, it might seem like a random assortment.
However, these figures are not as arbitrary as they may appear. In reality, the leading digit of many numbers, such as total revenues or building heights, tends to be predominantly a 1. True randomness would give each digit an equal chance of leading, but in real data the first digit is a 1 about 30 per cent of the time, while 9 leads in under 5 per cent of cases, with the digits in between falling smoothly between those extremes.
This phenomenon is known as Benford’s Law, which describes the expected distribution of first digits within certain kinds of dataset, especially those spanning a wide range of scales. Values like human heights (confined to a narrow band) or dates (which also have defined limits) don’t follow the law, but many others do.
Consider your bank balance, house numbers or stock prices (as pictured). Such numbers commonly span many digit lengths: a quiet lane may hold just a handful of houses, while the streets of a larger town run into the hundreds.
Picture a street hosting nine houses: each of the nine leading digits appears exactly once. On a street with 19 houses, by contrast, more than half of the numbers (1 and 10 to 19) begin with 1. As streets grow, the pattern persists: with around 100 houses the distribution across digits is fairly even again, yet with 200 houses, once more, over half of the numbers start with 1.
Because real-world collections draw on data from many such ranges, the average likelihood of a number starting with 1 settles between these two extremes, at roughly 30 per cent. Similar calculations for the other digits yield the overall frequency distribution observed in large datasets.
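The distribution is easy to check numerically. A minimal sketch, using powers of 2 as a convenient scale-spanning dataset (an assumption for the demo; any similarly wide-ranging data would do): Benford’s Law predicts that leading digit d appears with probability log10(1 + 1/d).

```python
import math
from collections import Counter

def benford_prob(d):
    """Benford's predicted share for leading digit d (1-9)."""
    return math.log10(1 + 1 / d)

# Leading digits of 2^1 ... 2^1000, a dataset spanning many orders of magnitude.
counts = Counter(int(str(2 ** n)[0]) for n in range(1, 1001))
```

Leading 1s turn up roughly 30 per cent of the time and leading 9s under 5 per cent, matching the curve described above.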
This property is particularly useful for spotting fabricated data. A company’s genuine sales figures should show a Benford-like distribution, but when someone invents numbers, the leading digits rarely follow the curve. This is one of the many tools forensic accountants use to root out dubious activity.
The next time you examine your bank statement or compare river lengths, take note of how often those numbers start with 1.
Katie Steckles is a mathematician, lecturer, YouTuber and author based in Manchester, UK. She also sets BrainTwister, a puzzle column for New Scientist. Follow her @stecks
“Like any other mathematical concept, this idea is open to exploration.”
Peter Rowlett
As a child, Mary Everest Boole discovered some cards with evenly spaced holes punched along their edges. By stretching threads between the holes, she produced families of straight lines whose crossings traced out graceful, symmetrical curves, an exercise that fostered her intuition for formal geometry.
A few years later, in 1864, she found herself a widow with five children. Despite the academic establishment’s disregard for women’s contributions, she supported herself as a librarian and maths tutor in London.
Boole believed that engaging children with mathematical objects, like her curve stitching activities, could deepen their understanding. She connected mathematical imagination and creativity in various ways, using fables and history to elucidate logic and algebra.
Now you can explore by creating a “string art” image inspired by her work. Begin with a pair of horizontal and vertical axes, each 10 cm long and marked with numbers 1-10 spaced 1 cm apart. Create a straight line from point 1 on the horizontal axis to point 10 on the vertical axis. Continue connecting points 2 to 9, 3 to 8, and so forth. While all lines are straight, the intersections will form curves.
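The construction above is simple to express in code. A minimal sketch, assuming the axes lie along x and y with marks 1 to 10, so mark k on one axis joins mark 11 − k on the other:

```python
def stitch_segments(n=10):
    """Line segments joining mark k on the x-axis to mark n+1-k on the y-axis."""
    return [((k, 0), (0, n + 1 - k)) for k in range(1, n + 1)]

segments = stitch_segments()
# Every segment is straight, yet their crossings trace out a curve:
# the envelope of this family of lines is a parabola.
```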
If you have used drawing software, you may have dragged control points at a path’s two ends to bend its shape. These are Bézier curves, crucial in computer-aided design, and they are close relatives of Boole’s stitched curves, whose shapes are fixed by the points along the axes and their intersections.
With practice, you should be able to draw the lines without numbering the points; experiment with different colours too. Boole recommended this as a stitching exercise rather than a drawing one, so it can also be done with thread: simply substitute holes for the dots.
Like other mathematical concepts, this idea invites exploration. For instance, alter the axes to meet at varying angles, or examine what occurs when the distances between dots differ, such as 1 cm for one line and 2 cm for another.
Consider drawing a circle or another shape, distributing dots evenly around it, then systematically connecting them, for example joining each of ten dots to the one a fixed number of places further round, clockwise. You can even recreate the boat-like image shown above (centre right). What else can you create?
For more creative projects, visit newscientist.com/maker
Are things equal or aren’t they? Mathematically, at least, that is a question worth asking, and it is the one Eugenia Cheng tackles in her new book on the mathematics of equality. In maths, as in life, some things carry more weight than others.
Consider this: the equation 180 = 180 reveals nothing, yet x + y + z = 180°, where x, y and z are the angles of a triangle, conveys a deeper insight. And it holds only under specific circumstances: on a flat plane, yes, but not on the surface of a sphere.
Cheng sets out to investigate how mathematics decides that things are “equal”. Her approach blends playfulness with the gravity of abstract concepts, linking them to topics as diverse as knitting and Battenberg cakes, and she isn’t shy about tackling significant political questions surrounding equality and rights.
On simplifying through numbers, Cheng wryly remarks that their very dullness helps: it boils potentially overwhelming complexity down to a single manageable figure. Numbers can be potent tools precisely because they focus on one specific element of a situation.
However, forgetting that such a simplification has been made leads to misunderstandings. Assuming that two individuals with identical IQ scores are equally intelligent, for instance, is misleading. As Cheng remarks, “It’s alright to disregard the details, but you must remember that you have.”
Fortunately, mathematics encompasses more than mere numbers. Cheng delves at length into the concepts of “local” and “global”, exploring, in essence, surfaces formed by stitching together smaller flat patches.
In promoting this way of thinking, she offers a valuable lens through which to view reality. In mathematics, flatly debating whether a sphere and a torus are “the same” is futile: they are locally the same but globally different. Similarly, in political discourse it is crucial to recognise when one faction is making a local argument (“individual women benefit from the right to choose regarding abortion”) while the opposing side makes a global one (“all abortions constitute murder”, and so on).
Cheng ventures deep into abstract territory on identity within category theory, guiding the reader through genuinely theoretical terrain. Some of the most remarkable creations in art, literature and music are complex, yet we appreciate them without fully grasping chiaroscuro, counterpoint or other sophisticated techniques. In the same way, we can all appreciate abstract notions without Cheng’s formal definitions of categories, but discovering their depth is worthwhile.
“If you believe that mathematics is solely about equations, seeing them as rigid black-and-white facts, then you likely perceive mathematics as stringent and binary,” states Cheng. This book serves as a compelling counterargument to that misapprehension. Delving into the nuances of “equality” in mathematics will enrich your understanding of the field’s complexity and illuminate how the idea of equality is applied, and misapplied.
Sarah Hart is professor emerita of geometry at Gresham College, London. She is the author of Once Upon a Prime.
DeepMind’s AlphaProof AI can tackle a wide range of math problems
Google DeepMind
Google DeepMind’s AI won a silver medal at this year’s International Mathematical Olympiad (IMO), the first time an AI has made it onto the podium.
The IMO is considered the world’s most prestigious competition for young mathematicians, and answering the exam questions correctly requires mathematical ability that AI systems typically lack.
In January, Google DeepMind showed off AlphaGeometry, an AI system that could answer IMO geometry problems as well as humans could, but it wasn’t in a real competition and couldn’t answer questions in other areas of math, such as number theory, algebra, or combinatorics, that are needed to win an IMO medal.
Google DeepMind has now released a new AI called AlphaProof that can solve a wider range of math problems, and an improved version of AlphaGeometry that can solve more geometry problems.
When the team tested both systems together on this year’s IMO problems, they got four out of six questions right, earning them 28 points out of 42 possible points – good enough for a silver medal, just one point short of this year’s gold medal threshold.
At the competition, held in Bath, UK, last week, 58 contestants won gold medals and 123 won silver.
“We all know that AI will eventually be better than humans at solving most mathematical problems, but the rate at which AI is improving is astounding,” said Gregor Dolinar, president of the IMO. “It’s incredible that a program has missed out on gold at IMO 2024 by just one point.”
At a press conference, Timothy Gowers, a University of Cambridge mathematician who helped grade AlphaProof’s solutions, said the AI’s performance was surprising and that it seemed to have found the “magic keys” to the problems in a way similar to humans. “We thought that these magic keys would probably be a bit beyond the capabilities of an AI, so we were quite surprised in one or two cases where the program actually found them,” Gowers said.
AlphaProof works similarly to Google DeepMind’s previous AIs that beat the best humans at chess and Go. All of these rely on a trial-and-error approach called reinforcement learning, in which the system finds its own way of solving a problem by trying again and again. However, this method requires a large number of problems written in a language the AI can understand and verify, and IMO problems are written in plain English.
To get around this, Thomas Hubert at Google DeepMind and his colleagues used Google’s Gemini AI, a language model like the one that powers ChatGPT, to translate such problems into a formal proof language called Lean, allowing the AI to learn how to solve them.
“You’ll start by solving maybe the simplest problems, and then you’ll be able to learn from solving those simple problems and then tackle the harder problems,” Hubert said at the press conference. Because the answers are also generated in Lean, they can be immediately verified for correctness.
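As an illustration of why Lean helps, here is a toy formal statement of the kind a proof assistant can check automatically. It is not taken from AlphaProof’s training data, and it assumes the Mathlib library for the two lemmas used.

```lean
import Mathlib

-- A candidate proof written in Lean either type-checks or it doesn't,
-- so correctness can be verified by machine with no human grading.
theorem sum_sq_nonneg (a b : ℤ) : 0 ≤ a ^ 2 + b ^ 2 :=
  add_nonneg (sq_nonneg a) (sq_nonneg b)
```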
Despite AlphaProof’s impressive performance, it was slow, taking up to three days to find a solution, compared with the 4.5 hours contestants are given. It also failed to solve either of the two combinatorics problems, combinatorics being the study of counting and arrangement. “We’re still working on figuring out why that is, and if we can do that, that will help us improve the system,” says Alex Davies at Google DeepMind.
It’s also not clear how AlphaProof arrives at its answers, or whether it uses the same mathematical intuition as humans, Gowers said. But because its proofs are written in Lean, checking that they are correct is easy.
“The results are impressive and a significant milestone,” says Geordie Williamson at the University of Sydney, Australia. “There have been many attempts to apply reinforcement learning to formal proofs, but none have been so successful.”
Systems like AlphaProof may help working mathematicians develop proofs, but they don’t help them identify which problems are worth tackling in the first place, which is what takes up most of a researcher’s time, says Yang-Hui He at the London Institute for Mathematical Sciences.
Hubert said the team hopes that, by reducing false responses, AlphaProof’s approach can help improve Google’s large language models such as Gemini.
Trading firm XTX Markets is offering a $5 million prize, the AI Mathematical Olympiad (AIMO) prize, to the first publicly shared AI that can win an IMO gold medal; AlphaProof is ineligible because it is not publicly available. “We hope that DeepMind’s progress will encourage more teams to apply for the AIMO prize, and of course we would welcome a public submission from DeepMind itself,” said Alex Gerko of XTX Markets.
There’s a mathematical trick to get out of any maze
Klaus Wedfeld/Digital Vision/Getty Images
It’s almost March 14th, or Pi Day. To mark this annual celebration of the great mathematical constant, we at New Scientist are recalling some of our favourite recent stories from the world of mathematics. To whet your appetite, we have extracted an amazing fact from each; if you want to indulge further this Pi Day, click through for the full articles. These are normally available only to subscribers, but in honour of the ratio of circumference to diameter, we have made them free for a limited time.
The world’s best kitchen tiles
There is a shape called the “hat” that can completely cover a surface without ever creating a repeating pattern. For decades, mathematicians wondered whether a single tile existed that could do such a thing. Roger Penrose discovered a pair of tiles that together did the job in the 1970s, but no one could find a single tile with the same effect, until the hat was discovered last year.
why you are so unique
You are one in 10^10^68. Called the doppelgängion by mathematician Antonio Padilla, this number is so large that it is difficult to wrap your head around: a 1 followed by 10^68 zeros. It relates to the chance of finding an exact copy of you somewhere in the universe. Numbers of this size are so hard to imagine that the quantum physics required to calculate them seems almost simple in comparison: there is a finite number of quantum states that can exist in a region of the universe the same size as yours, and adding them all up gets you to the doppelgängion. Padilla also wrote about four other mind-blowing numbers for New Scientist.
amazing tricks
There is a simple mathematical trick to get out of any maze: always turn right. No matter how complex the maze, no matter how many twists, turns and dead ends it has, the method always works. That’s the whole trick. Can you see why it always leads to success?
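The rule is easy to simulate. A minimal sketch on a small, made-up grid maze: at each step the walker prefers a right turn, then straight ahead, then left, then back, which is equivalent to keeping its right hand on the wall.

```python
MAZE = [
    "#####",
    "#   #",
    "### #",
    "### #",
    "#####",
]

# Clockwise right turns for headings written as (row, col) steps.
RIGHT = {(0, 1): (1, 0), (1, 0): (0, -1), (0, -1): (-1, 0), (-1, 0): (0, 1)}

def open_cell(maze, r, c):
    return 0 <= r < len(maze) and 0 <= c < len(maze[r]) and maze[r][c] == " "

def escape(maze, start, exit_, heading=(0, 1)):
    """Walk from start to exit_, always preferring a right turn,
    then straight, then left, then back; return the cells visited."""
    r, c = start
    path = [start]
    for _ in range(10_000):  # safety bound for the sketch
        if (r, c) == exit_:
            return path
        right = RIGHT[heading]
        back = RIGHT[right]   # two right turns reverse direction
        left = RIGHT[back]    # three right turns make a left
        for h in (right, heading, left, back):
            nr, nc = r + h[0], c + h[1]
            if open_cell(maze, nr, nc):
                r, c, heading = nr, nc, h
                path.append((r, c))
                break
        else:
            break  # completely boxed in
    return path
```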
and the next number is
There is a sequence of numbers so difficult to calculate that mathematicians have only just found the ninth one, and the tenth may never be computed. These are the Dedekind numbers, named after mathematician Richard Dedekind, and they describe the number of ways certain sets of logical operations can be combined. When a set contains only a few elements, calculating the corresponding Dedekind number is relatively easy, but as the number of elements grows, the Dedekind number grows at a double-exponential rate. The ninth Dedekind number is 42 digits long and took a month of computation to find.
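For very small sets you can compute Dedekind numbers by brute force. A sketch, counting monotone Boolean functions directly (one standard characterisation of the sequence); it is feasible only for tiny n, precisely because of the double-exponential growth described above.

```python
from itertools import product

def dedekind(n):
    """D(n): the number of monotone Boolean functions of n variables."""
    inputs = list(product([0, 1], repeat=n))
    count = 0
    for bits in product([0, 1], repeat=len(inputs)):
        f = dict(zip(inputs, bits))
        # Monotone: raising any input from 0 to 1 never lowers the output.
        if all(f[x] <= f[y]
               for x in inputs for y in inputs
               if all(a <= b for a, b in zip(x, y))):
            count += 1
    return count
```

The first few values are 2, 3, 6, 20, 168, …; beyond that the search space explodes.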
You can’t see the forest for the trees (3)
There are numbers too big to fit in the universe. TREE(3) comes from a simple mathematical game in which you grow a forest of trees from different combinations of seeds according to a few simple rules. With one type of seed, the biggest forest allowed contains one tree. With two types of seed, the largest forest has three trees. But with three types of seed, the largest forest contains TREE(3) trees, a number too large for the universe.
language of the universe
There is an eight-dimensional number system called the octonions that physicists use to try to describe the universe mathematically. The best way to understand the octonions is to start by thinking about the square root of -1. Among the real numbers (which include all the counting numbers, fractions, pi and so on), no number squares to give -1, so mathematicians add a new one, called i. Combining i with the real numbers produces the complex numbers, which have a real part and an “imaginary” part, such as 3 + 7i; in other words, they are two-dimensional. Continuing to build bigger systems in the same way takes you through the four-dimensional quaternions to the eight-dimensional octonions. This is more than just mathematical fun and games, though: there is reason to believe the octonions may be the number system needed to understand the laws of nature.
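The doubling process just described is the Cayley-Dickson construction, and it can be sketched in a few lines. One common sign convention is assumed here: pairs multiply as (a, b)(c, d) = (ac - d*b, da + bc*), with * denoting conjugation.

```python
# Numbers are flat lists of length 2^k: reals (1), complex (2),
# quaternions (4), octonions (8). Each step doubles the dimension.
def conj(x):
    if len(x) == 1:
        return x[:]
    h = len(x) // 2
    return conj(x[:h]) + [-v for v in x[h:]]

def add(x, y):
    return [u + v for u, v in zip(x, y)]

def sub(x, y):
    return [u - v for u, v in zip(x, y)]

def mul(x, y):
    """Cayley-Dickson product: (a,b)(c,d) = (ac - d*b, da + bc*)."""
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b = x[:h], x[h:]
    c, d = y[:h], y[h:]
    return sub(mul(a, c), mul(conj(d), b)) + add(mul(d, a), mul(b, conj(c)))

def basis(n, i):
    v = [0] * n
    v[i] = 1
    return v

def associative(n):
    """Do all basis-element products associate in the n-dimensional algebra?"""
    es = [basis(n, i) for i in range(n)]
    return all(mul(mul(a, b), c) == mul(a, mul(b, c))
               for a in es for b in es for c in es)
```

Each doubling costs a property: the complex numbers lose ordering, the quaternions lose commutativity and the octonions lose associativity.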
so many new solutions
Mathematicians went looking for solutions to the three-body problem and found 12,000 of them. The three-body problem is a classic astronomical puzzle: how can three objects form stable orbits around one another? Any such arrangement is governed by Isaac Newton’s laws of motion, but actually finding workable solutions is incredibly difficult. In 2007, mathematicians managed to find 1,223 new solutions to the problem, but that was far surpassed last year when a team discovered more than 12,000 more.
DeepMind’s FunSearch AI can tackle mathematical problems
Arengo/Getty Images
Google DeepMind claims to have made the first ever scientific discovery with an AI chatbot, by building a fact-checker that filters out useless output, leaving behind only reliable solutions to mathematical or computing problems.
DeepMind’s previous achievements, such as using AI to predict the weather or the shape of proteins, relied on models created specifically for the task at hand and trained on accurate, specific data. Large language models (LLMs), such as GPT-4 and Google’s Gemini, are instead trained on vast amounts of disparate data, which yields a wide range of capabilities but also makes them susceptible to “hallucinations”, the term researchers use for erroneous output.
Gemini, released earlier this month, has already shown a tendency to hallucinate, getting even simple facts such as this year’s Oscar winners wrong. Google’s previous AI-powered search engine even made a factual error in its own launch advertising.
One common fix for this phenomenon is to add a layer on top of the AI that validates the accuracy of the output before passing it on to the user. However, given the wide range of topics that chatbots may be asked about, creating a comprehensive safety net is a very difficult task.
Alhussein Fawzi at Google DeepMind and his colleagues created FunSearch, a general-purpose tool based on Google’s PaLM 2 language model with a fact-checking layer they call an “evaluator”. The model is constrained to producing computer code that solves problems in mathematics and computer science, a setting DeepMind says suits the approach because such new ideas and solutions are inherently quick to verify.
The underlying AI may still hallucinate and produce inaccurate or misleading results, but the evaluator filters out erroneous outputs, leaving only reliable and potentially useful concepts.
“We believe that perhaps 90 per cent of what the LLM outputs is not useful,” says Fawzi. “Given a potential solution, it’s very easy to tell whether this is actually a correct solution and to evaluate it, but it’s very hard to actually come up with a solution. And so mathematics and computer science fit particularly well.”
DeepMind claims the model can generate new scientific knowledge and ideas, something no LLM has ever done before.
First, FunSearch is given a problem and a very basic solution in source code as input, then it generates a database of new solutions that are checked by the evaluator for accuracy. The best of the reliable solutions are fed back into the LLM as inputs, with a prompt asking it to improve on the ideas. DeepMind says the system generates millions of potential solutions, eventually converging on an efficient result, sometimes even surpassing the best known solution.
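The loop just described can be sketched as a simple evolutionary search. Everything here is a stand-in: `mutate` plays the role of the LLM proposing variants and `evaluate` the role of the evaluator (returning None for output it rejects); DeepMind’s actual implementation differs.

```python
import random

def funsearch_loop(seed, evaluate, mutate, rounds=500, pool_size=4):
    """Keep a pool of the best-scoring candidates; feed them back as parents."""
    pool = [(evaluate(seed), seed)]
    for _ in range(rounds):
        _, parent = random.choice(pool)
        child = mutate(parent)
        score = evaluate(child)
        if score is None:          # the evaluator discards broken output
            continue
        pool.append((score, child))
        pool.sort(key=lambda p: p[0], reverse=True)
        del pool[pool_size:]       # only the most reliable ideas survive
    return pool[0]
```

With a toy objective, integers scored by their closeness to a target value, the loop quickly homes in on the optimum.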
For mathematical problems, the model writes a computer program that can find a solution, rather than trying to solve the problem directly.
Fawzi and his colleagues challenged FunSearch with the cap set problem, which involves finding the largest pattern of points in which no three points form a straight line. The problem’s computational complexity increases rapidly as the number of points grows. The AI discovered a cap set of 512 points in eight dimensions, larger than any previously known.
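For intuition, the condition is easy to state over Z_3^n, where three distinct points lie on a line exactly when they sum to zero mod 3 in every coordinate. Below is a brute-force validity checker, illustrative only: FunSearch constructed far larger sets than this could ever search for.

```python
from itertools import combinations

def is_cap_set(points):
    """True if no three of the given points (tuples mod 3) lie on a line."""
    return not any(
        all((a + b + c) % 3 == 0 for a, b, c in zip(p, q, r))
        for p, q, r in combinations(points, 3)
    )
```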
When tackling the bin-packing problem, in which the goal is to efficiently place objects of different sizes into containers, FunSearch discovered solutions that outperform commonly used algorithms, a result with immediate applications for transport and logistics companies. DeepMind says FunSearch could lead to improvements in many more maths and computing problems.
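One of the commonly used heuristics in question is first fit: put each item into the first bin that has room, opening a new bin when none does. A minimal sketch (the capacity and item sizes used to exercise it are made up):

```python
def first_fit(items, capacity):
    """Online first-fit bin packing; returns the bins as lists of item sizes."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])  # no existing bin had room
    return bins
```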
Mark Lee, a researcher at the University of Birmingham, UK, says the next breakthrough in AI will come not from scaling LLMs to ever-larger sizes, but from adding layers that ensure accuracy, as DeepMind has done with FunSearch.
“The strength of language models is their ability to imagine things, but the problem is hallucination,” Lee says. “And this study breaks that down, curbs it and confirms the facts. It’s a nice idea.”
Lee says the AI should not be criticised for producing large amounts of inaccurate or useless output, as this resembles how human mathematicians and scientists work: brainstorming ideas, testing them and following up on the best while discarding the worst.