In September, OpenAI announced a new version of ChatGPT designed to reason through tasks involving math, science, and computer programming. Unlike previous versions of the chatbot, this new technology can spend time “thinking” through complex problems before settling on an answer.
Soon after, the company said its new reasoning technology had outperformed the industry’s leading systems on a series of tests that track progress in artificial intelligence.
Now other companies, including Google, Anthropic, and China’s DeepSeek, offer similar technologies.
But can AI actually reason like a human? What does reasoning even mean when it comes to a computer? Are these systems really approaching true intelligence?
Here is a guide.
What does it mean when an AI system reasons?
Reasoning means that the chatbot spends additional time working through a problem.
Reasoning is when the system does extra work after a question is asked, said Dan Klein, a professor of computer science at the University of California, Berkeley, and chief technology officer of Scaled Cognition, an AI startup.
It may break a problem into individual steps, or it may try to solve it through trial and error.
The original ChatGPT answered questions immediately. The new reasoning systems can work through a problem for several seconds, or even minutes, before answering.
Can you be more specific?
In some cases, a reasoning system will refine its approach to a question, repeatedly trying to improve the method it has chosen. Other times, it may try several different ways of approaching a problem before settling on one of them. Or it may go back and check some work it did a few seconds earlier, just to see whether it was correct.
Basically, the system tries everything it can to answer your question.
This is a bit like a grade school student who is struggling to find a way to solve a math problem and scribbles several different options on a sheet of paper.
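That loop of proposing an answer, checking it, and backtracking when the check fails can be sketched as a toy search. This is a hypothetical illustration only, not how any real chatbot is implemented; the `check` function stands in for whatever verification a real system performs on its own work.

```python
import itertools

def check(candidate: int, target: int) -> bool:
    """Stand-in for the system checking its own work."""
    return candidate * candidate == target

def reason(target: int, max_steps: int = 100):
    """Toy trial-and-error loop: propose a square root, check it,
    and move on to the next guess when the check fails."""
    for guess in itertools.count(1):
        if check(guess, target):
            return guess   # the check passed, so settle on this answer
        if guess >= max_steps:
            return None    # give up, like a system running out of time

print(reason(49))  # finds 7 after trying and rejecting 1 through 6
```

The point of the sketch is only the shape of the process: attempts are cheap, and a verification step decides when to stop.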
What kinds of questions require an AI system to reason?
It can potentially reason about anything. But reasoning is most effective when the questions involve math, science, and computer programming.
How is a reasoning chatbot different from earlier chatbots?
You could ask earlier chatbots to show how they had reached a particular answer or to check their own work. Because the original ChatGPT had learned from text on the internet, where people showed how they had gotten to an answer or checked their own work, it could perform this kind of self-reflection, too.
But a reasoning system goes further. It can do these kinds of things without being asked. And it can do them in more extensive and complex ways.
Companies call it a reasoning system because it feels as if it operates more like a person thinking through a hard problem.
Why is AI reasoning important now?
Companies like OpenAI believe this is the best way to improve their chatbots.
For years, these companies relied on a simple concept: the more internet data they pumped into their chatbots, the better those systems performed.
But in 2024, they used up almost all of the text on the internet.
That meant they needed a new way of improving their chatbots. So they started building reasoning systems.
How do you build a reasoning system?
Last year, companies like OpenAI began to lean heavily on a technique called reinforcement learning.
Through this process, which can extend over months, an AI system learns behavior through extensive trial and error. By working through thousands of math problems, for instance, it can learn which methods lead to the right answer and which do not.
Researchers have designed complex feedback mechanisms that show the system when it has done something right and when it has done something wrong.
“It is a little like training a dog,” said Jerry Tworek, a researcher at OpenAI. “If the system does well, you give it a cookie. If it does not do well, you say, ‘Bad dog.’”
(The New York Times sued OpenAI and its partner Microsoft in December for copyright infringement of news content related to AI systems.)
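The cookie-or-bad-dog feedback loop described above can be illustrated with a minimal sketch. This is a toy, assumption-laden example, not OpenAI’s actual training code: the two method names, the reward values, and the update rule are all invented for illustration.

```python
import random

random.seed(0)

# Two hypothetical problem-solving methods, equally preferred at first.
weights = {"guess": 1.0, "add": 1.0}

def solve(method: str, a: int, b: int) -> int:
    """One method actually adds; the other just guesses a number."""
    return a + b if method == "add" else random.randint(0, 18)

for _ in range(500):  # many trials, standing in for months of training
    a, b = random.randint(0, 9), random.randint(0, 9)
    method = random.choices(list(weights), weights=list(weights.values()))[0]
    # Feedback: a "cookie" for a right answer, a "bad dog" otherwise.
    reward = 1.0 if solve(method, a, b) == a + b else -0.1
    weights[method] = max(0.1, weights[method] + reward)

print(max(weights, key=weights.get))  # the reliable method wins out
```

Because the rewarded method is chosen more often as its weight grows, the system drifts toward the behavior that reliably produces correct answers, which is the core idea behind reinforcement learning.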
Does reinforcement learning work?
It works remarkably well in certain areas, such as math, science, and computer programming. These are areas where companies can clearly define good behavior and bad. Math problems have definitive answers.
Reinforcement learning does not work as well in areas such as creative writing, philosophy, and ethics, where the difference between good and bad is harder to pin down. Still, researchers say the process can generally improve an AI system’s performance, even when it answers questions outside math and science.
“It gradually learns which patterns of reasoning lead it in the right direction and which do not,” said Jared Kaplan, the chief science officer at Anthropic.
Are reinforcement learning and reasoning systems the same thing?
No. Reinforcement learning is the method that companies use to build reasoning systems. It is the training stage that ultimately allows a chatbot to reason.
Do these reasoning systems still make mistakes?
Absolutely. Everything a chatbot does is based on probabilities. It chooses the path that most resembles the data it learned from, whether that data came from the internet or was generated through reinforcement learning. Sometimes it chooses an option that is wrong or makes no sense.
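A toy next-word sampler shows what “based on probabilities” means in practice. The distribution below is invented for illustration and is far simpler than anything a real model learns.

```python
import random

random.seed(1)

# Hypothetical learned distribution over continuations of "2 + 2 = ".
next_word = {"4": 0.90, "5": 0.06, "fish": 0.04}

samples = random.choices(
    list(next_word), weights=list(next_word.values()), k=1000
)
# The likeliest continuation dominates, but wrong or nonsensical
# options are still chosen a small fraction of the time.
print(samples.count("4"), samples.count("5"), samples.count("fish"))
```

Even with the right answer overwhelmingly favored, the sampling step occasionally emits a wrong or absurd option, which is why reasoning systems can still make mistakes.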
Is this a path to a machine that matches human intelligence?
AI experts are split on this question. These methods are still relatively new, and researchers are still trying to understand their limitations. In the AI field, new methods often progress very quickly at first.
Source: www.nytimes.com