If you ask ChatGPT whether it thinks like a human, the chatbot will say it doesn’t. “I can process and understand language to some extent,” ChatGPT writes. But “my understanding is based on patterns in the data,” it adds, not on human understanding.
Still, when you talk to this artificial intelligence (AI) system, it can feel like talking to a human. And a pretty smart, talented human at that. ChatGPT can answer math and history questions on demand and in a variety of languages. It can churn out stories and computer code. Other, similar “generative” AI models can create artwork and videos from scratch.
“These things seem really smart,” said Melanie Mitchell. She is a computer scientist at the Santa Fe Institute in New Mexico. She spoke at the annual meeting of the American Association for the Advancement of Science. It was held in February in Denver, Colorado.
Many people are concerned about the increasing “smartness” of AI. They worry that generative AI will take people’s jobs or take over the world. But Mitchell and other experts believe these concerns are overblown. At least for now.
These experts say the problem is exactly what ChatGPT says it is. Today’s best AIs still don’t truly understand what they’re saying or doing the way humans do. And that places severe limits on their capabilities.
Concerns about AI are not new
For decades, people have worried that machines are becoming too smart. This fear dates back to at least 1997. That’s when the computer Deep Blue defeated world chess champion Garry Kasparov.
But back then, it was still easy to show that AI failed miserably at many things humans do well. Sure, computers could play a crafty game of chess. But could they diagnose disease? Or transcribe speech? Not very well. In many important areas, humans still had the upper hand.
A computer similar to this one, Deep Blue, defeated world chess champion Garry Kasparov in 1997. Christina Hsu/Wikimedia Commons (CC BY 2.0)
About 10 years ago, that started to change.
Computer brains, known as neural networks, were greatly enhanced by a new technique called deep learning. This is a powerful type of machine learning. In machine learning, computers pick up skills through practice and by studying examples.
Thanks to deep learning, computers could suddenly rival humans at many tasks. Machines could identify images, read signs and touch up photos. They could also reliably convert audio to text.
However, there were limits to their abilities. For one, deep learning neural networks could be easily fooled. If a stop sign had a few stickers on it, for example, an AI might think the sign said “speed limit 80.” These “smart” computers also required extensive training. To learn a new skill, they needed to see tons of examples of what to do.
So deep learning produced AI models that were good at very specific jobs. But these systems could not adapt their expertise to new tasks. An AI that translates English to Spanish, for example, couldn’t help with your French homework.
But now things are changing again.
“We are in a new era of AI,” Mitchell says. “We are beyond the deep learning revolution of the 2010s, and we are now in the era of generative AI of the 2020s.”
The AI behind tools like ChatGPT can do a variety of tasks, from summarizing text to spell-checking essays. So you might want to use these tools to help with homework and other tasks. However, proceed with caution. AI systems like ChatGPT have also been known to fabricate information because they don’t really understand what they’re doing the way humans do. Klong Kaeo/Moment/Getty Images
The gen-AI era
Generative AI systems can generate text, images and other content on demand. This type of AI can create many things long thought to require human creativity. That includes everything from brainstorming ideas to writing poetry.
Many of these abilities come from large language models (LLMs, for short). ChatGPT is one example of a technology based on an LLM. Such language models are called “large” because they are trained on vast amounts of data. Basically, they study everything on the internet. That includes scanned copies of countless printed books.
“Large” can also refer to the number of different kinds of things LLMs can “learn” as they read. These models do more than just learn words. They also learn phrases, symbols and mathematical equations.
Here is a brief summary of what large language models are and what they can do.
By learning the patterns of how the pieces of a language fit together, an LLM can predict which words should come next. This helps the model write sentences and answer questions. Essentially, an LLM calculates the probability that one word follows another within a given context.
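To make that idea concrete, here is a minimal sketch of next-word prediction using simple word-pair counts. It is only an illustration: real LLMs use enormous neural networks trained on far more text, but the underlying goal of estimating how likely each word is to come next is the same. The tiny training sentence and function name below are made up for this example.

```python
# Toy sketch: estimate how likely one word is to follow another by
# counting word pairs in a small training text. (Real LLMs use huge
# neural networks, not simple counts, but the goal is similar.)
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept on the mat"
words = training_text.split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def next_word_probabilities(word):
    """Return the estimated probability of each word that might follow `word`."""
    counts = next_word_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))  # {'cat': 0.5, 'mat': 0.5}
```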
This lets LLMs do things like write in the style of a particular author or solve riddles.
Some researchers suggest that when LLMs accomplish these feats, they know what they are doing. These researchers believe that LLMs can reason like humans, or even be conscious in some sense.
But Mitchell and others argue that LLMs don’t really understand the world (yet). At least not like humans.
Narrow-minded AI
Mitchell and colleague Martha Lewis recently identified one major limitation of LLMs. Lewis studies language and concepts at the University of Bristol, UK. The two shared their work on arXiv.org. (Research published there typically has not yet been reviewed by other scientists.)
Their new paper shows that LLMs still fall short of humans’ ability to adapt skills to new situations. Consider this letter-string problem. Start with one string: ABCD. Then comes a second string: ABCE.
Most people can see how the two strings differ. The last character of the first string has been replaced with the next letter of the alphabet. So when shown another string, such as IJKL, a person can infer that its partner should be IJKM.
Most LLMs can solve this problem, too. That’s not surprising. After all, these models are well trained on the English alphabet.
But suppose you pose the problem with a different alphabet. Perhaps the letters are shuffled into a new order. Or symbols stand in for the letters. Humans are still good at solving such string problems. But LLMs usually fail. They cannot apply a concept learned with one alphabet to another. All of the GPT models Mitchell and Lewis tested struggled with this kind of problem.
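As a rough illustration of why humans find the shuffled-alphabet version easy, here is a small sketch of the letter-string puzzle. The shuffled alphabet and function name are invented for this example and are not taken from Mitchell and Lewis’s actual tests. The key point: the rule “replace the last letter with the next one” refers to positions in an alphabet, not to the letters A, B and C themselves, so anyone who grasps that abstract concept can apply it to any alphabet.

```python
# Toy sketch of the letter-string analogy puzzle (illustrative only).
# The rule "replace the last letter with its successor" is defined by
# position in an alphabet, so it works for any alphabet you hand it.

def apply_rule(string, alphabet):
    """Replace the last character of `string` with the next character in `alphabet`."""
    last = string[-1]
    successor = alphabet[alphabet.index(last) + 1]
    return string[:-1] + successor

NORMAL = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
SHUFFLED = "XPQAMZRBNCODYESWFGHTUIVJKL"  # a made-up, scrambled alphabet

print(apply_rule("IJKL", NORMAL))    # IJKM
print(apply_rule("XPQA", SHUFFLED))  # XPQM ('M' follows 'A' in the shuffled alphabet)
```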
Other, similar tasks have also shown that LLMs perform poorly in conditions they weren’t trained on. Because of this, Mitchell does not believe they have what humans would call an “understanding” of the world.
The importance of understanding
Being able to take the right action in new situations “is at the core of what understanding really means,” Mitchell said at the AAAS meeting.
Human understanding, she says, is based on “concepts.” These are mental models of categories, situations, events and so on. Concepts allow people to work out cause and effect. They also help predict the likely outcomes of various actions. And people can do this in situations they’ve never seen before.
AI models may one day be able to understand the world in a truly intelligent way. However, machine understanding may not resemble human understanding at all. Boris SV/Moment/Getty Images
“What’s really amazing about humans is that we can abstract our concepts into new situations,” Mitchell says.
She does not rule out that AI may one day reach a level of understanding similar to humans’. But machine understanding may prove to be different from human understanding, she added. No one knows what technology will enable that understanding. But if it is anything like human understanding, it probably won’t be based on LLMs.
After all, LLMs learn in the opposite way to humans. These models start with language learning. They then use that knowledge to try to understand abstract concepts. Human babies, on the other hand, first learn a concept and then the language to explain it.
So talking to ChatGPT can feel like talking to a friend, teammate, or tutor. But behind the scenes, the computerized number crunching is still quite different from the human mind.
Source: www.snexplores.org