When an artificial intelligence model says it’s unsure of an answer, people are more likely to be wary of its output and ultimately seek out accurate information elsewhere. But no AI model is currently capable of judging its own accuracy, which has led some researchers to question whether letting an AI express doubts is a good idea.
The large language models (LLMs) behind chatbots like ChatGPT produce highly believable output, but that output has been shown time and again to be fabricated…
Source: www.newscientist.com