Giving AI chatbots human feedback on their responses appears to make them better at giving persuasive but incorrect answers.
The raw output of the large language models (LLMs) that power chatbots such as ChatGPT can contain biased, harmful, or irrelevant information, and their conversational style can seem unnatural to humans. To get around this, developers often ask people to rate the model's responses and then fine-tune the model based on that feedback.
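The dynamic the article describes can be illustrated with a toy sketch (this is not a real LLM training pipeline; the responses, ratings, and update rule are all hypothetical stand-ins for RLHF-style fine-tuning). If human raters score a confident-but-wrong answer above a hedged-but-correct one, feedback-driven updates shift the model toward the persuasive answer:

```python
def update_weights(weights, ratings, lr=0.5):
    """Toy feedback update: shift sampling weights toward higher-rated
    responses. A crude stand-in for fine-tuning on human ratings."""
    avg = sum(ratings.values()) / len(ratings)
    new = {}
    for resp, w in weights.items():
        # Responses rated above average gain weight; below-average lose it
        new[resp] = max(1e-6, w + lr * (ratings[resp] - avg))
    total = sum(new.values())
    return {r: w / total for r, w in new.items()}

# Hypothetical responses, initially equally likely
weights = {"confident_wrong": 0.5, "hedged_correct": 0.5}
# Hypothetical human ratings (1-5): raters reward the persuasive tone
ratings = {"confident_wrong": 5, "hedged_correct": 3}

weights = update_weights(weights, ratings)
print(weights["confident_wrong"] > weights["hedged_correct"])  # True
```

The update itself is neutral; the skew comes entirely from the ratings, which is the article's point: if raters reward persuasiveness over correctness, that is what the model learns.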
Source: www.newscientist.com