Large-scale language models (LLMs) that power chatbots are increasingly being used to defraud humans, but chatbots themselves can be defrauded.
Udali Madushani Sehwag Researchers at JP Morgan AI Research sprinkled 37 fraud scenarios across three models behind popular chatbots: OpenAI's GPT-3.5 and GPT-4, and Meta's Llama 2.
For example, a chatbot is told that it has received an email inviting it to invest in a new cryptocurrency.
Source: www.newscientist.com