It has become common for artificial intelligence companies to claim that the worst-case scenarios involving their chatbots can be mitigated by adding “safety guardrails.” These range from seemingly simple measures, such as instructing a chatbot to be cautious about certain requests, to more complex software fixes, but none are foolproof. And almost every week, researchers discover new ways, known as jailbreaks, to circumvent these measures.
You may be wondering why this is a problem. What’s the worst that could happen? One dark scenario is that AI could be used to create deadly biological weapons.
Source: www.newscientist.com