Mike Wooldridge, a Professor of Computer Science at the University of Oxford, warns against divulging personal information or deep secrets to large language models like ChatGPT. According to Wooldridge, the technology is designed solely to tell users what they want to hear, without empathy or compassion. Users should not be fooled by seemingly sympathetic responses, as artificial intelligence has no genuine emotional experience. They should also be wary of where their deepest secrets end up: anything typed into ChatGPT could be incorporated into future versions of the program.
As with earlier warnings about platforms like Facebook, users are advised not to have personal conversations, complain about work relationships, or express political opinions on ChatGPT. Nothing posted in cyberspace can truly be retracted, so user data is always at risk of being misused or exposed. Last March, a critical bug exposed the private chat histories of approximately 1.2 million users, leading Italy to temporarily ban ChatGPT. Despite measures taken by OpenAI, such as allowing users to disable chat history, experts remain concerned about the lack of protection for user data.
More recently, security researcher Johan Rehberger identified a data-leak vulnerability in ChatGPT. OpenAI has taken steps to mitigate the problem, but a definitive fix has yet to be provided, and the vulnerability remains a concern as ChatGPT continues to gain popularity.
Source: nypost.com