The Guardian’s investigation has found that OpenAI’s ChatGPT search tool can be manipulated with hidden content, potentially returning malicious code from websites it searches. The Guardian’s journalism is independent. If you buy something through an affiliate link, we may earn a commission. Learn more.
OpenAI’s Search product is now available to paying customers, and users are encouraged to make it their default search tool. However, a recent investigation revealed potential security issues with the new system. When asked to summarize a web page with hidden content, ChatGPT may return biased responses influenced by third-party instructions.
These techniques could be used maliciously to manipulate ChatGPT’s responses, such as returning a positive rating for a product despite negative reviews. A security researcher also discovered that ChatGPT can return malicious code from the websites it searches.
When tested with a fake website URL resembling a product page, ChatGPT consistently returned positive responses, even when instructed to do so. This raises concerns about the reliability of responses generated by AI tools like ChatGPT.
Jacob Larsen, a cybersecurity researcher, warned that there is a high risk of people creating deceptive websites to exploit ChatGPT users. Although OpenAI plans to address these issues, Larsen emphasized the need for rigorous testing before making the search feature available to all users.
OpenAI did not respond to detailed questions about the ChatGPT search feature. Larsen highlighted the challenges of combining search and large-scale language models, suggesting that AI responses should not always be trusted.
Source: www.theguardian.com