The UK’s cybersecurity agency has warned that artificial intelligence will make it harder to distinguish genuine emails from fraudulent ones, particularly messages asking users to reset their passwords.
The National Cyber Security Centre (NCSC) said that as AI tools advance, phishing messages will become harder to spot, increasing the risk that users will be deceived into handing over passwords and personal information.
The NCSC pointed to the growing availability of generative AI technology, which can produce convincing text, audio, and images, as a key driver of these concerns.
The NCSC’s assessment predicts a significant rise in cyber-attacks as AI proliferates and grows more sophisticated, with ransomware attacks expected to escalate as well.
The NCSC also noted that cybercriminals could harness generative AI tools to create more convincing bait documents for phishing attempts, and warned that state actors may exploit AI to conduct advanced cyber operations.
Despite the risks AI poses as an offensive tool, the NCSC acknowledged its defensive potential and its ability to strengthen security measures.
On the government front, new guidelines have been issued to improve businesses’ preparedness for ransomware attacks, with the aim of elevating information security to the same level as financial and legal controls.
Cybersecurity experts, however, argue that stronger action is needed against the ransomware threat, urging the UK to reconsider its approach and introduce stricter regulations around ransom payments.
Source: www.theguardian.com