Microsoft announced on Wednesday that adversaries of the United States, primarily Iran and North Korea, and to a lesser extent Russia and China, are starting to take advantage of generative artificial intelligence to launch or coordinate offensive cyber operations.
Microsoft said it worked with its business partner OpenAI to detect and disrupt numerous threats that used or attempted to use the AI technology the two companies developed.
In a blog post, the company said the techniques it observed were still early-stage and neither particularly novel nor unique, but that it was important to expose them publicly as U.S. rivals leverage large language models to expand their ability to breach networks and conduct influence operations.
Cybersecurity companies have used machine learning for years to detect anomalous behavior in networks, but the release of large language models, led by OpenAI's ChatGPT, has intensified that cat-and-mouse game, with criminals and offensive hackers adopting the technology as well.
Microsoft has invested heavily in OpenAI, and its announcement on Wednesday noted that generative AI is expected to power more malicious social engineering, including more sophisticated deepfakes and voice clones. That threat comes at a time when disinformation is on the rise and more than 50 countries are holding elections within a year, compounding the risks to democracy.
Microsoft gave examples of how specific adversaries had used large language models, and said it had disabled the AI accounts and assets of the groups it identified.
The North Korean cyber-espionage group known as Kimsuky used the models to research foreign think tanks and to generate content for spear-phishing campaigns.
The Iranian Revolutionary Guard Corps used large language models to assist with social engineering, to troubleshoot software errors, and to research ways of evading detection in compromised networks, including generating phishing emails more quickly.
The Russian military intelligence unit known as Fancy Bear used the models to research satellite and radar technologies potentially connected to the war in Ukraine.
Aquatic Panda, a Chinese cyber-espionage group that targets a broad range of industries, higher education, and governments from France to Malaysia, interacted with the models in ways suggesting only limited exploration of how large language models can augment its technical operations. Another Chinese group, Maverick Panda, queried the models to gather information on high-profile individuals and regions of interest.
In a separate blog post, OpenAI said its current GPT-4 model chatbot offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI-powered tools, though cybersecurity researchers expect that to change.
Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, has told Congress about the growing threats posed by China and by artificial intelligence, stressing that AI must be built with security in mind.
Amid concerns that large language models were released irresponsibly, Microsoft and other companies face criticism for not addressing the models' vulnerabilities in a focused way, disappointing some cybersecurity experts who argue the companies should concentrate on building more secure underlying models rather than countering misuse after the fact.
Edward Amoroso, a New York University professor and former AT&T chief security officer, said AI and large language models will eventually become among the most powerful weapons in cyber warfare, ultimately posing a threat to every nation-state.
Source: www.theguardian.com