Feb 15, 2024 (WASHINGTON) In a report published on Wednesday, Microsoft disclosed that state-sponsored hackers from Russia, China, Iran, and North Korea have been using tools developed by OpenAI, the Microsoft-backed artificial intelligence firm, to hone their hacking techniques and deceive their targets. Microsoft said it had been tracking hacking groups affiliated with Russian military intelligence, Iran's Revolutionary Guard, and the Chinese and North Korean governments as they attempted to sharpen their hacking campaigns using large language models. These programs, widely described as artificial intelligence (AI), are trained on vast amounts of text to generate human-sounding responses.
Alongside its findings, Microsoft announced a blanket ban on state-backed hacking groups' access to its AI products. “Regardless of whether there is a violation of the law or our terms of service, we do not want these actors, whom we have identified and tracked as various types of threat actors, to have access to this technology,” said Tom Burt, Microsoft Vice President for Customer Security, in an interview with Reuters.
Russian, North Korean, and Iranian diplomatic officials did not immediately respond to requests for comment on the allegations.
Liu Pengyu, spokesperson for China's embassy in the United States, rejected what he called “baseless smears and accusations against China” and emphasized the need for the “safe, reliable, and controllable” deployment of AI technology to benefit all of humanity.
The disclosure that state-sponsored hackers have been caught using AI tools to enhance their spying capabilities is likely to deepen concerns about the rapid proliferation of the technology and its potential for misuse. Senior Western cybersecurity officials have warned since last year that rogue actors were abusing such tools, although specific details have been scarce until now.
Bob Rotsted, who leads cybersecurity threat intelligence at OpenAI, described the report as one of the first instances, if not the first, in which an AI company has publicly discussed how cybersecurity threat actors use AI technologies.
Both OpenAI and Microsoft characterized the hackers' use of their AI tools as early-stage and incremental. Burt said they had not observed any major breakthroughs by cyber spies using the technology. “We really saw them just using this technology like any other user,” he said.
According to the report, the hacking groups employed the large language models for different purposes. Hackers affiliated with the Russian military intelligence agency, commonly known as the GRU, used the models to research “various satellite and radar technologies that may pertain to conventional military operations in Ukraine,” Microsoft said. North Korean hackers used the models to generate content that would likely be used in spear-phishing campaigns against regional experts. Iranian hackers likewise relied on the models to compose more convincing emails, in one instance drafting a message attempting to lure “prominent feminists” to a website rigged with malware. Chinese state-backed hackers were found to be experimenting with large language models to ask questions about rival intelligence agencies, cybersecurity issues, and “notable individuals.”
Burt and Rotsted declined to specify the volume of activity involved or the number of accounts suspended. Burt defended the zero-tolerance ban on hacking groups, which does not extend to Microsoft offerings such as its Bing search engine, citing the novelty of AI and the concerns surrounding its deployment. “This technology is both new and incredibly powerful,” he said.