When ChatGPT burst into the spotlight in 2022, people hailed it for its sophistication and versatility. However, artificial intelligence experts and government officials soon raised concerns about potential exploitation by malicious actors.
In a report Wednesday, Feb. 14, Microsoft and OpenAI found that hacker groups from China, Russia, North Korea and Iran are leveraging AI services to enhance their cyberattack capabilities.
OpenAI said it terminated every known account associated with five nation-state hacker groups. Microsoft, a significant financial backer of OpenAI, said these countries should not have access to such crucial technology.
While Microsoft and OpenAI haven’t discovered any significant attacks carried out with tools like ChatGPT, they characterized the hackers’ usage as “early-stage” research.
“These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks,” OpenAI said in a report.
Currently, Microsoft tracks more than 300 hacking groups. Once identified, the company disrupts their activities, limits their access and ultimately terminates their services.
“We’re seeing this evolve across a number of ways — again, increasing sophistication by the actor,” Tom Burt, vice president of customer security at Microsoft, said. “The use in some cases of AI-generated voicemail as the means of conducting the fraud as well as email, and the use of cloud-based infrastructure to engage in these crimes.”
Microsoft acknowledged it can’t detect every malicious actor, but the company said it is committed to taking every possible measure to disrupt these threats.
The tech giant said it works with more than 15,000 partners to boost its security ecosystem, hoping to prevent the powerful tool from becoming a weapon against the U.S.