China’s AI-generated content targets US, India and South Korea elections


Full story

China and North Korea are intensifying their cyber operations, leveraging artificial intelligence to influence global opinions and elections in the U.S., South Korea, India, and Taiwan, according to Microsoft’s latest Threat Intelligence report. The report details China’s use of AI-generated content, such as videos, memes and fake news anchors, to sway public opinion and influence the outcomes of key elections.

https://twitter.com/MsftSecIntel/status/1776296340701462936

Clint Watts of Microsoft’s Threat Analysis Center highlights China’s use of fake social media accounts to probe divisive U.S. domestic issues, gathering intelligence on what divides American voters ahead of the election.

Last year, the group Storm-1376 falsely claimed the Maui wildfires were set by the U.S. government as a test of a “weather weapon.” The challenge lies in combating misinformation produced by realistic generative AI, as many people are reluctant to accept such content as false, especially when it aligns with their existing beliefs and values.

In Taiwan, a suspected AI-generated audio clip falsely portrayed Foxconn founder Terry Gou endorsing another presidential candidate; YouTube quickly removed it. Meanwhile, North Korea has focused on cryptocurrency thefts and supply chain attacks, funding its military ambitions and enhancing its intelligence collection with AI.

The U.N. estimates that since 2017, North Korean cyber actors have stolen over $3 billion in cryptocurrency. Microsoft acknowledges that AI-generated content currently has minimal impact on elections, but warns that China’s continued refinement of such content could make it more effective as the technology advances.
