OpenAI recently took action against a group of ChatGPT accounts tied to an Iranian influence campaign focused on the U.S. presidential election. The company revealed that the operation used ChatGPT to churn out articles and social media posts, though the content did not appear to gain significant traction with its intended audience.
State-affiliated actors misusing ChatGPT is not a new phenomenon; OpenAI has previously thwarted similar campaigns aimed at manipulating public opinion. These tactics echo earlier attempts by state actors to sway election outcomes through social media platforms like Facebook and Twitter. With the emergence of generative AI, such groups now have a faster way to produce and disseminate misinformation across social channels.
OpenAI’s investigation into these accounts was aided by a Microsoft Threat Intelligence report, which identified the group, Storm-2035, as part of a broader Iranian effort to influence U.S. elections that dates back to 2020. The network, masquerading as news sites, engaged U.S. voter groups on divisive topics such as the presidential candidates, LGBTQ rights, and the Israel-Hamas conflict, aiming to sow discord rather than promote specific policies.
Storm-2035 operated through several website fronts posing as both progressive and conservative news outlets, lending credibility to its fabricated narratives. The group used ChatGPT to craft deceptive articles and to rewrite political comments for posting on social media. Despite these efforts, OpenAI observed minimal engagement with the content, a sign of the campaign's limited success. As the election draws near and online debates intensify, similar crackdowns on AI-fueled propaganda are likely to continue.
