Disinformation is spreading rapidly, fueled by easily accessible AI tools. A recent survey found that 85% of people are concerned about online disinformation, and the World Economic Forum has identified AI-generated disinformation as a major global risk.
Combating Disinformation with AI
Pamela San Martín, co-chair of Meta’s Oversight Board, argues that AI can both create and combat disinformation. While acknowledging AI’s current imperfections, she is optimistic that the technology will improve over time. San Martín points to the central role automation already plays in moderating social media content and sees real potential for AI moderation models to counter disinformation at scale.
Challenges and Solutions
Despite advances in AI moderation, the falling cost of producing disinformation with AI poses a significant challenge. Imran Ahmed, CEO of the Center for Countering Digital Hate, notes that social media platforms can inadvertently amplify disinformation by rewarding users who share misleading content. San Martín cites the Oversight Board’s work on misleading AI-generated content and nonconsensual deepfake imagery as evidence that continued vigilance is needed.
Regulation as a Solution
Some experts, such as UC Berkeley professor Brandie Nonnecke, argue that self-regulation alone is unlikely to be enough. They advocate regulatory measures, such as product liability torts and AI content watermarking, to hold platforms accountable for disseminating harmful content. Although such measures face legal and practical challenges, policymakers are exploring regulatory options to curb the spread of AI-generated disinformation and safeguard democracies.
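To make the watermarking idea concrete: one simple form of content provenance is a cryptographic tag attached by the generating tool, which a platform can later verify to confirm origin and integrity. The sketch below is a hypothetical illustration only; the key, function names, and sample content are invented, and real schemes (such as C2PA provenance manifests or statistical text watermarks) are considerably more involved.

```python
import hmac
import hashlib

# Hypothetical signing key held by the AI tool's provider (illustrative only).
SECRET_KEY = b"generator-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance tag for a piece of AI-generated content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check whether a provenance tag matches the content exactly."""
    expected = sign_content(content)
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, tag)

article = b"An AI-generated news item."
tag = sign_content(article)
print(verify_content(article, tag))          # True: content is intact
print(verify_content(article + b"!", tag))   # False: content was altered
```

Even this toy version shows the limits regulators grapple with: verification only works if the generating tool cooperates, and a tag can simply be stripped before the content is shared.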
Looking Ahead
Despite these obstacles, optimism is growing that regulation can mitigate the impact of AI-generated disinformation. With initiatives like content watermarking and content moderation laws gaining traction, there is hope for a more regulated digital landscape that protects individuals and societies from disinformation's harms.
