The Alarming Concerns of Free Speech Experts over a New Anti-Revenge Porn Legislation

Privacy and digital rights advocates are sounding the alarm over a federal crackdown on revenge porn and AI-generated deepfakes. The newly signed Take It Down Act has made it illegal to publish nonconsensual explicit images, whether real or AI-generated, and gives platforms just 48 hours to comply with a victim’s takedown request or face liability.

Experts have praised the law as a long-overdue win for victims, but they have also raised concerns about its vague language, its lax standards for verifying claims, and its tight compliance window. India McKinney, director of federal affairs at the Electronic Frontier Foundation, warned that content moderation at scale often ends up censoring necessary speech.

Senator Marsha Blackburn, a co-sponsor of the Take It Down Act, has also sponsored the Kids Online Safety Act, which aims to protect children from harmful online content. However, concerns have been raised about the potential for overreach and censorship of legitimate content under these laws.


Major platforms such as Snapchat and Meta have expressed support for the law but have not explained how they will verify takedown requests. Decentralized platforms like Mastodon, which rely on independently operated servers, may struggle to comply with the 48-hour takedown rule.

McKinney predicts that platforms will begin proactive monitoring of content to avoid potential takedowns in the future. Companies like Hive are already using AI to detect harmful content, such as deepfakes and child sexual abuse material. Reddit, for example, uses internal tools and partnerships with nonprofits to address and remove nonconsensual intimate imagery.

McKinney warns of potential future monitoring of encrypted messages

McKinney warns that the monitoring required by the Take It Down Act could potentially extend into encrypted messages in the future. While the law currently focuses on public or semi-public dissemination of content, it also mandates platforms to take action against the distribution of nonconsensual intimate images. This could lead to platforms implementing proactive scanning of all content, even in encrypted spaces. Notably, the law does not provide exemptions for end-to-end encrypted messaging services like WhatsApp, Signal, or iMessage.

Tech companies remain silent on plans for encrypted messaging

Despite the Take It Down Act's implications for encrypted messaging services, major tech companies such as Meta, Signal, and Apple have not disclosed their plans and did not respond to inquiries about their encrypted messaging products. Their silence raises questions about how they intend to reconcile privacy and security with potential legal requirements.


Broader implications for free speech and content moderation

The broader implications of the Take It Down Act extend to free speech and content moderation. President Trump's public endorsement of the legislation, and his comments about using it for his own benefit, raise alarms that such laws could be abused to suppress unfavorable speech. Recent actions against Harvard University and other institutions underscore how contentious debates over content moderation and censorship have become. Growing calls for aggressive content moderation from both political parties worry those who have worked in the field, as they signal a shift toward greater regulation and control of online discourse.
