Bluesky released its inaugural transparency report for 2025, detailing a massive scale-up in moderation enforcement and legal compliance as user reports and platform activity reached record levels. The document outlines how the decentralized social network is managing growth while combating spam, harassment, and influence operations.
Analysis of User Reports and Moderation Categories
The report highlights a complex landscape of user-generated reports. A broad “other” category accounted for 22.14% of total reports, covering issues that did not specifically fall under violence, child safety, or self-harm. Within the “misleading” category, which saw 4.36 million reports, spam was the primary driver with 2.49 million instances.
Harassment and Antisocial Behavior
Harassment remains a significant focus for the platform, totaling 1.99 million reports. Hate speech was the largest named subcategory at 55,400 reports, followed by targeted harassment (42,520), trolling (29,500), and doxxing (3,170). Bluesky noted that a majority of harassment reports involved “antisocial behavior”—rude remarks that, while unpleasant, do not always violate specific policies such as those against hate speech.
Sexual Content and Metadata Accuracy
Regarding sexual content, the platform received 1.52 million reports, though most concerned mislabeling rather than prohibited material. Bluesky reported that adult content often lacked the necessary metadata tags required for users to filter their own experience. More severe violations included nonconsensual intimate imagery (7,520), abuse content (6,120), and deepfakes (over 2,000).
Automated Systems and Violence Prevention
Reports concerning violence totaled 24,670, categorized into threats or incitement (10,170), glorification of violence (6,630), and extremist content (3,230). Beyond manual reporting, Bluesky’s automated systems proactively flagged 2.54 million potential violations.
Bluesky reported efficiency gains after implementing a system that hides toxic replies behind an extra click; the change produced a 79% drop in daily reports of antisocial behavior. Overall, reports per 1,000 monthly active users declined by 50.9% between January and December.
Takedowns, Legal Requests, and State Actors
The platform took decisive action against foreign interference, removing 3,619 accounts linked to suspected influence operations, primarily originating from Russia. This aligns with Bluesky’s stated goal of becoming more aggressive regarding enforcement.
Growth in Enforcement Actions
In 2025, Bluesky removed 2.44 million items, including both individual pieces of content and entire accounts. This represents a massive increase over the previous year, when only 66,308 accounts were removed. Of those prior takedowns, automated tools were responsible for 35,842, while manual moderators removed 6,334 records.
Suspensions vs. Labeling Strategy
Bluesky issued 3,192 temporary suspensions and 14,659 permanent removals for ban evasion in 2025. Permanent bans were concentrated largely on inauthentic behavior, spam networks, and impersonation. Despite these figures, the data suggests a strategic preference for labeling over account termination: content labels rose 200% year-over-year to 16.49 million, while account takedowns grew by 104% (from 1.02 million to 2.08 million). Most labels were applied to nudity and suggestive adult content.
