r/RedditSafety Oct 16 '24

Reddit Transparency Report: Jan-Jun 2024

Hello, redditors!

Today we published our Transparency Report for the first half of 2024, which shares data and insights about our content moderation and legal requests from January through June 2024.

Reddit’s biannual Transparency Reports provide insights and metrics about content moderation on Reddit, including content that was removed as a result of automated tooling and accounts that were suspended. It also includes legal requests we received from governments, law enforcement agencies, and third parties around the world to remove content or disclose user data.

Some key highlights include:

  • ~5.3B pieces of content were shared on Reddit (incl. posts, comments, PMs & chats).
  • Mods and admins removed just over 3% of the total content created (1.6% by mods and 1.5% by admins).
  • Over 71% of the content removed by mods was done through automated tooling, such as Automod.
  • As usual, spam accounted for the majority of admin removals (66.5%), with the remainder being for various Content Policy violations (31.7%) and other reasons, such as non-spam content manipulation (1.8%).
  • There were notable increases in legal requests from government and law enforcement agencies to remove content (+32%) and in non-emergency legal requests for account information (+23%; this is the highest volume of information requests that Reddit has ever received in a single reporting period) compared to the second half of 2023.
    • We carefully scrutinize every request to ensure it is legally valid and narrowly tailored, and include data on how we've responded in the report.
    • Importantly, we caught and rejected a number of fraudulent legal requests purporting to come from legitimate government and law enforcement agencies; we subsequently reported these bogus requests to the appropriate authorities.

You can read more insights in the full document: Transparency Report: January to June 2024. You can also see all of our past reports and more information on our policies and procedures in our Transparency Center.

Please let us know in the comments section if you have any questions or are interested in learning more about other data or insights. 

u/the9trances Oct 16 '24

Content manipulation is a term we use to combine things like spam, community interference, etc.

Does community interference specifically refer to communities that brigade other subreddits?

u/Bardfinn Oct 16 '24

Yeah, Community Interference is the term they use for the phenomenon colloquially known as brigading.

u/the9trances Oct 16 '24

So subreddits like /r/bestof and /r/SubredditDrama, which constantly and routinely perform "community interference," are exempt from the rules?

u/Bardfinn Oct 16 '24

They interpret CI according to the effect it has.

Most people and communities welcome the participation of r/BestOf; SRD's participation may or may not be welcome.

The questions come down to:

Does SubredditX referencing SubredditY have an effect on participation that breaks SubredditY's rules, boundaries, or standards, and/or the Sitewide Rules?

Do SubredditX's audience and/or operators continue to make such references despite knowing that doing so violates SubredditY's rules, boundaries, or standards, and/or the Sitewide Rules?

In general, “This is cool / awesome / super / great” isn’t CI. In general, several instances of “These People …” is a red flag that CI is occurring.

There's a lot of stuff that happens on Reddit that draws legitimate, good-faith commentary, both inside and outside a given community. There are also people still using this site after having been kicked off it over a hundred times, in a spiteful crusade to harass specific groups or individuals.

There's a spectrum of such speech and actions, and there's not yet a clear, bright line between "that sucks" and speech acts that have the effect of reasonably causing someone or some group to cease using the service, but it's clearer and brighter now than it was a decade ago.