r/AgainstHateSubreddits • u/Bardfinn Subject Matter Expert: White Identity Extremism / Moderator • Sep 26 '24
People who share encounters with racism are silenced online by humans and machines, but a guideline-reframing intervention holds promise
https://www.pnas.org/doi/10.1073/pnas.23227641218
u/Bardfinn Subject Matter Expert: White Identity Extremism / Moderator Sep 26 '24
Significance
Content moderation practices on social media risk silencing voices of historically marginalized groups. We find that posts in which users share personal experiences of racism are disproportionately flagged by both algorithms and humans. Not only does this hinder the potential of social media to give voice to marginalized communities, but we also find that witnessing such suppression could exacerbate feelings of isolation, both online and offline.
We offer a path to reduce flagging among users through a psychologically informed reframing of moderation guidelines. In an increasingly diverse nation where online interactions are commonplace, these findings highlight the need to foster more productive and inclusive conversations about race-based experiences, and we demonstrate how content moderation practices can help or hinder this effort.
u/Bardfinn Subject Matter Expert: White Identity Extremism / Moderator Sep 26 '24
From the Abstract:
Although content moderation practices aim to create safe and inclusive online environments, there is growing concern that these efforts may, paradoxically, discriminate against marginalized voices (10, 11). Content created by users from marginalized groups, for example, can face unwarranted removal even when they do not violate platform guidelines or create harm. One plausible cause for such removal is that when people share their perspectives and racialized experiences online, content moderation algorithms may struggle to discern the difference between race-related talk and racist talk (12). Moreover, human reviewers may opt to remove race-related content, deeming such content uncomfortable, inappropriate, or contentious (13–16).
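To make the algorithmic failure mode concrete, here is a minimal, purely hypothetical sketch of a lexicon-based filter (not the classifier studied in the paper, and not any real platform's system). Because it scores surface vocabulary rather than modeling who is speaking and who is being targeted, a first-person disclosure of racism trips it while an obliquely worded attack slips through:

```python
# Purely hypothetical lexicon-based filter, for illustration only.
RACE_RELATED_TERMS = {"racist", "racism", "slur", "white", "black"}
FLAG_THRESHOLD = 2  # arbitrary cutoff chosen for this example

def naive_flag(post: str) -> bool:
    """Flag a post if it mentions 'enough' race-related vocabulary,
    regardless of who is speaking or who is being targeted."""
    tokens = {t.strip(".,;:!?\"'").lower() for t in post.split()}
    return len(tokens & RACE_RELATED_TERMS) >= FLAG_THRESHOLD

# A first-person account of experiencing racism...
victim_post = "My manager used a racist slur against me at work today."
# ...versus an attack that avoids the lexicon's vocabulary entirely.
attack_post = "People like you don't belong here; go back where you came from."

print(naive_flag(victim_post))  # True  -- the disclosure gets flagged
print(naive_flag(attack_post))  # False -- the attack slips through
```

A real moderation pipeline is of course far more sophisticated than this toy, but the underlying confusion between talking about racism and being racist is the same one the authors describe.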
I’ve seen an instance of the latter recently - https://ghostarchive.org/archive/1W4Qm
A review of the top-level comments on the post shows that the overwhelming majority of Reddit moderators who responded to the situation did so uncritically, accepting the OP's framing of "a black woman being racist towards white redditors" at face value: https://ghostarchive.org/archive/R8sg4
The incident resulted in the black woman concluding that Reddit is unsafe for her and silences black people. She deleted her account.
So there is a clear need for this kind of reframing of content moderation guidelines.