Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs that result. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Discord Adds AI Moderation To Help Fight Abusive Content (2021)

from the ai-to-the-rescue dept

Summary: In the six years since Discord debuted its chat platform, it has seen explosive growth. And, over the past half-decade, Discord’s chat options have expanded to include GIFs, video, audio, and streaming. This growth and these expanded offerings have brought a number of new moderation challenges and required the company to adapt to changing scenarios.

Discord remains largely text-based, but even when limited to its original offering — targeted text-oriented forums/chat channels — users were still subjected to various forms of abuse. And, because the platform hosted multiple users on single channels, users sometimes found themselves targeted en masse by trolls and other malcontents. While Discord often relies on the admins of servers to handle moderation on those servers directly, the company has found that it needs to take a more hands-on approach to content moderation.

Discord’s addition of multiple forms of content created a host of new content moderation challenges. While it remained text-based, Discord was able to handle moderation using a blend of AI and human moderators.

Some of the moderation load was handed over to users, who could perform their own administration to keep their channels free of content they didn’t like. For everything else (meaning content that violates Discord’s guidelines), the platform offered a mixture of human and AI moderation. The platform’s Trust & Safety team handled content created by hundreds of millions of users, but its continued growth and expanded offerings forced the company to find a solution that could scale to meet future demands.
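Discord has not published the internals of this blend of human and AI review, but a common pattern for hybrid moderation pipelines is to let a classifier act automatically only on high-confidence cases and route ambiguous ones to human reviewers. The sketch below is purely illustrative; the function names, thresholds, and toy classifier are assumptions for demonstration, not a description of Discord's or Sentropy's actual systems:

# Illustrative sketch only; not Discord's or Sentropy's actual system.
# Pattern: a classifier scores each message, clear-cut cases are handled
# automatically, and uncertain cases go to a human review queue.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ModerationDecision:
    action: str   # "allow", "queue_for_review", or "remove"
    score: float  # classifier's abuse probability, 0.0 to 1.0

def moderate(message: str,
             classify: Callable[[str], float],
             remove_threshold: float = 0.95,
             review_threshold: float = 0.60) -> ModerationDecision:
    """Route a message based on an abuse-probability score.

    `classify` stands in for any trained abuse classifier; the
    thresholds are arbitrary examples, not production values.
    """
    score = classify(message)
    if score >= remove_threshold:
        return ModerationDecision("remove", score)            # high confidence: act automatically
    if score >= review_threshold:
        return ModerationDecision("queue_for_review", score)  # uncertain: escalate to human moderators
    return ModerationDecision("allow", score)                  # low risk: no action

# Toy usage with a stand-in "classifier":
if __name__ == "__main__":
    toy_classifier = lambda text: 0.99 if "buy followers now" in text.lower() else 0.10
    print(moderate("hello everyone!", toy_classifier))
    print(moderate("BUY FOLLOWERS NOW, cheap!", toy_classifier))

The design choice worth noting is that the thresholds effectively set the split between machine workload and human workload, which is where a tool like Sentropy's would be expected to help a Trust & Safety team scale.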

To continue to scale, Discord ended up purchasing Sentropy, an AI company that had launched just a year earlier with the goal of building AI tools to help companies moderate disruptive behavior on their platforms. Just a few months prior to the purchase, Sentropy had launched its first consumer-facing product, an AI-based tool that helped Twitter users weed out and block potentially abusive tweets. After the acquisition, however, Sentropy shut down that tool and is now focused on building out its AI content moderation tools for Discord.

Discord definitely has moderation issues it needs to solve — which range from seemingly-omnipresent spammers to interloping Redditors with a taste for tasteless memes — but it remains to be seen whether the addition of another layer of AI will make moderation manageable.

Company Considerations:

  • What advantages can outside services offer above what platforms can develop on their own? 
  • What are the disadvantages of partnering with a company whose product was not designed to handle a platform’s specific moderation concerns?
  • How do outside acquisitions undermine ongoing moderation efforts? Conversely, how do they increase the effectiveness of ongoing efforts? 
  • How should platforms handle outside integration of AI moderation as it applies to user-based moderation efforts by admins running their own Discord servers?
  • How much input should admins have in future moderation efforts? How should admins deal with moderation calls made by AI acquisitions that may impede efforts already being made by mods on their own servers?

Issue Considerations:

  • What are the foreseeable negative effects of acquiring content moderation AI designed to handle problems observed on different social media platforms?
  • What problems can outside acquisitions introduce into the moderation platform? What can be done to mitigate these problems during integration?
  • What negative effects can additional AI moderation have on “self-governance” by the admins Discord entrusted with content moderation prior to its acquisition of outside AI?

Resolution: So far, the acquisition has yet to produce much controversy. Indeed, Discord as a whole has managed to avoid many of the moderation pitfalls that have plagued other platforms of its size. Its most notorious action to date was its takeover of the WallStreetBets server as it went supernova during a week or two of attention-getting stock market activity. An initial ban was rescinded once the server’s own moderators began removing content that violated Discord’s guidelines, with Discord’s own moderators stepping in to help handle an unprecedented influx of users as WallStreetBets continued to make headlines around the nation.

Other than that, the most notable moderation efforts were made by server admins, rather than Discord itself, applying their own rules, which (at least in one case) exceeded the content restrictions delineated in Discord’s terms of use.

Originally posted to the Trust & Safety Foundation website.

Companies: discord


Comments on “Content Moderation Case Study: Discord Adds AI Moderation To Help Fight Abusive Content (2021)”

Emelie (profile) says:

Companies refuse to make good decisions.

Because AI moderation works SO good for youtube, twitter, google, facebook, etc / sarcasm

Not to mention that users have already figured out how to work around AI mods. For example, some YouTube channels ban the word sex, so we use alternatives like "bed time fun", "slimy/wet snuggle", "adult cuddling", "child free service", etc. There are so many ways to use obfuscated language that you simply can’t moderate with an AI currently. It’s also hard to figure out whether the meaning is literal or suggestive, like the examples above, which is something no AI can do yet with good accuracy. AI also doesn’t understand context at all, which is why so many comments/posts get removed by mistake. YouTube videos get flagged and removed because covid is mentioned or shown in text; the AI doesn’t understand that the video is reporting news about it. There are so many examples of getting flagged incorrectly, and there is no way to get hold of a human to correct it, so the only recourse is to have multiple accounts and in so doing violate site rules. These sites force you into violating the multi-account rule. Sites shooting themselves in both feet just to be thorough. And there are so many ways to avoid getting detected using multiple accounts.

We, the victims (users) of abusive behaviors like AI mods without human oversight (unless you’re "important"), like to share these tricks with each other. "Sharing is caring". The internet was made to share knowledge.

Anonymous Coward says:

Re: Companies refuse to make good decisions.

And people have been using alternative words/codewords/steganography/dogwhistling/whathaveyou since forever.

While I generally agree that AI mods need human oversight, the alternative is even less feasible, i.e., having an army of human mods. And all it takes is one or a few powertripping, easily offended bastards to ruin the work of YEARS. And there are plenty of examples of that, including how the community manager of Mighty No.9 ruined the game by not letting through playtesting reports from people they hated…

Jono793 says:

Discord has unique moderation challenges.

Unlike a lot of social media, individual Discord channels aren’t public-facing. Even the most niche of subreddits could theoretically end up on the front page one day.

With Discord, you have to opt in or opt out, which deals with 99% of the moderation issues right there! User being an abusive troll? Server admins will kick them out pretty quickly. Server degenerating into a cesspit (or just not interesting any longer)? Users can leave it at the touch of a button.

Discord’s moderation challenge is that the majority of the bad behaviour happens outside of Discord!
Some prominent examples include rogue servers used to organise "hate raids" on Twitch, or servers used by neo-nazi extremists to organise and coordinate.

In a way, it’s a lot closer to services like WhatsApp or Telegram than to public-facing social media.
