Combating Undesirable Content

As a publisher's popularity increases, the volume of comments increases, which is great! However, with higher overall comment volume comes more spam, toxicity, and other comments that don't contribute to the conversation. For many large publishers, moderation at scale can become a huge burden that obscures the value of comments. Some publishers spend hours moderating hundreds or thousands of comments a day. In an effort to reduce this cost, combat undesirable content, and resurface the value of comments, my team designed several moderation tools that give moderators more robust options while respecting their unique moderation preferences. For this series of projects, I was the lead product designer responsible for the end-to-end product, from user research and interaction design to visual design.

Note: The terms 'publisher' and 'moderator' are often synonymous for smaller sites. For larger sites, the publisher is often the overarching entity, which may have a moderation team that manages Disqus comments.

Problem

As a publisher grows, they can become overwhelmed by the large number of comments that require moderation, especially comments that don't contribute to the conversation. The time spent moderating can often obscure the value that comments bring.

Solution

Through a series of iterative research and ideation phases, we designed a tool called Moderation Rules that allows moderators to assign automatic actions to comments with a defined set of characteristics.

Moderation can be tedious, hard work even with a team.

Team alignment

In 2017, hate speech became a major topic in the media, and all eyes turned to social media and communication platforms to see how they would define and react to hate speech on their networks. As one of the largest communication platforms, with over 2 billion users, we knew that we had a duty to respond. I kicked off this project by getting alignment with the team around what we knew, what our assumptions were, and what we wanted to achieve. We started off with a whiteboarding and sticky-note session to think about the problems and stories that the different types of users on our network faced regarding hate speech. From there, we realized that a lot of what we brainstormed were assumptions, so we looked to user research.

User stories for a publisher, moderator, commenter, and Disqus.

Discovery interviews with moderators

We interviewed several Disqus publishers about their stance on hate speech and what they thought Disqus' stance should be. Because we were on an urgent timeline to publicly address hate speech, I also gathered early feedback on paper prototypes of a few different solutions based on our assumptions. From the user research, we learned that:

  1. Definitions of hate speech varied too much between publishers to rely on any consensus being reached by crowd-sourcing opinions.
  2. Publishers wanted Disqus to maintain a neutral stance on hate speech and continue working on building better tools to help publishers maintain control over how they define their communities.
  3. A lot of our paper prototypes were highly desired and useful!

From this research, we publicly committed to fighting hate speech alongside publishers and to aggressively building more robust tools to help. As a result, we worked quickly to create the tools that publishers in our feedback sessions needed, like shadow banning, timeouts, and comment policy.

We built some tools to help moderators be efficient, but the work was still hard.

Dreaming bigger, tech constraints, and 3rd parties

While the tools we made helped relieve the burden of moderating undesirable content, we recognized that they were only incremental steps and that there was still a lot more that could be done. We wanted to invest some time and research into dreaming a bit bigger to see if we could make a substantial impact on moderation.

Our kind of bare-bones moderation and community preferences.
We wanted to dream big. What if we could leverage machines to our advantage?

We initially wanted to build a machine learning algorithm that could automatically learn a publisher's unique moderation preferences over time. Unfortunately, even after seeking partnerships with 3rd parties, we didn't have the technical expertise to build a machine learning system that could do this well enough to earn a publisher's trust. Additionally, when we showed moderators this concept, we learned that many of them deeply valued the voices of their commenters and didn't want machines passing judgment on human thought. Because of this, we decided to pivot away from machine learning entirely and instead explored options that would keep human moderators in control.

People want people, not robots, to pass judgment on human thought.

Pivoting to moderation rules

Taking all that we'd learned so far, this is what we knew:

  • User feedback informed us that moderators deeply respected the voices of their commenters. They didn't want a machine passing judgment on human thought.
  • Publishers and moderators want complete ownership over the definition of what "undesirable content" is on their site.
  • We needed a scalable and robust solution that didn't involve overly complicated technology.

Keeping in mind that moderators, not robots, want to be the ones passing judgment on comments, my team and I held a number of discovery sessions to pivot our designs. After exploring several options, with a lot of input on technical feasibility from our engineer, we decided to design a rules-based engine called Moderation Rules. This new feature allows moderators to assign moderation actions to comments with certain characteristics. After a task-based usability test of the Moderation Rules concept with 5 moderators and an unprecedented 100% success rate on our primary workflow, we decided to move forward with building the feature. Additionally, instead of building out an entirely robust tool with hundreds of potentially unhelpful options, we opted for a beta release with a gradually increasing rollout so we could gather iterative user feedback as we continued to build out the product.
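
To make the concept a bit more concrete, here's a minimal sketch of how a rules engine like this might work under the hood. This is purely illustrative and not Disqus' actual implementation; the comment fields, rule names, and actions are hypothetical.

```python
# Minimal, hypothetical sketch of a rules-based moderation engine.
# Field names, rule names, and actions are illustrative only.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Comment:
    body: str
    author_reputation: float  # hypothetical trust score: 0.0 (low) to 1.0 (high)
    contains_link: bool
    user_flags: int           # how many readers flagged the comment


@dataclass
class ModerationRule:
    name: str
    matches: Callable[[Comment], bool]  # the characteristic the moderator targets
    action: str                         # e.g. "approve", "pending", "spam", "delete"


def apply_rules(comment: Comment, rules: List[ModerationRule]) -> str:
    """Return the action of the first matching rule, or leave the comment alone."""
    for rule in rules:
        if rule.matches(comment):
            return rule.action
    return "no_action"


# A few single-characteristic rules a moderator might configure.
rules = [
    ModerationRule("Flagged by several readers", lambda c: c.user_flags >= 3, "pending"),
    ModerationRule("Contains a link", lambda c: c.contains_link, "pending"),
]

comment = Comment(body="Check out http://example.com", author_reputation=0.9,
                  contains_link=True, user_flags=0)
print(apply_rules(comment, rules))  # -> "pending"
```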

Onboarding for our new Moderation Rules feature.
A screenshot of the Moderation Rules page with behavioral analytics to help publishers decide what rules are best for them.

Potential next steps

Our analytics show that, even with a limited set of filters, we've already begun to see healthy adoption from certain members of our beta group, whom we hope to work with closely on future iterations. So far, some of the feedback from our beta group has us thinking:

  • What rules can we suggest to a publisher based on what other similar publishers are using?
  • How can we use "and" logic to create more specific rules? (A rough sketch of this idea follows the list.)
  • Can we create intelligently-defined and very specific rules that match up to spam patterns?
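
As a rough illustration of the "and" logic question above: compound rules could simply require every listed condition to match. This is a speculative extension of the earlier hypothetical sketch, not a shipped design.

```python
# Hypothetical "and" logic: a compound rule fires only when every condition matches.
from typing import Callable


def all_of(*conditions: Callable) -> Callable:
    """Combine several single-characteristic checks into one compound condition."""
    return lambda comment: all(check(comment) for check in conditions)


# e.g. a much more specific rule aimed at a common spam pattern:
# ModerationRule("Low-reputation author posting links",
#                all_of(lambda c: c.contains_link,
#                       lambda c: c.author_reputation < 0.2),
#                "spam")
```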

Starting with the questions above, we intend to keep iterating on the most useful parts of the product, guided by the feedback and needs of our publishers. We're so glad we were able to build a human-focused solution that reduces tedium and is a huge step for us toward building healthier, more engaging communities.

We were able to build a scalable solution that could be tailored to each person's unique needs and preferences while keeping humans in control. :)
