Moderation is generally defined as staying within reasonable limits, avoiding what is excessive or extreme. In the context of online communities, content moderation refers to the practice of monitoring member submissions and applying a set of rules which define what is acceptable and what is not. Unacceptable content is then removed.
There are six common types of moderation which, as a Community Manager or Moderator, you need to consider when deciding how to maintain some sense of order within your community.
1. Pre-moderation

When someone submits content to your website and you have it placed in a queue to be checked by a moderator before it is visible to all, you are pre-moderating. Pre-moderation has the benefit of ensuring (in the hands of a good moderator) that content you deem to be undesirable, particularly libellous content, is kept off the visible community sections of your site. It is also a popular moderation choice for online communities targeted at children, as a way to pick up on bullying or sexual grooming behaviour.
While pre-moderation gives you tight control over what community content ends up on your site, it has many downsides. Commonly blamed for the death of online communities, it denies participants instant gratification: they are left waiting for a moderator to clear their submission. Conversational content becomes stilted and judders to a halt if the delay between submission and display is too long. The other disincentive is cost: if and when your community grows, submissions can cross a threshold of user-generated content that your moderation team can no longer manage.
It is most suited to communities with a high level of legal risk such as celebrity-based ones, or communities where child protection is vital. If content is not conversational or time-sensitive, such as reviews or photos, it can also be deployed without affecting the community’s dynamic too much.
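In code, the pre-moderation flow is simple to sketch. The snippet below is a minimal illustration under assumed names (the `PreModerationQueue` class and its rule callback are hypothetical, not any particular platform's implementation):

```python
from collections import deque

class PreModerationQueue:
    """Submissions wait in a queue and only go live after a moderator approves them."""

    def __init__(self):
        self.pending = deque()   # awaiting review, invisible to the community
        self.published = []      # approved, visible content

    def submit(self, author, text):
        # Nothing is shown until a moderator clears it.
        self.pending.append({"author": author, "text": text})

    def review(self, approve):
        # `approve` is a callable implementing the House Rules check.
        while self.pending:
            item = self.pending.popleft()
            if approve(item):
                self.published.append(item)  # now visible on the site
            # rejected items are simply dropped

# Usage: a toy rule that rejects anything containing a flagged phrase
queue = PreModerationQueue()
queue.submit("alice", "Great article!")
queue.submit("bob", "some libellous claim")
queue.review(lambda item: "libellous" not in item["text"])
print([i["text"] for i in queue.published])  # → ['Great article!']
```

The delay the article describes lives in the gap between `submit` and `review`: nothing reaches `published` until a moderator runs the check.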
2. Post-moderation

In an environment where active moderation must take place, post-moderation is a better alternative to pre-moderation from a user experience perspective: all content is displayed on the site immediately after submission, but replicated in a queue for a moderator to pass or remove afterwards.
The main benefit of this type of moderation is that conversations take place in real time, which makes for a faster-paced community. People expect a level of immediacy when interacting on the web, and post-moderation allows for this whilst also allowing moderators to ensure that security, behavioural and legal problems are identified and acted upon in a timely manner.
Unfortunately, as the community grows, the cost can become prohibitive. Moreover, because each piece of content is viewed and approved or rejected, the website operator legally becomes the publisher of that content, which can prove too great a risk for certain communities, such as gossip sites that attract salacious and potentially defamatory submissions. Given that the number of times content is viewed will directly impact the size of damages awarded should publication of a submission result in a court case, a short timeframe for the review of content is advisable.
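The contrast with pre-moderation is easy to see in a sketch: content goes live the moment it is submitted, and a copy lands in a queue for after-the-fact review. The `PostModerationFeed` class below is a hypothetical illustration, not a real platform's API:

```python
class PostModerationFeed:
    """Content goes live immediately; a copy lands in a queue for moderators."""

    def __init__(self):
        self.visible = []        # live on the site right away
        self.review_queue = []   # moderators work through this afterwards

    def submit(self, author, text):
        item = {"author": author, "text": text}
        self.visible.append(item)       # instant gratification for the poster
        self.review_queue.append(item)  # replicated for later review
        return item

    def moderate(self, item, keep):
        self.review_queue.remove(item)
        if not keep:
            self.visible.remove(item)   # taken down after publication

feed = PostModerationFeed()
ok = feed.submit("alice", "Nice post")
bad = feed.submit("troll", "defamatory rumour")
feed.moderate(bad, keep=False)
print([i["text"] for i in feed.visible])  # → ['Nice post']
```

Note that `bad` was publicly visible between `submit` and `moderate`, which is exactly the publisher-liability window the paragraph above warns about.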
3. Reactive moderation
Reactive moderation is defined by relying on your community members to flag up content that is either in breach of your House Rules, or that the members deem to be undesirable. It can be utilised alongside pre- and post-moderation as a 'safety net' in case anything gets through the moderators, or, more commonly, as the sole moderation method.
The members themselves essentially become responsible for reporting content they feel is inappropriate as they encounter it on the site or community platform. The usual approach is to include a reporting button on each piece of user-generated content which, when clicked, files an alert with the administrators or moderation team so that the content can be reviewed and, if in breach of the site's rules of use, removed.
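A minimal sketch of this report-button flow, with hypothetical names (`ReactiveModeration`, `handle_alert`) standing in for whatever your platform actually provides:

```python
class ReactiveModeration:
    """All content is live by default; a report button lets members flag breaches."""

    def __init__(self):
        self.visible = {}   # content_id -> text, live with no up-front review
        self.alerts = []    # queue of member reports for the moderation team

    def publish(self, content_id, text):
        self.visible[content_id] = text

    def report(self, content_id, reporter, reason):
        # The report button files an alert; moderators act on it afterwards.
        self.alerts.append({"content_id": content_id,
                            "reporter": reporter, "reason": reason})

    def handle_alert(self, breaches_rules):
        # `breaches_rules` is a callable implementing the rules of use.
        alert = self.alerts.pop(0)
        if breaches_rules(self.visible.get(alert["content_id"], "")):
            self.visible.pop(alert["content_id"], None)  # removed upon notification

site = ReactiveModeration()
site.publish(1, "harmless chat")
site.publish(2, "abusive message")
site.report(2, "carol", "abuse")
site.handle_alert(lambda text: "abusive" in text)
print(sorted(site.visible))  # → [1]
```

The scalability benefit is visible here: moderator effort is proportional to the number of alerts, not to the total volume of content published.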
The main advantage of this method of moderation is that it scales with your community's growth without putting extra strain on your moderation resources or costs. It also, in theory, lets you avoid responsibility for defamatory or illegal content uploaded by users of your website, provided you have a process in place for removing content within an acceptable timeframe once notified.
However, if your company is particularly concerned about how its brand is viewed, you might not be willing to risk undesirable content being visible on your site for any period of time, as you are relying on your members to see this content and bother to report it. In addition, a recent court case in Italy involving Google suggests that reactive moderation may not always provide legal protection.
4. Distributed moderation
Distributed moderation is still a somewhat rare method of moderating user-generated content. It usually relies on a rating system through which members of the community vote on whether submissions are in line with community expectations or within the rules of use. Control of comments or forum posts thus mostly resides within the community, usually with guidance from experienced senior moderators.
Expecting the community to self-moderate is very rarely a direction companies are willing to take, for legal and branding reasons. A distributed moderation system can, however, also be applied within an organisation, with several members of staff processing contributions and an aggregated average score determining whether content should stay public or be reviewed. A popular example of such a member-controlled system is Slashdot. There are also companies such as SocialMod who leverage Amazon's Mechanical Turk service to offer a moderation service relying on thousands of workers to process content.
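The vote-aggregation idea can be illustrated with a short sketch. The `+1`/`-1` scoring and the threshold below are assumptions made for illustration, not how Slashdot or SocialMod actually score content:

```python
from statistics import mean

HIDE_BELOW = 0.0  # hypothetical threshold: an average below this flags the post

def distributed_verdict(votes, threshold=HIDE_BELOW):
    """Aggregate member votes (+1 = in line with expectations, -1 = in breach)
    into an average score deciding whether content stays public."""
    if not votes:
        return "visible"  # no votes yet: leave it up
    score = mean(votes)
    return "visible" if score >= threshold else "flagged for review"

print(distributed_verdict([1, 1, -1]))       # → visible
print(distributed_verdict([-1, -1, -1, 1]))  # → flagged for review
```

Whether the voters are community members or several members of staff, the mechanism is the same; only the pool of raters changes.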
5. Automated moderation
In addition to all of the above human-powered moderation systems, automated moderation is a valuable weapon in the moderator’s arsenal. It consists of deploying various technical tools to process UGC and apply defined rules to reject or approve submissions.
The most typical tool is the word filter: a list of banned words is entered, and the tool either stars each word out, replaces it with a defined alternative, or blocks or rejects the message altogether. A similar tool is the IP ban list. A number of more recent and sophisticated tools are also being developed, such as those supplied by Crisp Thinking, whose engines allow for automated conversational pattern analytics and relationship analytics.
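A basic word filter of the kind described is straightforward to sketch. The ban list and the two policies below are illustrative only, and nothing like the pattern-analysis engines mentioned above:

```python
import re

BANNED = {"badword", "slur"}  # hypothetical ban list

def star_out(text, banned=BANNED):
    """Policy 1: replace each banned word with asterisks of the same length."""
    def repl(match):
        word = match.group(0)
        return "*" * len(word) if word.lower() in banned else word
    return re.sub(r"\w+", repl, text)

def blocks_message(text, banned=BANNED):
    """Policy 2: reject the whole message if any banned word appears."""
    return any(w.lower() in banned for w in re.findall(r"\w+", text))

print(star_out("That badword is rude"))  # → That ******* is rude
print(blocks_message("totally fine"))    # → False
```

Simple filters like this are easily evaded by creative spelling, which is one reason the more sophisticated conversational-analysis tools exist.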
6. No moderation
As an ex-full-time moderator, I can't in good conscience ever suggest not moderating your community at all. As a Community Manager, even less! But there are all sorts of reasons why you might choose not to regulate in any way the content submitted by your members.
Maybe you simply don't have the resources or finances, or you don't believe in any form of moulding or controlling content. From a legal standpoint, you might feel your community is small enough to fly under the radar. Be that as it may, there are big benefits to using one of the moderation types covered above.
Without some form of moderation, your community will quickly descend into anarchy, and the atmosphere will probably become so unpleasant it will turn off potential new members. You could point to communities such as the somethingawful.com forums as examples that anarchy and unpleasantness are not a bad thing, but dig a little deeper and you'll see they employ a moderation system (as ambiguous and random as it may be).
Basically, without moderation you are not in any control of your community, which leaves you wide open to all sorts of abuse, both anti-social as well as illegal. I don’t recommend it.
Which moderation system do you use for your community?
[photo by Michael David Pedersen]