Vejeps Ephi Kingsly

Human Bias and Trust and Safety

Updated: Nov 19, 2020

As humans, we are all biased. Bias is central to each of us: our likes, dislikes, affinities, and rejections all stem from the biases we hold as individuals. Each of us is unique because of the biases we have.

Trust and safety is built on a foundation of policies, instituted with regulations, standards, morality, practices, technology, and the sanctity and sanity of the platform in mind. Unbiased decisioning is a core necessity for the work in Trust and Safety. Every user interaction on a platform has to remain unbiased to give everyone an equal, safe experience. Information, content, and tools have to meet a minimum set requisite to prevail on a platform. And AI built on data from human decisions can only be bias-free if those human decisions are bias-free.

This poses a unique challenge in human-centric operations: helping moderators/agents set their personal biases aside when they make decisions based on policies.

I have usually seen bias treated as an unwanted trait, and a lot of training is built around addressing an individual's "unconscious bias." Let's be honest: bias is an ingrained behavior/characteristic that cannot be changed with an hour of training. It needs to be addressed differently, at least in Trust and Safety.

Training people to set aside their biases is sensitive and has to be managed carefully. In my experience managing trust and safety, here are the different biases I have noticed. Addressing them is very individualistic, or has to be specific to the content type, the demographics of the individuals or teams, etc. We cannot distort a person's core, but we can help them distinguish their biases when working, and when necessary.

I am adding a few examples of biases below. These have to be addressed where we see them affect the work and output, rather than with an umbrella approach to bias management.

Attribution Bias

In content moderation, the question of why a certain behavior occurs is ever-present. In answering it, people often explain a behavior pattern in a distorted way, introducing bias into what should be an apt, policy-based decision.

Confirmation Bias

This bias can occur when working on an emotionally charged subject, or on content that conflicts with or reinforces a person's core beliefs.

Halo Effect

A common bias where an individual's overall impression of a certain org, person, brand, etc. colors their judgment of specific content from that source.

Self Serving Bias

Typically seen in individuals with higher experience or expertise, who remain confident in their decisions even when those decisions could be in error.

Anchoring Bias

A very common bias where a decision is not thought through enough and instead relies on the first piece of evidence seen.

Reporting Bias

A lot of reporting/analysis is inclined towards identifying patterns, and the data used is usually the data that is available and structured. But we tend to miss the unstructured data and non-collected information that has not been accounted for in the analysis.
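As a minimal sketch of this pitfall, consider estimating a platform's violation rate. All numbers and field names below are hypothetical, invented for illustration: if the analysis only counts the content users happened to report (the "available" data), the rate looks far worse than if a random sample of unreported content is also accounted for.

```python
# Hypothetical moderation numbers, made up for this sketch.
reported = {"items": 1_000, "violations": 400}        # items reviewed because users flagged them
unreported_sample = {"items": 500, "violations": 25}  # random sample drawn from never-flagged items
unreported_total = 99_000                             # items nobody ever flagged

# Naive view: analyze only the data that was collected and reviewed.
naive_rate = reported["violations"] / reported["items"]

# Adjusted view: extrapolate the sample's rate to the whole unflagged population.
est_unreported = unreported_total * (
    unreported_sample["violations"] / unreported_sample["items"]
)
adjusted_rate = (reported["violations"] + est_unreported) / (
    reported["items"] + unreported_total
)

print(f"naive violation rate:    {naive_rate:.1%}")     # 40.0%
print(f"adjusted violation rate: {adjusted_rate:.1%}")
```

The naive figure describes only what was reported, not the platform; the gap between the two numbers is the reporting bias the paragraph above describes.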

Observer Effect

Usually seen in how the majority of individuals in a team observe, react to, and follow the teacher/trainer. The cases/examples used in training are not always the same as what is seen in live scenarios, and inducing this kind of bias would hamper decisioning in live tests.

And More…

These are a few of the many biases that have been defined and can be observed in individuals. Support staff have to be trained to identify and address biases in individuals; they are the first observer and responder most of the time. Bias training, when designed, needs to be elaborate, empathetic, thoughtful, and an ongoing process.
