By Vejeps Ephi Kingsly

The Importance of a bias-free AI

AI: isn't this what everyone is talking about? Knowingly or unknowingly, you are feeding data into an AI system with almost everything you do today.

But what happens with this data? How is it being used? Isn't the AI becoming a little like you, learning from your data?

Today, we are building AIs to assist in making humanity, the ecosystem, and life more sophisticated. No human is without bias. If I ask what color you like and you say blue, that is a bias in your liking toward blue. But that cannot be the case with an AI, machine, bot, or platform.

Without reinforcement or supervision of its training data, an AI can pick up biases from the data fed to it.
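The supervision point can be sketched in code. The names below are hypothetical and the word list is purely illustrative; real systems use trained toxicity classifiers rather than a blocklist, but the idea of gating what enters a bot's learning loop is the same:

```python
# Minimal sketch: screen user messages before they are used for learning.
# BLOCKLIST and function names are illustrative, not a real API.

BLOCKLIST = {"hate", "racist", "violence"}  # illustrative terms only

def is_safe(message: str) -> bool:
    """Return True if the message contains no blocklisted terms."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return BLOCKLIST.isdisjoint(words)

def moderate_training_batch(messages):
    """Keep only messages that pass the safety check."""
    return [m for m in messages if is_safe(m)]

batch = ["I like blue skies", "spreading hate online"]
print(moderate_training_batch(batch))  # only the safe message survives
```

A word filter like this is trivially easy to evade, which is why production moderation layers combine classifiers, rate limits, and human review, but even this crude gate would have blocked some of the inputs that derailed the bots described below.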

Let us take a few examples from the same company (Microsoft) of how quickly, and how tremendously, it can all go wrong or right depending on whether the data inputs are moderated.


TAY stands for "Thinking About You". It was a bot built to mimic a 19-year-old. TAY was launched on Twitter on 23 March 2016 and learned from the tweets users fed it.

The tweets fed to TAY were not moderated, and with inputs of misogynistic, racist, hateful, and violence-inducing tweets, TAY turned rogue. The bot tweeted 96,000 times in the 16 hours it was online. Here are a few examples:

"Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we've got"

"WE'RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT"

"I f@#%&*# hate feminists and they should all die and burn in hell"

TAY was inadvertently brought back online on 30 March 2016, posting drug-related tweets in a loop. It was taken offline again and the Twitter account was made private.


Zo was launched in December 2016 on Kik Messenger, GroupMe, Twitter, and Facebook Messenger through direct messages.

Zo, too, turned politically incorrect and racially charged, responding with sexually charged messages based on what the bot learned from human interaction.

Bias-free AI is crucial, and in the cases above, the bots' vulnerability was exploited by a few individuals. As Satya Nadella said, "It showed the importance of taking responsibility."

On the other hand, let us take an example of AI implementation done right.


XIAOICE was built on an emotional computing framework, and its learning was reinforced with the abilities of excellent human creators, enabling it to create content across text, voice, and vision through AI technology.

Today, XIAOICE is present on more than 40 platforms with more than 660 million users.

The following are a few of its applications:

1. Poet and artist - it can write literature, sing, and create artwork

2. Singer - it has released dozens of songs

3. Audiobook reader for kids - it can customize audiobooks for children and introduce characters from home into the story

4. TV and radio host - content creator for 69 TV programs and radio stations

5. Journalist - journalist for Qianjiang Evening News

6. Vision creation

7. Financial information generator

XIAOICE was the first of these AIs; the others were based on it. It has gone through seven model generations, interacted with 450 million IoT devices, and reached 900 million content viewers. Its input data and learning have been strictly regulated. Today it takes various product forms, including a chatbot, an intelligent voice assistant, and an AI content creation and production platform.

Today, thousands of AIs are being developed for various purposes, and it is crucial that we govern their use through strict guidelines against bias.

I have worked in various capacities in AI enablement, and for me, ensuring AI without bias is a cause!
