It's Not A Bug, It's A Bias

Author: Anna-Livia Gomart, AlterConf, July 9, 2016.

Even though Apple's Siri shipped with a built-in answer to where to hide a body, it was incapable of pointing a user to an abortion clinic.
How did an artificial assistant get iffy about abortion?
And how can I stop my own biases from seeping into the products and services I create?

In this talk I explore our preconceptions about the nature of algorithms, as well as the effect that the people who make and use digital products have on the discriminatory impact those products produce.

I end the talk by proposing a change of mindset for product designers: to be more aware of our capacity to let our biases infuse our services, and to put in place tools to make more inclusive experiences for our clients.


«Not-on-purpose» discrimination pops up a lot in the tech and game industries.

2016 has been an eventful year so far. But a great one for Apple, because they finally cleared out one of those nagging bugs that really hampered the customer experience: 5 years ago, when Apple introduced Siri, it seemed we could ask it anything. For example, when asked where one could hide a body, it would respond with charm and humor.
However, because of a bug on the platform, Siri could not point you to an abortion clinic if you needed one.
And finally, they fixed it. It only took them 5 years.1
So how did an Artificial Intelligence get iffy about abortion?

Screenshot of Siri responding to body dump location request
Screenshot of Siri responding to abortion clinic location request

We are blinded by the perceived rationality of algorithms

We perceive algorithms as neutral robotic operators, but we are wrong

We view algorithms as neutral. Computers have no emotions, childhood traumas, or parental influences to steer them toward ignoring their own privilege and treating people differently because of gender, race, or religion.
Algorithms and computers are supposed to be like Mr. Spock: factual, reasonable, cold. That is the main reason we let them assess the likelihood of crime in a neighborhood, and decide which links are the most relevant and trustworthy.

The problem is that processes and algorithms are actually the expression of principles and values

In his book Persuasive Games, Ian Bogost argues that the procedures our games and programs are based on are far from neutral. He calls the analysis of the meaning of procedures «procedural rhetoric».2 For example:
Sid Meier's Civilization teaches you what its makers thought made empires rise and fall.
Facebook teaches you a lot about what its makers believe friendship is.

The sociologist Dominique Cardon wrote this in a 2015 article about the political role of algorithms:

«As soon as we open the black box of algorithms, we realize that the choices they make for us are questionable and should be discussed because they offer different visions of society.»3

When we design digital products, we create procedures that are, in our minds, an accurate representation of the world. However, we infuse them with the principles and values that are at the core of every decision we make.

The consequence is that our experience of reality is embedded in our algorithms

As digital designers, when we build procedures, we make many assumptions based on our own experience of the world. We are often blind to those assumptions and believe them to be universal.
Facebook, for example, has a «Real Name Policy» and will block your account if it feels you are not using a «real name».
The issue is that the Facebook team is building a global tool with a limited experience of what real names look like around the world, and is setting itself up as an arbitrator of people's identities.
Native Americans and drag kings and queens have had their accounts blocked and needed official documentation to re-open them.4

Algorithms are impacted by the people who make and use them

Example: Ubisoft

On July 5th 2016, Ubisoft published a gamer survey for their players. Because gender is not an issue for them, the first question they asked was «Are you a male or a female?», and if you picked female, the survey would end.5

Screenshot of Ubisoft survey before the fix

Their answer to the disbelief was:

@doctorow Hi Cory, there was an error with the setup of the survey, it is now resolved & available to everyone. Apologies for any confusion.

— Ubisoft (@Ubisoft) July 5, 2016

This incident says a lot about biases in digital product design.

Example: Grading

The monopoly of certain platforms such as Uber or TripAdvisor has given a lot of power to customers who give feedback on their experience. However, the rating system does not take into account the damage one bad review, one angry commenter, can do to a business.

From social science, we know how irrational customers are about reviews: online ratings strongly shape perceptions of quality,6 and negative impressions weigh far more heavily than positive ones.7

And still, we keep creating services that put all comments and all grades on the same level.
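To make this concrete, here is a minimal sketch (my own illustration, not something from the talk) of one way to keep a single angry commenter from defining a business: a Bayesian average that blends observed ratings with a platform-wide prior. The prior values are invented for the example.

```python
# Illustrative values, not a standard: assume a platform-wide
# average rating of 3.5, worth ten "virtual" reviews.
PRIOR_MEAN = 3.5
PRIOR_WEIGHT = 10

def raw_mean(ratings):
    """The naive grade: every review weighs the same."""
    return sum(ratings) / len(ratings)

def bayesian_mean(ratings):
    """Blend observed ratings with the prior, so a business with
    few reviews is not destroyed by one angry commenter."""
    total = sum(ratings) + PRIOR_MEAN * PRIOR_WEIGHT
    return total / (len(ratings) + PRIOR_WEIGHT)

one_angry_review = [1]
print(raw_mean(one_angry_review))                 # 1.0: the business is sunk
print(round(bayesian_mean(one_angry_review), 2))  # 3.27: damped until more data arrives
```

As more reviews accumulate, the prior fades and the observed ratings dominate, so the damping only protects against small samples.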

Example: (Ghost in the) Machine Learning

Machine learning algorithms have a tremendous amount of potential: they can adapt, recommend and discover. But they are very sensitive to the data they learn from, and to the people who generate it.

In an episode of The Good Wife (which sadly just aired its last season), a search engine company is accused of diverting foot traffic away from a predominantly black neighborhood.
Their defense argues that they are not responsible for the discriminatory consequences because the data is user-generated.8
This issue is not only present in works of fiction: government agencies and researchers around the world are asking themselves how to deal with the black-box nature of algorithms.9 10

In an article, Burkhard Schafer, Professor of Computational Legal Theory, says:

«What I'm much, much more worried about with machine learning is that we get a type of harm that is much less visible, much less quantifiable, and much less comfortable with our normal rules and procedure.» 11

We are blind to our own biases and the impacts are grave

We need to have a conversation about the societal impact that games, programs and procedures have on us, and about the responsibility of the people building these tools to prevent foreseeable consequences.

Machine learning sifts through a huge amount of information to bring you the most relevant results.
The problem is that the most relevant results are often the ones that feed your confirmation bias.
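A toy feedback loop (my own sketch, not a real recommender) shows how «most relevant» hardens into a bubble: if relevance is just frequency in your own history, each recommendation reinforces the next one.

```python
from collections import Counter

def recommend(history):
    # Naive relevance: the item most frequent in the user's own history.
    return Counter(history).most_common(1)[0][0]

history = ["Beyoncé", "Beyoncé", "Radiohead"]
for _ in range(5):
    pick = recommend(history)
    history.append(pick)  # the user listens to what we recommend

print(history)
# Radiohead never surfaces again: relevance has locked in regularity.
```

Real recommenders are far more sophisticated, but any system that optimizes purely for engagement with past behavior contains a version of this loop.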

The French sociologist Dominique Cardon said:

«If people have monotonous behavior, if they have friends who have the same ideas and the same tastes, if they always follow the same path, then the calculators lock them into their regularity. If the user only listens to Beyoncé, he will have Beyoncé!» 3

An idea that keeps popping up about the Brexit vote is that voters on one side never really met voters from the other side. I wonder how much the very comfortable filter bubble we now live in is responsible for this.12 13

Algorithms are black boxes built by biased people on historically biased data and enriched by current biased data.14 So how can we stay aware of the alienating power of the digital products we make, and build better programs?
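Here is a toy sketch of how historically biased data repeats itself. The dataset and the «learning» rule are invented for illustration; real models are more complex, but the mechanism is the same.

```python
# Invented, deliberately skewed historical decisions.
historical_hires = [
    ("engineer", "man", True),
    ("engineer", "man", True),
    ("engineer", "man", True),
    ("engineer", "woman", False),
]

def learned_rule(role, gender, data):
    # "Learning" here is just copying the majority historical outcome
    # for this profile, which is what a naive model effectively does.
    outcomes = [hired for r, g, hired in data if r == role and g == gender]
    return bool(outcomes) and sum(outcomes) > len(outcomes) / 2

print(learned_rule("engineer", "man", historical_hires))    # True
print(learned_rule("engineer", "woman", historical_hires))  # False: the past repeats itself
```

Nothing in the code mentions discrimination, which is exactly the black-box problem: the bias lives in the data, not in any line you could point to.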

Approach all design like game design

The solution I would like to propose today is to approach design like a game designer:

Your product exists for your players

Your product should serve your users, with their human limitations taken into account.
Playtest, especially with people you would never hang out with, and bias-proof your QA.
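What bias-proofed QA can look like in practice: a sketch, assuming a hypothetical name validator of the naive kind that produced the «real name» blocks described earlier. The sample names illustrate real naming patterns (multi-word Native American names, Vietnamese and Icelandic names) that strict ASCII rules reject.

```python
import re

def validate_name(name):
    # Hypothetical, deliberately naive validator: exactly two
    # capitalised ASCII words. This is the kind of hidden assumption
    # a team never challenges when it only tests with its own names.
    return bool(re.fullmatch(r"[A-Z][a-z]+ [A-Z][a-z]+", name))

# Bias-proofed QA feeds in names from outside the team's own experience:
test_names = ["Dana Lone Hill", "Nguyễn Thị Minh Khai", "Björk Guðmundsdóttir"]
rejected = [n for n in test_names if not validate_name(n)]
print(rejected)  # every one of these naming patterns fails the check
```

A test suite that asserts these names pass would have caught the assumption long before any user had to mail in official documentation.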

Prepare for people to cheat, hack, exploit loopholes

As Microsoft learned with their AI bot Tay, on hiatus ever since, people on the internet exhibit asocial behavior. This is a known fact of game design, and by planning for it, you might find very elegant solutions to the problem.

Prepare to be wrong: design for failure, feedback loops and learning curves.

As research on implicit bias shows, we are all infused with our culture and social norms, and we are blind to it.
Therefore, you need to design a way to find bias, a plan to correct it, and a plan to learn from it.
It can be as simple as having a contact form and encouraging people to speak out with a code of conduct.

By acknowledging that we are biased designers creating for biased users, we can put in place tools to help us build a more inclusive world.


Here is a list of sources. Some I used in the talk; others I simply found very interesting.

Used in the talk

  1. Siri, find me an abortion provider: Apple's weird anti-choice glitch is finally on its way out - Salon - JAN 29, 2016 [Link]
  2. Persuasive games, Ian Bogost - 2007 [Link]
  3. «By computing our traces, algorithms reproduce inequalities between individuals» - Libération - OCT 9, 2015 [French] [Link]
  4. Say my name: Facebook's unfair «real names» policy continues to harm vulnerable users - Salon - MAR 30, 2015 [Link]
  5. Ubisoft's gamer survey first asks if you're female, and terminates if you say "yes" -- UPDATED - Boing Boing - JUL 5, 2016 [Link]
  6. Online Ratings and Population Perceptions of Quality - MIS Quarterly - SEPT 2015 [Link]
  7. Negativity bias - Wikipedia - [Link]
  8. The Good Wife Tackles Algorithmic Discrimination. Meanwhile, in Real Life - ACLU - DEC 3, 2015 [Link]
  9. Big Data Can Be Used To Violate Civil Rights Laws, and the FTC Agrees - ACLU - JAN 14, 2016 [Link]
  10. Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights - The White House - MAY 2016 [Link]
  11. Computer says no: justice, accountability and clarity in the age of algorithms - The Long And Short - MARCH 4, 2016 [Link]
  12. Alexander Betts: Why Brexit happened -- and what to do next - TED - JULY 6, 2016 [Link]
  13. Episode 707: Brexit - Planet Money NPR Podcast - JUNE 24, 2016 [Link]
  14. When Discrimination Is Baked Into Algorithms - The Atlantic - SEPT 6, 2015 [Link]

Useful links

Since the talk