2016 has been an eventful year so far, but a great one for Apple, because they finally cleared out one of those nagging bugs that really hampered the customer experience. Five years ago, when Apple introduced Siri, it seemed we could ask it anything. When asked where one could hide a body, for example, it would respond with charm and humor. However, because of a bug on the platform, Siri could not point you to abortion clinics if you needed one. And finally, they fixed it. It only took them five years.1 So how did an artificial intelligence get iffy about abortion?
We view algorithms as neutral. Computers have no emotions, childhood traumas, or parental influences to steer them toward ignoring their own privilege and treating people differently because of gender, race, or religion. Algorithms and computers are supposed to be like Mr. Spock: factual, reasonable, cold. That is the main reason we let them assess the likelihood of crime in a neighborhood, or decide which links are the most relevant and trustworthy. The problem is, processes and algorithms are actually the expression of principles and values. In his book Persuasive Games, Ian Bogost argues that the procedures our games and programs are based on are far from neutral. He calls the analysis of the meaning of procedures «procedural rhetoric».2 For example, Sid Meier's Civilization teaches you what the game makers thought made empires rise and fall. Facebook teaches you a lot about what it believes friendship is. The sociologist Dominique Cardon wrote this in a 2015 article about the political role of algorithms:
«As soon as we open the black box of algorithms, we realize that the choices they make for us are questionable and should be discussed because they offer different visions of society.»3
When we design digital products, we create procedures that are, in our minds, an accurate representation of the world. However, we infuse them with the principles and values that are at the core of every decision we make. The consequence is that our experience of reality is embedded in our algorithms. As digital designers, as we build procedures, we make many assumptions that are based on our own experience of the world. We are often blind to those assumptions and believe them to be universal. Facebook, for example, has a «Real Name Policy» that will block people if they feel you do not use a «real name». The issue is that the Facebook team is building a global tool with a limited experience of what real names are around the world, and is setting itself up as the arbitrator of people's identities. Native Americans, Drag Kings, and Drag Queens have had their accounts blocked and needed official documentation to re-open them.4
On July 5th, 2016, Ubisoft published a gamer survey for their players. Because gender is not an issue for them, the first question they asked was «Are you a male or a female?» and then, if you picked female, the survey would end.5
Their answer to the disbelief was:
@doctorow Hi Cory, there was an error with the setup of the survey, it is now resolved & available to everyone. Apologies for any confusion.— Ubisoft (@Ubisoft) July 5, 2016
Here is what it says about biases and digital product design:
The monopoly of certain platforms such as Uber or TripAdvisor has given a lot of power to customers who give feedback on their experience. However, the rating system does not take into account the damage that one bad review from one angry commenter can do to a business.
From social science, we know how irrational customers are about reviews:
And still, we keep creating services that put every comment and every grade on the same level.
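To make the problem concrete, here is a minimal sketch, with invented numbers, of one way a service could avoid putting every grade on the same level: a damped, Bayesian-style average that pulls a score toward a site-wide prior until enough reviews accumulate. The prior values here are assumptions for illustration, not any platform's real formula.

```python
# Minimal sketch, with invented numbers: damping a rating so that one
# angry review cannot sink a small business. The prior mean and weight
# are assumptions, not any real platform's parameters.

def raw_average(ratings):
    return sum(ratings) / len(ratings)

def bayesian_average(ratings, prior_mean=3.5, prior_weight=10):
    # Pull the score toward a site-wide prior until enough reviews
    # accumulate; a single outlier no longer dominates.
    return (prior_mean * prior_weight + sum(ratings)) / (prior_weight + len(ratings))

before = [5, 5, 4]    # a small business with three happy customers
after = before + [1]  # ...then one angry commenter

print(round(raw_average(before) - raw_average(after), 2))            # 0.92: almost a full star lost
print(round(bayesian_average(before) - bayesian_average(after), 2))  # 0.2: the shock is absorbed
```

With a raw mean, a single 1-star review moves a three-review business by almost a full star; the damped average absorbs most of the shock, which is exactly the kind of design decision the rating systems above never made.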
Machine learning algorithms have tremendous potential: they can adapt, recommend, and discover, but they are very sensitive to the data they learn from.
In an episode of the show The Good Wife, which sadly aired its last season, a search engine company is accused of diverting foot traffic away from a predominantly black neighborhood. Their defense argues that they are not responsible for the discriminatory consequences because the data is user generated.8 This issue is not only present in works of fiction: government agencies and researchers around the world are asking themselves how to deal with the black box nature of algorithms.9,10
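The mechanism behind that fictional lawsuit is easy to sketch. In this toy example, with entirely invented data, the «most relevant» result is just a counter over user-generated clicks, so whatever skew exists in the historical log is exactly what the model recommends back:

```python
# Toy example (all data invented): a «most relevant» model trained on
# user-generated clicks simply reproduces the skew in those clicks.
from collections import Counter

# Historical click log: which neighborhood users were routed to.
clicks = ["north_side"] * 90 + ["south_side"] * 10

model = Counter(clicks)

def most_relevant():
    # The model never decided to discriminate; it just learned the log.
    return model.most_common(1)[0][0]

print(most_relevant())  # north_side, every single time
```

No line of this code mentions race or neighborhoods' demographics, which is what makes the resulting skew so hard to see, quantify, and litigate.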
In an article, Burkhard Schafer, Professor of Computational Legal Theory, says:
«What I'm much, much more worried about with machine learning is that we get a type of harm that is much less visible, much less quantifiable, and much less comfortable with our normal rules and procedure.» 11
We need to have a conversation about the societal impact that games, programs, and procedures have on us, and about the responsibility of the people building these tools to prevent foreseeable consequences. Machine learning sifts through a huge amount of information to bring you the most relevant results. The problem is that the most relevant results are often the ones that feed your confirmation bias. As the French sociologist Dominique Cardon said:
«If people have monotonous behavior, if they have friends who have the same ideas and the same tastes, if they always follow the same path, then the calculators lock them into their regularity. If the user only listens to Beyoncé, he will have Beyoncé!» 3
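Cardon's Beyoncé lock-in can be sketched in a few lines, along with one standard mitigation. The epsilon-greedy exploration shown here is a well-known recommender technique that I am offering as an assumption about how to break the loop, not something Cardon describes; the data is invented.

```python
# Sketch of the «Beyoncé lock-in» (invented data): a recommender that ranks
# purely by the user's own listening history can only return more of the same.
import random
from collections import Counter

def recommend(history, n=3):
    # «Most relevant» = most played: the bubble feeds itself.
    return [artist for artist, _ in Counter(history).most_common(n)]

def recommend_with_exploration(history, catalog, epsilon=0.2):
    # Assumed mitigation (epsilon-greedy exploration): with probability
    # epsilon, surface something the user has never listened to.
    if random.random() < epsilon:
        unseen = [a for a in catalog if a not in history]
        if unseen:
            return random.choice(unseen)
    return Counter(history).most_common(1)[0][0]

history = ["Beyoncé"] * 50 + ["Radiohead"] * 3
print(recommend(history))  # Beyoncé first, and nothing new ever enters
```

The point of the second function is that escaping the filter bubble has to be an explicit design decision: left to its default logic, the calculator locks the user into their regularity.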
An idea that keeps popping up about the Brexit vote is that voters on one side never really met the voters from the other side. I wonder how much the very comfortable filter bubble we now live in is responsible for this.12,13 Algorithms are black boxes built by biased people on historically biased data, and enriched by current biased data.14 So how can we stay aware of the alienating power of the digital products we make and build better programs?
The solution I would like to propose today is to approach design like a game designer:
Your product should serve your users, with their human limitations taken into account. Playtest, especially with people you would never hang out with. And bias-proof your QA.
As Microsoft learned with their AI bot Tay, since then on hiatus, people behave antisocially on the internet. This is a known fact of game design, and by planning for it, you might find very elegant solutions to the problem.
As research on implicit bias shows, we are all infused with our culture and social norms, and we are blind to it. Therefore, you need to design a way to find bias, plan to correct it, and plan to learn from it. It can be as simple as having a contact form and encouraging people to speak out through a code of conduct. By acknowledging that we are biased designers creating for biased users, we can put tools in place that help us build a more inclusive world.
Here is a list of sources. Some I used in the talk; others I simply found very interesting.