
Now, in the very infancy of artificial intelligence (AI), we are already seeing bias programmed into it, producing significant unfairness.  We naturally expected that AI would be impartial, equitable, and able to compute solutions to problems without ulterior motives or undue influences.  However, this does not appear to be the case: AI is clearly being heavily influenced, and it is amplifying the prejudices inherent in the coders writing it.  This is wrong, and it must stop now, before it proceeds too far and perhaps beyond the point of no return.


We expected that AI would be emotionless, calculating, and sterile.  It would mathematically deduce outcomes that were logical, practical, and free of unjust effect.  It was supposed to be much better than human decision-making, which is polluted with coloured prejudices, lopsided opinions, and personal hatreds.

For example:

  • Google’s first generation of visual AI identified images of people of African descent as gorillas
  • Voice command software in cars struggled to understand females, while working just fine for males
  • During the 2016 presidential election, Facebook’s algorithms spread fear-stoking lies to its most vulnerable users, allowing a foreign power to meaningfully influence the election for the most powerful office in the world
  • The first wave of virtual assistants reinforced sexist gender roles: the assistants that execute basic tasks (Apple’s Siri, Amazon’s Alexa) have female voices, while the more sophisticated problem-solving bots (IBM’s Watson, Salesforce’s Einstein) have male ones

Like the human brain, artificial intelligence is subject to cognitive bias.  Human cognitive biases are heuristics: mental shortcuts that skew decision-making and reasoning, producing reasoning errors.  Examples include stereotyping, the bandwagon effect, confirmation bias, priming, selective perception, the gambler’s fallacy, and observational selection bias.  The list of cognitive biases keeps growing as new ones are identified.


Human cognitive bias influences AI through data, algorithms, and interaction.  Machine learning (ML), a subset of AI, is the ability of computers to learn without explicit programming.  AI’s learning is shaped by data, algorithms, and experience through interactions and iterations.  The size, structure, collection methodology, and sources of data all affect machine learning, which is only as good as the data sets it learns from.  Just as with humans, the more objective the data and the larger the data set, the less the possibility of distortion.
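
To make that concrete, here is a minimal sketch with synthetic data and scikit-learn (every name and number is hypothetical, chosen only for illustration).  A protected attribute is irrelevant to the true outcome, yet after a biased collection step the model assigns it genuine weight:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Feature 0: a genuinely predictive signal.
    # Feature 1: a protected attribute (0/1 group membership) that has
    # no bearing on the true outcome.
    signal = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)
    y_true = (signal > 0).astype(int)

    # Biased collection: most positive examples from group 1 are dropped,
    # so group membership now correlates with the label in the data.
    keep = ~((group == 1) & (y_true == 1) & (rng.random(n) < 0.8))
    X = np.column_stack([signal, group])[keep]
    y = y_true[keep]

    model = LogisticRegression().fit(X, y)
    print("learned weights:", model.coef_)  # the group weight comes out strongly negative

Nothing in the algorithm is malicious; the distortion rides in entirely on how the data was gathered.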

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it.  Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it.
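
Identifying that bias does not have to be exotic.  As a hedged illustration, one of the simplest audits compares a model’s positive-prediction rate across groups; the arrays below are stand-ins for any real classifier’s output:

    import numpy as np

    def positive_rate_gap(y_pred, group):
        """Demographic-parity gap: the difference in the rate of positive
        predictions between two groups (0 means parity)."""
        rates = [y_pred[group == g].mean() for g in (0, 1)]
        return abs(rates[0] - rates[1])

    # Placeholder predictions and group labels from a hypothetical model.
    y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(f"positive-rate gap: {positive_rate_gap(y_pred, group):.2f}")  # 0.50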


As Google’s AI chief John Giannandrea put it: “It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems.”  He added: “If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.” (Giannandrea, 2017).
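
One modest step toward the transparency Giannandrea calls for is to ship provenance metadata alongside a trained model.  The sketch below is illustrative only: the fields are hypothetical placeholders, not an established standard.

    import json

    # A "datasheet" recorded next to the saved model, so even a black box
    # ships with its training-data provenance. All values are invented.
    training_data_sheet = {
        "source": "internal clickstream export, 2017-Q3",
        "collection_method": "opt-in logging",
        "known_gaps": ["under-represents users over 65"],
        "label_origin": "crowdsourced annotators, majority vote of three",
    }

    with open("model_datasheet.json", "w") as f:
        json.dump(training_data_sheet, f, indent=2)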

Diversifying the AI talent pool is not just about gender.  Currently, AI development is a PhD’s game.  The community of credentialed people creating scalable AI for businesses is relatively small.  While the focus on quality and utility needs to remain intact, expanding the diversity of people working on AI to include people with nontechnical professional backgrounds and less advanced degrees is vital to AI’s sustainability.  As a start, companies developing AI should consider hiring creatives, writers, linguists, sociologists, and passionate people from non-traditional professions.  Over time, they should commit to supporting training programs that can broaden the talent pool beyond those who have graduated from elite universities.  Recruiting diverse sets of people will also help to improve and reinvent AI user experiences.

As former Googler Yonatan Zunger wrote in an exceptionally thoughtful post on AI bias, the minute we start building an ML model we run into an inconvenient truth: The “biggest challenges of AI often start when writing it makes us have to be very explicit about our goals, in a way that almost nothing else does.”

In other words, machines reflect and amplify our biases rather than eradicating them.  As we turn to AI and machine learning for everything from marketing to judicial sentencing, we need to be hyper-aware of this.

Or, as he summarized: “AI models hold a mirror up to us; they don’t understand when we really don’t want honesty.  They will only tell us polite fictions if we tell them how to lie to us ahead of time.”  An AI model isn’t some neutral arbiter of truth, in other words: We tell it our truths, and it spits them back at us.


What emerges is less a concern that we will never be able to teach cars to drive than a worry that we are already expecting too much of AI and machine learning when we use computers to speak to, or for, human agency.

When we program AI and machine learning algorithms, we must make explicit decisions about what matters, and that can make us extremely uncomfortable.  (For example, if you’re programming a car, do you tell it to kill the child that ran out into the road, or the driver?  Choose one.)  Perhaps that discomfort is a learning opportunity for us all.  Maybe, just maybe, in being forced to overtly face our biases in programming these models, we may learn to overcome them, even if our machines cannot.
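
That discomfort is easy to see in code.  In the toy sketch below (hypothetical numbers throughout), merely choosing the weights in the harm table is the ethical decision; the algorithm only follows it:

    # Who counts, and by how much? Setting these constants is the value
    # judgment; everything after it is arithmetic.
    HARM_WEIGHTS = {"pedestrian": 1.0, "passenger": 1.0}

    def expected_harm(outcomes):
        """outcomes: list of (who, probability_of_harm) pairs for one action."""
        return sum(HARM_WEIGHTS[who] * p for who, p in outcomes)

    # Invented probabilities for two candidate actions.
    actions = {
        "swerve": [("passenger", 0.30)],
        "brake": [("pedestrian", 0.20), ("passenger", 0.05)],
    }
    print("chosen action:", min(actions, key=lambda a: expected_harm(actions[a])))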


References:

Asay, M. (2018). Why AI bias could be a good thing. TechRepublic. Retrieved April 23, 2018, from https://www.techrepublic.com/article/why-ai-bias-could-be-a-good-thing/

Byrne, W. (2018). Now Is The Time To Act To End Bias In AI. Fast Company. Retrieved April 23, 2018, from https://www.fastcompany.com/40536485/now-is-the-time-to-act-to-stop-bias-in-ai

Giannandrea, J. (2017). Forget Killer Robots—Bias Is the Real AI Danger. MIT Technology Review. Retrieved April 23, 2018, from https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/

Rosso, C. (2016). The Conundrum of Machine Learning and Cognitive Biases. Medium. Retrieved April 23, 2018, from https://medium.com/@camirosso/the-conundrum-of-machine-learning-and-cognitive-biases-ce4b82a87f49

Rosso, C. (2018). The Human Bias in the AI Machine. Psychology Today. Retrieved April 23, 2018, from https://www.psychologytoday.com/us/blog/the-future-brain/201802/the-human-bias-in-the-ai-machine

Sharma, K. (2018). Can We Keep Our Biases from Creeping into AI? Harvard Business Review. Retrieved April 23, 2018, from https://hbr.org/2018/02/can-we-keep-our-biases-from-creeping-into-ai


About the Author:

Michael Martin has more than 35 years of experience in systems design for broadband networks, optical fibre, wireless and digital communications technologies.

He is a Senior Executive with IBM Canada’s GTS Network Services Group. Over the past 13 years with IBM, he has worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He was previously a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN: TSX).

Martin currently serves on the Board of Directors for TeraGo Inc (TGO: TSX) and previously served on the Board of Directors for Avante Logixx Inc. (XX: TSX.V). 

He serves as a Member, SCC ISO-IEC JTC 1/SC-41 – Internet of Things and related technologies, ISO – International Organization for Standardization, and as a member of the NIST SP 500-325 Fog Computing Conceptual Model, National Institute of Standards and Technology.

He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) and on the Board of Advisers of five different Colleges in Ontario.  For 16 years he served on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section. 

He holds three master’s degrees, in business (MBA), communication (MA), and education (MEd). As well, he has diplomas and certifications in business, computer programming, internetworking, project management, media, photography, and communication technology.


This article is a mash-up of various other articles as referenced above.