An everyday example of algorithm bias occurs when two people search for the same information using Google. Their individual pre-existing browsing histories will influence what information Google delivers.
However, author Rob Smith provides many examples where the bias has much more serious implications.
Smith is an expert in developing evolutionary algorithms but this is not a book for tech geeks. Smith writes compellingly about the history and development of algorithms and their potentially positive and negative impacts.
Algorithms Use Simplified Models
He helps us understand that for an algorithm to derive generalisations from big data, it must be based on a simplified model. He gives a compelling example of a flawed analysis, by an algorithm, of the likelihood of different job types being replaced by technology.
The results of this analysis were discussed at the World Economic Forum in Davos and widely reported in the media, with a strong focus on the finding that 47% of jobs are highly computerisable. You’ve probably seen reports on this.
Flawed Algorithm Produces Dumb Result
Smith drills into the model underpinning the algorithm’s analysis. This model included numeric scores for the importance of a range of human features/skills, like ‘creativity’ and ‘manual dexterity’, to any specific job. The more important these human features/skills were to a job, the less likely it was that the job would be replaced by computerisation.
The Algorithm Found That Male And Female Catwalk Fashion Models Were Extremely Likely To Be Replaced By Computerisation!
This ridiculous result occurred because the algorithm scored each job against the human features/skills specified in the model. Very few of these features/skills are required to be a successful fashion model, so the algorithm concluded that the job of fashion model was highly likely to be replaced by computerisation.
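To see how a simplified model can produce this kind of result, here is a toy sketch in Python. The skill names, scores and scoring rule are all hypothetical illustrations, not the actual model from the study Smith critiques; the point is only that any skill the model omits is invisible to it.

```python
# Illustrative sketch only: made-up skill-importance scores (0-1 scale)
# and a toy scoring rule. Not the actual model from the study.
JOB_SKILL_IMPORTANCE = {
    "accountant":    {"creativity": 0.3, "manual_dexterity": 0.2, "social_perceptiveness": 0.4},
    "surgeon":       {"creativity": 0.5, "manual_dexterity": 0.9, "social_perceptiveness": 0.7},
    "fashion_model": {"creativity": 0.2, "manual_dexterity": 0.1, "social_perceptiveness": 0.3},
}

def computerisability(skills: dict) -> float:
    """Toy rule: the less important the listed human skills are to a job,
    the more 'computerisable' the job appears to the model."""
    return 1.0 - sum(skills.values()) / len(skills)

scores = {job: round(computerisability(s), 2)
          for job, s in JOB_SKILL_IMPORTANCE.items()}

# Whatever a fashion model's job actually demands, none of it is in the
# model's skill list, so the job scores as highly computerisable.
most_at_risk = max(scores, key=scores.get)
```

Running this, the fashion model tops the ranking purely because the model’s skill list fails to capture what the job actually involves.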
As the British statistician George E.P. Box observed – “all models are wrong, but some are useful”.
The insights in this book underscore the vital importance of understanding the model on which an algorithm is based before drawing any conclusions from the information it provides.
Inherent Risks of AI
So if your organisation is using algorithms to interrogate big data, please keep this in mind before using their outputs to help with performance improvement, customer insights, risk management or any other aspect of your business.
Would you like to strengthen the Risk Culture in your organisation? Contact us to find out how.
About the Authors
John P Dawson & Carmel McDonald are the co-owners of Dawson McDonald Consulting. They’ve been running Risk Culture Assessments since 2008 to help clients protect their organisations and build resilience.
Check out their book, BUILD Your Business, which Risk Managers will find helpful.
“This is an impressive and very readable book! … I can speak with the experience of running a postgraduate degree in Risk Management. This book is one I wish I had had for the students… mature-age managers. It would have shown them … how to build a trusting, supportive team and empower it with an understanding of the culture surrounding them. That is, get their job done harmoniously!”
Dr John Browne – Past convenor of the Graduate Diploma of Risk Management and Master of Risk Management at Swinburne University of Technology, Melbourne, Australia.
To get your copy of this book, or to download a free sample chapter, click here
*Smith, Robert Elliott. Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All. London: Bloomsbury Academic, 2019.