Review From User :
This is a very important little book ('little' isn't derogatory - it's just quite short and in a small format) - it gets to the heart of the problem with applying artificial intelligence techniques to large amounts of data and thinking that somehow this will result in wisdom.
Gary Smith is an economics professor who teaches statistics; he understands numbers and, despite being a self-confessed computer addict, is well aware of the limitations of computer algorithms and big data. What he makes clear here is that we forget at our peril that computers do not understand the data that they process, and as a result are very susceptible to GIGO - garbage in, garbage out. Yet we are increasingly dependent on computer-made decisions coming out of black box algorithms which mine vast quantities of data to find correlations and use these to make predictions. What's wrong with this? We don't know how the algorithms are making their predictions - and the algorithms don't know the difference between correlation and causality.
The scientist's (and statistician's) mantra is often 'correlation is not causality.' What this means is that if we measure two things happening in the world - let's call them A (it could be banana imports) and B (it could be the number of pregnancies in the country) - and B rises and falls as A does, it doesn't mean that B is caused by A. It could be that A is caused by B, that A and B are both caused by C, or that it's just a random coincidence. The banana import/pregnancy correlation actually happened in the UK for a number of years after the second world war. Human statisticians would never think the pregnancies were caused by banana imports - but an algorithm would not know any better.
In the banana case there was probably a C linking the two, but because modern data mining systems handle vast quantities of data and look at hundreds or thousands of variables, it is almost inevitable that they will discover apparent links between two sets of information where the coincidence is totally random. The correlation happens to work for the data being mined, but is totally useless for predicting the future.
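The point is easy to demonstrate. The following sketch is my own illustration, not from the book: it generates a few hundred purely random series (the variable names and counts are arbitrary) and then mines every pair for the strongest correlation, exactly as a black box trawling unrelated data sets would. The "discovery" it makes is guaranteed to be meaningless.

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# 200 purely random "variables", each with 20 yearly observations --
# think banana imports, pregnancies, stock prices, all unrelated noise
series = [[random.gauss(0, 1) for _ in range(20)] for _ in range(200)]

# Mine all ~20,000 pairs for the strongest apparent relationship
best = max(
    abs(corr(series[i], series[j]))
    for i in range(len(series))
    for j in range(i + 1, len(series))
)
print(f"strongest 'discovered' correlation: {best:.2f}")
```

With that many pairs to choose from, the best correlation will look impressively strong, even though every series is pure noise - which is precisely why a correlation mined from the data, rather than predicted by a theory, is worthless.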
This is the thesis at the heart of this book. Smith makes four major points that really should be drummed into all stock traders, politicians, banks, medics, social media companies... and anyone else who is tempted to think that letting a black box algorithm loose on vast quantities of data will make useful predictions. First, there are patterns in randomness. Given enough values, totally random data will have patterns embedded within it - it's easy to assume that these have a meaning, but they don't. Second, correlation is not causality. Third, cherry picking is dangerous. Often these systems pick the bits of the data that work and ignore the bits that don't - an absolute no-no in proper analysis. And finally, data without theory is treacherous. You need to have a theory and test it against the data - if you try to derive the theory from the data with no oversight, it will always fit that data, but is very unlikely to be correct.
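Smith's third and fourth points can be illustrated together with another small simulation of my own (again, not from the book; all names and sizes are arbitrary): cherry-pick, from hundreds of random "indicators", the one that best predicts a target over some historical period, then watch the fit evaporate on fresh data.

```python
import random

random.seed(2)

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# The target we want to "predict" over 30 periods: pure noise,
# split into an in-sample half and an out-of-sample half
train_y = [random.gauss(0, 1) for _ in range(15)]
test_y = [random.gauss(0, 1) for _ in range(15)]

# 500 candidate "indicators", also pure noise, 30 periods each
candidates = [[random.gauss(0, 1) for _ in range(30)] for _ in range(500)]

# Cherry-pick whichever indicator fits the first 15 periods best...
best = max(candidates, key=lambda c: abs(corr(c[:15], train_y)))

# ...then see how it does on the next 15
in_sample = abs(corr(best[:15], train_y))
out_of_sample = abs(corr(best[15:], test_y))
print(f"in-sample |r|:     {in_sample:.2f}")
print(f"out-of-sample |r|: {out_of_sample:.2f}")
```

The in-sample fit is always flattering, because we chose the winner after seeing the data; the out-of-sample fit is just whatever chance delivers. A theory derived from the data with no oversight will always fit that data - and almost never fit the next batch.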
My only problems with the book are that Smith insists for some reason on making databases two words ('data bases' - I know, not exactly terrible), and that it can feel a bit repetitious, because most of it consists of repeated examples of how the four points above lead AI systems to make terrible predictions - from Hillary Clinton's system mistakenly telling her team where to focus canvassing effort to the stock trading systems produced by 'quants'. But I think that repetition is important here because it shows just how much we are under the sway of these badly thought-out systems - and how much we need to insist that algorithms that affect our lives are transparent and work from knowledge, not through data mining.
As Smith points out, we regularly hear worries that AI systems are going to get so clever that they will take over the world. But actually the big problem is that our AI systems are anything but intelligent: 'In the age of Big Data, the real danger is not that computers are smarter than us, but that we think computers are smarter than us and therefore trust computers to make important decisions for us.'
This should be a big-selling book. A plea to the publisher: change the cover (it just looks like it's badly printed and smudged) and halve the price to give it wider appeal.
Category: Misc. Non-fiction, Science
We live in an incredible period in history. The computer revolution may be even more life-changing than the Industrial Revolution. We can do things with computers that could never be done before, and computers can do things for us that could never be done before. But our love of computers should not cloud our thinking about their limitations.
We are told that computers are smarter than humans and that data mining can identify previously unknown truths or make discoveries that will revolutionize our lives. Our lives may well be changed, but not necessarily for the better. Computers are very good at discovering patterns but are useless in judging whether the unearthed patterns are sensible because computers do not think the way humans think.
We fear that super-intelligent machines will decide to protect themselves by enslaving or eliminating humans. But the real danger is not that computers are smarter than us but that we think computers are smarter than us and, so, trust computers to make important decisions for us.
The AI Delusion explains why we should not be intimidated into thinking that computers are infallible, that data mining is knowledge discovery, and that black boxes should be trusted.