This article, describing real examples of how artificial intelligence can be used in good, bad and ugly ways, originally appeared in three separate parts in my weekly newsletter, That Space Cadet Glow.
Good Artificial Intelligence
There is a delicious irony in using Deep Neural Networks (DNNs), an artificial intelligence technology modelled on the structure of the brain, to help diagnose conditions in real brains. But there are two recent examples where this is exactly the case.
In the first, reported in Nature, DNNs have been used to help with the early identification of autism in high-risk babies. Previously, Autism Spectrum Disorder (ASD) could only be diagnosed through external symptoms, such as poor eye contact, from about the age of 2. But researchers believed that ASD was generally (though not exclusively) a result of higher-than-normal brain growth in very young babies, or even in the womb, which means that ASD might be predictable through MRI imaging of high-risk babies. As the article says, the researchers “found brain changes between 6 and 12 months, before ASD symptoms appeared. The cortical surface area — a measure of the size of folds on the outside of the brain — grew faster in infants later diagnosed with autism, compared with those who did not receive a diagnosis.” Using deep neural networks, they were then able to predict ASD at age 2 in 81% of the test subjects. It’s not clear yet how useful the results will be, but they will certainly give researchers a better chance of identifying and testing interventions.
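To make the underlying idea concrete, here is a toy sketch of the approach: train a small neural network to separate two groups using an early MRI-derived measurement (surface-area growth rate). Everything here is fabricated for illustration — the feature values, group sizes and network shape are assumptions, not the study's actual data or pipeline.

```python
# Toy sketch (NOT the study's pipeline): a small neural network trained on
# synthetic "cortical surface-area growth" features, mimicking the idea of
# predicting a later diagnosis from early MRI measurements.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Fabricated feature: surface-area growth rate between 6 and 12 months.
# The "later diagnosed" group is simulated with faster growth, as the
# article describes.
n = 200
growth_typical = rng.normal(loc=1.0, scale=0.2, size=(n, 1))
growth_faster = rng.normal(loc=1.6, scale=0.2, size=(n, 1))

X = np.vstack([growth_typical, growth_faster])
y = np.array([0] * n + [1] * n)  # 0 = no diagnosis, 1 = later diagnosed

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

accuracy = clf.score(X, y)
print(round(accuracy, 2))
```

Because the two synthetic groups are well separated, even this tiny network classifies them accurately — the real challenge, as the article notes, is that real infant MRI data is far noisier and far harder to collect.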
In a second example, scientists have developed a method to help diagnose conditions such as Alzheimer’s and Parkinson’s. Again, DNNs are used to identify abnormal areas of the brain, but, because of the complexity of the task, different DNNs are used in different parts of the brain. The images of grey matter are compared to a digital atlas of the human brain called Automated Anatomical Labeling (AAL), with the outputs then fused to create a single view that can be used for diagnosis.
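The "divide and fuse" architecture described above can be sketched very simply: separate models score separate brain regions, and their outputs are combined into one diagnostic score. The region names, scores and fusion rule (a plain mean) below are illustrative assumptions, not the researchers' actual method.

```python
# Minimal sketch of the fusion step: hypothetical per-region model outputs
# (abnormality probabilities for AAL-style regions) are fused into a single
# score that could support a diagnosis. All values are made up.

def fuse_region_scores(region_scores: dict[str, float]) -> float:
    """Fuse per-region abnormality probabilities into one overall score."""
    return sum(region_scores.values()) / len(region_scores)

scores = {
    "hippocampus": 0.82,    # hypothetical output of a region-specific DNN
    "frontal_lobe": 0.40,
    "temporal_lobe": 0.55,
}

overall = fuse_region_scores(scores)
print(round(overall, 2))  # → 0.59
```

In practice the fusion would likely weight regions by how informative they are for a given condition; a simple average just shows where the per-region outputs come together into a single view.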
In both of these examples it is still early days: there are challenges in gathering enough data in the first place, and then there is the sheer effort required to image brains. But if these ideas develop further, they are bound to open up new opportunities to help diagnose, and perhaps even cure, some of these most horrible of conditions.
Bad Artificial Intelligence
As you can see from the examples above, artificial intelligence can deliver some very positive benefits to society through the work being done in the medical field. It can also make a huge difference to commerce, but often the claims are over-inflated or simply irrelevant. A case in point is this article in The Guardian, which talks about sustainable travel and how AI is impacting the travel sector as a whole. It covers the usual suspects, such as pilotless planes, chatbots to help book your vacation and intelligent recommendation engines. But then it starts getting a bit ridiculous: “A waiter at a luxury hotel, for instance, could use information on you to predict what kind of drinks you like and recommend something from the menu. Or reception staff, with data on your spa use, might propose a particular service.” For me, this is wrong on so many levels. Firstly, it’s just creepy for the receptionist to know what you get up to in the spa. Secondly, they are only doing it to sell you more stuff, not for any philanthropic reason. And thirdly, who would want or allow this sort of data to be shared? The article itself even admits that “travel companies need to avoid breaching customers’ privacy when they gather data on them”. Let’s not try to think up solutions to problems that don’t really exist (I like to choose my own drink, thank you very much) just because they are AI-able. And let’s not claim AI is something that benefits the customer when it’s just there to upsell stuff to you. We know AI can do some incredible things, so let’s not belittle it by making up worthless and unusable examples.
Ugly Artificial Intelligence
But here’s an example of artificial intelligence being used in scary, creepy and very worrying ways. The Observer recently reported on its front page how Robert Mercer, a multi-millionaire Trump supporter, had allegedly used Cambridge Analytica, an AI firm in which he is a major investor, to help both UKIP’s campaign for Britain to leave the EU and Trump’s presidential campaign. “On its website, Cambridge Analytica makes the astonishing boast that it has psychological profiles based on 5,000 separate pieces of data on 220 million American voters – its USP is to use this data to understand people’s deepest emotions and then target them accordingly. The system, according to Albright [a professor of communications at Elon University, North Carolina], amounted to a ‘propaganda machine’”. The article reports on an interview with Andy Wigmore, Leave.EU’s communications director: “Facebook was the key to the entire campaign, Wigmore explained. A Facebook ‘like’, he said, was their most ‘potent weapon’. Because using artificial intelligence, as we did, tells you all sorts of things about that individual and how to convince them with what sort of advert. And you knew there would also be other people in their network who liked what they liked, so you could spread. And then you follow them. The computer never stops learning and it never stops monitoring.” Even if only some of that is true, it sounds really creepy to me. Even Wigmore admitted it was creepy. You should certainly read the whole article for all of the details (and this counter-piece on Bloomberg) and form your own opinion, but let me finish with a quote from Professor Rust, Director of Cambridge University’s Psychometric Centre: “It’s no exaggeration to say that minds can be changed. Behaviour can be predicted and controlled. I find it incredibly scary. I really do. Because nobody has really followed through on the possible consequences of all this. People don’t know it’s happening to them. Their attitudes are being changed behind their backs.”
Whether artificial intelligence will be successful doing good, bad or ugly things (or all three) remains to be seen. A lot will depend on awareness amongst the users and potential users of AI, and on how the public reacts. The more we write and read about these uses, though, the more educated everyone will be, and the better placed to make informed decisions. Just make sure that you do make a choice.