July 20, 2021
Can AI be trustworthy?
For some time now, artificial intelligence (AI) has been a significant part of our day-to-day lives. We might not understand it, but we can be certain that the pace of its advance is quickening. Last week I encountered the Scottish AI Alliance, which has been formed to ensure Scotland becomes ‘a leader in the development and use of trustworthy, ethical and inclusive AI.’ It was an intriguing conversation about how civil society might be able to engage with the AI Alliance. Coincidentally (or perhaps not) this article appeared in my inbox the next day.
When the apocalypse comes, most of us will barely notice it’s happening. Most technology-driven dystopias are far too interesting to be realistic: the end of the world will be a grinding, bureaucratic affair, its overriding spirit one of weary confusion — about how things work and who’s to blame when things go wrong.
Forget for a moment the flashier signals of technological progress: AI-powered personal assistants, Boston Dynamics’ back-flipping robots or blockchain cheerleaders. The two most important trends in the field of technology are quiet and relentless: increasing volumes of data and the declining cost of computing power. In the long run they mean machines will, despite frequent hiccups, keep improving. They already outperform humans in a small but growing number of narrow tasks, but it’s unlikely we’ll see general artificial intelligence any time soon — much less the AI-goes-rogue scenario. Still, machines will gradually take over more and more decision-making in important areas of life, including those which have ethical or political dimensions. Already there are signs of AI drifting into bail conditions, warfare strategy, welfare payments and employment.
The problem isn’t whether machine decisions are better or worse — that’s often a question of values anyway — but whether it’ll get to the point where no one will be able to understand how decisions are made at all. Today’s algorithms already deal with millions of inputs, insane computing power and calculations far beyond human understanding. Frankenstein-like, most creators no longer really understand their own algorithms. Stuff goes in and stuff comes out, but the middle part is becoming a mysterious tangle of signals and weighting. Consider the example of AlphaGo, the AI system that astonished the world by thrashing the world’s best Go player, before astonishing it a second time by thrashing itself. Aeronautic engineers know precisely why their planes stay in the air; AlphaGo’s inner workings were and are a mystery to everyone. And by 2050, AlphaGo will be fondly remembered as a child-like simpleton.
There will be seminars, lessons, bootcamps, and online training courses about how to work with The Algorithm. Don’t worry yourself overly, human! Singletons: learn the best combination of words to secure your dream date! Join our “beat the algo” seminar, where you will learn how to ensure your CV outwits the HR filtering systems. Use our VPN to trick websites into thinking you’re from a poorer neighbourhood to secure a better price! A few months back a handful of bootcamps opened, where parents pay $2,000 for experts to teach their kids how to succeed on YouTube. Some scoffed, but I suspect similar courses will soon be the norm. These will be the warning signs of a confused and frightened society.
Imagine a 21-year-old happily bouncing through life in the 2050s. His entire life will have been datafied and correlated. His sleep patterns from birth captured by some helpful SmartSleep app; his Baby Shark video consumption aged 2 safely registered on a server somewhere. All those tiny markers will help guide his future one day: his love life determined by sophisticated personality-matching software, while his smart fridge lectures him about meat consumption (insurance premiums may be impacted, you know!); his employment prospects determined by a CV-checking system 100 times more accurate than today’s. His cryptocurrency portfolio automatically updating every half nano-second based on pre-determined preferences. His political choices and opinions subtly shaped by what pops up on his screen, controlled by AI-editors using preference algorithms that have been running for 50 years.
It sounds bad, but not apocalyptically bad, right? But imagine, now, that our 21-year-old is so impudent as to question or object to what these brilliantly clever systems are offering him. There would probably be no obvious number to call with a complaint. He might try to sue the designer of the CV-checking software for the subtle discrimination he suffered — but the judges will throw the case out because the designer has been dead for 30 years and they still don’t really understand what an algorithm is anyway.
The problem with such a machine-dependent world, then, is not what you might think. AI theorists spend a lot of time worrying about something called “value alignment”. It is a hypothetical future problem where a hyper-powerful AI takes instructions literally, with disastrous results. The most famous example is the “paperclip maximiser” where an unsuspecting factory owner asks an AI to make as many paperclips as possible — and it ends up turning the entire universe into paperclips. But I doubt you’ll need to worry about paperclips: you’ll be too busy on the phone to machine-like bureaucrats who can’t help with your application, because the machine has made a decision and the person who okayed it is off sick and the person who built the tech now works in Beijing and…
Confusing machines will annihilate accountability, which is one reason powerful people will like them. A couple of years ago UK health secretary Jeremy Hunt told the House of Commons that “a computer algorithm failure” meant 450,000 patients in England missed breast cancer screenings, and as many as 270 women might have had their lives shortened as a result. Who was responsible for this murderous and despicable “computer algorithm failure”? The tech guy who wrote the software, in good faith, years ago? The person who commissioned it? The people feeding the data in? Unsurprisingly, a subsequent inquiry into all this found that “no one person” was to blame. Nothing has been done in response, and nothing will be. More recently, Boris Johnson blamed a “mutant algorithm” for the A-level fiasco — how convenient! Expect algorithms to become every politician’s non-apology apology by the 2030s.
Around this time, the first casualties from driverless car accidents will start arriving in A&E. The subsequent enquiries will conclude that “no one person” is responsible for the deaths, either. It will instead be the fault of “unforeseen system incompatibilities” and “data corruptions” that make no sense, and offer no comfort, to anyone.
Presumably all this will be accompanied by a mild identity crisis. Some of us will pray to these God-like systems in the hope their mysterious inner workings are good to us. (An Uber driver was recently overheard muttering that “The Algorithm has been good to me today”.) The less sanguine will presumably try to smash them to pieces. That will be destined to fail because, unlike the Spinning Jenny, software can’t be destroyed with a bat or an arsonist’s torch. It’s somewhere you can’t reach.
What will our leaders do about it? When people aren’t held to account, they tend to behave worse — especially if someone or something tells them it’s OK. In his infamous experiment on the nature of authority, Stanley Milgram asked people to administer what they believed were electric shocks to other participants, which they generally did if a man in a white doctor’s jacket told them it was OK. He called this “agentic shift” — the process by which humans shift responsibility to abstract processes and systems, and in the process lose their own sense of right and wrong. People are worryingly good at following orders without question. Adolf Eichmann, the chief bureaucratic mastermind behind the Holocaust, is history’s most infamous rule-follower, but there were thousands like him inside the Nazi machine, telling themselves that they were only following orders, and so they were not to blame.
The Adolf Eichmanns of the future will be hip, jean-wearing technologists and bureaucrats who confidently assure everyone that they need to follow the complicated data models and respect the analytics. Outsourcing morality to a machine, writes Virginia Eubanks in her book Automating Inequality, gives the nation:
“the ethical distance it needs to make inhuman choices: who gets food and who starves, who has housing and who remains homeless, and which families are broken up by the state.”
Some form of ‘ethical distance’ is probably necessary for fair and objective government, but if it goes too far, the result is decision-makers who see little relationship between their decisions and the effect on people’s lives. Smart machines will likely make things worse because rather than just following rules and making sure your little jigsaw piece fits, bureaucrats will have a machine to rely on, an intelligence apparently smarter and wiser than they’ll ever be. The ultimate form of deniability.
If, one day in the future, a world-ending cyberwar breaks out — the most likely form the bureaucalypse might take — it won’t be caused by SkyNet going rogue. It will be initiated by a group of well-dressed and well-meaning civil servants who lack the courage or conviction to disagree with the machine models and AI strategists that told them that overall well-being would be improved by 13.2 percentage points, and that the risk of retaliation was minimal. Having spent the previous decades relying on machine advice for everything from music choices to cancer diagnosis, disagreeing with the supercomputers will seem impossible, maybe even immoral.
Obviously, we humans are too thin-skinned to give up on the idea that we’re the ones in charge, so we’ll still have the plebiscites, the MPs, the Select Committees and the opinion pages. But the whole point and purpose of democracy — to hold powerful people to account, to ensure well-informed citizens are ultimately in charge — would be reduced to a charade. Real power and authority will become centralised in a tiny group of techno-geniuses and black boxes that no-one understands.
If anything, as the range of problems politicians can actually solve shrinks, the fabricated outrage and manufactured disagreements will grow. Around the same time machines get to decide the most efficient tax rate, politicians will be literally throwing themselves onto pyres over survey question options or toilet signs. While, in the real world, algorithms sort us by intelligence, ambition and attractiveness, politics will become at best an empty ritual, at worst a form of entertainment, like a WWE wrestling match. And the scariest thought of all is this: a world run by machines and rubber-stamped by humans who’ve forgotten how to think — all divorced from a democracy that has been reduced to pure content — might not worry people at all. In fact, plenty of us will probably quite enjoy it.