Some AI proponents argue that Artificial Intelligence will usurp human intelligence or even make us obsolete. That kind of talk must stop before we lose control of AI.
Artificial Intelligence (AI) is one of the leading Internet trends of 2016, particularly with large companies like Google and Facebook pouring resources into it. While there are many benefits to AI – for example, Facebook using it to make our news feeds smarter – the hype is getting hubristic. I’m particularly concerned about the language AI proponents are using.
I’m also concerned about how much control of our lives, and our ability to think, we’re giving to these AI systems. Or, more accurately, how much control we’re giving to the AI technology of companies like Google and Facebook.
The most popular example of AI from the current era is virtual assistant software in your smartphone. On iPhone it’s called Siri, on Android it’s Google Now, and on a Microsoft Windows phone it’s Cortana. As if to emphasise that they’re AI, these three apps talk to you in a robotic voice – like HAL 9000 from the movie 2001: A Space Odyssey. Although unlike HAL, virtual assistants like Siri are helpful and friendly. At least, they are now…
Let’s assume that in ten years’ time, Siri (or whatever ends up replacing it) is 100 times as powerful as it is today. Imagine that the 2026 Siri will be able to sort and auto-reply to the majority of your messages, manage your calendar without you having to lift a finger, carry out multiple daily tasks for you and only alert you if something falls outside certain parameters, automatically gather information relevant to your job, and so on. That’s a conservative estimate of what Siri 2026 will do. Almost certainly it will automate a significant portion of your work and life.
I don’t mean to suggest that Siri will slowly turn into HAL 9000. Well, maybe Cortana will, given Microsoft’s recent history with AI bots (I’m kidding, kind of). But we should question whether it’s wise to let AI make more and more decisions for us. In case you’re wondering whether I’m being overly paranoid about what the bigcos are doing with AI, let’s look at the current state of play.
Google is the Internet bigco with perhaps the biggest reason to develop Artificial Intelligence. After all, its raison d’être is to “organize the world’s information.” Google’s most significant investment in AI so far was the 2014 acquisition of a British company called DeepMind, which went on to develop the AlphaGo software that defeated the world Go champion earlier this year. The co-founder and CEO of DeepMind, Demis Hassabis, recently told MIT Technology Review that DeepMind is aimed at “solving intelligence, and then using that to solve everything else.” In other words, Google wants to use AI as a platform for all its other projects – from search, to virtual assistants, to much more. The MIT article explained:
Hassabis wants to create what he calls general artificial intelligence—something that, like a human, can learn to take on just about any task. He envisions it doing things as diverse as advancing medicine by formulating and testing scientific theories, and bounding around in agile robot bodies.
So it’s fair to assume that AI is key to Google’s expansion plans. It doesn’t sound that evil, unless of course you’re concerned about robots “bounding around” us.
Facebook’s AI Backbone
Facebook is going big on AI too. At its annual F8 conference, Facebook revealed that it has an “AI backbone that powers much of the Facebook experience and is used actively by more than 25 percent of all engineers across the company.” The team that runs this backbone is called Applied Machine Learning (AML). Its director, Joaquin Quiñonero Candela, noted that Facebook is currently using AI to power translation, photo image search and real-time video classification.
In a recent Quora post, Candela went so far as to say that “Facebook could not exist without AI/ML.” In another Quora post, Candela explained how Facebook’s AI basically controls what you – the user – sees on Facebook:
Whenever a user logs into Facebook, these models are used to rank news feed stories (1B users every day, 1.5K stories per user per day on average), ads, search results (1B+ queries a day), trending news, friend recommendations and even rank notifications that a user receives, or rank the comments on a post.
It’s hardly surprising that both Google and Facebook are using Artificial Intelligence to make their computing systems smarter. But the flip side is that your computing experience is less under your control. Put it this way: who controls what you see on Facebook? To some degree, you do. But behind the scenes, Facebook’s AI system controls what gets into your news feed and what your search results are. So you don’t know what you’re not seeing.
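To make the “you don’t know what you’re not seeing” point concrete, here is a toy sketch of model-driven feed ranking. This is purely illustrative – the `score_story` function, the topic-overlap scoring, and the `feed_size` cut-off are my own simplified assumptions, not Facebook’s actual system – but it shows the basic mechanic: a model scores every candidate story, and anything below the cut-off is silently dropped.

```python
# Toy illustration of ML-style feed ranking (NOT Facebook's real system).
# A hypothetical model scores each story; only the top-ranked stories
# ever reach the user -- the rest are filtered out without notice.

def score_story(story, user_interests):
    """Hypothetical relevance model: count the overlap between a story's
    topics and the interests the system has inferred for this user."""
    return len(set(story["topics"]) & set(user_interests))

def rank_feed(stories, user_interests, feed_size=2):
    """Sort stories by model score and keep only the top feed_size."""
    ranked = sorted(stories, key=lambda s: score_story(s, user_interests),
                    reverse=True)
    return ranked[:feed_size]  # everything below the cut is never shown

stories = [
    {"id": 1, "topics": ["politics", "europe"]},
    {"id": 2, "topics": ["cats"]},
    {"id": 3, "topics": ["politics", "ai"]},
]
user_interests = ["ai", "politics"]  # inferred by the platform, not chosen by you

feed = rank_feed(stories, user_interests)
print([s["id"] for s in feed])  # story 2 is dropped; the user never knows
```

The real systems use learned models over thousands of signals rather than a topic count, but the design consequence is the same: the ranking criteria and the discarded stories are both invisible to the user.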
It’s a relatively short leap from Facebook controlling your news feed in 2016 to the Siri of 2026 taking control of all the daily tasks in your life. My point is this: if we allow AI to do too much for us, it may nibble away at our ability to think for ourselves and do things for ourselves. Where’s the line between self-reliance and reliance on AI? We’re going to find out sooner than you think.
In an influential 2014 essay, author and Yale computing professor David Gelernter railed against the increasing “roboticism” of our culture. In particular he targeted a current employee of Google, the inventor and author Ray Kurzweil:
The Kurzweil Cult teaches that, given the strong and ever-increasing pace of technological progress and change, a fateful crossover point is approaching. He calls this point the “singularity.” After the year 2045 (mark your calendars!), machine intelligence will dominate human intelligence to the extent that men will no longer understand machines any more than potato chips understand mathematical topology.
Gelernter believes that we are already becoming “dogs with iPhones” – that we’re getting dumber every year because of our reliance on technology to do our thinking for us. One could also argue the opposite: that smartphones augment our intelligence and help us achieve a higher level of thinking and activity. I suspect it’s a bit of both.
What concerns me more is a disturbing trend in the language used by AI proponents. Yuval Harari, the author of a book called Sapiens: A Brief History of Humankind, recently told an audience in South Korea that AI could “drive humans out of the job market and make many humans completely useless, from an economic perspective.” He’s also quoted as saying that AI has an “immense emotional statistical database” and hence computers may render humans obsolete even in the realm of emotional intelligence.
It’s that kind of talk that worries me about the development of AI. Humans – that’s us, folks – will never be “completely useless.” To even suggest that is to discount what human intelligence really is. It’s more than data; it’s our subjective, individualistic view of the world. It’s art and spiritualism. Artificial intelligence can’t come close to reaching that level of intelligence.
As for emotional intelligence, well, it’s possible the male of our species is at risk of being usurped by computers. But I doubt AI will ever equal a woman’s emotional intelligence! 😉
It’s clear that Google, Facebook and many other Internet companies are already using AI to make their computer systems smarter. Don’t get me wrong, I’m all for having a more intelligent Facebook news feed and getting Siri to help organise my calendar. But I do push back on two things:
- We should be wary of giving up too much control of our thinking to AI. Especially to large Internet companies like Google and Facebook. Remember their motives are profit-driven, not humanistic.
- Humans will never be useless or obsolete, and I strongly disagree with that kind of terminology from AI proponents. It’s hubris, plain and simple.
Image credit: javierocasio, DeviantArt