Some AI proponents argue that Artificial Intelligence will usurp human intelligence or even make us obsolete. That kind of talk must stop, before we lose control of AI.
Artificial Intelligence (AI) is one of the leading Internet trends of 2016, particularly with large companies like Google and Facebook pouring resources into it. While there are many benefits to AI – for example, Facebook using it to make our news feeds smarter – the hype is getting hubristic. I’m particularly concerned about the language AI proponents are using.
I’m also concerned about how much control of our lives, and our ability to think, we’re giving to these AI systems. Or, more accurately, how much control we’re giving to the AI technology of companies like Google and Facebook.
Siri 2026
The most popular example of AI from the current era is virtual assistant software in your smartphone. On iPhone it’s called Siri, on Android it’s Google Now, and on a Microsoft Windows phone it’s Cortana. As if to emphasise that they’re AI, these three apps talk to you in a robotic voice – like HAL 9000 from the movie 2001: A Space Odyssey. Unlike HAL, though, virtual assistants like Siri are helpful and friendly. At least, they are now…
Let’s assume that in ten years’ time, Siri (or whatever ends up replacing it) is 100 times as powerful as it is today. Imagine that the 2026 Siri will be able to sort and auto-reply to the majority of your messages, manage your calendar without you having to lift a finger, do multiple daily tasks for you and only alert you if something falls outside certain parameters, automatically gather information relevant to your job, and so on. That’s a conservative estimate of what Siri 2026 will do. Almost certainly it will automate a significant portion of your work and life.
I don’t mean to suggest that Siri will slowly turn into HAL 9000. Well maybe Cortana will, given Microsoft’s recent history with AI bots (I’m kidding, kind of). But we should question whether it’s wise to let AI make more and more decisions for us. In case you’re wondering if I’m overly paranoid about what the bigcos are doing with AI, let’s look at the current state of play.
Google DeepMind
Google is the Internet bigco with perhaps the biggest reason to develop Artificial Intelligence. After all, its raison d’être is to “organize the world’s information.” Google’s most significant investment in AI so far was the 2014 acquisition of a British company called DeepMind, which went on to develop the AlphaGo software that defeated the world Go champion earlier this year. The co-founder and CEO of DeepMind, Demis Hassabis, recently told MIT Technology Review that DeepMind is aimed at “solving intelligence, and then using that to solve everything else.” In other words, Google wants to use AI as a platform for all its other projects – from search, to virtual assistants, to much more. The MIT article explained:
Hassabis wants to create what he calls general artificial intelligence—something that, like a human, can learn to take on just about any task. He envisions it doing things as diverse as advancing medicine by formulating and testing scientific theories, and bounding around in agile robot bodies.
So it’s fair to assume that AI is key to Google’s expansion plans. It doesn’t sound that evil, unless of course you’re concerned about robots “bounding around” us.
Facebook’s AI Backbone
Facebook is going big on AI too. At its annual F8 conference, Facebook revealed that it has an “AI backbone that powers much of the Facebook experience and is used actively by more than 25 percent of all engineers across the company.” The team that runs this backbone is called Applied Machine Learning (AML). Its director, Joaquin Quiñonero Candela, noted that Facebook is currently using AI to power translation, photo image search and real-time video classification.
In a recent Quora post, Candela went so far as to say that “Facebook could not exist without AI/ML.” In another Quora post, Candela explained how Facebook’s AI basically controls what you – the user – sees on Facebook:
Whenever a user logs into Facebook, these models are used to rank news feed stories (1B users every day, 1.5K stories per user per day on average), ads, search results (1B+ queries a day), trending news, friend recommendations and even rank notifications that a user receives, or rank the comments on a post.
It’s hardly surprising that both Google and Facebook are using Artificial Intelligence to make their computing systems smarter. But the flip side is that your computing experience is less under your control. Put it this way: who controls what you see on Facebook? To some degree, you do. But behind the scenes, Facebook’s AI system controls what gets into your news feed and what your search results are. So you don’t know what you’re not seeing.
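To make concrete what “ranking” means in practice, here is a deliberately simplified sketch – not Facebook’s actual models, just the general shape of a feed ranker: score each candidate story with a model, then sort. The features and weights below are made-up stand-ins for what would normally be machine-learned:

```python
# A toy feed ranker, purely illustrative. In a real system the weights come
# from machine-learned models trained on engagement data; here they are
# arbitrary stand-ins.

stories = [
    {"id": 1, "friend_affinity": 0.9, "predicted_click": 0.2, "recency": 0.8},
    {"id": 2, "friend_affinity": 0.1, "predicted_click": 0.7, "recency": 0.5},
    {"id": 3, "friend_affinity": 0.6, "predicted_click": 0.6, "recency": 0.1},
]

weights = {"friend_affinity": 0.5, "predicted_click": 0.3, "recency": 0.2}

def score(story):
    # Weighted sum of the story's features: the model's estimate of "relevance"
    return sum(weights[f] * story[f] for f in weights)

feed = sorted(stories, key=score, reverse=True)
print([s["id"] for s in feed])  # the order the user actually sees
```

The user only ever sees the top of that sorted list – which is what “you don’t know what you’re not seeing” amounts to.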
It’s a relatively short leap from Facebook controlling your news feed in 2016 to the Siri of 2026 taking control of all the daily tasks in your life. My point is this: if we allow AI to do too much for us, it may nibble away at our ability to think for ourselves and do things for ourselves. Where’s the line between self-reliance and reliance on AI? We’re going to find out sooner than you think.
Roboticism
In an influential 2014 essay, author and Yale computing professor David Gelernter railed against the increasing “roboticism” of our culture. In particular he targeted a current employee of Google, the inventor and author Ray Kurzweil:
The Kurzweil Cult teaches that, given the strong and ever-increasing pace of technological progress and change, a fateful crossover point is approaching. He calls this point the “singularity.” After the year 2045 (mark your calendars!), machine intelligence will dominate human intelligence to the extent that men will no longer understand machines any more than potato chips understand mathematical topology.
Gelernter believes that we are already becoming “dogs with iPhones” – that we’re getting dumber every year because of our reliance on technology to do our thinking for us. One could also argue that smartphones augment our intelligence, that they help us achieve a higher level of thinking and activity. I suspect it’s a bit of both.
What concerns me more is a disturbing trend in the language used by AI proponents. Yuval Harari, the author of a book called Sapiens: A Brief History of Humankind, recently told an audience in South Korea that AI could “drive humans out of the job market and make many humans completely useless, from an economic perspective.” He’s also quoted as saying that AI has an “immense emotional statistical database” and hence computers may render humans obsolete even in the realm of emotional intelligence.
It’s that kind of talk that worries me about the development of AI. Humans – that’s us, folks – will never be “completely useless.” To even suggest that is to discount what human intelligence really is. It’s more than data; it’s our subjective, individualistic view of the world. It’s art and spiritualism. Artificial intelligence can’t come close to reaching that level of intelligence.
As for emotional intelligence, well it’s possible the male of our species is at risk of being usurped by computers. But I doubt AI will ever equal a woman’s emotional intelligence! 😉
Concluding Thoughts
It’s clear that Google, Facebook and many other Internet companies are already using AI to make their computer systems smarter. Don’t get me wrong, I’m all for having a more intelligent Facebook news feed and getting Siri to help organise my calendar. But I do push back on two things:
- We should be wary of giving up too much control of our thinking to AI. Especially to large Internet companies like Google and Facebook. Remember their motives are profit-driven, not humanistic.
- Humans will never be useless or obsolete, and I strongly disagree with that kind of terminology from AI proponents. It’s hubris, plain and simple.
Let me know your own thoughts on AI, either with a comment on the blog post or on my Facebook or Twitter.
Image credit: javierocasio, DeviantArt
Thanks for another great post. Stepping back, it’s interesting to see how people are reacting to these issues, given they’re not “new” – part of SciFi/future thinking for decades and absorbed into our culture – but there’s a dawning realisation they’re becoming a reality. It’s like we’re finally being directed to star in our own movie, scripted years ago.
And yes, it’s worrying to see complacency around the growing disconnect between AI capability and “conscience” – if this is a gap that grows, fear is justified, given how humans would act with the same gap! This kind of statement underlines the problem:
>AI has an “immense emotional statistical database”
Even if that’s true, that’s just a database, not a framework for ethical decision-making. Going all the way back to “I, Robot”: no set of fixed laws can sufficiently anticipate all possible circumstances.
Maybe humans will become redundant as operators, but essential as ethicists?
Thanks Tom, excellent comment. This is absolutely what scifi is good at, trying to understand these issues and work towards solutions – years before we’ll need them. I agree that humans will continue to be essential, as ethicists and much more.
GREAT question. “Maybe humans will become redundant as operators, but essential as ethicists?”
This question is phenomenal and helps conclude a comment I was hoping to share this weekend about this article. I’ll add more color to my comment over the weekend, but the general summary is that chapter 6, “Chains of Consequence” in Kevin Ashton’s book ‘How to fly a horse: The secret history of creation, invention and discovery’, has a solid framework for thinking about the implications of AI. He doesn’t cover AI specifically, but speaks to invention and change.
There’s an interesting intersection between Ricmac’s article, your question, and chapter 6 of this book.
@jeffkauffmanjr
Great article, Richard. I look forward to every week’s post. I’m hoping to expand on your two concerns.
First up,
“1. We should be wary of giving up too much control of our thinking to AI. Especially to large Internet companies like Google and Facebook. Remember their motives are profit-driven, not humanistic.”
If I’m tracking with your definition of “thinking”, then your concern is directly in line with what many describe as AI’s largest problem: improperly optimizing towards a utility function. Meaning, an AI program could optimize towards solving its utility function at the expense of human life. For example, if an AI’s utility function is to solve world hunger, it could determine that to eliminate world hunger, the human population should decrease by 20%. In this scenario, you have a very noble utility function but a very tragic optimization decision. In terms of giving up too much of our “thinking”, I feel like this is exactly what you’re cautioning against. I’ll just add that even if profit-driven motives from Facebook and Google are removed from the equation, and we assume that their motives for AI are good, the path AI could take towards solving its utility function could still be detrimental.
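To make that failure mode concrete, here is a deliberately tiny sketch – a hypothetical toy model, not any real AI system – of an optimizer maximizing a “fewer hungry people” utility function that has no other human values encoded in it:

```python
# Hypothetical toy model, echoing the world-hunger example above: the
# optimizer's only goal is to minimize the number of hungry people.

def hungry_people(population, food_supply):
    """People left hungry given a fixed food supply (toy model)."""
    return max(0, population - food_supply)

def utility(population, food_supply):
    """Higher is better: fewer hungry people. No other values are encoded."""
    return -hungry_people(population, food_supply)

FOOD_SUPPLY = 80  # arbitrary: enough food for 80 people

# The optimizer is free to propose any population size from 0 to 100...
best_plan = max(range(0, 101), key=lambda pop: utility(pop, FOOD_SUPPLY))

print(best_plan)  # -> 0. "No people, no hunger": utility maximized, tragically.
```

Any plan from 0 to 80 people scores equally well here, and the optimizer happily returns the first one it finds – the point being that a utility function only protects the things it explicitly measures.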
Have you heard of Elon Musk’s OpenAI project? He clearly shares your concern about giving up too much of our thinking to AI. In addition to this, the OpenAI project is also concerned with the emergence of a single, dominant AI. Elon’s stance on mitigating the risk associated with AI is that there must be multiple AI programs operating at a high level. Essentially, the more high-level AIs we have, the more likely we are to have a checks-and-balances system, in terms of using AI to keep other AI in check.
Just by open-sourcing AI, we hedge our bets against severe issues arising from a single, dominant AI. In terms of solving improper optimization towards a utility function, the OpenAI project’s latest blog post discusses Reinforcement Learning: “Reinforcement learning (RL) is the subfield of machine learning concerned with decision making and motor control. It studies how an agent can learn how to achieve goals in a complex, uncertain environment.” The team just released a toolkit for developing and comparing RL algorithms. In the context of improperly optimizing towards a utility function, you can see how the OpenAI project is actively trying to prevent improper optimization.
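For readers who want to see what that toolkit looks like in practice, here is a minimal sketch of the agent/environment loop that reinforcement learning formalizes, using OpenAI Gym’s classic CartPole task. The agent below just acts randomly – a real RL algorithm would learn from the rewards it receives:

```python
# Minimal agent/environment loop with OpenAI Gym. The "agent" is a random
# policy, standing in for whatever RL algorithm you are developing or comparing.
import gym

env = gym.make("CartPole-v0")   # a classic control task bundled with Gym
observation = env.reset()

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()                # random action
    observation, reward, done, info = env.step(action)
    total_reward += reward

print("Episode reward:", total_reward)
env.close()
```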
Open-source AI comes with its own concerns. Over the next few decades, as two-thirds of the world’s population gains Internet access, we’ll have more people than ever who can contribute to and use very sophisticated AI. In the wrong hands, AI spells major trouble. But again, the alternative is to leave advanced AI in the hands of a few, and that seems less desirable.
——————————————————————————————
I’d also like to peel back the layers on your second concern.
“2. Humans will never be useless or obsolete, and I strongly disagree with that kind of terminology from AI proponents. It’s hubris, plain and simple.”
I agree with you, and will go as far as to say that AI will have the exact opposite effect. Rather than becoming obsolete, we will see an explosion of creativity unlike anything we’ve seen in the history of mankind. I too am disappointed in Kurzweil’s claims that machine intelligence will surpass human intelligence in 2045. His reasoning is based on advanced knowledge of computing, which is only half of the equation. Most would agree that the human brain is far from being understood and that we have yet to unlock its full potential. If we cannot outline how the human brain works, how can we say that we know its limits? And if we don’t know the brain’s limits, how can we claim a point in time when something we create will surpass our brain’s potential?
If we control AI’s optimization problem, then AI will likely serve us in the same manner that every other significant invention has served mankind. It will allow us to access parts of our brain, and thinking power, that we previously could not – or at the very least to use more of our brains, because we’re freeing ourselves from other mental tasks. If any fear is warranted, it shouldn’t be directed towards AI. We should fear, or at the very least acknowledge, our newer selves and our new world. We should ask more questions about our own evolution. This is why Tom’s question in the first comment got me so excited. It acknowledges that invention throughout history has allowed mankind to take on higher-order roles. There are more people, writing more books, and sharing more ideas than ever before.
Which leads me into chapter 6, “Chains of Consequence”, in Kevin Ashton’s book ‘How to fly a horse: The secret history of creation, invention and discovery’. And just to give readers of this comment some perspective on who Kevin Ashton is: he coined the term “Internet of Things” back in 1999. Here are a few words from his book…
“Chains of tools have chains of consequence. As creators, we can anticipate some of these consequences, and if they are bad, we should of course take steps to prevent them, up to and including creating something else instead. What we cannot do is stop creating. The answer to invention’s problems is not less invention but more.”
It is easier to say that new technology will make us obsolete than it is to figure out the chains of consequence. Kurzweil’s claims are simply headline material. The truth is, we will not have our intelligence surpassed by machines in 2045. Claims such as this do not help us focus our attention on the real issues new technology can bring. This is why the work and focus of OpenAI is so important. It recognizes that AI could cause serious problems, like any new technology, and is working towards figuring out what those problems are. If AI is properly managed, then humans have an exciting new era of unparalleled creativity ahead.
Sources: OpenAI.com, https://youtu.be/Ze0_1vczikA, How to fly a horse: The secret history of creation, invention and discovery by Kevin Ashton.
@jeffkauffmanjr
Thanks Jeff, epic comment!