PBS Newshour | How smart is today’s artificial intelligence?
May 8, 2015
Artificial intelligence is creeping into our everyday lives through technology like check-scanning machines and GPS navigation. How far away are we from making intelligent machines that actually have minds of their own? Hari Sreenivasan reports on the ethical considerations of artificial intelligence as part of our breakthroughs series.
related reading:
PBS | Newshour
PBS | YouTube channel: main
PBS | YouTube channel: Newshour
more on this theme on Kurzweil Network:
PBS Newshour | Why we’re teaching computers to help treat cancer
full transcript:
JUDY WOODRUFF: You may not realize it, but artificial intelligence is all around us. We rely on smart machines to scan our checks at ATMs, to navigate us on road trips and much more.
Still, humans have quite an edge. Just today, four of the world's best Texas Hold 'Em poker players won an epic two-week tournament against, yes, an advanced computer program.
The field of artificial intelligence is pushing new boundaries. Hari Sreenivasan has the first in a series of stories about it and the concerns over where it may lead. It’s the latest report in our ongoing breakthroughs series on invention and innovation.
HARI SREENIVASAN: Artificial intelligence has long captured our imaginations.
ACTOR: Open the pod bay doors, HAL.
ACTOR: I'm sorry, Dave. I'm afraid I can't do that.
HARI SREENIVASAN: With robots like HAL in 2001: A Space Odyssey and now Ava from the recently released Ex Machina.
ACTRESS: Hello. I have never met anyone new before.
HARI SREENIVASAN: And Chappie.
ACTRESS: A thinking robot could be the end of mankind.
HARI SREENIVASAN: The plots thicken when the intelligent machines question the authority of their makers, and begin acting of their own accord.
ACTRESS: Do you think I might be switched off?
ACTOR: It’s not up to me.
ACTRESS: Why is it up to anyone?
HARI SREENIVASAN: Make no mistake, these are Hollywood fantasies. But they do tap into real life concerns about artificial intelligence, or AI.
Elon Musk, founder and CEO of Tesla Motors & SpaceX, is not exactly a Luddite bent on stopping the advance of technology. But he says AI poses a potential threat more dangerous than nuclear weapons.
ELON MUSK, CEO, Tesla Motors & SpaceX: I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it’s probably that. With artificial intelligence, we are summoning the demon.
HARI SREENIVASAN: Musk recently donated $10 million to the Future of Life Institute, which is focused on keeping AI research beneficial to humanity. Add his voice to a list of bright minds like physicist Stephen Hawking, Microsoft co-founder Bill Gates and several leaders in the field of artificial intelligence, among them Stuart Russell, who heads the AI lab at the University of California, Berkeley.
What concerns you about how artificial intelligence is already being used, or will be used shortly?
STUART RUSSELL, University of California: In the near term, the biggest problem is the development of autonomous weapons. Everyone knows about drones. Drones are remotely piloted. They’re not robots in a real sense. There’s a person looking through the camera that’s on the aircraft, and deciding when to fire.
An autonomous weapon would do all of that itself. It chooses where to go, it decides what the target is, and it decides when to fire.
HARI SREENIVASAN: He's concerned about weapons like the British Taranis. It's featured in this promotional video by BAE Systems, a former Newshour underwriter.
The Taranis is currently operated remotely by humans, but this drone is outfitted with artificial intelligence, and will be capable of operating fully autonomously. Russell testified to the United Nations, which is considering a ban on such weapons that can target and kill humans without requiring a person to pull the trigger.
STUART RUSSELL: I think there’s a fundamental moral issue about whether it’s right for a machine to decide to kill a person. It’s bad enough that people are deciding to kill people, but at least they have perhaps some moral argument that they’re doing it to ultimately defend their families or prevent some greater evil.
HARI SREENIVASAN: While defense is one application of artificial intelligence, how close are we to building truly autonomous robots like the ones in the movies?
Down the hall from Russell, at the University of California, Berkeley's AI lab, Pieter Abbeel and his students are training their PR2 robot to think for itself.
PIETER ABBEEL, University of California: One of the main things we have been looking at is, how can we get a robot to think about situations it’s never seen before?
So, an example of that is, let’s say a robot is supposed to fold laundry or maybe tie a knot in a rope. Whenever you’re faced with even the same laundry article or the same rope, it’ll be in a different shape, and so you can’t just execute blindly the same set of motions and expect success.
HARI SREENIVASAN: Abbeel’s team is painstakingly training the PR2 to compete in an Amazon warehouse picking challenge in late May. So, right now, you’re just teaching it to grab this stack of soap — that’s it?
PIETER ABBEEL: Yes. We just started on this. And so right now, the robot is essentially learning how to grab soap bars out of the shelf. But really what we’re after is equipping the robot with the capability such that, if you come up with a whole new list of, let’s say, 1,000 new items, that we can very quickly equip it with the skill to pick any one of those 1,000 items.
HARI SREENIVASAN: On the day we visited, the PR2 was hobbled by a broken arm, and the robot failed at the task several times. Oh, no dice.
PIETER ABBEEL: Missed it this time.
HARI SREENIVASAN: Missed it. A tiny reminder that training a robot to think is no small task. So you think super-intelligence is still pretty far off and we don’t need to worry about it today?
PIETER ABBEEL: I would say it’s still pretty far off, yes.
HARI SREENIVASAN: But while training this robot may be tough today, not everyone thinks super-intelligence is that far out of our reach.
Ray Kurzweil is director of engineering at Google. He spoke to us in his capacity as an independent inventor of devices like the flatbed scanner. Among his many awards is a technical Grammy for inventing the first computer-based instrument that could realistically reproduce the sound of a piano.
Kurzweil says machines are on track to be on par with human intelligence in less than 15 years.
RAY KURZWEIL, inventor & futurist: By 2029, they will actually read at human levels and understand language and be able to communicate at human levels, but then do so at a vast scale.
The primary implication is that we’re going to combine our intelligence with computers. We’re going to make ourselves smarter. By the 2030s, they will literally go inside our bodies and inside our brains.
HARI SREENIVASAN: He calculates that, with exponential growth in computing and biotechnology, we will reach what he calls the singularity within 25 years. That's when machine intelligence exceeds human intellectual capacity.
RAY KURZWEIL: These technologies expand exponentially. They double in power roughly every year. Look at the Human Genome Project. It was a 15-year project. Halfway through, 7.5 years in, 1 percent had been completed, so some people looked at it and said, well, 1 percent, we have just barely started. I looked at it and said, 1 percent means we're halfway through, because 1 percent is only seven doublings from 100 percent, and it doubled every year. Seven years later, it was finished.
So, from one perspective, we’re in the early stage in artificial intelligence, but exponentials start out slowly, and then they take off.
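The arithmetic behind that genome example is easy to check. Below is a minimal sketch (our illustration, not part of the broadcast) confirming that a quantity starting at 1 percent and doubling every year needs exactly seven doublings to pass 100 percent:

```python
# Minimal sketch (illustrative, not from the broadcast): count how many
# annual doublings a quantity starting at 1 percent needs to reach
# 100 percent, mirroring Kurzweil's genome-project arithmetic.

def doublings_to_target(start_pct: float, target_pct: float = 100.0) -> int:
    """Return the number of doublings needed for start_pct to reach target_pct."""
    count = 0
    pct = start_pct
    while pct < target_pct:
        pct *= 2  # "they double in power roughly every year"
        count += 1
    return count

print(doublings_to_target(1.0))  # 7, since 1 -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128
```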
HARI SREENIVASAN: One such technology is the self-driving car. In the 1990s, Kurzweil predicted it would happen, despite a chorus of experts who declared it impossible. Today, self-driving cars have been test-driven, without incident, for hundreds of thousands of miles, but are not quite ready for consumers.
FEI-FEI LI, Stanford University: Yes, we have prototype cars that can drive by themselves. But without smart vision, they cannot really tell the difference between a crumpled paper bag, which can be run over, and a rock that size, which should be avoided.
HARI SREENIVASAN: Fei-Fei Li explains in a recent TED Talk.
FEI-FEI LI: Our smartest machines are still blind.
HARI SREENIVASAN: Li is director of Stanford University’s artificial intelligence lab. How hard is it to get a computer to see something and understand what it is?
FEI-FEI LI: So, it's actually really, really hard. Think about it: a camera takes pictures, right? We have millions of pixels, but these are just numbers; they don't really have meaning in themselves.
The task for artificial intelligence and computer vision algorithms is to take these numbers and convert them into meaningful objects.
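Li's observation is easy to make concrete. The toy sketch below (our illustration; the brightness rule is a deliberate stand-in for what a real vision model must learn) shows that, to a computer, an image is only a grid of numbers until some algorithm maps those numbers to a label:

```python
# Toy illustration (ours, not from the segment) of Li's point that an
# image is just a grid of numbers. The "classifier" here is a deliberate
# stand-in: real vision models learn the mapping from raw pixel values
# to object labels rather than using a hand-written rule.
import numpy as np

# An 8x8 grayscale "image": every pixel is just an integer from 0 to 255.
image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
print(image)  # nothing in these numbers says "rock" or "paper bag"

def toy_label(pixels: np.ndarray) -> str:
    """A naive stand-in for a learned model: map numbers to a label."""
    return "bright scene" if pixels.mean() > 127 else "dark scene"

print(toy_label(image))
```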
HARI SREENIVASAN: Teaching a machine to infer meaning is not easy, even for this highly advanced dog robot. Humans have had thousands of years of evolution. Computers, Li cautions, are a ways off.
FEI-FEI LI: We are very, very far from an intelligent system, not only the sensory intelligence, but cognition, reasoning, emotion, compassion, empathy. That whole full spectrum, we're nowhere near that.
HARI SREENIVASAN: Robots like this one coming out of Stanford’s AI lab may be on proverbial training wheels today, but are part of the steady march toward super-intelligent machines.
For the PBS Newshour, I'm Hari Sreenivasan in Palo Alto, California.

Editor's note: An Associated Press investigation published days after our segment was broadcast points out that four of the nearly 50 self-driving cars now operating around California have gotten into minor accidents since September, when the state began issuing permits for companies to test them on public roads.