What We Can Learn from Robots
December 28, 2004 | Source: Technology Review
Mitsuo Kawato, director of the ATR Computational Neuroscience Laboratories in Kyoto, Japan, believes that experiments on humanoid robots can provide simplified models of what certain groups of neurons in the brain are doing.
Then, using advanced imaging techniques, he checks whether the activity of brain cells in monkeys and humans accords with those models.
By combining magnetic-resonance imaging, which offers millimeter-level spatial resolution, with electrical and magnetic recording techniques, which resolve brain activity down to milliseconds, Kawato’s group hopes to understand in finer detail what is happening among these neurons. It’s what Kawato calls “mind decoding”: reading out a person’s intent based solely on neural signal patterns. If successful, it would be a breakthrough in understanding how the mind works.
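At its core, this kind of “mind decoding” can be framed as a pattern-classification problem: match a measured neural signal against stored templates of known intents. The sketch below illustrates the idea with a toy nearest-centroid decoder; the intents, feature vectors, and function names are all hypothetical stand-ins, not Kawato’s actual method.

```python
# Toy sketch of intent decoding as nearest-centroid classification.
# Feature vectors stand in for preprocessed neural recordings; all
# names and data here are illustrative assumptions.

def centroid(vectors):
    """Element-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def decode_intent(signal, templates):
    """Return the intent whose centroid lies nearest (Euclidean) to the signal."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(templates, key=lambda intent: dist(signal, templates[intent]))

# "Training" data: averaged activity patterns per intended movement.
templates = {
    "reach_left":  centroid([[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]]),
    "reach_right": centroid([[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]]),
}

print(decode_intent([0.85, 0.15, 0.15], templates))  # reach_left
```

Real decoders use far richer statistical models, but the principle — mapping a signal pattern onto the closest known intent — is the same.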
Translating the brain’s messages into language that a robot can understand is a step toward realizing a long-term technological ambition: a remote “brain-machine interface” that lets a user participate in events occurring thousands of kilometers away. A helmet could monitor a person’s brain activity and report it, over the Internet, to a remote humanoid robot; in nearly real time, the person’s actions would be replicated by a digital double.
To build the system, researchers will need to look in the brain for specific signals, translate them, transmit the data wirelessly without large delays, and use them to control a device on the other end.
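The four stages named above form a simple pipeline: detect a signal, translate it into a command, transmit it, and actuate the remote device. The sketch below chains hypothetical placeholder functions to show the data flow; no real brain-machine interface or robot API is implied, and every name here is an assumption for illustration.

```python
# Illustrative sketch of the four-stage control loop:
# detect -> translate -> transmit -> actuate.
# All functions are hypothetical placeholders.
import json

def detect_signal():
    """Stand-in for reading a neural signal pattern from a helmet sensor."""
    return {"region": "motor_cortex", "pattern": [0.7, 0.2, 0.9]}

def translate(signal):
    """Map the strongest component of the pattern onto a motor command."""
    strongest = signal["pattern"].index(max(signal["pattern"]))
    return {"command": ["grip", "turn", "step"][strongest]}

def transmit(command):
    """Serialize for the network hop; a real link must keep latency low."""
    return json.dumps(command)

def actuate(payload):
    """Remote end: decode the payload and 'execute' it on the robot."""
    return f"robot executes: {json.loads(payload)['command']}"

print(actuate(transmit(translate(detect_signal()))))  # robot executes: step
```

In a real system each stage is a hard engineering problem in its own right — signal extraction, decoding, and low-latency networking — but the overall shape of the loop is this simple.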
Kawato is lobbying the Japanese government to help fund a worldwide project to build a humanoid robot that would have the intelligence and capabilities of a five-year-old child. In addition to the technological payoff, says Kawato, the benefits to neuroscience would be immense.