Google’s autonomous cars may look cute, like a yuppie cross between a Little Tikes Cozy Coupe and a sheet of flypaper, but to make it in the real world they’re going to have to act like calculating predators.
At least, that’s what a handful of scientists at the Institute of Neuroinformatics at the University of Zurich in Switzerland believe. They recently taught a robot to act like a predator and hunt its prey—a human-controlled robot—using a specialized camera and software that let the robot essentially teach itself how to find its mark.
The end goal of the work is arguably more beneficial to humanity than creating a future robot bloodsport, however. The researchers aim to design software that lets a robot assess its environment and track a target in real time and space. If robots are ever going to make it out of the lab and into our daily lives, this is a skill they’re going to need.
"One could imagine future luggage or shopping carts that follow you"
“Following [in large groups of self-driving cars or drones] is the obvious application, but one could imagine future luggage or shopping carts that follow you,” Tobi Delbruck, a professor at the Institute of Neuroinformatics, wrote me in an email. “This way, the problem is less like a predator and its prey and more like herding, or a parent and child.”
The predator robot’s hardware, like its behaviour, is modelled after animals. The key to its success as a hunter is a “silicon retina”—co-invented by Delbruck as part of the European Union-funded VISUALISE project—that mimics the human eye to speedily process visual data. A paper describing the approach is available on the arXiv preprint server.
Raw data representing what the predator robot "sees." Screengrab: Moeys et al.
If a robot tracked its prey with a conventional camera, the fixed frame rate would produce a series of snapshots that miss the true path of movement between frames, especially if the escaping prey were moving quickly. The silicon retina produces no frames at all: each of its pixels individually and autonomously detects changes in illumination and transmits that information the moment it happens, yielding a continuous stream of visual information instead of a series of disjointed images.
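The idea behind such an event-based pixel can be sketched in a few lines. This is an illustrative toy, not the sensor's actual circuitry: the threshold value and brightness signal below are made up, and real silicon retinas work on log intensity in analog hardware. But it shows the core behavior—events fire only when brightness changes enough, so a static scene is silent and fast motion produces a dense burst of precisely timed events.

```python
# Toy model of one event-camera pixel: emit a (time, polarity) event each
# time the brightness moves more than `threshold` away from the last
# reference level, instead of sampling on a fixed frame clock.
# All values here are illustrative, not taken from the paper.

def pixel_events(samples, threshold=0.2):
    """Return (time, polarity) events for a brightness trace."""
    events = []
    reference = samples[0]
    for t, value in enumerate(samples[1:], start=1):
        while value - reference > threshold:   # brightness rose past threshold
            reference += threshold
            events.append((t, +1))
        while reference - value > threshold:   # brightness fell past threshold
            reference -= threshold
            events.append((t, -1))
    return events

# A moving edge sweeping across the pixel, then retreating:
brightness = [0.0, 0.0, 0.5, 1.0, 1.0, 0.3]
print(pixel_events(brightness))
# → [(2, 1), (2, 1), (3, 1), (3, 1), (5, -1), (5, -1)]
```

Note that the unchanging stretches of the trace generate no events at all—that sparseness is what lets the sensor report fast motion without the bandwidth cost of full frames.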
That data is then processed by a deep learning neural network—software that trains on massive amounts of data so that it can “learn” what to do the next time it receives similar inputs. In other words, the more the robot practices tracking prey in the wild, the better it gets at it.
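The learning loop at the heart of this can be shown with a deliberately tiny stand-in for the researchers' network: a single linear neuron trained by gradient descent to turn the horizontal position of detected activity into a steering command. The data and learning rate are invented for illustration; the real system uses a far larger convolutional network trained on recorded sensor data.

```python
# Minimal sketch of supervised learning: a one-neuron "network" repeatedly
# adjusts its weight and bias to shrink its prediction error, standing in
# for the paper's much larger convolutional neural network.

def train(samples, epochs=200, lr=0.1):
    """Fit steer = w * x + b by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = (w * x + b) - target
            w -= lr * error * x   # nudge weight against the error
            b -= lr * error       # nudge bias against the error
    return w, b

# Toy training set: prey at horizontal position x (in [-1, 1]) should
# produce a proportional steering command toward it.
data = [(-1.0, -1.0), (-0.5, -0.5), (0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]
w, b = train(data)
```

After training, `w` is close to 1 and `b` close to 0, so the neuron steers toward wherever the prey appears—including positions it never saw during training. That generalization is also where Moeys's caveat bites: inputs far outside the training data can produce unexpected outputs.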
“Robots guided by neural networks can perform extremely well in situations they have learned from through massive training,” Diederik Moeys, a PhD student and first author of the work, wrote me in an email. “But any unknown or new situation can produce the most unexpected—and sometimes hilarious—results.”
In other words, if machine predators are prowling our roadways in the near future, that might not be such a bad thing.