British engineers say it is now possible for machines to learn how natural or artificial systems work by simply observing them.
Dr Roderich Gross, of the Department of Automatic Control and Systems Engineering at the University of Sheffield, said he and his team devised a variation of the Turing Test called Turing Learning.
In the original test, an interrogator exchanges messages with two players hidden in another room: one human, the other a machine. The interrogator has to determine which of the players is human. If the interrogator consistently fails to do so, the machine has passed the test and is considered to have human-level intelligence.
“In our case, we put a swarm of robots under surveillance and wanted to find out which rules caused their movements,” said Gross.
“To do so, we put a second swarm – made of learning robots – under surveillance too. The movements of all the robots were recorded, and the motion data shown to interrogators. However, our interrogators are not human but rather computer programs that learn by themselves. Their task is to distinguish between robots from either swarm. They are rewarded for correctly categorising the motion data from the original swarm as genuine, and those from the other swarm as counterfeit. The learning robots that succeed in fooling an interrogator – making it believe their motion data were genuine – receive a reward.”
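To make the idea concrete, the toy Python sketch below shows a Turing Learning-style coevolution loop: candidate models of the observed behaviour and classifier "interrogators" are evolved together, with classifiers rewarded for labelling genuine data as genuine and counterfeit data as counterfeit, and models rewarded for fooling them. This is an illustration only, not the Sheffield team's implementation; the "behaviour" (a one-dimensional random walk with an unknown step size), the simple threshold classifier and all parameter names are assumptions made for the example.

# Illustrative Turing Learning-style loop (hypothetical example, not the
# published system). The genuine behaviour is a random walk whose step
# size is hidden; models try to reproduce it, classifiers try to tell
# genuine and counterfeit tracks apart.
import random

HIDDEN_STEP = 0.7      # unknown parameter governing the genuine behaviour
TRACK_LEN = 20         # length of each recorded motion track
POP = 20               # population size for models and classifiers
GENS = 200             # number of coevolution generations

def track(step):
    """Record one motion track: positions of an agent doing a random walk."""
    x, out = 0.0, []
    for _ in range(TRACK_LEN):
        x += random.uniform(-step, step)
        out.append(x)
    return out

def classify(clf, tr):
    """Classifier = (weight, bias) applied to a crude feature (mean absolute
    step). Returns True if it judges the track genuine."""
    w, b = clf
    feat = sum(abs(tr[i + 1] - tr[i]) for i in range(len(tr) - 1)) / (len(tr) - 1)
    return w * feat + b > 0

def evaluate(models, classifiers):
    """Reward classifiers for correct labels, models for fooling classifiers."""
    m_fit = [0.0] * len(models)
    c_fit = [0.0] * len(classifiers)
    for ci, clf in enumerate(classifiers):
        for mi, step in enumerate(models):
            genuine, fake = track(HIDDEN_STEP), track(step)
            if classify(clf, genuine):     # correctly accepts genuine data
                c_fit[ci] += 1
            if not classify(clf, fake):    # correctly rejects counterfeit data
                c_fit[ci] += 1
            else:                          # the model fooled this classifier
                m_fit[mi] += 1
    return m_fit, c_fit

def mutate(ind, sigma):
    if isinstance(ind, tuple):             # classifier: (weight, bias)
        return (ind[0] + random.gauss(0, sigma), ind[1] + random.gauss(0, sigma))
    return max(0.01, ind + random.gauss(0, sigma))   # model: step size

def evolve(pop, fit, sigma):
    """Keep the better half of the population, refill with mutated copies."""
    ranked = [p for _, p in sorted(zip(fit, pop), key=lambda t: -t[0])]
    elite = ranked[: len(pop) // 2]
    children = [mutate(random.choice(elite), sigma) for _ in range(len(pop) - len(elite))]
    return elite + children

models = [random.uniform(0.01, 2.0) for _ in range(POP)]
classifiers = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(POP)]
for _ in range(GENS):
    m_fit, c_fit = evaluate(models, classifiers)
    models = evolve(models, m_fit, 0.05)
    classifiers = evolve(classifiers, c_fit, 0.1)

print("best inferred step size:", round(models[0], 2), "(true value 0.7)")

In this toy setting the best surviving model typically drifts towards the hidden step size, without anyone ever defining a similarity metric for the tracks; the only feedback either population receives comes from the other population, which mirrors the reward scheme Gross describes above.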
Gross said the advantage of the Turing Learning approach is that humans no longer need to tell machines what to look for.
“Imagine you want a robot to paint like Picasso. Conventional machine learning algorithms would rate the robot’s paintings for how closely they resembled a Picasso. But someone would have to tell the algorithms what is considered similar to a Picasso to begin with,” he said.
“Turing Learning does not require such prior knowledge. It would simply reward the robot if it painted something that was considered genuine by the interrogators. Turing Learning would simultaneously learn how to interrogate and how to paint.”
He said their findings could be used to create algorithms that detect abnormalities in behaviour. This could prove useful for the health monitoring of livestock, for the preventive maintenance of machines, cars and airplanes, or in security applications, such as for lie detection or online identity verification.
The next step for Gross and his team is to reveal the workings of some animal collectives such as schools of fish or colonies of bees. This could lead to a better understanding of what factors influence the behaviour of these animals, and eventually inform policy for their protection.
https://www.engineersaustralia.org.au
Who is Roderich Gross?
Roderich Gross is a Senior Lecturer in the Department of Automatic Control and Systems Engineering at The University of Sheffield and an Executive Committee Member of Sheffield Robotics. He received a Ph.D. degree in engineering science from Université libre de Bruxelles in 2007. He was a JSPS Fellow (Tokyo Institute of Technology) and a Marie Curie Fellow (EPFL & Unilever). He has authored more than 60 publications in distributed robotics, including articles in Proceedings of the IEEE, Journal of the Royal Society Interface, The International Journal of Robotics Research and IEEE Transactions on Robotics. His h-index is 21. Dr Gross serves as the General Chair of DARS 2016, Editor of IROS, and as an Associate Editor of Swarm Intelligence, IEEE Robotics and Automation Letters, and IEEE Computational Intelligence Magazine.