Science & technology
AI for vehicles
Is it smarter than a seven-month-old?
How to improve the intelligence of self-driving cars
By the age of seven months, most children have learned that objects still exist even when they are out of sight.
Put a toy under a blanket and a child that old will know it is still there, and that he can reach underneath the blanket to get it back.
This understanding, of "object permanence", is a normal developmental milestone, as well as a basic tenet of reality.
It is also something that self-driving cars do not have. And that is a problem.
Autonomous vehicles are getting better, but they still do not understand the world as a human being does.
For a self-driving car, a bicycle that is momentarily hidden by a passing van is a bicycle that has ceased to exist.
This failing is basic to the now-widespread computing discipline that has arrogated to itself the slightly misleading moniker of artificial intelligence (AI).
Current AI works by building up complex statistical models of the world, but it lacks a deeper understanding of reality.
How to give AI at least some semblance of that understanding—the reasoning ability of a seven-month-old child, perhaps—is now a matter of active research.
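The bicycle-behind-the-van failing can be sketched in code. The toy tracker below is an invented illustration, not any real system: it follows a single object's position frame by frame, and the difference between "forgetting" and object permanence comes down to whether the tracker coasts a briefly unseen object forward on its last known velocity or simply deletes it.

```python
# Hypothetical sketch: all names and numbers here are invented.
# Detections are 1-D positions per frame; None means the bicycle
# is hidden behind the passing van.

MAX_COAST = 3  # frames to keep believing in an unseen object

def track(detections, coast=True):
    """Return the tracker's belief about the object at each frame."""
    beliefs = []
    last_pos, velocity, missed = None, 0.0, 0
    for det in detections:
        if det is not None:
            if last_pos is not None:
                velocity = det - last_pos
            last_pos, missed = det, 0
            beliefs.append(last_pos)
        else:
            missed += 1
            if coast and last_pos is not None and missed <= MAX_COAST:
                last_pos += velocity      # extrapolate: it still exists
                beliefs.append(last_pos)
            else:
                last_pos = None           # naive tracker: it has ceased to exist
                beliefs.append(None)
    return beliefs

frames = [0.0, 1.0, 2.0, None, None, 5.0]  # bike hidden for two frames
print(track(frames, coast=False))  # → [0.0, 1.0, 2.0, None, None, 5.0]
print(track(frames, coast=True))   # → [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
```

With coasting switched off, the tracker's belief during the occlusion is `None`, exactly the "ceased to exist" behaviour described above; with it on, the bicycle's position is predicted through the gap.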
Modern AI is based on the idea of machine learning.
If an engineer wants a computer to recognise a stop sign, he does not try to write thousands of lines of code that describe every pattern of pixels which could possibly indicate such a sign.
Instead, he writes a program that can learn for itself, and then shows that program thousands of pictures of stop signs.
Over many repetitions, the program gradually works out what features all of these pictures have in common.
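That training loop can be sketched with a toy perceptron. Everything below is invented for illustration: each "picture" is six binary features, of which the first three stand for properties every stop sign shares (say, red, octagonal, white border) and the last three are noise. Over many repetitions, each mistake nudges the weights towards the features the positive examples have in common.

```python
import random

random.seed(0)

def make_example(is_sign):
    # Hypothetical features: a stop sign has all three shared
    # properties; a non-sign lacks at least one of them.
    core = [1, 1, 1]
    if not is_sign:
        while core == [1, 1, 1]:
            core = [random.randint(0, 1) for _ in range(3)]
    noise = [random.randint(0, 1) for _ in range(3)]
    return core + noise, 1 if is_sign else 0

pictures = [make_example(True) for _ in range(50)] + \
           [make_example(False) for _ in range(50)]

weights, bias, lr = [0.0] * 6, 0.0, 0.1

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Repeatedly show the program the pictures; every wrong guess
# shifts weight onto the features that distinguish the signs.
for epoch in range(500):
    random.shuffle(pictures)
    mistakes = 0
    for features, label in pictures:
        error = label - predict(features)
        if error:
            mistakes += 1
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    if mistakes == 0:  # it has "worked out" what the signs have in common
        break
```

This is, of course, a caricature: real systems learn millions of weights from raw pixels rather than six hand-picked features. But the principle, learning from examples rather than from hand-written rules, is the same.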