At its CES 2016 press conference, Nvidia talked up the potential of its new AI ‘supercomputer’ for self-driving cars – the Nvidia Drive PX 2.
Volvo will launch a public trial next year, putting 100 autonomous cars on the road on Volvo’s home ground of Gothenburg, Sweden.
Nvidia chief Jen-Hsun Huang believes that self-driving vehicles will make a big contribution to society, simply because humans are the most unreliable element in driving today. The catch, it turns out, is that driving is genuinely difficult – and that difficulty is the key problem machine learning has to solve.
The core difficulty is that the sequence of events on any street is not easily predictable. Hazards such as pedestrians and erratic driving from others simply can’t be legislated for. And there is a huge variety of objects to be learnt: not every bus looks the same, for example.
That’s where deep learning comes in – training deep neural networks on labelled examples. This takes time, but GPU acceleration is helping to speed it up.
Over the course of just a few hours, Audi trained a network that recognises German road signs at a level that beats every hand-coded computer-vision approach – and, Nvidia claims, better than a human can manage.
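To give a flavour of what that training loop involves, here is a deliberately tiny sketch – not Audi’s actual pipeline, which would be a deep convolutional network trained on labelled sign images with the heavy matrix maths offloaded to a GPU. This toy version trains a softmax classifier on synthetic “sign” feature vectors, purely to show the train-by-gradient-descent idea:

```python
# Illustrative sketch only: a softmax classifier on synthetic data.
# A real road-sign system would use a deep convolutional network and
# a labelled image dataset, accelerated on a GPU.
import numpy as np

rng = np.random.default_rng(0)

n_classes, n_features, n_samples = 4, 16, 400
# Synthetic data: each "sign class" clusters around its own centroid.
centroids = rng.normal(size=(n_classes, n_features))
labels = rng.integers(0, n_classes, size=n_samples)
X = centroids[labels] + 0.3 * rng.normal(size=(n_samples, n_features))

W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss(X, labels):
    p = softmax(X @ W + b)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

initial_loss = loss(X, labels)
lr = 0.5
for _ in range(200):  # plain batch gradient descent
    p = softmax(X @ W + b)
    p[np.arange(n_samples), labels] -= 1.0  # gradient w.r.t. logits
    W -= lr * X.T @ p / n_samples
    b -= lr * p.mean(axis=0)

final_loss = loss(X, labels)
accuracy = (softmax(X @ W + b).argmax(axis=1) == labels).mean()
print(f"loss {initial_loss:.3f} -> {final_loss:.3f}, accuracy {accuracy:.2f}")
```

The point of the GPU in the real thing is simply that every step of that loop is a large matrix multiplication, which graphics hardware does orders of magnitude faster than a CPU.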
Daimler, meanwhile, has achieved pixel-level scene recognition. Also using the technology are ZMP, a Japanese company developing a self-driving taxi; BMW; Toyota partner Preferred Networks; and Ford.
Nvidia also demonstrated that the technology can now identify different classes of object on the road, such as pedestrians and motorcyclists, and showed what the in-car display could look like.
As you’d expect, the Drive PX 2 itself is no slouch, with 8 teraflops of compute power, which Nvidia says is equivalent to 150 MacBook Pros (spec unspecified, of course). There are 12 CPU cores and a Pascal graphics processor, and it’s all water-cooled for maximum efficiency – it can even be plumbed into the existing water-cooling of self-driving test cars.
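Since Nvidia didn’t specify the MacBook Pro model, the comparison can at least be back-calculated. A quick sanity check on the implied per-laptop figure:

```python
# Back-of-envelope check on Nvidia's comparison (an illustration,
# not an official spec).
drive_px2_flops = 8e12   # 8 teraflops claimed for the Drive PX 2
macbooks = 150           # Nvidia's stated equivalence

per_macbook_gflops = drive_px2_flops / macbooks / 1e9
print(f"{per_macbook_gflops:.1f} GFLOPS per MacBook Pro")
```

That works out to roughly 53 GFLOPS per laptop – plausible for a CPU-only figure, which is presumably the comparison Nvidia had in mind.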