The next mobile chip from Nvidia is aimed at a really big mobile device: the automobile. Nvidia unveiled the Tegra X1, which it calls a “mobile super chip,” at its Sunday night CES press event. The company hopes the powerful silicon will form the basis for automotive interfaces, thanks to its highly efficient graphics and processing capabilities.
The Tegra X1 is a big step up from last year’s Tegra K1, which is now appearing in a few tablets, phones and Chromebooks. Check this chart to see the difference: the Nvidia Tegra X1 and its 256 GPU cores offer a teraflop of computing performance.
That’s too much for a phone, according to Nvidia, but just right for the car of the future. All of that graphics capability can power what Nvidia expects will be many screens in a vehicle: the instrument cluster, the infotainment center, and possibly even other displays, such as side mirrors that double as screens.
To that end, the Tegra X1 is the centerpiece of Nvidia’s Drive CX platform: a combination of hardware and software to power next-generation cars. It works with QNX, Linux and Android, so automakers can choose their platform and customize the dashboard. Nvidia also offers a reference design to build on, called Drive Studio.
Drive CX is just part of the automotive package, though. Nvidia’s Drive PX uses a pair of Tegra X1 chips to process camera input for a self-driving car, which the company says will need massive processing power. Even that isn’t enough, though, which is why Nvidia also announced deep learning technology to make cars “situationally aware.”
The idea is that cameras and computers can’t rely on every possible situation appearing in a database; instead, algorithms are needed to make sense of constantly changing conditions.
By feeding more data into the deep neural network and applying more processing power, vehicles don’t have to rely on a fixed set of stored images to “see” their surroundings. Instead, they can learn to recognize objects around them even when there’s no exact match in the database. Here is an example of the system understanding not just what a traffic light looks like, but what color some of them are showing.
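The contrast the article draws, matching against learned features rather than looking for an exact entry in a database, can be sketched in a few lines of Python. This is a toy illustration only: the object names and feature vectors below are invented, and a real system like Drive PX would derive such features from a deep neural network, not hand-written lists.

```python
import math

# Hypothetical feature vectors standing in for learned image embeddings.
# In a real system these would come from a deep neural network.
KNOWN_OBJECTS = {
    "traffic_light_red":   [0.9, 0.1, 0.0],
    "traffic_light_green": [0.1, 0.9, 0.0],
    "pedestrian":          [0.0, 0.1, 0.9],
}

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def classify(features):
    # Pick the closest known class -- no exact database match required.
    return max(KNOWN_OBJECTS, key=lambda name: cosine_similarity(features, KNOWN_OBJECTS[name]))

# A slightly occluded red light: not identical to any stored entry,
# but still closest to the learned "red light" features.
novel_view = [0.8, 0.2, 0.05]
print(classify(novel_view))  # -> traffic_light_red
```

An exact-match lookup (`KNOWN_OBJECTS.get(tuple(novel_view))`) would simply fail here; similarity against learned features is what lets the system generalize to views it has never stored.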
The Drive PX demo classified between 5 and 10 objects, yet it had power to spare: Nvidia said its new super chip was using only 10 percent of its capacity and could recognize 150 or so objects, classifying about 30 per second.