Nvidia Tegra X1: Octa-Core SoC with 256-Core Maxwell GPU
Nvidia announced a wide variety of new products Sunday night during CES. A year earlier, the company introduced the first fully programmable mobile GPU with DX12 support, based on its Kepler desktop architecture. This time, Nvidia launched the new Tegra X1, an octa-core SoC with a 4+4 CPU configuration and a 256-core Maxwell-based GPU, plus H.265 support and full VP9 decode.
Compared to the current Tegra K1, Tegra X1 offers a 33% increase in GPU cores along with Maxwell's superior performance per watt and bandwidth-saving capabilities. Thanks to its support for 16-bit floating-point operations, unusual in a mobile GPU, Tegra X1 also posts a significant increase in theoretical performance.
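The FP16 advantage comes down to a simple trade: half-precision values take two bytes instead of four, so the same register and bandwidth budget moves twice as many operands, at the cost of roughly three decimal digits of precision. A minimal sketch of that trade, using Python's standard-library half-precision pack format (this is an illustration of the number format itself, not Nvidia's GPU pipeline):

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Half precision is 2 bytes per value versus 4 for single precision,
# which is where the "doubled theoretical throughput" claim comes from.
print(struct.calcsize('e'), struct.calcsize('f'))  # 2 4

# The cost is precision: only about 3 decimal digits survive the trip.
print(to_fp16(3.14159265))  # 3.140625
```

With only 10 fraction bits, pi rounds to 3.140625, which is why FP16 is pitched at imaging and neural-network workloads that tolerate coarse values rather than at general compute.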
Nvidia also argues that the new Tegra X1 will be more power-efficient than any of its predecessors, and considering Maxwell's record in that area, it isn't much of a stretch. While it took almost two years for Tegra K1 to arrive after the release of the first GTX 680, Maxwell's mobile debut comes almost exactly a year after the launch of the GTX 750 Ti. And even though Nvidia CEO Jen-Hsun Huang cited a four-month timeline, Maxwell-class hardware has been available since the beginning of 2014.
Where is all that performance going?
Nvidia intends to push the new Tegra X1 into automotive computing, where, it argues, the X1's performance will be essential. The company has already built its own automotive platform, Drive CX, which it nicknamed the 'Digital Cockpit'.
Besides serving as a development platform for Nvidia's Drive Studio, Drive CX aims to replace the conventional cockpit with a virtualized version composed solely of multiple touchscreens filled with applications. The idea sounds fine in theory, but its real-life viability is questionable at best: critics point to MyFord Touch and similar systems as evidence that an all-touchscreen cockpit simply doesn't hold up in real-world use.
But Nvidia has even more ambitious plans than rich displays. It is pushing automotive companies to integrate Nvidia hardware into their systems to build multi-camera analysis systems, then use those results to create next-generation self-driving cars.
Jen-Hsun's sales pitch centers on a 'deep neural net' approach: training the chip to better distinguish what is and isn't a pedestrian, so that the vehicle can be more situationally aware. If the concept of a self-driving car is ever to come to life, approaches such as these must be thoroughly thought through and corrected if and when needed.
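At its core, the idea is that a network of weighted units scores image features and outputs something like a probability that a pedestrian is present. A deliberately toy sketch of a single such unit (the feature names and weights here are invented for illustration; a real system stacks thousands of learned units over raw pixels, and this is in no way Nvidia's actual pipeline):

```python
import math

# Hypothetical hand-picked weights over made-up features:
# (vertical_extent, motion, symmetry), each scaled to 0..1.
WEIGHTS = [2.0, 1.5, 1.0]
BIAS = -2.5

def pedestrian_score(features):
    """One 'neuron': weighted sum of features squashed to a 0..1 score."""
    z = sum(w * f for w, f in zip(WEIGHTS, features)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

# A tall, moving, symmetric blob scores high; flat static clutter scores low.
print(pedestrian_score([0.9, 0.8, 0.7]) > 0.5)  # True
print(pedestrian_score([0.1, 0.0, 0.2]) > 0.5)  # False
```

The "deep" part is simply many such layers trained end-to-end on labeled data instead of hand-picked weights, which is what makes GPU-class parallel hardware like the X1 relevant to the problem.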
Is Tegra X1 going to ship with all of these new advancements? We're not sure. While car manufacturers have made mobile SoC integration difficult, Nvidia's partner Audi has embraced several of these ideas, including all-digital control systems, surround-view cameras, and fully digital cockpits, having worked with Nvidia at past CES shows. Of course, this could all be part of a larger, more complex five-to-ten-year automotive roadmap. Nvidia believes the GPU at the heart of the X1 is a potential answer to the self-driving concept car's problems, such as navigating around obstacles, self-parking, and detecting its surroundings; Google has been wrestling with the same problems.
A mobile part solely with an automotive focus?
Tegra X1 might debut in tablets and other hardware, although Jen-Hsun said absolutely nothing about those markets. It's also unclear whether the new SoC's CPU cores are Project Denver derivatives or conventional Cortex-A57s (perhaps paired with A53s in a big.LITTLE configuration).
What was very clear was Nvidia's intention to distance itself from the consumer market. It may upgrade its Shield tablet, but the company is slowly stepping away from the Android mass-market competition with the likes of Samsung and Qualcomm.