Nvidia outlines how the company will grow for the next 10 years

At Nvidia’s keynote address at the GPU Technology Conference in Japan this week, CEO Jensen Huang laid out a plan to expand the reach of the company’s leadership position in artificial intelligence (AI). That includes strengthening the secondary AI field of inference compute, while also solidifying plans for the company to take its technology across a broad spectrum of segments.

Huang essentially laid out the next 10 years of growth at Nvidia.

Using Nvidia’s NVDA, +1.17% foothold in deep learning and AI processing in the data center, along with the market-driving work in autonomous vehicles and edge-based inference systems (think smaller systems to process live security camera feeds, etc.), Huang said he can leverage much of that work into areas like industrial automation, manufacturing, robotics and medical instrumentation.

There were three distinct announcements:

1. New GPU architecture for inference

I recently talked about the Nvidia announcement in conjunction with Google GOOG, +1.08% GOOGL, +0.90% to bring a product called the Tesla P4 to Google Cloud services and accelerate artificial-intelligence inference for it and its customers. (Inference is the process of taking an intelligent algorithm generated by massive AI server systems and applying it to end-user data.) That piece detailed the use cases for inference, how it would quickly grow into a market an order of magnitude more valuable than training, and why Nvidia could lead there.
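The training/inference split can be sketched with a toy example (hypothetical illustrative code, not anything from Nvidia or Google): training is the expensive phase that searches for a model, and inference is the cheap, per-request application of that fixed model to new end-user data.

```python
def train(examples):
    """'Training': search for the threshold that best separates the labels.
    In practice this is the massive, compute-heavy phase run in the data center."""
    best_t, best_correct = 0.0, -1
    for t in [x for x, _ in examples]:
        correct = sum((x >= t) == label for x, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t  # the "model" here is a single learned parameter


def infer(model_threshold, x):
    """'Inference': one cheap application of the trained model per request.
    Parts like the Tesla P4/T4 target this phase at data-center scale."""
    return x >= model_threshold


labeled = [(0.1, False), (0.4, False), (0.6, True), (0.9, True)]
model = train(labeled)       # expensive, done once
print(infer(model, 0.8))     # cheap, done per user request -> True
print(infer(model, 0.2))     # -> False
```

The asymmetry in the toy example is the point: training runs once, but inference runs on every query, which is why the inference market can dwarf the training market at scale.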

During Huang’s keynote we saw the new Tesla T4, an upgrade and replacement for the P4 that brings the latest Turing graphics architecture to this segment. This is the same design of chip that is used in the professional Quadro RTX and gaming-centric GeForce RTX products but is power- and form-factor-optimized for hyperscale server customers.

The T4 is the first chip in this segment to include Tensor Cores, a Turing feature absent from the P4 that dramatically accelerates deep-learning math functions and, therefore, AI processing. Nvidia is promising quadrupled performance on speech recognition, doubled performance on real-time video processing, and a tripling to quadrupling of performance on the AI-based recommendation systems popular with many online providers.
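The core operation Tensor Cores accelerate is a fused mixed-precision matrix multiply-accumulate: low-precision (FP16) matrix inputs with higher-precision (FP32) accumulation, D = A × B + C. A minimal NumPy sketch of that numeric pattern (an illustration of the arithmetic only, not code that runs on Tensor Cores):

```python
import numpy as np

# Tensor-Core-style mixed precision (illustrative): inputs are stored in
# FP16, but products are summed in FP32 to limit rounding error.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float16)  # low-precision input
B = rng.standard_normal((4, 4)).astype(np.float16)  # low-precision input
C = np.zeros((4, 4), dtype=np.float32)              # FP32 accumulator

# Up-cast before the multiply so accumulation happens in FP32,
# mirroring the D = A @ B + C operation a Tensor Core performs in hardware.
D = A.astype(np.float32) @ B.astype(np.float32) + C
```

Halving the storage and memory bandwidth for A and B while keeping a full-precision accumulator is the trade-off that lets an inference part like the T4 multiply throughput without wrecking accuracy.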

Over the next one to three years, we will see AI-aided compute adopted in nearly all aspects of consumer and commercial applications. Sorting and managing more than a billion videos a day on Facebook FB, -0.40% and quickly processing more than 780 million voice search requests on Bing are just a drop in the bucket of the impact that AI and inference compute workloads will have.

Perhaps as important as the hardware announcement was a software-ecosystem update built around something called an “inference container.” These are easily deployed and managed software packages that let developers get up and running quickly, scale up easily and stay inside the Nvidia ecosystem as they learn and grow. This approach worked wonderfully for the company with CUDA and the graphics-compute revolution that brought us to where we are today, and it has kept Nvidia at the front of the pack.

2. New AGX family for autonomous machines

Huang announced a new brand for Nvidia called AGX that will encompass the existing platforms of Drive (for autonomous vehicles) and Jetson (for edge computing and robotics). It makes sense: Robots have a similar compute challenge to autonomous cars, including sensor input, path planning, actuation of drive systems, etc.

But robotics is potentially more complex, as the AI systems need to interact with humans and other objects, not simply avoid hitting something.

A new palm-sized module called the Jetson AGX Xavier was showcased as a development system available immediately for potential customers and researchers who want to apply artificial-intelligence processing to robotic systems.

Launching this in Japan isn’t a coincidence. Nvidia announced some of the most prominent Japanese industrial companies as partners that are integrating AGX. This includes Yamaha, which will use the Nvidia tech to power a broad portfolio of AI-driven machines from agriculture to last-mile cars and even marine products. This gives coverage for ground vehicles, drones and boats, all beginning testing in 2019.

Other prominent names include Fanuc FANUY, +1.22%, which will use Nvidia AGX to “reshape manufacturing” with optimized factories, and Komatsu KMTUY, +1.20%, which is looking to create a fleet of autonomous construction and mining vehicles to improve safety and productivity.

3. Medical instrumentation

Finally, Nvidia is going after a portion of the $100 billion medical-instrument industry. Huang believes his company can improve on the current state of the art by using AI to increase the quality of sensor data from existing machines.

In the medical field, instruments tend to have a 10-plus-year life span. Compared with the cadence of compute-capability changes, that feels Paleolithic. But changing the upgrade and integration cycles of medical equipment seems to be an impossible (or at least very long-term) task, so instead, the adoption of Clara AGX systems will allow health-care providers to improve the quality and experience of medical care without delay.

By using AI systems to analyze the output of the sensors already in place, Nvidia says it can deliver a double benefit: professionals could, for example, cut patients’ time in (and exposure to) MRI machines by half while still improving the quality of the readings 10-fold.

After being trained in the cloud on high-performance AI platforms, Clara AGX would be able to apply these algorithms through inference on the data as it is generated in the field. Nvidia built the Clara SDK (software development kit) to help customers properly process the data from existing systems, delivering improved results and better quality of life immediately rather than 10 years from now, when an instrument is actually replaced.
