Late last month, Nvidia’s Director of Product Marketing Paresh Kharya revealed some of the company’s plans for taking AI and machine learning “to the next level” at Computex 2019.

Nvidia is introducing a number of AI products and services built around the inference engine. The inference engine is the component of an AI system that draws conclusions from new inputs by applying patterns learned from a body of training data: once the data has been analyzed and a pattern found, that pattern becomes the set of “rules” on which the engine bases its conclusions. A more detailed explanation of the inference engine can be found here.
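The learn-a-pattern, then-apply-it split described above can be sketched in a few lines. This is a deliberately toy illustration, not Nvidia's implementation: the learned “rule” here is just a numeric threshold, and the function names and data are invented for the example.

```python
def learn_rule(examples):
    """Training phase: find a pattern in labeled (score, is_spam) pairs.

    The "rule" we extract is the midpoint between the average score of
    each class; real systems learn far richer models, but the idea is
    the same: the pattern found in the data becomes the rule.
    """
    spam = [v for v, label in examples if label]
    ham = [v for v, label in examples if not label]
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2


def infer(threshold, value):
    """Inference phase: apply the learned rule to unseen data."""
    return value > threshold


# Historical, labeled data the engine learns from.
history = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]
rule = learn_rule(history)   # the pattern, established as the "rule"

print(infer(rule, 0.7))      # True  -> classified as spam
print(infer(rule, 0.3))      # False -> classified as not spam
```

Once training is done, only the cheap `infer` step runs in production, which is why low-power inference hardware like the parts below is interesting.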

Among the offerings in Nvidia’s pipeline is the Nvidia T4 GPU, a lower-cost, more power-efficient (70-watt) alternative to more powerful parts like the Nvidia V100. The T4 is designed to deliver higher efficiency in mainstream enterprise servers and is well suited to AI-driven applications such as customer-support chatbots, cybersecurity, smart retail and smart manufacturing.

For more demanding and time-sensitive applications, Nvidia offers the EGX platform, which consists of Nvidia Jetson Nano-based micro-servers. These micro-servers combine low power consumption with computing throughput of one-half trillion operations per second (0.5 TOPS). The EGX platform is best suited to mining and processing huge volumes of data for business insights, and to demanding AI tasks such as real-time speech recognition and facial recognition that must be processed as the data arrives.

Finally, Nvidia will also offer an NGC-Ready Validation Program featuring systems that combine GPU-accelerated software, pre-trained AI models, and model training for data analytics, machine learning and deep learning, along with CUDA-X AI accelerated computing on Nvidia T4 or V100 GPUs. This AI-driven software-and-hardware stack will be made available to businesses that run a diverse range of AI workloads over huge amounts of data.

While this application of the inference engine is new, the engine itself has been around for quite some time; in fact, it has been operating without many people realizing it. Inference engines power search engines, streaming video services like Netflix, and even Amazon’s Alexa, providing “suggestions” to users after compiling a database of their activity and analyzing their preferences.
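The “suggestions” pattern mentioned above can be sketched as a tiny collaborative filter: compile each user’s viewing history, find the other user with the most overlapping tastes, and suggest what they watched that you haven’t. The user names, titles and `recommend` helper are invented for illustration and bear no relation to how Netflix or Alexa actually implement this.

```python
# Hypothetical viewing histories compiled per user.
watched = {
    "alice": {"Stranger Things", "Dark", "Ozark"},
    "bob":   {"Stranger Things", "Dark", "Narcos"},
    "carol": {"The Crown", "Bridgerton"},
}


def recommend(user, profiles):
    """Suggest titles seen by the most similar other user."""
    mine = profiles[user]
    most_similar = max(
        (p for name, p in profiles.items() if name != user),
        key=lambda p: len(p & mine),   # set overlap as a similarity score
    )
    return most_similar - mine         # titles the user hasn't seen yet


print(recommend("alice", watched))     # {'Narcos'}
```

Here Bob shares two titles with Alice while Carol shares none, so Alice is offered the one show Bob watched that she hasn’t. Production recommenders replace the set overlap with trained models, but the inference step, applying learned preference patterns to pick a suggestion, is the same in spirit.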

To learn more about Nvidia’s application of the inference engine in its upcoming products and services, read the story here: https://www.digitimes.com/news/a20190530PR209.html?chid=9
