NVIDIA AI Enterprise software catapults business operations forward
Tue, 19th Mar 2024

Businesses looking to incorporate artificial intelligence (AI) into their operations just got a significant boost with the introduction of NVIDIA AI Enterprise 5.0. Unveiled by NVIDIA, one of the leading players in AI and accelerated computing, the new suite of software tools includes cloud application programming interfaces (APIs) for inference, paving the way for the development of AI-powered applications.

As a comprehensive platform, NVIDIA AI Enterprise 5.0 provides NVIDIA microservices, software containers that can be downloaded and used to deploy generative AI applications. Available from leading cloud service providers, system builders and software vendors, it is already used by high-profile clients such as Uber. Albert Greenberg, vice president of platform engineering at Uber, emphasised the importance of the new software solution, saying "Our adoption of NVIDIA AI Enterprise inference software is important for meeting the high performance our users expect."

Microservices, small, autonomous services that work together to form a complex application, have become an increasingly popular way of developing applications on an enterprise scale. The new NVIDIA suite includes a wide range of such microservices, including NVIDIA NIM microservices for deploying AI models in production and the NVIDIA CUDA-X collection of microservices.

Albert Greenberg further elaborated on Uber's adoption of this system, stating, "Uber prides itself on being at the forefront of adopting and using the latest, most advanced AI innovations to deliver a customer service platform that sets the industry standard for effectiveness and excellence."

The NVIDIA NIM microservices optimise inference for a large number of popular AI models from NVIDIA and its partner ecosystem, significantly reducing deployment times from weeks to minutes. Built on Triton Inference Server, TensorRT and TensorRT-LLM, NIM offers a secure and manageable solution based on industry standards and compatible with enterprise-grade management tools.
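For a sense of how a deployed NIM microservice is typically consumed, the minimal sketch below posts a chat request to a locally running container. It assumes the container exposes an OpenAI-compatible chat completions endpoint on port 8000; the host, port and model identifier are illustrative assumptions rather than details taken from the announcement.

```python
import requests

# Assumed local NIM endpoint; host, port and model name are illustrative,
# not confirmed by the article.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama2-70b",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Summarise our delivery SLA in one sentence."}
    ],
    "max_tokens": 128,
}

# The microservice speaks an OpenAI-compatible REST API, so a plain HTTP POST
# is enough to run inference against the deployed model.
response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the interface follows a widely used API convention, existing client code can often be pointed at a self-hosted microservice simply by changing the base URL, which is part of what shortens the path from pilot to production.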

One feature worth noting is NVIDIA cuOpt, a GPU-accelerated AI microservice that holds world records for route optimisation. CUDA-X microservices like cuOpt enable dynamic decision-making that reduces cost, time and carbon footprint. Another promising feature, the NVIDIA RAG LLM operator, is currently in early access and aims to let generative AI applications move from pilot to production without rewriting code.
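To illustrate the kind of workload cuOpt addresses, here is a rough sketch of submitting a small vehicle-routing problem to a route-optimisation service over HTTP. The endpoint path and payload schema are assumptions made purely for illustration and are not the documented cuOpt API.

```python
import requests

# Illustrative sketch of a vehicle-routing request to a cuOpt-style service.
# The URL, endpoint path and payload schema below are assumptions, not the
# documented cuOpt interface.
CUOPT_URL = "http://localhost:5000/cuopt/routes"

problem = {
    # Travel-cost matrix between a depot (index 0) and three delivery stops.
    "cost_matrix": [
        [0, 5, 9, 4],
        [5, 0, 3, 7],
        [9, 3, 0, 6],
        [4, 7, 6, 0],
    ],
    "vehicles": [{"start": 0, "end": 0}],  # one vehicle, departs from and returns to the depot
    "tasks": [{"location": 1}, {"location": 2}, {"location": 3}],
}

response = requests.post(CUOPT_URL, json=problem, timeout=60)
response.raise_for_status()
print(response.json())  # expected to contain the optimised visit order per vehicle
```

The appeal of exposing the solver as a microservice is that planning systems can re-optimise routes on demand as orders, traffic or vehicle availability change, rather than running a batch job overnight.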

NVIDIA AI Enterprise 5.0 also comes with additional tools and features. It includes NVIDIA AI Workbench, a developer toolkit for quickly downloading, customising and running generative AI projects. Furthermore, the platform now supports Red Hat OpenStack Platform, a commonly used environment amongst Fortune 500 companies. Version 5.0 also extends support to cover a wide range of the latest NVIDIA GPUs, networking hardware and virtualisation software.

Companies now have flexible options to access the enhanced NVIDIA AI platform. The software can be deployed for applications in the data centre, the cloud, on workstations, or at the network's edge. NIM and CUDA-X microservices and all other 5.0 features will soon be available on leading cloud marketplaces. Various system providers are also supporting NVIDIA AI Enterprise, and users will share their experiences with the software at the upcoming NVIDIA GTC conference.

The introduction of NVIDIA AI Enterprise 5.0 marks a significant step in the advancement and simplification of AI adoption for business use. It offers companies an efficient and effective way to incorporate AI tools into their operations, and given its scalable and user-friendly features, it is likely to be well received in the market.