Canonical Accelerates AI Application Development With NVIDIA AI Enterprise

Charmed Kubernetes support comes to NVIDIA AI Enterprise

Canonical’s Charmed Kubernetes is now supported on NVIDIA AI Enterprise 5.0. Organisations running Kubernetes deployments on Ubuntu can look forward to a seamless licensing migration to the latest release of the NVIDIA AI Enterprise software platform, which provides developers with the latest AI models and optimised runtimes.

NVIDIA AI Enterprise 5.0

NVIDIA AI Enterprise 5.0 is supported across workstations, data centres, and cloud deployments. New updates include:

  • NVIDIA NIM microservices: a set of cloud-native microservices that developers can use as building blocks for custom AI application development and to speed production AI. NIM will be supported on Charmed Kubernetes.
  • NVIDIA API catalog: gives enterprise developers quick access to experiment with, prototype, and test NVIDIA-optimised foundation models powered by NIM. When ready to deploy, they can export the enterprise-ready API and run it on a self-hosted system.
  • Infrastructure management enhancements: support for vGPU heterogeneous profiles, Charmed Kubernetes, and new GPU platforms.
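As an illustration of the NIM model above: NIM microservices expose an OpenAI-compatible HTTP API, so a self-hosted endpoint can be queried with standard tooling. The hostname and model name below are placeholders, not values from this article:

```shell
# Query a self-hosted NIM endpoint via its OpenAI-compatible
# chat-completions API. Host and model name are illustrative placeholders.
curl -s http://nim.example.internal:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta/llama3-8b-instruct",
        "messages": [{"role": "user", "content": "Summarise GPUDirect RDMA in one sentence."}],
        "max_tokens": 128
      }'
```

The same request shape works against the hosted NVIDIA API catalog and a self-hosted NIM container, which is what makes the prototype-then-export workflow described above practical.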

Charmed Kubernetes and NVIDIA AI Enterprise 5.0

Data scientists and developers leveraging NVIDIA frameworks and workflows on Ubuntu now have a single platform to rapidly develop AI applications on the latest generation of NVIDIA Tensor Core GPUs. For data scientists and AI/ML developers deploying their latest AI workloads on Kubernetes, it is vital to extract maximum performance from Tensor Core GPUs through NVIDIA drivers and integrations.

Fig. NVIDIA AI Enterprise 5.0

Charmed Kubernetes from Canonical provides several features unique to this distribution, including built-in support for NVIDIA operators and GPU optimisation features, as well as composability and extensibility through customised integrations with the Ubuntu operating system.

Best-In-Class Kubernetes from Canonical 

Charmed Kubernetes can automatically detect GPU-enabled hardware and install the required drivers from NVIDIA repositories. With the release of Charmed Kubernetes 1.29, the NVIDIA GPU Operator charm is available for specific GPU configuration and tuning. With support for GPU operators in Charmed K8s, organisations can rapidly and repeatedly deploy the same models, utilising existing on-prem or cloud infrastructure to power AI workloads.
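A minimal deployment sketch of the workflow above, assuming a bootstrapped Juju controller; the charm names and relation endpoint shown are assumptions to verify against Charmhub:

```shell
# Deploy the Charmed Kubernetes bundle, then add the NVIDIA GPU
# Operator charm (names/channels are assumptions; check Charmhub).
juju deploy charmed-kubernetes
juju deploy nvidia-gpu-operator

# Relate the operator to the worker nodes so drivers and the device
# plugin are installed wherever GPU hardware is detected.
juju integrate nvidia-gpu-operator kubernetes-worker

# Watch the model converge to active/idle.
juju status --watch 5s
```

Because the operator is a charm, the same `juju deploy` / `juju integrate` steps repeat identically across on-prem and cloud substrates, which is what enables the repeatable deployments mentioned above.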

The NVIDIA GPU Operator automatically detects GPUs on each node and installs the required NVIDIA software components from NVIDIA repositories. It also enables optimal configurations through features such as NVIDIA Multi-Instance GPU (MIG) technology, extracting maximum efficiency from Tensor Core GPUs. GPU-optimised instances for AI/ML applications reduce latency and allow for more data processing, freeing capacity for larger-scale applications and more complex model deployments.
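As a sketch of MIG in practice: the GPU Operator's MIG manager partitions a node's GPUs based on a node label, after which pods request a MIG slice as an extended resource. Profile names vary by GPU model; `1g.5gb` below is an A100 profile, and the node name and container image are illustrative:

```shell
# Ask the GPU Operator's MIG manager to partition all GPUs on a node
# into 1g.5gb slices (an A100 profile; adjust for your hardware).
kubectl label node <gpu-node> nvidia.com/mig.config=all-1g.5gb --overwrite

# A pod then requests a single MIG slice as an extended resource.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: mig-demo
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.3.2-base-ubuntu22.04
    command: ["nvidia-smi", "-L"]
    resources:
      limits:
        nvidia.com/mig-1g.5gb: 1
EOF
```

Partitioning one physical GPU into several isolated slices is how the latency and utilisation gains described above are realised for smaller inference workloads.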

Paired with the GPU Operator, the Network Operator enables GPUDirect RDMA (GDR), a key technology that can dramatically accelerate cloud-native AI workloads. GDR optimises network performance by enhancing data throughput and reducing latency. Another distinctive advantage is seamless compatibility with NVIDIA’s ecosystem, ensuring a cohesive experience for users. Furthermore, its design, tailored for Kubernetes, ensures scalability and adaptability across deployment scenarios. The result is more efficient networking, making it an invaluable tool for businesses aiming to harness GPU-accelerated networking in their Kubernetes environments.
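A hedged sketch of what "pairing" the two operators looks like from a workload's point of view: a pod requests both a GPU and an RDMA device, so GPUDirect RDMA can move data NIC-to-GPU without staging through host memory. The image and the RDMA resource name are assumptions; the actual resource name depends on how the Network Operator's RDMA device plugin is configured:

```shell
# Sketch: a pod pairing a GPU with an RDMA device for GPUDirect RDMA.
# The rdma/* resource name is set by the Network Operator configuration.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gdr-demo
spec:
  containers:
  - name: trainer
    image: nvcr.io/nvidia/pytorch:24.02-py3
    command: ["sleep", "infinity"]
    securityContext:
      capabilities:
        add: ["IPC_LOCK"]   # RDMA needs to pin (lock) memory regions
    resources:
      limits:
        nvidia.com/gpu: 1
        rdma/rdma_shared_device_a: 1
EOF
```

Scheduling the NIC and GPU resources together on the same node is the precondition for the throughput and latency gains described above.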

Speaking about these solutions, Marcin “Perk” Stożek, Kubernetes Product Manager at Canonical says: “Charmed Kubernetes validation with NVIDIA AI Enterprise is an important step towards an enterprise-grade, end-to-end solution for AI workloads. By integrating NVIDIA Operators with Charmed Kubernetes, we make sure that customers get what matters to them most: efficient infrastructure for their generative AI workloads.” 

Getting started is easy (and free). You can rest assured that Canonical experts are available to help if required.

By: Karen Horovitz
Originally published at: Ubuntu Blog

Source: cyberpogo.com


