All Posts

Unlock the Power of Mistral AI with Red Hat OpenShift AI and NVIDIA DGX H100

I will guide you through deploying Red Hat OpenShift AI on the NVIDIA DGX H100 system and running the Mistral AI model. This blog post details how to deploy and manage a fully automated MLOps solution for a large language model (LLM), presented in three main parts:

How to install Fedora and MicroShift on the NVIDIA Jetson AGX Xavier

On Dec 6, 2021, NVIDIA released the new UEFI/ACPI Experimental Firmware 1.1.2 for Jetson AGX Xavier and Jetson Xavier NX.

In this blog post, I will:

OpenShift Bare Metal provisioning with NVIDIA GPU

TL;DR

The bare metal installation of OCP requires only this installer command:

$ openshift-baremetal-install --dir ~/clusterconfigs create cluster

but in this post I’ll take the time to explain how to prepare your platform and how to follow the installation.
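
Before running that command, the directory passed with --dir must contain an install-config.yaml describing the cluster and the BMC of each bare metal host. As a rough sketch only (every name, address, and credential below is a placeholder, and the exact fields depend on your OpenShift release):

apiVersion: v1
baseDomain: example.com
metadata:
  name: ocp4
compute:
- name: worker
  replicas: 2
controlPlane:
  name: master
  replicas: 3
platform:
  baremetal:
    apiVIP: 192.168.111.5
    ingressVIP: 192.168.111.4
    provisioningNetworkInterface: eno1
    hosts:
    - name: openshift-master-0
      role: master
      bmc:
        address: ipmi://192.168.111.10
        username: admin
        password: changeme
      bootMACAddress: 52:54:00:00:00:01
pullSecret: '<pull secret>'
sshKey: '<ssh public key>'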

NVIDIA GPU Operator with OpenShift 4.3 on Red Hat OpenStack Platform 13

The NVIDIA GPU Operator has been available as a beta since January 27, 2020; it is a technical preview release: https://github.com/NVIDIA/gpu-operator/release

The GPU Operator manages NVIDIA GPU resources in an OpenShift cluster and automates tasks related to bootstrapping GPU nodes. Since the GPU is a special resource in the cluster, a few components must be installed before application workloads can be deployed onto the GPU. These components include:
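
Once the operator has rolled those components out, worker nodes advertise an nvidia.com/gpu extended resource that workloads request like any other. A minimal pod sketch (the image name is only a placeholder for a CUDA-enabled test image):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda-test
    image: registry.example.com/cuda-vectoradd:latest  # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1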

OPAE with Intel FPGA PAC with Arria 10 GX

OPAE is the Open Programmable Acceleration Engine, a software framework for managing and accessing programmable accelerators (FPGAs): https://01.org/opae

The OPAE SDK is open source and available in this Git repository: https://github.com/OPAE/opae-sdk
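
For reference, the SDK builds with CMake; a typical out-of-tree build looks like the following (exact prerequisites and options are documented in the repository):

$ git clone https://github.com/OPAE/opae-sdk
$ cd opae-sdk
$ mkdir build && cd build
$ cmake ..
$ make
$ sudo make install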

How to enable NVIDIA T4 GPU with podman

We will enable a GPU with Podman on a RHEL 8.1 system equipped with an NVIDIA Tesla T4.

Take a RHEL 8.1 system:

[egallen@lab0 ~]$ cat /etc/redhat-release 
Red Hat Enterprise Linux release 8.1 (Ootpa)

Enable passwordless sudo:
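
One common way is a drop-in file under /etc/sudoers.d, here assuming the egallen user from the prompt above (replace it with your own user):

[egallen@lab0 ~]$ echo "egallen ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/egallen
[egallen@lab0 ~]$ sudo chmod 0440 /etc/sudoers.d/egallen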

OpenShift 4.2 on Red Hat OpenStack Platform 13 + GPU

Red Hat OpenShift Container Platform 4.2 introduces the general availability of full-stack automated deployments on OpenStack. With OpenShift 4.2, containers can be managed across multiple public and private clouds, including OpenStack. Red Hat and NVIDIA are working to provide the best platform for Artificial Intelligence and Machine Learning workloads.

NVIDIA vGPU with Red Hat OpenStack Platform 14

Red Hat OpenStack Platform 14 is now generally available \o/

NVIDIA GRID capabilities are available as a technology preview to support NVIDIA Virtual GPU (vGPU). Multiple OpenStack instances (virtual machines) can have simultaneous, direct access to a single physical GPU. vGPU configuration is fully automated via Red Hat OpenStack Platform director.
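
Under the hood, that automation boils down to telling nova-compute which vGPU type to expose. A hand-edited equivalent in nova.conf would look roughly like this (the type name is a placeholder; the types supported by your card are listed under /sys/class/mdev_bus/<pci-address>/mdev_supported_types on the compute node):

[devices]
enabled_vgpu_types = nvidia-35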

NVIDIA Tesla GPU PCI passthrough with Red Hat OpenStack Platform 13

Red Hat OpenStack Platform provides two ways to use NVIDIA Tesla GPU accelerators with virtual instances:

  • GPU PCI passthrough (only one physical GPU per instance)
  • vGPU GRID (one physical GPU can be shared by multiple instances; Tech Preview in OSP 14)

This blog post is intended to show how to set up GPU PCI passthrough.
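
To give an idea of where the setup ends up, the GPU is whitelisted and aliased in nova.conf and then requested through a flavor property. All IDs and names below are placeholders (take the vendor/product IDs from lspci -nn on the compute node):

[pci]
passthrough_whitelist = { "vendor_id": "10de", "product_id": "1eb8" }
alias = { "vendor_id": "10de", "product_id": "1eb8", "device_type": "type-PCI", "name": "t4" }

$ openstack flavor create --ram 8192 --disk 40 --vcpus 4 m1.gpu
$ openstack flavor set m1.gpu --property "pci_passthrough:alias"="t4:1"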

USB Passthrough with Red Hat OpenStack Platform 13

Some OpenStack users would like to attach USB devices to OpenStack instances for security or legacy applications.

For example, a security application that runs inside an OpenStack instance could require access to a Java card from a USB Gemalto eToken.
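
At the libvirt level, this kind of attachment is a USB hostdev definition on the instance's domain; a sketch with placeholder vendor and product IDs (use the values reported by lsusb on the compute node):

<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x1234'/>
    <product id='0x5678'/>
  </source>
</hostdev>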