All Posts

How to install Fedora and MicroShift on the NVIDIA Jetson AGX Xavier

On Dec 6, 2021, NVIDIA released the new UEFI/ACPI Experimental Firmware 1.1.2 for Jetson AGX Xavier and Jetson Xavier NX.

In this blog post, I will:

OpenShift Bare Metal provisioning with NVIDIA GPU

TL;DR

The bare metal installation of OCP comes down to this single installer command:

$ openshift-baremetal-install --dir ~/clusterconfigs create cluster

but I’ll take time in this post to explain how to prepare your platform and how to follow the installation.
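
For example, a hedged sketch of how the deployment can be followed from the provisioning host (assuming the same --dir as above; the log file name is the installer's default):

$ tail -f ~/clusterconfigs/.openshift_install.log
$ openshift-baremetal-install --dir ~/clusterconfigs wait-for install-complete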

Red Hat OpenStack Platform 16.1

We are testing OCS on OCP on OSP. The installation is described in three parts:

  • Part 1: Red Hat OpenStack Platform 16.1 installation
  • Part 2: OpenShift Container Platform 4.6 installation
  • Part 3: OpenShift Container Storage 4.5 installation

Let’s first deploy Red Hat OpenStack Platform 16.1.
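
As a rough, hedged sketch of where that deployment ends, a director-based OSP 16.1 install finishes with an overcloud deploy run from the undercloud; the environment file paths below are placeholders for your own templates:

(undercloud) $ openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/templates/network-environment.yaml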

CodeReady Containers with GPU for Data Science

Lots of Data Scientists want to focus on model building.

Just using a local Jupyter Notebook can be a limitation if you want to:

  • create scalable Machine Learning systems
  • test local private data ingestion
  • contribute to Kubeflow
  • tune your model serving pipeline

You can build an all-in-one Kubernetes environment with an NVIDIA GPU for Data Science on your local PC or a bare-metal cloud server. Let’s see how CodeReady Containers works.
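
As a minimal sketch, bringing up such an environment with CodeReady Containers typically looks like this (crc start will ask for your OpenShift pull secret; the GPU enablement itself comes on top and is covered in the post):

$ crc setup
$ crc start
$ eval $(crc oc-env)
$ oc get nodes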

Open Data Hub v0.5.1 release

Open Data Hub v0.5.1 was released February 16, 2020.

Release note: https://opendatahub.io/news/2020-02-16/odh-release-0.5.1-blog.html

Open Data Hub includes many tools that are essential to a comprehensive end-to-end AI/ML platform. This new release integrates bug fixes that resolve issues when deploying on OpenShift Container Platform v4.3, and JupyterHub deployments and Spark cluster resources now offer greater customization. Let’s try the data science tool JupyterHub 3.0.7 on OpenShift 4.3.
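
A quick, hedged way to check the JupyterHub pieces once Open Data Hub is deployed (assuming it was installed into a project named odh, which is only an example name):

$ oc get pods -n odh | grep jupyterhub
$ oc get routes -n odh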

NVIDIA GPU Operator with OpenShift 4.3 on Red Hat OpenStack Platform 13

The NVIDIA GPU Operator has been available as a beta since January 27, 2020; it is a technical preview release: https://github.com/NVIDIA/gpu-operator/release

The GPU Operator manages NVIDIA GPU resources in an OpenShift cluster and automates tasks related to bootstrapping GPU nodes. Since the GPU is a special resource in the cluster, it requires a few components to be installed before application workloads can be deployed onto the GPU. These components include:

OPAE with Intel FPGA PAC with Arria 10 GX

OPAE is the Open Programmable Acceleration Engine, a software framework for managing and accessing programmable accelerators (FPGAs): https://01.org/opae

The OPAE SDK is open source and available in this Git repository: https://github.com/OPAE/opae-sdk
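
The SDK uses a standard CMake build, so building it from that repository is roughly the following (a sketch; distribution-specific prerequisites are not shown):

$ git clone https://github.com/OPAE/opae-sdk.git
$ cd opae-sdk
$ mkdir build && cd build
$ cmake ..
$ make -j$(nproc)
$ sudo make install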

How to enable NVIDIA T4 GPU with podman

We will enable GPU support with Podman on a RHEL 8.1 system with an NVIDIA Tesla T4.

Take a RHEL 8.1 system:

[egallen@lab0 ~]$ cat /etc/redhat-release 
Red Hat Enterprise Linux release 8.1 (Ootpa)

Enable passwordless sudo:
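
For example (a hedged sketch; the user name egallen is taken from the prompt above):

[egallen@lab0 ~]$ echo "egallen ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/egallen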

Red Hat OpenStack Platform 15 standalone

If you need to quickly deploy a Red Hat OpenStack Platform test environment, you can use the standalone deployment available since Red Hat OpenStack Platform 14.
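
A hedged sketch of the single command the standalone installer revolves around (the IP address and environment files are illustrative placeholders):

$ sudo openstack tripleo deploy --templates --standalone \
  --local-ip=192.168.24.2/24 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \
  -e ~/containers-prepare-parameters.yaml \
  -e ~/standalone_parameters.yaml \
  --output-dir ~/standalone-deploy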

OpenShift 4.2 on Red Hat OpenStack Platform 13 + GPU

Red Hat OpenShift Container Platform 4.2 introduces the general availability of full-stack automated deployments on OpenStack. With OpenShift 4.2, containers can be managed across multiple public and private clouds, including OpenStack. Red Hat and NVIDIA are working to provide the best platform for Artificial Intelligence and Machine Learning workloads.
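
With the full-stack automated (IPI) flow, the installer drives OpenStack directly; a hedged sketch, where the asset directory name is arbitrary:

$ openshift-install create install-config --dir ~/ocp-on-osp
$ openshift-install create cluster --dir ~/ocp-on-osp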

NVIDIA vGPU with Red Hat OpenStack Platform 14

Red Hat OpenStack Platform 14 is now generally available \o/

NVIDIA GRID capabilities are available as a technology preview to support NVIDIA Virtual GPU (vGPU). Multiple OpenStack instances (virtual machines) can have simultaneous, direct access to a single physical GPU. vGPU configuration is fully automated via Red Hat OpenStack Platform director.
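
Once director has configured the compute nodes, a vGPU is requested through a flavor resource property; a hedged sketch (the flavor name and sizing are only examples):

$ openstack flavor create --ram 8192 --disk 40 --vcpus 4 vgpu.medium
$ openstack flavor set vgpu.medium --property "resources:VGPU=1"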

NVIDIA Tesla GPU PCI passthrough with Red Hat OpenStack Platform 13

Red Hat OpenStack Platform provides two ways to use NVIDIA Tesla GPU accelerators with virtual instances:

  • GPU PCI passthrough (only one physical GPU per instance)
  • vGPU GRID (one physical GPU can be shared across multiple instances, Tech Preview in OSP 14)

This blog post shows how to set up GPU PCI passthrough.
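
As a hedged illustration of the flavor side of PCI passthrough (the alias name t4 is hypothetical and must match the PCI alias configured in nova.conf):

$ openstack flavor create --ram 16384 --disk 100 --vcpus 8 gpu.passthrough
$ openstack flavor set gpu.passthrough --property "pci_passthrough:alias"="t4:1"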

USB Passthrough with Red Hat OpenStack Platform 13

Some OpenStack users would like to attach USB devices to OpenStack instances for security or legacy applications.

For example, a security application which runs inside an OpenStack instance could require access to a Java card from a USB Gemalto eToken.
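
At the hypervisor level, one way such a device can be attached is a libvirt USB hostdev added to the instance's domain; a hedged sketch (the vendor and product IDs and the domain name are placeholders, not the real eToken values):

$ cat usb-etoken.xml
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x1234'/>
    <product id='0x5678'/>
  </source>
</hostdev>
$ sudo virsh attach-device instance-00000001 usb-etoken.xml --live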