Use case: how to build and run Docker containers with NVIDIA GPUs

Docker Consulting Series – Building & Running Containers With NVIDIA GPUs

In this installment of our Docker development and consulting series, we look at how to build and run containers using high-powered NVIDIA GPUs. GPU-accelerated computing is the use of a graphics processing unit to speed up deep learning, analytics, and engineering applications. First introduced by NVIDIA in 2007, GPU accelerators now power energy-efficient data centers worldwide and play a key role in accelerating applications.

Containerizing GPU applications provides multiple benefits, such as ease of deployment, streamlined collaboration, isolation of individual devices, and more. However, Docker® containers are most commonly used to deploy CPU-based applications across several machines, where containers are both hardware- and platform-agnostic. The Docker engine doesn't natively support NVIDIA GPUs, since they are specialized hardware that requires installing the NVIDIA driver.

For one of our projects we had to build and run Docker containers on a graphics processing unit. Below is a step-by-step description of how this was achieved.

To start, we're going to need a server with an NVIDIA GPU. Hetzner offers a server with a GeForce® GTX 1080.

Requirements:

* OS: CentOS 7.3
* Docker: version 17.06.0-ce
* NVIDIA drivers: latest

Let's download the necessary driver for this graphics card:
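A minimal sketch of the download step; the driver version 384.66 is an assumption, so check NVIDIA's driver page for the current release for the GTX 1080:

```shell
# Download the Linux x86_64 driver installer from NVIDIA
# (384.66 is an assumed version -- pick the current one for your card)
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/384.66/NVIDIA-Linux-x86_64-384.66.run
```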

After downloading, we need to install the driver, following all the installer's steps:
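A sketch of the installation, assuming the same driver version as above; the installer compiles a kernel module, so the build tools and headers must match the running kernel:

```shell
# Compiler and kernel headers are required to build the NVIDIA kernel module
yum install -y gcc kernel-devel-$(uname -r) kernel-headers-$(uname -r)

# Run the installer and follow its interactive prompts
sh NVIDIA-Linux-x86_64-384.66.run
```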

Here's how NVIDIA and Docker work together:

We will need to install nvidia-docker and nvidia-docker-plugin. You can learn more about how to do that in the NVIDIA GitHub repository.
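For CentOS, the installation follows the RPM route from the nvidia-docker 1.0 releases page (v1.0.1 is assumed here; use the latest release):

```shell
# Install nvidia-docker and nvidia-docker-plugin from the official RPM release
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker-1.0.1-1.x86_64.rpm
rpm -i /tmp/nvidia-docker*.rpm && rm /tmp/nvidia-docker*.rpm
```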

Launching service:
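With the RPM installed, the plugin runs as a systemd service:

```shell
# Start the nvidia-docker service (runs nvidia-docker-plugin)
systemctl start nvidia-docker

# Verify that the plugin is up
systemctl status nvidia-docker
```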

Testing:
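The standard smoke test is to run `nvidia-smi` inside a CUDA base container:

```shell
# If the driver and plugin are set up correctly, this prints the usual
# nvidia-smi table listing the GTX 1080
nvidia-docker run --rm nvidia/cuda nvidia-smi
```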

You should get the following result:

Running a GPU-enabled Docker container under an orchestrator

* Docker Swarm is not suitable, because the docker-compose v3 file format provides no way to pass host devices into the container.

From the official website:

Thus, we can use the graphics card's resources, but when orchestration tools are involved, nvidia-docker cannot be used to start containers, since it is only a wrapper on top of Docker.

We've just launched a container in the Rancher cluster.

Now let's dive into the details of what nvidia-docker actually is. Essentially, it is a service that creates a Docker volume with the driver files and mounts the GPU devices into a container.

To find out what was created and mounted, we will need to run the following command:
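One way to inspect this (assuming nvidia-docker-plugin's default REST port 3476):

```shell
# The driver volume created by the plugin shows up alongside other volumes
docker volume ls

# Ask nvidia-docker-plugin which volume and device arguments it injects
# into `docker run`
curl -s http://localhost:3476/v1.0/docker/cli
```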

Here’s the result:

For mathematical calculations we use a Python library, tensorflow-gpu (TensorFlow).

Let's write a Dockerfile, with the base image taken from the nvidia/cuda repository on Docker Hub:
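A minimal sketch; the CUDA image tag, the package set, and the `app.py` entry point are assumptions:

```dockerfile
# Base image with CUDA runtime (tag is an example -- match the CUDA
# version supported by your installed driver)
FROM nvidia/cuda:8.0-cudnn6-runtime-centos7

# Python and the GPU build of TensorFlow
RUN yum install -y epel-release && \
    yum install -y python-pip && \
    pip install tensorflow-gpu

COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```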

Then write a docker-compose file to build and run the compute container:
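Since the v3 format can't map devices, this sketch uses compose v2; the driver-volume name embeds the driver version (384.66 assumed here) and must match the volume that nvidia-docker-plugin created:

```yaml
version: '2'
services:
  compute:
    build: .
    # GPU device nodes passed through from the host
    devices:
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia0
    # Driver volume created by nvidia-docker-plugin
    volumes:
      - nvidia_driver_384.66:/usr/local/nvidia:ro

volumes:
  nvidia_driver_384.66:
    external: true
```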

Launching the Docker container:
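With the Dockerfile and compose file in place, the build and launch is a single command:

```shell
# Build the image and start the compute container in the background
docker-compose up -d --build
```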

If everything is done correctly, then when you run the command:
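presumably `nvidia-smi` on the host (the exact command isn't preserved in the original):

```shell
# Shows GPU utilization and the process list, including the container's
# python process
nvidia-smi
```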

You get the following result:

In the process list, you can see that python uses 56% of the GPU.

Thus, we've just taught Docker, the leading container platform, to work with GeForce graphics cards, and it can now be used to containerize GPU-accelerated applications. This means you can easily containerize and isolate accelerated applications without any modifications and deploy them on any supported GPU-enabled infrastructure.

 

Like it?
If you want to receive interesting information, subscribe!