NVIDIA in Docker

Libor Jonát

What did we want?

  • Run TensorFlow easily
  • Quickly reproducible environment
  • Multiple TensorFlow versions and easy upgrades
  • Project isolation

Use Docker

  • Quickly reproducible environment
  • Project isolation
  • Hardware isolation

How?

https://github.com/NVIDIA/nvidia-docker/
  • Just a wrapper around the docker command-line tool
  • Supports GPU isolation when you have multiple GPUs (see the example after this list)
  • Specialized Docker images by NVIDIA:
    • Ubuntu
    • CentOS
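
For example, nvidia-docker 1.x selects which GPUs a container may see via the NV_GPU environment variable. A minimal sketch, using the TensorFlow image only as an example:

NV_GPU=0 nvidia-docker run -it tensorflow/tensorflow:latest-gpu    # expose only GPU 0
NV_GPU=0,1 nvidia-docker run -it tensorflow/tensorflow:latest-gpu  # expose GPUs 0 and 1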

Usage

nvidia-docker run -it tensorflow/tensorflow:latest-gpu
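
To verify that the container actually sees the GPU, the usual smoke test from the nvidia-docker README is to run nvidia-smi inside NVIDIA's CUDA image:

nvidia-docker run --rm nvidia/cuda nvidia-smi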

If you don’t want to use the nvidia-docker wrapper, you can pass the required command-line arguments to docker manually:

docker run \
	--device=/dev/nvidiactl \
	--device=/dev/nvidia-uvm \
	--device=/dev/nvidia0 \
	...
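
The device nodes alone are not enough: the container also needs the driver’s user-space libraries. With nvidia-docker 1.x these sit in a named volume managed by nvidia-docker-plugin, so a fuller command might look like the sketch below; the driver version in the volume name (375.66) is just an example and depends on the installed driver:

docker run -it \
	--device=/dev/nvidiactl \
	--device=/dev/nvidia-uvm \
	--device=/dev/nvidia0 \
	--volume-driver=nvidia-docker \
	--volume=nvidia_driver_375.66:/usr/local/nvidia:ro \
	tensorflow/tensorflow:latest-gpu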

Ready-made images

  • Caffe: https://github.com/BVLC/caffe/
  • Caffe2: https://hub.docker.com/r/caffe2ai/caffe2/
  • TensorFlow: https://hub.docker.com/r/tensorflow/tensorflow/
  • MXNet: https://hub.docker.com/r/mxnet/
  • PyTorch: https://hub.docker.com/r/pytorch/pytorch/

We forked the TensorFlow image and added support for Python 3: https://hub.docker.com/u/quantlane/tensorflow
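
Running one of these images then boils down to a single command; the image name comes from the list above, but the exact tags vary, so check the Docker Hub page first:

nvidia-docker run -it --rm quantlane/tensorflow python3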

How to let a non-admin user use this

  1. Add a non-privileged UNIX user…
  2. …that is not a member of the docker group.
  3. Have a long-running TensorFlow container.
  4. Allow the non-privileged user to run
    docker exec -it ...
    (E.g. using sudo; see the sketch below.)
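
A minimal sketch of steps 3 and 4; the container name (tf), the user name (alice) and the sudoers file path are placeholders:

# 3. keep a TensorFlow container running in the background
nvidia-docker run -d --name tf tensorflow/tensorflow:latest-gpu

# 4. allow "alice" to exec into it, e.g. via /etc/sudoers.d/tf:
#    alice ALL=(root) NOPASSWD: /usr/bin/docker exec -it tf *
# alice then gets a shell inside the container with:
sudo docker exec -it tf bash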

Future


Thank you

quantlane.com