We'll showcase how to leverage GPUs inside Linux containers using NVIDIA-docker. Containerizing GPU applications provides multiple benefits: 1) developers get reproducible builds and can deploy their software seamlessly; 2) GPU applications can run across heterogeneous OS/driver/toolkit environments with no performance overhead; 3) GPU devices can be isolated and assigned to different users or tasks. We'll go through the particularities of GPU containers and demonstrate how to use container images, from the most basic NVIDIA CUDA application to the most complex deep learning frameworks. We may also present other container technologies besides Docker/NVIDIA-docker, for instance the Singularity project from Lawrence Berkeley National Laboratory, if not already covered by other speakers.
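As a minimal sketch of the workflow above (assuming the NVIDIA driver is installed on the host and the nvidia-docker 1.x wrapper is available), running a basic CUDA container looks like:

```shell
# Pull an official CUDA base image from Docker Hub.
docker pull nvidia/cuda

# Launch a container with the host's GPU devices and driver libraries
# mounted in; nvidia-smi inside the container lists the exposed GPUs.
nvidia-docker run --rm nvidia/cuda nvidia-smi
```

The wrapper handles mounting the driver and character devices, which is why the same image runs unmodified across hosts with different driver versions.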