I’ve been trying to find ways to speed up training of my autonomous driving networks, since the current training time is about 12 hours per epoch. One of my most recent efforts has been upgrading to the newly released CUDA 9.0 RC and CuDNN 7 packages from Nvidia. While these are optimized for Nvidia’s new Volta architecture, Nvidia claims they also speed up operations on Pascal GPUs like the 1080 Tis my lab has.
To get CUDA 9 and CuDNN 7 working with PyTorch, the deep learning framework all of my group’s research code is written in, I had to build from Pull Request #2263 on the PyTorch GitHub repository, written by an Nvidia engineer to add CUDA 9 and CuDNN 7 support to PyTorch. However, it turned out there were some other issues along the way. To get everything working, here are the steps I had to follow:
- Download and install CUDA 9
- Download and install CuDNN 7
- Download and install NCCL
- Download and install Anaconda for Python 3.6
- Apply a workaround for NCCL
- Check out the CUDA 9 branch of PyTorch (PR #2263)
- Compile PyTorch
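The steps above can be sketched roughly as follows. This is a sketch under assumptions, not the exact commands from my notes: the NCCL workaround shown (pointing the build at the NCCL install via environment variables, with an assumed install path) is one plausible form of it, and the local branch name `cuda9` is my own choice.

```shell
# Workaround so the PyTorch build can find NCCL
# (install path below is an assumption):
export NCCL_ROOT_DIR=/usr/local/nccl
export LD_LIBRARY_PATH=$NCCL_ROOT_DIR/lib:$LD_LIBRARY_PATH

# Fetch PR #2263 (CUDA 9 / CuDNN 7 support) as a local branch:
git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch
git fetch origin pull/2263/head:cuda9
git checkout cuda9

# Compile and install into the active Anaconda environment:
python setup.py install
```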
Once I finally got it working, I ran a speed test by running the PyTorch MNIST example on an AWS p2.xlarge instance with an Nvidia Tesla K80 using all of the default settings in the example code. Unfortunately, the speed test didn’t show CUDA 9 speeding up training in this case. On an instance with CUDA 8 and CuDNN 6, the MNIST example took 88 seconds to train 10 epochs. On an instance where I did the above steps to get CUDA 9 and CuDNN 7 working, it took 89 seconds. More experimentation is required to see if extra performance can be squeezed out of CUDA 9 and CuDNN 7.
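For reference, the timing comparison can be reproduced with a small wall-clock harness like the one below. The harness itself is my own sketch; `main.py` and its `--epochs` flag refer to the MNIST example from the pytorch/examples repository.

```python
import subprocess
import time

def time_command(cmd):
    """Run a command and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# For the actual measurement this would be the MNIST example, e.g.:
#   time_command(["python", "main.py", "--epochs", "10"])
# Demonstrated here on a trivial stand-in command:
elapsed = time_command(["python", "-c", "pass"])
print(f"took {elapsed:.1f} s")
```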