Installing CUDA, tensorflow, torch for R & Python on Ubuntu 20.04
Last weekend, I finally managed to get round to upgrading Ubuntu from version 19.10 to the long-term support release 20.04 on my workhorse laptop. To be precise, I’m using the Kubuntu flavour, since I’m more of a KDE guy myself. I usually do a fresh install on those occasions, instead of a dist-upgrade, because it’s a good opportunity to remove clutter and update software that I might otherwise just keep at an older version out of convenience.
One of my main goals this year is to get better at deep learning (DL) in R and Python – and there’s no way around using GPUs for those purposes. My laptop, a Dell G3 15, has an Nvidia GeForce GTX 1660 which, at the time of writing, does a decent job at playing with smaller neural networks that can then be scaled up on cloud platforms such as Kaggle Notebooks.
Setting up GPU-powered DL libraries on your local machine can still be a somewhat daunting task. Having just done it successfully (crossing fingers; nothing has broken yet), I decided to write down my notes and experiences while they are still fresh. At a minimum, this will help me the next time I set up a DL machine. And maybe my experience can even be helpful to others in a similar situation.
Note: before starting, you want to be sure that your machine has an Nvidia GPU that’s recent enough to run DL software. If in doubt, read up on compute capability (and consult those tables).
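If you’re unsure which GPU your machine has, a quick terminal check using standard Linux tools identifies the model before you consult those tables:

# List Nvidia devices to identify your GPU model
lspci | grep -i nvidia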
High-level overview & main challenge
These are the main ingredients you need to enable your R & Python DL packages:
CUDA drivers to access your GPU.
The cuDNN library, which provides GPU acceleration for deep neural networks.
For Python, the DL framework of your choice: Tensorflow or Pytorch.
For R, the reticulate package for keras and/or the new torch package.
These steps by themselves are not that hard, and there is a reasonable amount of documentation available online. The main challenge lies in finding the right library versions that play nicely together. This difficulty stems primarily from the breakneck speed at which all parts of the DL ecosystem continue to evolve. New features are constantly being implemented, and older versions might no longer be supported. For instance, Tensorflow version 2 is a significant re-imagining of (and considerably more beginner-friendly than) version 1.
As a result, the latest GPU driver library versions might not always be supported by the latest DL package version. I ran into this problem at the very end of my first installation attempt (when installing Pytorch) and decided that it would be easier to redo everything from scratch. And indeed, the second installation went much smoother and faster. I hope that my lost hours are your gain, dear reader, and that my repeated experience will prove useful in one way or another.
Below I outline the necessary installation steps.
Step-by-step installation
Prerequisites: a clean system
It’s possible that you already have some CUDA or Nvidia libraries installed. But honestly, the best way is to remove everything and start with a clean install. Otherwise there’s just too much danger of version clashes or duplicated paths. The following steps accomplish this. This is also the way in which you can clean up a botched or wrong CUDA installation (like I did) and start afresh. The following is copied from the CUDA installation manual (more on this in the next step):
# Remove CUDA Toolkit:
sudo apt-get --purge remove "*cublas*" "*cufft*" "*curand*" "*cusolver*" "*cusparse*" "*npp*" "*nvjpeg*" "cuda*" "nsight*"
# Remove Nvidia Drivers:
sudo apt-get --purge remove "*nvidia*"
# Clean up the uninstall:
sudo apt-get autoremove
Some more clean-up tips are given in this article.
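To confirm that the clean-up worked, you can list any packages that are still around; ideally this prints nothing:

# List any remaining Nvidia or CUDA packages
dpkg -l | grep -i -E "nvidia|cuda"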
CUDA drivers
Let’s get the CUDA GPU drivers (aka the CUDA toolkit). Note that there are instructions for this on software-specific websites, such as for Tensorflow. However, those aren’t always up to date, and I recommend instead following the official CUDA installation manual, which is really good and detailed.
So detailed, in fact, that it can be a little overwhelming at first contact. Here I break down the essential steps:
Choose and install the appropriate CUDA version
There’s a nice little platform selector linked in the manual, but do not use that version blindly. Or at least double-check that it’s the version you want, because the link always chooses the most recent CUDA release, which is 11.2 as I’m writing these lines. Also at the time of writing, Pytorch & libtorch only support CUDA 11.0 (not the latest 11.2), and Tensorflow 2.4 is built against the same version. Therefore, we want to install CUDA 11.0.
(If you decide to install the latest CUDA version instead, there are some troubleshooting notes at the very bottom of this article that might help you out in a pinch.)
You can get the CUDA 11.0 toolkit here. This gives you the exact same platform selection steps. This is my configuration, which gives me the following install commands:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.0.3/local_installers/cuda-repo-ubuntu2004-11-0-local_11.0.3-450.51.06-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-11-0-local_11.0.3-450.51.06-1_amd64.deb
sudo apt-key add /var/cuda-repo-ubuntu2004-11-0-local/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda
Essentially, you download the CUDA toolkit as a .deb package, add the CUDA repository for Ubuntu 20.04, and install. The pin step makes sure that you continue to pull CUDA packages from the right repository in the future (see e.g. here). The .deb file is about 2.2 GB, so you might want to get a cup of coffee or tea while downloading.
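As an optional sanity check, you can ask apt which repository the cuda package will come from and which version is the install candidate:

# Show the install candidate and source repository for the cuda package
apt-cache policy cuda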
Set the correct library paths
The easiest way is to copy those three lines into your .bashrc (what is bashrc?):
export PATH=/usr/local/cuda-11.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.0/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.0/include:$LD_LIBRARY_PATH
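Afterwards, reload the configuration so the new paths take effect in your current session, and check that the CUDA tools are found:

# Reload the shell configuration
source ~/.bashrc
# The CUDA compiler should now be on the path
which nvcc
# The CUDA libraries should appear on the library path
echo $LD_LIBRARY_PATH | tr ':' '\n' | grep cuda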
Confirm the install
To make sure that everything is working, run those commands. None of them should throw an error:
cat /proc/driver/nvidia/version
nvcc -V
nvidia-smi
This last tool, nvidia-smi (Nvidia’s System Management Interface), is very useful for checking your driver versions and also the GPU memory usage during training.
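During training, I like to keep it running in a second terminal, for example:

# Refresh the GPU status (utilisation, memory, processes) every second
watch -n 1 nvidia-smi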
Optional libraries
Not strictly necessary, but probably useful in one way or another. In my case, I had most of those already installed anyway:
sudo apt-get install g++ freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev libglu1-mesa libglu1-mesa-dev
cuDNN libraries
You also need Nvidia’s cuDNN, the CUDA Deep Neural Network library. It provides GPU-optimised implementations of neural network fundamentals.
Getting the appropriate cuDNN libraries is easier than the previous step. You can download them from the Nvidia developer portal. That website requires you to make a free account, which is just a formality. When choosing the cuDNN version you will see the options with their matching CUDA versions, e.g.: Download cuDNN v8.1.0 (January 26th, 2021), for CUDA 11.0, 11.1 and 11.2.
We just installed CUDA 11.0, so we’ll click on the above option, which provides a list of download links for different operating systems and architectures. There is a generic version, cuDNN Library for Linux (x86_64), which provides a .tgz file we could use. But as (K)Ubuntu users we can also download tailored .deb packages instead. There is a “Developer Version”, a “Runtime Version”, and “Code Samples and User Guide” – all for “Ubuntu20.04 x86_64 (Deb)”. Perfect! Just download everything.
Once you’ve got the packages, there is a pretty nice cuDNN installation guide which boils down to the following:
sudo dpkg -i libcudnn8_8.1.0.77-1+cuda11.2_amd64.deb
sudo dpkg -i libcudnn8-dev_8.1.0.77-1+cuda11.2_amd64.deb
sudo dpkg -i libcudnn8-samples_8.1.0.77-1+cuda11.2_amd64.deb
The guide also includes some troubleshooting and verification steps, but this part rarely goes wrong.
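If you do want to verify it, the guide’s approach is to compile and run one of the bundled code samples. A rough sketch, assuming the samples package put them under /usr/src/cudnn_samples_v8 (the exact path may differ by version, and compiling may require extra dev packages such as FreeImage):

# Copy the samples somewhere writable, then build and run the mnistCUDNN sample
cp -r /usr/src/cudnn_samples_v8/ $HOME
cd $HOME/cudnn_samples_v8/mnistCUDNN
make clean && make
# A successful run ends with "Test passed!"
./mnistCUDNN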
Tensorflow & Pytorch for Python
The drivers are the main challenge; from here on everything should be straightforward. There are two main deep learning packages in 2020: Tensorflow and Pytorch. If you’re just starting out with deep learning, then in my view it doesn’t matter much which one you pick. They are both pretty user-friendly by now, and the fundamentals are similar enough that familiarity with one package will help you get started quickly with the other.
For Tensorflow, not long ago there were two different Python packages for GPU and CPU, respectively. But now you get everything via:
pip install tensorflow keras
Keras is a well-designed high-level API for Tensorflow. These two other packages are useful additions:
pip install tensorflow_datasets tensorflow_addons
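A quick smoke test from the shell confirms that Tensorflow imports cleanly and sees the GPU (the full checks follow further below):

# Should print a non-empty list of GPU devices
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"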
For Pytorch, I have a penchant for FastAI as a higher-level gateway. Using my preferred miniconda environment, you can get both from their respective channels like this:
conda create -n fastai -c fastai -c pytorch fastai
You’ll need some kind of environment manager for the next R step anyway, and it’s easier to keep up with the rapidly evolving libraries if you use some version of anaconda. This conda install will also get you stuff like torchvision for image models.
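Remember that conda environments have to be activated before use. A minimal check of the new environment, using the fastai name from the create command above:

# Activate the environment and confirm that Pytorch sees the GPU
conda activate fastai
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"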
Tensorflow & Torch for R
In R, Tensorflow and Keras are best installed via the keras package. This uses the fantastic reticulate package as a wrapper around Python’s Tensorflow/Keras, so make sure you have it installed. For an introduction to reticulate, check out my earlier blogpost. Install as such:
install.packages("keras", repos="http://cran.r-project.org", dependencies=TRUE) keras::install_keras(tensorflow = "gpu")
Then you want to check your Python configuration for reticulate, along with the availability of keras:

reticulate::py_config()
reticulate::py_module_available("keras")
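If reticulate picks up the wrong Python installation, you can point it at a specific environment explicitly. A sketch, assuming a conda environment named fastai as in the Python section above; adjust the name to wherever your Python libraries actually live:

# Tell reticulate which conda environment to use, then inspect the result
Rscript -e 'reticulate::use_condaenv("fastai"); print(reticulate::py_config())'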
For torch, there is now a native R package which doesn’t use Pytorch under the hood. (Instead, it’s built on the same C++ backend, called libtorch, as the Python version.)
The first step for installing torch is this:
install.packages("torch", repos="http://cran.r-project.org", dependencies=TRUE)
Now you need to activate it, which then downloads and installs necessary stuff:
library(torch)
Note: if this step fails for no good reason, then you want to try replacing it with install_torch(timeout = 1000). The timeout is important because the corresponding files are relatively large and the default is only 360 seconds.
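A minimal way to confirm that the backend was downloaded correctly is to create a small tensor from the shell:

# This fails with a clear error if the libtorch backend is missing
Rscript -e 'library(torch); print(torch_tensor(c(1, 2, 3)))'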
And while you’re there, you might also want to get those extra packages for common use cases:
install.packages("torchvision", repos="http://cran.r-project.org", dependencies=TRUE) install.packages("torchaudio", repos="http://cran.r-project.org", dependencies=TRUE) remotes::install_github("mlverse/torchdatasets")
Does everything work?
Now everything should be there on your machine. But does it all work as it should?
In Python, you can check Tensorflow and Pytorch as such (and get some information about your GPU in the process):
import tensorflow as tf
tf.config.list_physical_devices('GPU')
If you’re using conda, don’t forget to activate your environment:
import torch
torch.cuda.get_device_name()
In R, the installation steps should already have told you if something didn’t work. In addition, you can also check the status of the keras and torch packages like this:
library(keras)
is_keras_available()
library(torch)
cuda_is_available()
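As a final end-to-end check, you can create a tensor directly on the GPU from the shell; a minimal sketch:

# A 2x2 random tensor allocated on the GPU; errors here point at the CUDA setup
Rscript -e 'library(torch); print(torch_randn(2, 2, device = "cuda"))'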
And that’s it! You now have GPU-powered deep learning capabilities at your disposal. Use them wisely. Or, you know, just have fun with them. Either way, I hope this post was helpful.
More info:
This is far from the only write-up on getting your GPU tools set up on (Ubuntu) Linux. For my first successful installation on Kubuntu 19.10 I was largely following this post by Dmitriy Kisil. For the current 20.04 install, I compared the CUDA and cuDNN instructions to posts by Bojan Tunguz and Stephen Gregory.
Remember that you don’t need to install GPU software if all you want to do is to experiment with deep learning tools. There are plenty of resources online where you can try out the code, e.g. via Google Colab, in a pre-configured cloud environment. This also includes Kaggle Notebooks, which come equipped with a large set of data science and machine learning packages.
I managed to install CUDA 11.2 (and TF 2.4), but it was less smooth than version 11.0. Specifically, sudo apt-get install cuda threw a “you have held broken packages” error and refused to do the install. The solution was to use the more tenacious aptitude via sudo aptitude install cuda, which suggested that libnvidia-compute-460 needed to be downgraded. After that, the install worked without a hitch.
Another CUDA 11.2-related issue popped up for Tensorflow on Python. The installation worked, and import tensorflow worked too, but when actually using the library I got the error message Could not load dynamic library 'libcusolver.so.10'. This was most likely related to TF 2.4 being built against CUDA 11.0, not 11.2 (see here). The workaround was to create a hard link to pretend that .so.11 is .so.10 (see here): cd /usr/local/cuda-11.2/lib64; sudo ln libcusolver.so.11 libcusolver.so.10.