Installing Tensorflow on Fedora 34

Ricky Corte
2 min readMay 27, 2021

Fedora is a great distro: you always get the latest updates and libraries, but sometimes this can be painful. Installing TensorFlow directly on the machine is really hard because it needs a combination of specific library versions that are not present in the repositories. Fortunately, we can still use Docker to install TensorFlow in a stable environment, even if it requires a few steps.

This guide is only for NVIDIA GPU users.

Install Drivers

Make sure you have enabled the RPM Fusion repositories to download and install the latest NVIDIA drivers.

To enable the repositories follow this official guide here.

To install the drivers you can follow this guide here.

Remember to install the CUDA libraries!

For your convenience, here are the instructions for the latest GPUs:

sudo dnf update -y
sudo dnf install akmod-nvidia
sudo dnf install xorg-x11-drv-nvidia-cuda

Install Docker

To install Docker, follow their install guide here.

Once Docker is installed, you must also install the nvidia-docker package. Unfortunately, this is not available in any Fedora repository.

Searching online, I came across this GitHub issue that explains how to install it: https://github.com/NVIDIA/nvidia-docker/issues/553

To install nvidia-docker, run these commands as suggested in the comment:

curl -s -L https://nvidia.github.io/nvidia-docker/centos7/nvidia-docker.repo | \
sudo tee /etc/yum.repos.d/nvidia-docker.repo
sudo dnf install nvidia-docker2
sudo pkill -SIGHUP dockerd

This will install nvidia-docker, but it won't work as it is now.

To make it work, you need to edit its config file:

sudo nano /etc/nvidia-container-runtime/config.toml

and change this line

#no-cgroups = false

into

no-cgroups = true

When done, save your config and restart Docker with

sudo systemctl restart docker
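If you prefer to script the change instead of opening an editor, the same edit can be expressed as a one-liner (a sketch; the file path is the one from the step above, so verify it matches your system):

```shell
# Uncomment the no-cgroups key and flip it to true in the runtime config
sudo sed -i 's/^#no-cgroups = false/no-cgroups = true/' /etc/nvidia-container-runtime/config.toml
```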

Running the container

We are almost there; you just need to create the container with TensorFlow. Notice that you must add --privileged and --gpus all to make the container work.

For example, if you want to run a Jupyter server on port 8888, execute the following command:

sudo docker run --gpus all --privileged -p 8888:8888 -d tensorflow/tensorflow:latest-gpu-jupyter

When everything is up and running, you can retrieve the Jupyter access token from the container logs.

To check that everything is working, create a new notebook and run these two lines to see if TensorFlow detects the GPU.
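A minimal version of that check, assuming the stock TensorFlow API shipped in the image, looks like this:

```python
import tensorflow as tf

# Prints a non-empty list of PhysicalDevice entries if the GPU is visible
print(tf.config.list_physical_devices('GPU'))
```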
