How to get TensorFlow acceleration with NVIDIA RTX 50 series GPUs and Docker (RTX 5060 Ti 16GB) on Ubuntu and Windows WSL2

It seems that TensorFlow acceleration is broken with the latest RTX 50 series GPUs, including the most cost-effective card, the RTX 5060 Ti 16GB.

This is a how-to guide to re-enable TensorFlow acceleration using the official TensorFlow Docker image from NVIDIA.

Basically you need to install these four things:

  1. Docker (if you haven’t installed already)
  2. NVIDIA GPU drivers for Linux (only for Ubuntu)
  3. NVIDIA Container Toolkit
  4. NVIDIA TensorFlow Docker containers

Docker

sudo apt -y install docker.io
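If Docker is freshly installed, a quick sanity check confirms the daemon is up before moving on to the GPU pieces:

```shell
# Confirm the Docker daemon is running and can pull and run images
sudo docker run --rm hello-world
```

If this prints the "Hello from Docker!" message, the engine itself is working and any later failures are GPU-integration issues, not Docker issues.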

Installing NVIDIA GPU Driver

Download the NVIDIA GPU driver for Linux and run the installer. This applies only to fully Linux-based operating systems; the step is unnecessary on Microsoft Windows WSL2, which uses the Windows host driver.

bash NVIDIA-Linux-x86_64-570.153.02.run

Installing NVIDIA Container Toolkit

Follow the instruction here: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

Do not forget to configure Docker:

sudo nvidia-ctk runtime configure --runtime=docker
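On success, this command registers the NVIDIA runtime in /etc/docker/daemon.json. The generated entry looks roughly like this (your file may contain additional settings):

```json
{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
```

If GPU containers fail later, checking that this entry survived in daemon.json is a good first debugging step.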

Then restart Docker to enable the GPU runtime integration

sudo systemctl restart docker

Test the NVIDIA driver under the container runtime

sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

Install NVIDIA TensorFlow Docker containers (basic)

docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:xx.xx-tfx-py3

Replace the xx.xx with the actual version of the TensorFlow container; at the time of writing it is:

docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:25.02-tf2-py3
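Once inside the container, you can confirm that TensorFlow actually sees the card (the printed device list should include one GPU entry for the RTX 5060 Ti):

```shell
# Run inside the container: list the GPUs visible to TensorFlow
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

An empty list `[]` means the container started but GPU passthrough is not working, so revisit the Container Toolkit steps above.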

If you want to link your /home directory to the /workspace directory inside the container, you can just run:

docker run --gpus all -it --rm -v /home/username:/workspace nvcr.io/nvidia/tensorflow:25.02-tf2-py3

BONUS: How to install Spyder, JupyterLab and additional TensorFlow/Keras libraries in the NVIDIA Docker image

I’ve prepared a Dockerfile to rebuild the NVIDIA TensorFlow Docker container with GPU acceleration. Download the Dockerfile and run “docker build” with this parameter:

docker build -t my-nvidia-tf-ds .
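If you don’t have the Dockerfile at hand, a minimal sketch along these lines should get you close; the apt package set for Spyder’s Qt dependencies is an assumption and may need adjusting for your base image:

```dockerfile
# Base: NVIDIA's TensorFlow image with RTX 50 series support
FROM nvcr.io/nvidia/tensorflow:25.02-tf2-py3

# X11/Qt libraries that Spyder typically needs (assumed package set)
RUN apt-get update && apt-get install -y --no-install-recommends \
        libgl1 libegl1 libxkbcommon-x11-0 libdbus-1-3 \
    && rm -rf /var/lib/apt/lists/*

# JupyterLab, Spyder and extra Keras tooling on top of the bundled TensorFlow
RUN pip install --no-cache-dir jupyterlab spyder keras

WORKDIR /workspace
```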

Then you can run the container with GPU acceleration

docker run --gpus all -it --rm -v /root:/workspace my-nvidia-tf-ds

You can also expose a port and run JupyterLab inside the container:

docker run --gpus all -p 8888:8888 -it my-nvidia-tf-ds \
jupyter lab --ip=0.0.0.0 --allow-root

Additionally, you can run Spyder in the Docker image by allowing local containers to access the X server with xhost and forwarding the display:

xhost +local:docker

docker run -it --gpus all -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
-v /root:/workspace my-nvidia-tf-ds

spyder

Hopefully this helps you run GPU-accelerated TensorFlow on RTX 50 series GPU cards.

This also works under the Microsoft Windows 11 / WSL2 environment!

How to install cockpit dashboard on older Raspberry Pi 3, running Bookworm

Cockpit is a convenient dashboard for home users and enthusiasts to monitor several SOHO servers. It supports multiple Linux-based operating systems; however, there are some caveats when installing it on a Raspberry Pi 3 running the older Bookworm-based operating system.

First, you need to add bookworm-backports:

. /etc/os-release
echo "deb http://deb.debian.org/debian ${VERSION_CODENAME}-backports main" | sudo tee /etc/apt/sources.list.d/backports.list

Then you need to configure the keyring:

curl -O http://http.us.debian.org/debian/pool/main/d/debian-archive-keyring/debian-archive-keyring_2023.4_all.deb 
sudo dpkg -i debian-archive-keyring_2023.4_all.deb  

Afterwards, run this combined command to update the package lists and upgrade installed packages:

sudo apt update && sudo apt -y upgrade

Then, finally, install Cockpit from bookworm-backports:

sudo apt install -t bookworm-backports cockpit

After everything is done, you can open the Cockpit dashboard at http://<ip address>:9090 and log in with your system username and password.
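If the dashboard doesn’t come up, make sure the socket-activated Cockpit service is enabled and running:

```shell
# Cockpit is socket-activated; enable it at boot and start it now
sudo systemctl enable --now cockpit.socket

# Confirm it is listening (should report "active")
systemctl status cockpit.socket
```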

Nmap scanning for IP cameras in the network

Here’s an nmap snippet for finding hidden CCTV / IP cameras on the network:

nmap -sV --script=http-enum,http-title,rtsp-url-brute -p 80,443,554,8000 <ip range>

Or you can write it as:

sudo nmap -sV --script=http-enum,http-title,rtsp-url-brute -p 80,443,554,8000 192.168.0.0/24
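For later review, it can be handy to save the scan in every output format at once (nmap’s -oA flag writes .nmap, .gnmap and .xml files with the given basename):

```shell
# Same scan, writing results to camera-scan.{nmap,gnmap,xml}
sudo nmap -sV --script=http-enum,http-title,rtsp-url-brute \
    -p 80,443,554,8000 -oA camera-scan 192.168.0.0/24
```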

Make sure you have permission to scan on the network!

Getting Rid of /.well-known/traffic-advice 404 errors in nginx web server

It seems Google has implemented a private prefetch proxy in Chrome for Android.

The upside of this private prefetch proxy is an improved browsing experience for mobile users, as it reduces the time spent waiting for web pages to load.

The downside is that, as a web server administrator, you might find a lot of 404 statuses in your web logs.

To solve this, you could either:

  • Write a directive to ignore 404 logs for “traffic-advice”
  • Create a “/.well-known/traffic-advice” file for each domain and set the file to be served with the “application/trafficadvice+json” MIME type [source]

Solution

Luckily, TechTitBits has come up with a convenient solution that only involves adding a few lines to the configuration files to enable the Chrome for Android prefetch proxy in nginx.

location = /.well-known/traffic-advice {
    types { }
    default_type "application/trafficadvice+json; charset=utf-8";
    return 200 '[{ "user_agent": "prefetch-proxy", "fraction": 1 }]';
}

With this solution, you only need to add the location block within the server { } context of each site’s configuration.
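After adding the block, reload nginx and verify the response from the endpoint (assuming nginx is running on the same host):

```shell
# Validate the configuration and reload nginx
sudo nginx -t && sudo systemctl reload nginx

# Check the endpoint: expect HTTP 200 with the traffic-advice JSON body
curl -si http://localhost/.well-known/traffic-advice
```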

Thank you for the tip: Traffic Advice configuration for Nginx