# Running Immich on CUDA (remote host with WSL2)
- Install CUDA on WSL2 by following https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=deb_local; if that doesn't work, see the CUDA-on-WSL user guide: https://docs.nvidia.com/cuda/wsl-user-guide/index.html#cuda-support-for-wsl-2
- On the host, edit the Docker Engine configuration (`/etc/docker/daemon.json`, or Docker Desktop → Settings → Docker Engine) and add:

      "runtimes": {
        "nvidia": {
          "path": "/usr/bin/nvidia-container-runtime",
          "runtimeArgs": []
        }
      }
- Now you should be able to run the following examples:
      sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
      sudo docker run --rm --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
      sudo docker run --rm --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark -numbodies=1000000
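With the GPU reachable from containers, Immich's machine-learning service can be pointed at the CUDA image. A sketch of the `docker-compose.yml` change, following Immich's hardware-accelerated ML setup (the `hwaccel.ml.yml` file ships with Immich's release artifacts; verify the exact tag and service name against the version you run):

```yaml
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
    extends:
      file: hwaccel.ml.yml
      service: cuda
```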
- Run `./machine-learning-docker.sh` on WSL2.
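The contents of `machine-learning-docker.sh` are not shown here. A minimal, side-effect-free sketch of the kind of `docker run` invocation such a script might issue (the container name, port, and image tag are assumptions, not the actual script):

```shell
# Hypothetical sketch: start Immich's machine-learning container with GPU access.
# Image tag, container name, and port 3003 are assumptions for illustration.
IMAGE="ghcr.io/immich-app/immich-machine-learning:release-cuda"
CMD="docker run -d --name immich-ml --gpus all -p 3003:3003 $IMAGE"
# Print rather than execute, so this sketch has no side effects.
echo "$CMD"
```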