Running Stable Diffusion on Arch Linux

Getting Stable Diffusion running on Arch Linux with an Nvidia graphics card.

I wanted to play around with Stable Diffusion and I have an Nvidia GPU, so I gave it a shot. I was able to get it running pretty quickly thanks to a number of helpful wrapper scripts, GUIs, and packages the community has created.

There were basically two choices for the install: Anaconda or Docker. I initially tried Anaconda, but it got stuck installing dependencies, so I quickly switched to Docker. Unless you already use Anaconda for other things, I'd recommend Docker, as it keeps your system cleaner.

My Setup

For context, here's what I'm running on:

  • Intel Core i9-9900K 3.6 GHz
  • 32GB DDR4 RAM
  • Nvidia GeForce RTX 2070 8GB
  • Arch Linux with KDE

I have Docker installed:

sudo pacman -S docker 

The AUR package nvidia-container-toolkit is also required to access GPUs from within Docker:

yay -S nvidia-container-toolkit

Start or restart Docker after installing:

sudo systemctl start docker
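
To confirm that Docker can actually see the GPU, you can run nvidia-smi from inside a container. The CUDA image tag below is just an example; any reasonably recent tag should work:

```shell
# should print the same GPU table as running nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```

If this errors out, the nvidia-container-toolkit setup (or the Docker restart) didn't take.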

Installation

There are a number of different GUIs for Stable Diffusion, and things are changing quickly, so there may be newer or better options by the time you read this.

One of the most popular is sd-webui/stable-diffusion-webui, which provides a frontend for txt2img, img2img, and a handful of additional models, optimizations, etc.

It can be run directly, but it also provides a Docker Compose setup, which is what I'm interested in.

Get started by cloning the repo:

git clone https://github.com/sd-webui/stable-diffusion-webui.git
cd stable-diffusion-webui

Next, copy the example environment file to .env_docker:

cp .env_docker.example .env_docker

I manually updated the WEBUI_ARGS flag in the environment file:

WEBUI_ARGS=--extra-models-cpu --optimized-turbo

This tells it to run extra models on my CPU, and --optimized-turbo allows running on GPUs with less than 10GB of VRAM.

Finally, bring up the Docker container:

docker compose up

The first time it runs, it will download a handful of large model files.  This will take a while, but afterwards you can add VALIDATE_MODELS=false to the environment file to skip checking the files.
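
Once the downloads have completed successfully, that skip can be a one-liner:

```shell
# skip model checksum validation on subsequent launches
echo "VALIDATE_MODELS=false" >> .env_docker
```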

Alternately, if you already have the model files downloaded, you can save time by manually adding them to the following locations before launching the container:

  • sd-v1-4.ckpt → models/ldm/stable-diffusion-v1/model.ckpt
  • RealESRGAN_*.pth → src/realesrgan/experiments/pretrained_models/RealESRGAN_*.pth
  • GFPGANv1.3.pth → src/gfpgan/experiments/pretrained_models/GFPGANv1.3.pth
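
From the repo root, placing the files looks roughly like this (the ~/Downloads source paths are just an assumption; point them at wherever your copies live):

```shell
# create the directories the web UI expects
mkdir -p models/ldm/stable-diffusion-v1
mkdir -p src/realesrgan/experiments/pretrained_models
mkdir -p src/gfpgan/experiments/pretrained_models

# copy previously downloaded files into place
# (~/Downloads is a placeholder; adjust to your actual download location)
cp ~/Downloads/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt || true
cp ~/Downloads/RealESRGAN_*.pth src/realesrgan/experiments/pretrained_models/ || true
cp ~/Downloads/GFPGANv1.3.pth src/gfpgan/experiments/pretrained_models/ || true
```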

Once it's running, open http://localhost:7860/ to access the UI.

If you need to stop the Docker container, just press Ctrl-C. The next time you start it, it will come up much faster since the Docker image is already built and the models are already downloaded.
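
If you'd rather not keep a terminal attached, Docker Compose can also run the container in the background:

```shell
docker compose up -d    # start detached, in the background
docker compose logs -f  # follow the logs (Ctrl-C to detach)
docker compose stop     # stop the container when you're done
```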

Additional Upscalers

Here's how to add the Latent Diffusion Super Resolution and GoLatent upscalers:

cd src
git clone https://github.com/devilismyfriend/latent-diffusion.git
mkdir -p latent-diffusion/experiments/pretrained_models

Then download LDSR (2GB) and its configuration, and add them to the src/latent-diffusion/experiments/pretrained_models directory as model.ckpt and project.yaml.

Restart the Docker container for this change to take effect.