
Stable Diffusion on a budget home PC

Thanks to all the hype in the news, at the workplace, and elsewhere, I got interested in AI and tried out Stable Diffusion. It seemed interesting enough to install locally (I prefer to self-host just about everything), but the problem was that I mostly run old, recycled legacy hardware, so modern software doesn't always run that smoothly...

To run it I installed InvokeAI following their instructions, which were pretty straightforward. They support a CPU-only mode, so basically you can run it on any old hardware. This is good because you can try it out first and see whether it's something you would ever actually use, before splashing money on a GPU.
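For reference, a minimal sketch of what a pip-based CPU-only install can look like; the venv path here is arbitrary and the exact launcher command depends on the InvokeAI version, so treat their official instructions as the authoritative source:

# create an isolated Python environment for InvokeAI (path is arbitrary)
python3 -m venv ~/invokeai
. ~/invokeai/bin/activate
# install InvokeAI against CPU-only PyTorch wheels
pip install InvokeAI --extra-index-url https://download.pytorch.org/whl/cpu
# launch the web UI (the launcher name has varied between InvokeAI releases)
invokeai-web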

I was impressed, and after a few days of experimenting I was confident enough to invest in some new hardware.

Me and my old machines

I upgrade my computers only when absolutely necessary. The last time I bought a new PC was in 2013, to be used as a build server for Linux kernel development. I felt it was needed as I was unemployed at the time and wanted to keep following kernel development. I've also got another recycled PC from that era that I still use. I remember both of these being impressively fast at the time, but given modern software and data bloat they are just average (or even well below) today. Still, these are the fastest machines I've got and they are good enough for my use cases, so I keep using them.

The main specs are:

I'll call these machines AMD and Intel in the rest of this text.

CPU-based processing

I tried the CPU-based mode on both machines. Using the SD 1.5 model with 512x512 images I got the following text-to-image speeds:

So with 50 iterations, one image took 15 - 30 minutes and 40 - 75 Wh. Worth noting that InvokeAI doesn't fully utilize all cores but just half of them (maybe it limits itself to physical cores?).
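A quick back-of-the-envelope check on those numbers (using only the figures above, nothing else measured):

# implied average power draw while generating: energy / time
echo "scale=0; 40 / 0.25" | bc    # 40 Wh over 15 min (0.25 h) => ~160 W
echo "scale=0; 75 / 0.5" | bc     # 75 Wh over 30 min (0.5 h)  => ~150 W
# time per iteration at 50 iterations per image
echo "scale=0; 15 * 60 / 50" | bc # best case: ~18 s per iteration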

It's also interesting how the Intel machine clearly outperforms the AMD one, even though it has only 8 GB RAM and was even swapping. I thought these two boxes should be pretty equal in performance, at least for normal workloads (like running a compiler).

Anyway, while it was possible to use the software and see its capabilities, it was just too slow for any real use. Getting a picture you're happy with generally requires a few attempts. So I decided to invest in new hardware.

New hardware selection

I was first tempted to buy a completely new, shiny PC from a local computer shop, advertised as "a gaming PC" with the greatest and latest GPU available in town. It was priced at well over a thousand euros.

But then I learned about cheaper NVIDIA GPUs, notably the RTX 3060. Based on some googling this seemed to be a viable option, and since they were also immediately available at my local shop I decided to try one out. Just a couple of hundred euros, heh.

The exact model I chose was the ASUS Dual GeForce RTX 3060 V2 with 12 GB of VRAM. The only concern was whether a PCI Express Gen 4 card would work on an old motherboard with only a Gen 2 16-lane slot. Luckily this was not an issue, as PCI Express is pretty well backwards compatible.
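Once the card is installed, lspci can show what link was actually negotiated. This filters by NVIDIA's PCI vendor ID (10de); the lines of interest are LnkCap (what the card supports) and LnkSta (what it is actually running at):

sudo lspci -vv -d 10de: | grep -E 'LnkCap|LnkSta'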

PSU

Once I got the GPU, I learned that the power requirements are big: 650 W. This was a recommendation from the shop; I have no idea what it's based on. NVIDIA's pages actually recommend 550 W. The ASUS card itself didn't come with any useful technical information or system requirements, although it did come in fancy packaging with a thick "manual".

Originally I had hoped to use the Intel machine, which is more power efficient, but unfortunately it's an HP machine with a custom 320 W PSU and non-standard wiring, and I didn't have an adapter cable for a standard ATX PSU, so changing the PSU was not an option in the short term.

The AMD machine had a 450 W PSU, but luckily I had a spare 600 W ATX PSU lying around, and swapping it in was a trivial job.

NVIDIA drivers

I'm an Ubuntu fan, and I chose to run the 22.04 LTS release with its NVIDIA driver offering:

sudo ubuntu-drivers install
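On a fresh system, ubuntu-drivers can also list what it would pick, and nvidia-smi (after a reboot) confirms the kernel driver actually sees the card:

# list the drivers Ubuntu recommends for the detected hardware
ubuntu-drivers devices
# after installing and rebooting, check that the GPU is visible to the driver
nvidia-smi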

Since this (the whole AI hype) is just a toy for my own amusement, I don't have an issue with using some proprietary drivers and blobs.

Then I just followed InvokeAI's installation instructions again for the remaining CUDA/NVIDIA stuff, and everything worked out of the box.
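One quick sanity check is to ask the PyTorch that InvokeAI uses whether it can see the GPU (run this inside whatever Python environment InvokeAI was installed into):

# should print True and the GPU name if CUDA is working
python3 -c 'import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))'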

Performance

The GPU provided a stunning improvement. The performance is now about 8 iterations per second, so roughly 10 seconds to create an image with 50 iterations, i.e. roughly a hundredfold speedup over the CPUs. The software is actually usable now.
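Put as plain arithmetic against the CPU timings from earlier:

# CPU: 15 - 30 minutes per image, GPU: roughly 10 seconds per image
echo "scale=0; 15 * 60 / 10" | bc   # ~90x faster than the best CPU case
echo "scale=0; 30 * 60 / 10" | bc   # ~180x faster than the worst CPU case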

I didn't make any scientific measurements, but I could see power usage peaking at around 300 W (using a cheap power meter). So the energy cost of an image is now considerably lower too. However, the idle power usage of the machine jumped up to 70 W. Even with that, the GPU could still pay for itself if I just feel like making enough images. At least I don't waste time waiting for the result.
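The per-image energy works out similarly, assuming the ~300 W peak draw holds for the ~10 seconds an image takes:

# Wh per image on the GPU, versus the 40 - 75 Wh measured on the CPUs
echo "scale=2; 300 * 10 / 3600" | bc   # ~0.83 Wh per image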

Summary

Stable Diffusion is great fun. Powerful GPUs aimed at gamers are readily available at reasonable prices, and they can work well even with older hardware.


Last updated: 2024-06-23 13:17 (EEST)