Deploying Jellyfin on K3s
Updated 31 Jan 2025
In this post, I'm going to try a different format. Rather than going step-by-step through the process I took to set up k3s, I'm going to explain what I tried, where I got stuck, how I solved it, and some key lessons learned. I'd rather use these posts as high-level documentation of my learning. I figure I don't need to write the code twice -- it's already on GitHub, so there's no need to put it all here too.
In my last post, I built a NAS with Mergerfs and SnapRAID. Rather than running containers directly on the NAS, I wanted to challenge myself to run my services in a Kubernetes cluster. I also wanted to learn the GitOps tool FluxCD and use Ansible as much as possible.
First, I provisioned a k3s cluster using this awesome Ansible playbook. I just followed the quick start guide and it worked like a dream. Then I got the FluxCD controller set up on my cluster and tested it using Flux's Getting Started guide (which was great, by the way). Then I used ChatGPT to generate some Kubernetes manifests for Jellyfin -- {deployment,service,pvc,pv}.yaml files -- and pushed them to the repo Flux was watching.
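For a sense of what those generated manifests looked like, here's a minimal sketch of a Jellyfin Deployment. The image tag, names, and mount paths are illustrative assumptions, not my exact config:

```yaml
# Sketch of a Jellyfin Deployment -- names, image tag, and paths are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin:latest
          ports:
            - containerPort: 8096   # Jellyfin's default HTTP port
          volumeMounts:
            - name: media
              mountPath: /media     # where Jellyfin sees the library
      volumes:
        - name: media
          persistentVolumeClaim:
            claimName: jellyfin-media   # hypothetical PVC name
```

A Service exposing port 8096 and a PVC named `jellyfin-media` would accompany this, matching the four file types mentioned above.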
Of course, the manifests weren't perfect on the first try, so debugging was needed. But there were so many things that either broke or could have been the problem that I just couldn't make progress. The main issue I ran into was that I couldn't get my nodes to use my NAS as an NFS-backed persistent volume. On top of that, I knew I was doing things wrong with Flux, but I didn't know what they were, so I couldn't eliminate it as a source of errors.
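For reference, an NFS-backed PV statically bound to a PVC generally looks like the sketch below. The server address, export path, and size are placeholders, not my actual NAS details:

```yaml
# Static NFS PV + PVC sketch -- server, path, and capacity are placeholders
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-media
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.10        # NAS address (assumed)
    path: /mnt/storage/media    # NFS export (assumed)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-media
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""          # empty string prevents dynamic provisioning
  volumeName: jellyfin-media    # bind directly to the PV above
  resources:
    requests:
      storage: 1Ti
```

One common gotcha with this setup: the nodes themselves must have an NFS client installed (e.g. `nfs-common` on Debian/Ubuntu), or mounts fail with opaque errors.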
I wanted to power through, but then I decided it would be smarter to just remove Flux and try to get Jellyfin up and running manually first. Then I'd be able to codify it in Flux once I was happy with the result. This turned out to be a great solution: I was able to focus on debugging each problem one at a time, and it only took an afternoon to get Jellyfin running on my cluster. I haven't incorporated Flux yet, so I'm going to do that in the very near future.
Some key lessons learned:

- You don't need to write a `storageclass.yaml`: the `local-path` storage class comes with k3s. You do need to write your own PVs for manually defined storage like my NFS share.
- To confirm the GPU was actually being used, I installed `intel-gpu-tools` (`apt install intel-gpu-tools`) and monitored GPU usage with `sudo intel_gpu_top`.
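For the GPU usage to show up in `intel_gpu_top` at all, the Jellyfin pod needs access to the host's `/dev/dri` devices. The post doesn't spell out how that was done; one common (if blunt) approach is a privileged hostPath mount, sketched below -- the Intel GPU device plugin is a cleaner alternative:

```yaml
# Pod spec fragment: expose the host's /dev/dri so Jellyfin can use
# Intel VAAPI/QSV hardware transcoding. This is one common approach,
# not necessarily the one used in this setup.
containers:
  - name: jellyfin
    image: jellyfin/jellyfin:latest
    securityContext:
      privileged: true            # blunt; grants full device access
    volumeMounts:
      - name: dri
        mountPath: /dev/dri
volumes:
  - name: dri
    hostPath:
      path: /dev/dri              # host render devices (renderD128, etc.)
```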