I recently migrated my Raspberry Pi-based home automation stack from Docker Compose to Kubernetes. Setting up a single-node K3s cluster on a Raspberry Pi 5 is covered in a separate post; this one focuses on deploying Home Assistant once you have a cluster running, using a straightforward configuration of plain manifests rather than a Helm chart.
For those unfamiliar, Home Assistant is an open-source home automation platform that puts local control and privacy first. It acts as a central hub for your smart home devices, allowing you to control them and create powerful automations. By running it on your own hardware, like a Raspberry Pi, you keep your data within your home network, a key advantage over many cloud-based solutions.
My Raspberry Pi serves as a dedicated platform for essential services like adblocking and home automation. Home Assistant is central to my setup, providing a unified interface for managing diverse smart home devices. While the official documentation focuses on Docker Compose, I needed a Kubernetes-native approach.
Using the standard Home Assistant container image, I created a deployment with persistent storage (using k3s’ local-path provisioner) and exposed it via a LoadBalancer service. This setup is ideal for single-node clusters like mine, but can be easily adapted to larger environments.
With that said, let's start by creating the namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: home-assistant
  labels:
    name: home-assistant
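If you save the manifest above as namespace.yaml (the filename is just an example), you can apply it and confirm the namespace exists with kubectl:

kubectl apply -f namespace.yaml
kubectl get namespace home-assistant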
My storage definition looks like this; feel free to replace the storageClassName with whichever class is available on your cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ha-volume-claim
  namespace: home-assistant
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
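Assuming this is saved as pvc.yaml (again, an arbitrary name), apply and inspect it like so. Note that the k3s local-path storage class typically uses the WaitForFirstConsumer binding mode, so the claim will show as Pending until the deployment below schedules a pod that actually uses it:

kubectl apply -f pvc.yaml
kubectl get pvc -n home-assistant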
And the most important part, the deployment definition:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
  namespace: home-assistant
  labels:
    app: home-assistant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: home-assistant
  template:
    metadata:
      labels:
        app: home-assistant
    spec:
      containers:
        - name: home-assistant
          image: "ghcr.io/home-assistant/home-assistant:stable"
          securityContext:
            privileged: true
          ports:
            - containerPort: 8123
          volumeMounts:
            - mountPath: /config
              name: ha-config-volume
            - name: tz-config
              mountPath: /etc/localtime
      volumes:
        - name: ha-config-volume
          persistentVolumeClaim:
            claimName: ha-volume-claim
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Europe/Berlin
Beyond persisting the Home Assistant configuration with a dedicated volume, I also mount the host's timezone file into the container so the two clocks agree; adjust the hostPath above to your own timezone. Accurate time is essential for reliable automations in Home Assistant.
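Applying the deployment and watching it come up works the same way; deployment.yaml is once more just an example filename:

kubectl apply -f deployment.yaml
kubectl -n home-assistant rollout status deployment/home-assistant
kubectl -n home-assistant logs deployment/home-assistant --tail=50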
The setup above isn't much use if we can't reach it from outside the cluster, so here is the Service definition of type LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: home-assistant
  namespace: home-assistant
  labels:
    app: home-assistant
spec:
  ports:
    - port: 8123
      targetPort: 8123
      name: home-assistant-web
  selector:
    app: home-assistant
  type: LoadBalancer
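After applying the service, kubectl shows the address under the EXTERNAL-IP column. On a single-node k3s cluster the bundled ServiceLB typically hands out the node's own IP, so Home Assistant ends up reachable at http://<node-ip>:8123:

kubectl apply -f service.yaml
kubectl get svc -n home-assistant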
While this guide focuses on a single-node k3s cluster, the principles outlined here are readily adaptable to larger, more complex Kubernetes environments. Remember to adjust resource limits and storage class settings as needed for your specific infrastructure.
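As a starting point for that tuning, here is a minimal sketch of a resources block that could be added to the container spec in the deployment above. The numbers are placeholders I picked for illustration, not measured recommendations; how much Home Assistant actually needs depends on your integrations:

# goes under spec.template.spec.containers[0] in the deployment manifest
resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    memory: 1Gi

Before settling on limits, it is worth watching real usage for a while with kubectl top pod -n home-assistant (k3s normally bundles metrics-server, so this should work out of the box).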