SQLite, Kubernetes & Litestream

I’ve been learning Kubernetes at home recently, and I’ve found a nice (but slightly dangerous!) pattern for hosting applications backed by SQLite in my homelab.

Overview #

The pattern is relatively simple:

  • No persistent volumes, just ephemeral ones
  • No Deployments, we use a StatefulSet instead
  • No external database, we use Litestream to back up/restore automagically

If this sounds familiar, it’s because I’ve used this pattern for running SQLite-backed applications on fly.io before.

The obvious issue with this is that we risk losing any data written between Litestream replications if our cluster dies. Since I’m the only user and I have backups through Litestream, I’m fine with this risk level¹.
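
How big is that window? By default Litestream syncs the WAL to the replica roughly every second, and, if I read the Litestream docs correctly, the per-replica sync-interval setting can tighten or relax that. A sketch of the knob, using the same database path as the manifest below:

```yaml
dbs:
  - path: '/var/www/app/data/db.sqlite'
    replicas:
      - type: s3
        # ...bucket/path/endpoint as in the ConfigMap below...
        sync-interval: 1s  # default; lower it to shrink the data-loss window
```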

Example manifest #

In this example we’ll deploy Kanboard, a basic Trello-like application. I’m using ArgoCD & Kustomize to deploy this, but you can just use `kubectl apply -f example.yaml` as well. I will describe the prerequisites after the YAML.
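
For the Kustomize route, a minimal kustomization.yaml could look like the sketch below; the file names are my own invention, adjust them to match your layout:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - example.yaml  # the manifest below
  - secret.yaml   # the litestream-s3 secret described under the prerequisites
```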

As we don’t use a persistent volume, any attachments uploaded to Kanboard will disappear on restart, rescheduling of the pod, etc. Kanboard is not an ideal fit for this pattern! Linkding was much better :-)

The example does the following:

  • Create a namespace to run everything in
  • Create a ConfigMap for Litestream, which describes which database to back up, and where to back up to/restore from
  • Create the StatefulSet, which includes an init container (which restores from backup if necessary) and Kanboard + Litestream in a pod. This also creates an ephemeral volume that the database is stored in.
  • Create a service for the STS

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: kanboard-sts
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: litestream
  namespace: kanboard-sts
data:
  litestream.yml: |
    dbs:
      - path: '/var/www/app/data/db.sqlite'
        replicas:
          - type: s3
            bucket: '$S3_BUCKET'
            path: '$S3_PATH'
            endpoint: '$S3_ENDPOINT'
            force-path-style: true
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kanboard
  namespace: kanboard-sts
  labels:
    app: kanboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kanboard
  template:
    metadata:
      labels:
        app: kanboard
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: litestream-config
          configMap:
            name: litestream
            items:
              - key: litestream.yml
                path: litestream.yml
      initContainers:
        - name: init-litestream
          image: litestream/litestream:0.3
          args: ['restore', '-if-db-not-exists', '/var/www/app/data/db.sqlite']
          envFrom:
            - secretRef:
                name: litestream-s3
          volumeMounts:
            - name: data
              mountPath: /var/www/app/data
            - name: litestream-config
              mountPath: /etc/litestream.yml
              subPath: litestream.yml
      containers:
        - name: litestream
          image: litestream/litestream:0.3
          args: ['replicate']
          envFrom:
            - secretRef:
                name: litestream-s3
          volumeMounts:
            - name: data
              mountPath: /var/www/app/data
            - name: litestream-config
              mountPath: /etc/litestream.yml
              subPath: litestream.yml
        - name: kanboard
          image: "docker.io/kanboard/kanboard:v1.2.45"
          ports:
            - containerPort: 80
              name: http
          volumeMounts:
            - name: data
              mountPath: /var/www/app/data
---
apiVersion: v1
kind: Service
metadata:
  name: kanboard-sts
  namespace: kanboard-sts
spec:
  selector:
    app: kanboard
  ports:
    - name: http
      port: 80
      targetPort: http
```

In order for this to work, you need an S3-compatible bucket for the replica and a Secret named litestream-s3 in the same namespace. The secret needs to contain the following keys:

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: litestream-s3
  namespace: kanboard-sts
stringData:
  # Bucket name
  S3_BUCKET: ""

  # Might look like: https://s3.us-west-000.backblazeb2.com
  S3_ENDPOINT: ""

  # Path inside the bucket, might look like this:
  S3_PATH: "kanboard_replica.sqlite3"

  LITESTREAM_ACCESS_KEY_ID: ""
  LITESTREAM_SECRET_ACCESS_KEY: ""
type: Opaque
```

Fill in the values, then apply the secret to your namespace.

Caveats / bootstrapping #

If you just apply the example manifest to your cluster, the StatefulSet won’t be able to start! This is due to the init container, which crashes because there are no backups in the bucket to restore from.

The same thing goes for the Litestream container running in the application pod. It won’t start as the application hasn’t started yet, so there is no database to back up.

So, in order to get this running:

  1. Comment out the entire initContainers section of the manifest
  2. Comment out the Litestream container from the containers section of the manifest
  3. Apply the manifest and make sure the StatefulSet comes up (the reduced pod spec is sketched below)
  4. Uncomment the Litestream container in the containers section again
  5. Apply the manifest again, and make sure the StatefulSet comes up again
  6. Uncomment the initContainers section again, and finally
  7. Apply the manifest again
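
After steps 1–2, the pod spec is temporarily reduced to roughly the following sketch, with the disabled parts commented out:

```yaml
    spec:
      # volumes: ...unchanged from the full manifest...
      # initContainers:          # step 1: disabled until a backup exists in the bucket
      #   - name: init-litestream
      #     ...
      containers:
        # - name: litestream     # step 2: disabled until the app has created the database
        #   ...
        - name: kanboard
          image: "docker.io/kanboard/kanboard:v1.2.45"
          # ...ports and volumeMounts as in the full manifest...
```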

I haven’t tried the native sidecar containers that went stable in Kubernetes 1.33 yet; they might remove the need for commenting/uncommenting the Litestream container during bootstrapping.
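
For reference, the native sidecar pattern marks the Litestream container as an init container with restartPolicy: Always, so it starts before the app and keeps running (and restarting) alongside it. A minimal sketch, untested in this setup; it assumes the SidecarContainers feature (beta since 1.29, stable in 1.33):

```yaml
      initContainers:
        - name: init-litestream
          image: litestream/litestream:0.3
          args: ['restore', '-if-db-not-exists', '/var/www/app/data/db.sqlite']
          # ...envFrom and volumeMounts as in the full manifest...
        - name: litestream
          image: litestream/litestream:0.3
          args: ['replicate']
          restartPolicy: Always  # this is what makes it a sidecar
          # ...envFrom and volumeMounts as in the full manifest...
      containers:
        - name: kanboard
          image: "docker.io/kanboard/kanboard:v1.2.45"
          # ...as in the full manifest...
```

Since sidecars are restarted on failure, the replicate process could simply crash-loop until the application has created the database, which might also remove step 2 above.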

Conclusion #

If you can live with the quirky bootstrapping process, this is a fairly simple deployment pattern for SQLite-backed applications.


  1. I’m also using PostgreSQL on my NAS for “serious” applications, as I’ve decided that having my Kubernetes cluster depend on it is fine!