monotux.tech

Configure Incus containers using Ansible


This post outlines how I set up an Ansible playbook to manage an Incus container on a remote host. This was fairly straightforward1 but I couldn’t find a suitable write-up elsewhere, so here we go!


Overview

I’m running the Ansible playbook from my laptop (with an Incus client installed); the remote Incus host is reached over a VPN (Tailscale), and Incus authentication uses TLS.

Initially I thought Ansible would ssh to the Incus host and then somehow exec into each container. This is not the case – Ansible uses our local Incus client to exec into the remote containers (and VMs, if used). I’m not applying my playbooks from CI/CD (yet), but the same principles would apply when setting this up in a CD runner.

The process in short:

  1. Establish a trust relationship between Ansible host and Incus host
  2. Configure Ansible and install collections necessary
  3. ???
  4. Profit!

I’ve not yet tested creating Incus/LXD containers using Ansible; the scope of this entry is configuring an already-created container.

Setup Incus client

Skip this if you already have a working remote Incus configuration :-)

On the Incus host side:

incus config trust add foo

This will create a token that we will use on our Ansible host (my laptop in this case) to establish a trust relationship with the remote Incus instance.
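For reference, the command prints a one-time join token on stdout, roughly like this (the token value below is a made-up placeholder, not a real token):

```
Client foo certificate add token:
eyJ...EXAMPLE-PLACEHOLDER...
```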

To use it, we have to add the remote Incus instance to the Incus configuration on our Ansible host. When prompted, paste the token generated in the previous step:

incus remote add foo https://foo.example.com:8443 --auth-type tls --protocol incus

List and test the Incus remote:

incus remote list
incus remote switch foo
incus list

In my case:

# incus remote list
+-------------------+-------------------------------------------+---------------+-------------+--------+--------+--------+
|       NAME        |                    URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-------------------+-------------------------------------------+---------------+-------------+--------+--------+--------+
| foo               | https://foo.example.com:8443              | incus         | tls         | NO     | NO     | NO     |
+-------------------+-------------------------------------------+---------------+-------------+--------+--------+--------+
| images            | https://images.linuxcontainers.org        | simplestreams | none        | YES    | NO     | NO     |
+-------------------+-------------------------------------------+---------------+-------------+--------+--------+--------+
| old     (current) | https://old.example.com:8443              | incus         | tls         | NO     | NO     | NO     |
+-------------------+-------------------------------------------+---------------+-------------+--------+--------+--------+
| local             | unix://                                   | incus         | file access | NO     | YES    | NO     |
+-------------------+-------------------------------------------+---------------+-------------+--------+--------+--------+

# incus remote switch foo

# incus remote list
+-------------------+-------------------------------------------+---------------+-------------+--------+--------+--------+
|       NAME        |                    URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-------------------+-------------------------------------------+---------------+-------------+--------+--------+--------+
| foo     (current) | https://foo.example.com:8443              | incus         | tls         | NO     | NO     | NO     |
+-------------------+-------------------------------------------+---------------+-------------+--------+--------+--------+
| images            | https://images.linuxcontainers.org        | simplestreams | none        | YES    | NO     | NO     |
+-------------------+-------------------------------------------+---------------+-------------+--------+--------+--------+
| old               | https://old.example.com:8443              | incus         | tls         | NO     | NO     | NO     |
+-------------------+-------------------------------------------+---------------+-------------+--------+--------+--------+
| local             | unix://                                   | incus         | file access | NO     | YES    | NO     |
+-------------------+-------------------------------------------+---------------+-------------+--------+--------+--------+

# incus list
+--------+---------+---------------------+------------------------------------------------+-----------+-----------+
| NAME   |  STATE  |        IPV4         |                      IPV6                      |   TYPE    | SNAPSHOTS |
+--------+---------+---------------------+------------------------------------------------+-----------+-----------+
| test01 | RUNNING | 10.94.217.45 (eth0) | fd42:d0b5:60cf:59a8:1266:6aff:fee1:7b3f (eth0) | CONTAINER | 0         |
+--------+---------+---------------------+------------------------------------------------+-----------+-----------+

Now I’ve verified that I can talk to the remote Incus instance.

Ansible

First of all, the community.general.incus connection plugin is included if you installed the full Ansible package, but not necessarily if you installed ansible-core. In the latter case you may have to install the collection manually (ansible-galaxy collection install community.general).
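If you manage collections per project, the install can also be pinned with a requirements file; the version floor below is just an example (the incus connection plugin only exists in reasonably recent releases of the collection):

```yaml
# requirements.yml -- pin community.general so every machine runs the same collection
collections:
  - name: community.general
    version: ">=8.2.0"  # example floor; verify against your setup
```

Install it with `ansible-galaxy collection install -r requirements.yml`.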

Connection configuration

We need to instruct Ansible to connect to our containers using the right connection plugin. This all depends a bit on how you organize your inventory, but as an example I’ve organized my containers into a group per Incus host:

# inventory/inventory.ini
test01
test02

[foo_containers]
test01
test02

Then I’ve added the necessary configuration as a group variable:

# inventory/group_vars/foo_containers/vars.yaml
---
ansible_connection: community.general.incus
# This is the user inside the container!
ansible_user: root
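The plugin also accepts a couple of optional variables (names as documented for community.general.incus at the time of writing – verify against your installed version), which let you target a specific remote explicitly instead of relying on incus remote switch:

```yaml
# inventory/group_vars/foo_containers/vars.yaml -- extended sketch
ansible_connection: community.general.incus
ansible_user: root
ansible_incus_remote: foo       # which Incus remote to exec through (default: local)
ansible_incus_project: default  # which Incus project the containers live in
```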

Testing

Finally, we can test that this works:

# ansible test01 -m ping
test01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3.13"
    },
    "changed": false,
    "ping": "pong"
}

And now we can use this as a normal host in Ansible:

---
- name: Example playbook
  hosts: test01

  tasks:
    - name: Ping host
      ansible.builtin.ping:
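
Beyond ping, the container now behaves like any other managed host. A sketch of a slightly more realistic playbook (the package name is just an example):

```yaml
---
- name: Configure container
  hosts: foo_containers

  tasks:
    - name: Install a base package
      ansible.builtin.package:
        name: curl
        state: present
```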

Run the example playbook:

# ansible-playbook playbooks/test01.yaml

PLAY [Example playbook] *******************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************
ok: [test01]

TASK [Ping host] **************************************************************************************************************************************
ok: [test01]

PLAY RECAP ********************************************************************************************************************************************
test01                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

  1. …as things usually are once you’ve understood them… ↩︎