How to customize Incus containers with cloud-init – Mi blog lah!
Incus is a manager for virtual machines and system containers. There is also an Incus support forum.
A virtual machine (VM) is an instance of an operating system that runs on a computer alongside the main operating system. A virtual machine uses hardware virtualization features for the separation from the main operating system, and a full operating system boots up inside it. You can use cloud-init to customize virtual machines that are launched with Incus.
A system container is also an instance of an operating system that runs on a computer alongside the main operating system. A system container, however, uses security primitives of the Linux kernel for the separation from the main operating system. You can think of system containers as software virtual machines. System containers reuse the running Linux kernel of the host, so you can only have Linux system containers, though of any Linux distribution. You can use cloud-init to customize system containers that are launched with Incus.
In this post we see how to use cloud-init to customize Incus virtual machines and system containers. When you launch such an instance, it is immediately customized to your liking and ready to use.
Prerequisites
- You have installed Incus or you have migrated from LXD.
- The container images that have a cloud variant are the ones that support cloud-init. Have a look at https://images.linuxcontainers.org/ and check that your favorite container image has a cloud variant in the Variant column.

You can also view which images have cloud-init support by running the following command. It performs an image list on the images: remote, matching the string cloud anywhere in the image name.

incus image list images:cloud
Managing profiles in Incus
Incus has profiles, which are used to group together configuration options. See how to use profiles in Incus.

When you launch a system container or a virtual machine, Incus by default uses the default profile for the configuration.

Let's show this profile. The config section is empty; this is the section where we will later add the cloud-init configuration. There are two devices: the eth0 network device (of type nic), which is served by the incusbr0 network bridge (if you migrated from LXD, it might be called lxdbr0), and the root disk device (of type disk), which is served by the default storage pool. You can dig for more with incus network list and incus storage list.
$ incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
...
$
You can perform many actions on Incus profiles. Here is the list of commands.
$ incus profile
Usage:
  incus profile [flags]
  incus profile [command]

Available Commands:
  add         Add profiles to instances
  assign      Assign sets of profiles to instances
  copy        Copy profiles
  create      Create profiles
  delete      Delete profiles
  device      Manage devices
  edit        Edit profile configurations as YAML
  get         Get values for profile configuration keys
  list        List profiles
  remove      Remove profiles from instances
  rename      Rename profiles
  set         Set profile configuration keys
  show        Show profile configurations
  unset       Unset profile configuration keys

Global Flags:
      --debug          Show all debug messages
      --force-local    Force using the local unix socket
  -h, --help           Print help
      --project        Override the source project
  -q, --quiet          Don't show progress information
      --sub-commands   Use with help or --help to view sub-commands
  -v, --verbose        Show all information messages
      --version        Print version number

Use "incus profile [command] --help" for more information about a command.
$
Creating a profile for cloud-init
We are going to create a new profile, not a fully fledged one, that has just the cloud-init configuration. Then, when we use it, we will specify this new profile along with the default profile. By doing so, we are not messing with the default profile; we keep them separate and tidy.
$ incus profile create cloud-dev
Profile cloud-dev created
$ incus profile show cloud-dev
config: {}
description: ""
devices: {}
name: cloud-dev
used_by: []
$
We want to insert the following cloud-init configuration. If you are viewing this on my blog, you will notice that the text has a gray background color. That matters: copy the text exactly, with no extra spaces at the end of the lines, as trailing whitespace would cause formatting issues later on. The cloud-init.user-data key says that what follows is about cloud-init. The | character at the end of that line is very significant: it means that until the end of this field, everything is kept verbatim. Whatever appears there will be injected into the instance as soon as it starts, at the proper location for cloud-init. When the instance starts for the first time, the cloud-init service looks for the injected commands and processes them accordingly. In this example, we use runcmd to run the touch command and create the file /tmp/simos_was_here. We just want some evidence that cloud-init actually worked.
cloud-init.user-data: |
  #cloud-config
  runcmd:
    - [touch, /tmp/simos_was_here]
We need to open the profile for editing, then paste the configuration. When you run the following command, a text editor opens (likely pico) and you can paste the above text into the config section. Remove the {} from the config: {} line.
$ incus profile edit cloud-dev
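If you prefer not to paste into an interactive editor, you can prepare the user-data in a file first. This is a sketch; the file name user-data.yml is an arbitrary choice of mine:

```shell
# Write the cloud-init payload to a file; the quoted 'EOF' delimiter
# keeps the contents verbatim (no variable expansion).
cat > user-data.yml <<'EOF'
#cloud-config
runcmd:
  - [touch, /tmp/simos_was_here]
EOF

# Confirm the first line is the required #cloud-config header.
head -n1 user-data.yml
```

You should then be able to load the file with incus profile set cloud-dev cloud-init.user-data - < user-data.yml, where - reads the value from standard input, following the convention inherited from LXD; check incus profile set --help on your system to confirm, and verify the result with incus profile show cloud-dev.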
Here is how the cloud-dev profile should look in the end. The command has a certain format: it is a list of items, the first being the actual command to run (touch), and the second the argument to the command. It is going to run touch /tmp/simos_was_here and should work with all distributions.
$ incus profile show cloud-dev
config:
  cloud-init.user-data: |
    #cloud-config
    runcmd:
      - [touch, /tmp/simos_was_here]
description: ""
devices: {}
name: cloud-dev
used_by: []
$
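As an aside, cloud-init's runcmd also accepts each entry as a single string, which it runs through sh. Assuming you do not need the list form's exact argument handling, the following would be an equivalent way to write the same profile entry:

```yaml
cloud-init.user-data: |
  #cloud-config
  runcmd:
    - touch /tmp/simos_was_here
```

The list form avoids shell interpretation of the arguments, which is why it is the safer default.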
Now we are ready to launch a container.
Launching an Incus container with cloud-init
Alpine is a lightweight Linux distribution. Let's see what's in store for Alpine images that have cloud support. Using incus image (the group of Incus image-related commands), we list the available images from the images: remote and filter for alpine and cloud. Whatever comes after the remote (i.e. images:) is a filter word.

incus image list images: alpine cloud
Here is the full output. I appended --columns ldt to the command, which shows only three columns: l for shortest alias, d for description, and t for image type (either container or virtual machine). Without this, the output would be too wide and would not fit in my blog's narrow width.
$ incus image list images: alpine cloud --columns ldt
+----------------------------+------------------------------------+-----------------+
| ALIAS | DESCRIPTION | TYPE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.16/cloud (1 more) | Alpine 3.16 amd64 (20240202_13:00) | CONTAINER |
+----------------------------+------------------------------------+-----------------+
| alpine/3.16/cloud (1 more) | Alpine 3.16 amd64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.16/cloud/arm64 | Alpine 3.16 arm64 (20240202_13:00) | CONTAINER |
+----------------------------+------------------------------------+-----------------+
| alpine/3.16/cloud/arm64 | Alpine 3.16 arm64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.17/cloud (1 more) | Alpine 3.17 amd64 (20240202_13:00) | CONTAINER |
+----------------------------+------------------------------------+-----------------+
| alpine/3.17/cloud (1 more) | Alpine 3.17 amd64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.17/cloud/arm64 | Alpine 3.17 arm64 (20240202_13:00) | CONTAINER |
+----------------------------+------------------------------------+-----------------+
| alpine/3.17/cloud/arm64 | Alpine 3.17 arm64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.18/cloud (1 more) | Alpine 3.18 amd64 (20240202_13:00) | CONTAINER |
+----------------------------+------------------------------------+-----------------+
| alpine/3.18/cloud (1 more) | Alpine 3.18 amd64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.18/cloud/arm64 | Alpine 3.18 arm64 (20240202_13:00) | CONTAINER |
+----------------------------+------------------------------------+-----------------+
| alpine/3.18/cloud/arm64 | Alpine 3.18 arm64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.19/cloud (1 more) | Alpine 3.19 amd64 (20240202_13:00) | CONTAINER |
+----------------------------+------------------------------------+-----------------+
| alpine/3.19/cloud (1 more) | Alpine 3.19 amd64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.19/cloud/arm64 | Alpine 3.19 arm64 (20240202_13:00) | CONTAINER |
+----------------------------+------------------------------------+-----------------+
| alpine/3.19/cloud/arm64 | Alpine 3.19 arm64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/edge/cloud (1 more) | Alpine edge amd64 (20240202_13:00) | CONTAINER |
+----------------------------+------------------------------------+-----------------+
| alpine/edge/cloud (1 more) | Alpine edge amd64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/edge/cloud/arm64 | Alpine edge arm64 (20240202_13:00) | CONTAINER |
+----------------------------+------------------------------------+-----------------+
| alpine/edge/cloud/arm64 | Alpine edge arm64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
$
I am going to use alpine/3.19/cloud. Alpine 3.19 was released in December 2023, so it's a fairly recent version. The same version is also available as a virtual machine image, which is handy: we could use the virtual machine version simply by adding --vm when we launch the image through incus launch, and the rest would be the same. In the following we will be creating a container.
In the following, I launch the cloud variant of the Alpine 3.19 image (images:alpine/3.19/cloud), give it the name myalpine, and apply both the default and cloud-dev Incus profiles. Why apply the default profile as well? Because when we specify a profile, Incus does not add the default profile by default (see what I did here?). Therefore, we specify first the default profile, then the new cloud-dev profile. If the default profile had some configuration in its config: section, the new cloud-dev profile would mask (hide) it; the cloud-init configuration is not merged among profiles, and the last profile in the list overwrites any previous cloud-init configuration. Then we get a shell into the container and check that the file has been created in /tmp. Finally, we exit, stop the container and delete it. Nice and clean.
$ incus launch images:alpine/3.19/cloud myalpine --profile default --profile cloud-dev
Launching myalpine
$ incus shell myalpine
myalpine:~# ls -l /tmp/
total 1
-rw-r--r-- 1 root root 0 Feb 3 12:02 simos_was_here
myalpine:~# exit
$ incus stop myalpine
$ incus delete myalpine
$
Case study: Disable IPv6 addresses in container
The ultimate purpose of cloud-init is to provide customization while sticking with the standard container images as provided by the images: remote. The alternative to cloud-init would be to create a whole range of custom images with our desired changes. In this case study, we are going to create a cloud-init configuration that disables IPv6 in Alpine containers (and virtual machines). The motivation for this was a request by a user on the Incus discussion and support forum. Read over there how you would manually disable IPv6 in an Alpine container.
Here are the cloud-init instructions that disable IPv6 in an Alpine container or virtual machine. Alpine gets its addresses from DHCP, both IPv4 and IPv6. Early in the boot process, we use the bootcmd module to run commands. We add a configuration file for the sysctl service that disables IPv6. Then we enable the sysctl service, because it is disabled by default in Alpine Linux. Finally, we restart the service in order to apply the configuration we just added.
cloud-init.user-data: |
  #cloud-config
  bootcmd:
    - echo "net.ipv6.conf.all.disable_ipv6 = 1" > /etc/sysctl.d/10-disable-ipv6.conf
    - rc-update add sysctl default
    - rc-service sysctl restart
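To see what the first bootcmd line actually produces, here is a standalone sketch that performs the same step against a sandbox directory instead of /etc; the rc-update and rc-service calls are Alpine's OpenRC tools and only make sense inside the instance, so they appear as comments:

```shell
# Simulate the bootcmd steps in a scratch directory instead of /etc.
mkdir -p sandbox/sysctl.d
echo "net.ipv6.conf.all.disable_ipv6 = 1" > sandbox/sysctl.d/10-disable-ipv6.conf

# Inside the instance, the remaining steps would be:
#   rc-update add sysctl default    # enable the sysctl service at boot
#   rc-service sysctl restart       # apply the new setting immediately

cat sandbox/sysctl.d/10-disable-ipv6.conf
```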
Here we test the new Incus profile with this cloud-init configuration (named cloud-alpine-noipv6, created in the same way as cloud-dev above) to disable IPv6 in a container. There is no IPv6 address in the container.
$ incus launch images:alpine/3.19/cloud myalpine --profile default --profile cloud-alpine-noipv6
Launching myalpine
$ incus list myalpine
+----------+---------+--------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+----------+---------+--------------------+------+-----------+-----------+
| myalpine | RUNNING | 10.10.10.44 (eth0) | | CONTAINER | 0 |
+----------+---------+--------------------+------+-----------+-----------+
$ incus stop myalpine
$ incus delete myalpine
$
We tried with a system container; how about a virtual machine? Let's run the same command with --vm added. We hit an issue: the Alpine Linux image cannot work with Secure Boot. Incus provides an environment that offers Secure Boot, but Alpine Linux does not support it. Therefore, we instruct Incus not to offer Secure Boot.
$ incus launch images:alpine/3.19/cloud myalpine --vm --profile default --profile cloud-alpine-noipv6
Launching myalpine
Error: Failed instance creation: The image used by this instance is incompatible with secureboot. Please set security.secureboot=false on the instance
$ incus delete --force myalpine
$ incus launch images:alpine/3.19/cloud myalpine --vm --profile default --profile cloud-alpine-noipv6 --config security.secureboot=false
Launching myalpine
$ incus list myalpine
+----------+---------+--------------------+------+-----------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+----------+---------+--------------------+------+-----------------+-----------+
| myalpine | RUNNING | 10.10.10.88 (eth0) | | VIRTUAL-MACHINE | 0 |
+----------+---------+--------------------+------+-----------------+-----------+
$ incus stop myalpine
$ incus delete myalpine
$
Case study: Launching a Debian instance with a Web server
A common task when using Incus is to launch an instance, install a Web server, modify the default HTML file to say Hello, world!, and finally view the page in the host's Web browser. Instead of doing all these steps manually, we automate them.
In this example, when the instance is launched, Incus places the cloud-init instructions in the file /var/lib/cloud/seed/nocloud-net/user-data, and the cloud-init service in the instance is started. The following Incus profile uses more advanced cloud-init commands. It performs a package update, then a package upgrade, and finally reboots if the package upgrade requires it. We do not need to specify which command performs the package update or upgrade, because cloud-init deduces them from the running system. Next, it installs the nginx package. Finally, our custom script is created at /var/lib/cloud/scripts/per-boot/edit-nginx-index.sh. The cloud-init service runs the edit-nginx-index.sh script, which modifies /var/www/html/index.nginx-debian.html, the default HTML file for nginx on Debian.
$ incus profile create cloud-debian-helloweb
Profile cloud-debian-helloweb created
$ incus profile edit cloud-debian-helloweb
<furiously editing the cloud-init section>
$ incus profile show cloud-debian-helloweb
config:
  cloud-init.user-data: |
    #cloud-config
    package_update: true
    package_upgrade: true
    package_reboot_if_required: true
    packages:
      - nginx
    write_files:
      - path: /var/lib/cloud/scripts/per-boot/edit-nginx-index.sh
        permissions: 0755
        content: |
          #!/bin/bash
          sed -i 's/Welcome to nginx/Welcome to Incus/g' /var/www/html/index.nginx-debian.html
          sed -i 's/Thank you for using nginx/Thank you for using Incus/g' /var/www/html/index.nginx-debian.html
description: ""
devices: {}
name: cloud-debian-helloweb
used_by: []
$
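The per-boot script itself is plain sed, so you can try the substitutions locally on a mock copy of the page before baking them into a profile. This is a sketch: the mock file below is my own stand-in for the real /var/www/html/index.nginx-debian.html inside the instance.

```shell
# Create a mock of nginx's default Debian index page.
cat > index.nginx-debian.html <<'EOF'
<h1>Welcome to nginx!</h1>
<p>Thank you for using nginx.</p>
EOF

# The same substitutions the per-boot script performs:
sed -i 's/Welcome to nginx/Welcome to Incus/g' index.nginx-debian.html
sed -i 's/Thank you for using nginx/Thank you for using Incus/g' index.nginx-debian.html

cat index.nginx-debian.html
```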
Let’s test these in a Debian system container.
$ incus launch images:debian/12/cloud mydebian --profile default --profile cloud-debian-helloweb
Launching mydebian
$ incus list mydebian --columns ns4t
+----------+---------+---------------------+-----------+
| NAME | STATE | IPV4 | TYPE |
+----------+---------+---------------------+-----------+
| mydebian | RUNNING | 10.10.10.120 (eth0) | CONTAINER |
+----------+---------+---------------------+-----------+
$
Open the above IP address in your favorite Web browser. Note that the home page now has two references to Incus, thanks to the changes we made through cloud-init.
For completeness, here is the same with a Debian virtual machine. We just add --vm to the incus launch command line and all the rest stays the same; the Debian VM image works with Secure Boot. When you get the IP address, open the page in your favorite Web browser. Note that since this is a virtual machine, the network device is not eth0 but enp5s0, a normal-looking network device name.
$ incus stop mydebian
$ incus delete mydebian
$ incus launch images:debian/12/cloud mydebian --vm --profile default --profile cloud-debian-helloweb
Launching mydebian
<wait for 10-20 seconds because virtual machines take more time to set up>
$ incus list mydebian --columns ns4t
+----------+---------+------------------------+-----------------+
| NAME | STATE | IPV4 | TYPE |
+----------+---------+------------------------+-----------------+
| mydebian | RUNNING | 10.10.10.110 (enp5s0) | VIRTUAL-MACHINE |
+----------+---------+------------------------+-----------------+
$
Summary
We have seen how to use the cloud variant of the Incus images, both for containers and virtual machines. They let you customize Incus instances so that they are configured to your liking from the start.

cloud-init offers a lot of opportunity for customization. Typically you would first set up an Incus instance manually to your liking, and then translate your changes into cloud-init commands.
Troubleshooting
Error: My cloud-init instructions are all messed up!
Here is what I got!
$ incus profile show cloud-dev
config:
cloud-init.user-data: "#cloud-config\nruncmd:\n - [touch, /tmp/simos_was_here]\n"
description: ""
devices: {}
name: cloud-dev
used_by: []
$
This happens if there are any extra spaces at the end of the cloud-init lines. The above problem occurred because there was extra whitespace somewhere in the configuration. You would need to remove the configuration and paste it again, taking care of the formatting. While editing with the pico text editor (the default), there should be no red blocks at the end of the lines; pico highlights trailing whitespace to help you spot it.
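A quick way to spot the offending whitespace before pasting is grep. In this sketch, cloud-init.yml is a hypothetical file holding your configuration; to demonstrate, we first create it with a deliberate trailing space on the runcmd: line:

```shell
# Create a demo file with a trailing space after "runcmd:".
printf '#cloud-config\nruncmd: \n  - [touch, /tmp/simos_was_here]\n' > cloud-init.yml

# Print the line numbers of any line ending in spaces or tabs.
grep -nE '[[:space:]]+$' cloud-init.yml && echo "fix these lines before pasting"
```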
How can I debug cloud-init?

When an Incus instance with cloud-init is launched, the cloud-init service runs and creates two log files, /var/log/cloud-init.log and /var/log/cloud-init-output.log.
Here are some relevant lines from cloud-init.log relating to the nginx example.
2024-02-03 19:07:09,237 - util.py[DEBUG]: Writing to /var/lib/cloud/scripts/per-boot/edit-nginx-index.sh - wb: [755] 200 bytes
...
2024-02-03 19:07:14,814 - subp.py[DEBUG]: Running command ['/var/lib/cloud/scripts/per-boot/edit-nginx-index.sh'] with allowed return codes [0] (shell=False, capture=False)
...
Error: Unable to connect
If you try to open the Web server in the Incus instance and you get the browser error Unable to connect, then

- Verify that you got the correct IP address of the Incus instance.
- Verify that the URL is http:// and not https://. Some browsers switch automatically to https, while in these examples we have only launched plain http Web servers.