
How to compile Incus – Mi blog lah!


Incus is a manager for virtual machines and system containers.

A virtual machine is an instance of an operating system that runs on a computer alongside the main operating system. A virtual machine uses hardware virtualization features for isolation from the main operating system.

A system container is also an instance of an operating system that runs on a computer alongside the main operating system. A system container, instead, uses security primitives of the Linux kernel for isolation from the main operating system. You can think of system containers as software virtual machines.

In this post we are going to see how to compile and install Incus using the instructions on how to install from source.

Prerequisites

This post assumes that

  1. You are familiar with compiling software from source.
  2. You are familiar with using Incus.
  3. If you are not installing on your workstation, you already have Incus installed so that you can create a VM.

Setting up the environment

We are going to create a virtual machine running Debian 12 from the image images:debian/12. By default Incus allocates 10GB of space for the virtual machine and this image is quite lean, leaving around 8.5GB of free space for our compilation. We are using a virtual machine instead of a system container because we want to run Incus in there. It’s easier to do that than to introduce nesting in this tutorial.

$ incus launch --vm images:debian/12 incus-compile
Creating incus-compile
Starting incus-compile

Then, get a shell into the virtual machine.

$ incus exec incus-compile -- sudo --login --user debian
debian@incus-compile:~$ 

Installing development packages

We follow the instructions from https://linuxcontainers.org/incus/docs/main/installing/ on how to install from source. The commands have the -y flag so that you can easily copy and paste directly into the terminal window of your VM.

First, install the prerequisite development packages. Note that liblxc-dev on Ubuntu is lxc-dev on Debian. Also note that Incus requires a recent version of Go. We are not installing Go from the distro repositories; instead, we are getting a Go binary from the official site. The go version command below should show version 1.21.6 or newer.

sudo apt update
sudo apt install -y acl attr autoconf automake dnsmasq-base git libacl1-dev libcap-dev liblxc1 lxc-dev libsqlite3-dev libtool libudev-dev liblz4-dev libuv1-dev make pkg-config rsync squashfs-tools tar tcl xz-utils ebtables

sudo apt install -y wget
wget https://go.dev/dl/go1.21.6.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.21.6.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
export PATH=$PATH:/usr/local/go/bin
go version
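If you want to check the version programmatically rather than by eye, the following sketch (my own addition, not part of the official instructions) compares the installed version against the minimum using sort -V:

```shell
# Fail loudly if the Go on $PATH is older than the minimum required version.
required=1.21.6
current=$(go version | grep -oE 'go[0-9]+\.[0-9]+(\.[0-9]+)?' | head -n1 | sed 's/^go//')
# sort -V orders version strings numerically; if the smaller of the two
# is not $required, then $current is older than $required.
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" != "$required" ]; then
  echo "Go $current is older than the required $required" >&2
fi
```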

Second, install the prerequisite development packages for the different storage drivers. We are not installing zfsutils-linux because it is not a standard Debian package; it requires enabling the contrib repository, and we do not need that at the moment.

sudo apt install -y btrfs-progs 
sudo apt install -y ceph-common 
sudo apt install -y lvm2 thin-provisioning-tools 

Thirdly, install the packages for the test suite.

sudo apt install -y busybox-static curl gettext jq sqlite3 socat bind9-dnsutils

Getting the source

We can clone the Incus repository and either use the latest version of the source code or a tagged version. We will use the latest version here.

git clone https://github.com/lxc/incus 
cd incus
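If you prefer a tagged release over the latest development tree, list the tags and check one out. The tag name below is only an example; pick one from the actual list.

```shell
# Show the most recent release tags, newest first.
git tag --sort=-version:refname | head -n 5

# Example: build a specific tagged release instead of the main branch.
git checkout v0.5.1
```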

Compiling the source: dependencies

Then, we run make deps so that we sort out the dependencies for the compilation. This step takes about 30 seconds.

make deps

At the end of the output of that command, we get the following instructions.

Please set the following in your environment (possibly ~/.bashrc)
export CGO_CFLAGS="-I/home/debian/go/deps/raft/include/ -I/home/debian/go/deps/cowsql/include/"
export CGO_LDFLAGS="-L/home/debian/go/deps/raft/.libs -L/home/debian/go/deps/cowsql/.libs/"
export LD_LIBRARY_PATH="/home/debian/go/deps/raft/.libs/:/home/debian/go/deps/cowsql/.libs/"
export CGO_LDFLAGS_ALLOW="(-Wl,-wrap,pthread_create)|(-Wl,-z,now)"

To do that, we run the following to append the lines to ~/.bashrc. We run it, and the shell waits for us to paste the four lines.

cat >> ~/.bashrc

We paste these lines. Please use the corresponding lines from your output.

export CGO_CFLAGS="-I/home/debian/go/deps/raft/include/ -I/home/debian/go/deps/cowsql/include/"
export CGO_LDFLAGS="-L/home/debian/go/deps/raft/.libs -L/home/debian/go/deps/cowsql/.libs/"
export LD_LIBRARY_PATH="/home/debian/go/deps/raft/.libs/:/home/debian/go/deps/cowsql/.libs/"
export CGO_LDFLAGS_ALLOW="(-Wl,-wrap,pthread_create)|(-Wl,-z,now)"

Finally, press Ctrl+D to get out of the cat command and return to your shell. Then type exit to leave the VM. We do this so that our next shell picks up the new lines from ~/.bashrc.

Ctrl+D
exit
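As a non-interactive alternative to the interactive cat above, a quoted heredoc appends all four lines in one command. The paths below are examples from this walkthrough; substitute the lines from your own make deps output.

```shell
# Append the build environment variables to ~/.bashrc in one shot.
# The 'EOF' quoting prevents the shell from expanding anything while writing.
cat >> ~/.bashrc <<'EOF'
export CGO_CFLAGS="-I/home/debian/go/deps/raft/include/ -I/home/debian/go/deps/cowsql/include/"
export CGO_LDFLAGS="-L/home/debian/go/deps/raft/.libs -L/home/debian/go/deps/cowsql/.libs/"
export LD_LIBRARY_PATH="/home/debian/go/deps/raft/.libs/:/home/debian/go/deps/cowsql/.libs/"
export CGO_LDFLAGS_ALLOW="(-Wl,-wrap,pthread_create)|(-Wl,-z,now)"
EOF
```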

Get back into the virtual machine and we are ready to actually compile.

incus exec incus-compile -- sudo --login --user debian
debian@incus-compile:~$ cd incus/
debian@incus-compile:~/incus$ 

Compiling the sources: compilation

We compile Incus by running make. This process downloads the source code of several Go packages and then performs the compilation. Therefore, the speed is related to how fast your Internet connection is, and how fast your computer is. A compilation from scratch takes about 3 minutes.

debian@incus-compile:~/incus$ make
...
Incus built successfully
debian@incus-compile:~/incus$

Installation of compiled binaries

There are two binaries, incus the client and incusd the server. They are found in the Go bin/ directory. Let’s add the directory to the $PATH and add this configuration to our shell configuration.

export PATH="${PATH}:$(go env GOPATH)/bin"
echo 'export PATH="${PATH}:$(go env GOPATH)/bin"' >> ~/.bashrc
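To confirm the shell now finds the freshly built binaries, check that both resolve under the Go bin/ directory:

```shell
# Both commands should print paths under $(go env GOPATH)/bin.
command -v incus
command -v incusd
```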

System setup

Before Incus is ready to run, we need to prepare our system. Incus uses a Linux kernel feature called namespaces. With namespaces, we specify a range of User IDs and Group IDs to be used for the unprivileged containers in Incus. Containers are process trees that live in child namespaces. From within the container you would think you are root with UID 0, but thanks to namespaces, the actual ID on the host would be something like 1000000. Totally unprivileged. Incus requires about 10M of such ID space. These IDs are subordinate IDs. Here is how to add them manually to our system.

echo "root:1000000:1000000000" | sudo tee -a /etc/subuid /etc/subgid
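You can verify that both files received the entry; each should now contain a root line:

```shell
# Each file should show the root mapping that was appended above,
# e.g. /etc/subuid:root:1000000:1000000000
grep '^root:' /etc/subuid /etc/subgid
```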

Running incusd

We run incusd with the following command.

sudo -E PATH=${PATH}:/sbin LD_LIBRARY_PATH=${LD_LIBRARY_PATH} $(go env GOPATH)/bin/incusd --group sudo

Running incusd with --verbose

We run incusd with --verbose so that we can view and understand the messages it prints while starting up.

core scheduling is no. I do not yet know what this feature is or what its implications are. Other than that, all looks fine.

The incusd server keeps running until we hit Ctrl+C to stop it. We keep it running and open a new terminal window to run other commands.

debian@incus-compile:~$ sudo -E PATH=${PATH}:/sbin LD_LIBRARY_PATH=${LD_LIBRARY_PATH} $(go env GOPATH)/bin/incusd --group sudo --verbose
INFO   [2024-01-29T01:34:20Z] Starting up                                   mode=normal path=/var/lib/incus version=0.5.1
INFO   [2024-01-29T01:34:20Z] System idmap (root user):                    
INFO   [2024-01-29T01:34:20Z]  - u 0 1000000 1000000000                    
INFO   [2024-01-29T01:34:20Z]  - g 0 1000000 1000000000                    
INFO   [2024-01-29T01:34:20Z] Selected idmap:                              
INFO   [2024-01-29T01:34:20Z]  - u 0 1000000 1000000000                    
INFO   [2024-01-29T01:34:20Z]  - g 0 1000000 1000000000                    
INFO   [2024-01-29T01:34:20Z] Kernel features:                             
INFO   [2024-01-29T01:34:20Z]  - closing multiple file descriptors efficiently: yes 
INFO   [2024-01-29T01:34:20Z]  - netnsid-based network retrieval: yes      
INFO   [2024-01-29T01:34:20Z]  - pidfds: yes                               
INFO   [2024-01-29T01:34:20Z]  - core scheduling: no                       
INFO   [2024-01-29T01:34:20Z]  - uevent injection: yes                     
INFO   [2024-01-29T01:34:20Z]  - seccomp listener: yes                     
INFO   [2024-01-29T01:34:20Z]  - seccomp listener continue syscalls: yes   
INFO   [2024-01-29T01:34:20Z]  - seccomp listener add file descriptors: yes 
INFO   [2024-01-29T01:34:20Z]  - attach to namespaces via pidfds: yes      
INFO   [2024-01-29T01:34:20Z]  - safe native terminal allocation : yes     
INFO   [2024-01-29T01:34:20Z]  - unprivileged file capabilities: yes       
INFO   [2024-01-29T01:34:20Z]  - cgroup layout: cgroup2                    
INFO   [2024-01-29T01:34:20Z]  - idmapped mounts kernel support: yes       
INFO   [2024-01-29T01:34:20Z] Instance type operational                     driver=lxc features="map[]" type=container
INFO   [2024-01-29T01:34:20Z] Instance type operational                     driver=qemu features="map[cpu_hotplug:{} io_uring:{} vhost_net:{}]" type=virtual-machine
INFO   [2024-01-29T01:34:20Z] Initializing local database                  
INFO   [2024-01-29T01:34:20Z] Set client certificate to server certificate  fingerprint=04e709e2c64c90b007a643bfe9c9252ca10b706a695cfb0292468268684fb41a
INFO   [2024-01-29T01:34:20Z] Starting database node                        id=1 local=1 role=voter
INFO   [2024-01-29T01:34:20Z] Loading daemon configuration                 
INFO   [2024-01-29T01:34:20Z] Binding socket                                socket=/var/lib/incus/unix.socket type="REST API Unix socket"
INFO   [2024-01-29T01:34:20Z] Binding socket                                socket=/var/lib/incus/guestapi/sock type="devIncus socket"
INFO   [2024-01-29T01:34:20Z] Binding socket                                socket="host(2):13517" type="VM socket"
INFO   [2024-01-29T01:34:20Z] Initializing global database                 
INFO   [2024-01-29T01:34:20Z] Connecting to global database                
INFO   [2024-01-29T01:34:20Z] Connected to global database                 
INFO   [2024-01-29T01:34:20Z] Initialized global database                  
INFO   [2024-01-29T01:34:20Z] Firewall loaded driver                        driver=nftables
INFO   [2024-01-29T01:34:20Z] Initializing storage pools                   
INFO   [2024-01-29T01:34:20Z] Initializing daemon storage mounts           
INFO   [2024-01-29T01:34:20Z] Initializing networks                        
INFO   [2024-01-29T01:34:20Z] All networks initialized                     
INFO   [2024-01-29T01:34:20Z] Cleaning up leftover image files             
INFO   [2024-01-29T01:34:20Z] Done cleaning up leftover image files        
INFO   [2024-01-29T01:34:20Z] Starting device monitor                      
INFO   [2024-01-29T01:34:20Z] Initialized filesystem monitor                driver=fanotify path=/dev
INFO   [2024-01-29T01:34:20Z] Started seccomp handler                       path=/run/incus/seccomp.socket
INFO   [2024-01-29T01:34:20Z] Pruning expired images                       
INFO   [2024-01-29T01:34:20Z] Done pruning expired images                  
INFO   [2024-01-29T01:34:20Z] Pruning expired backups                      
INFO   [2024-01-29T01:34:20Z] Done pruning expired backups                 
INFO   [2024-01-29T01:34:20Z] Expiring log files                           
INFO   [2024-01-29T01:34:20Z] Daemon started                               
INFO   [2024-01-29T01:34:20Z] Updating images                              
INFO   [2024-01-29T01:34:20Z] Pruning resolved warnings                    
INFO   [2024-01-29T01:34:20Z] Done expiring log files                      
INFO   [2024-01-29T01:34:20Z] Done pruning resolved warnings               
INFO   [2024-01-29T01:34:20Z] Updating instance types                      
INFO   [2024-01-29T01:34:20Z] Done updating images                         
INFO   [2024-01-29T01:34:22Z] Done updating instance types                 

Initializing Incus

We initialize the Incus server incusd through the Incus client incus.

debian@incus-compile:~$ incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (ceph, dir, lvm, btrfs) [default=btrfs]: 
Create a new BTRFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GiB of the new loop device (1GiB minimum) [default=5GiB]: 3
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=incusbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: incusbr0
  type: ""
  project: default
storage_pools:
- config:
    size: 3GiB
  description: ""
  name: default
  driver: btrfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null

debian@incus-compile:~$ 
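A few quick commands confirm the initialization took effect; the exact output depends on the answers you gave above.

```shell
incus storage list           # expect the 'default' btrfs pool
incus network list           # expect the 'incusbr0' bridge
incus profile show default   # expect the eth0 and root devices
```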

Launching a container in a VM

We can now launch system containers through the freshly-compiled Incus installation.

debian@incus-compile:~$ incus launch images:debian/12/cloud mycontainer
Launching mycontainer
debian@incus-compile:~$ incus list -c ns4t
+-------------+---------+-------------------+-----------+
|    NAME     |  STATE  |       IPV4        |   TYPE    |
+-------------+---------+-------------------+-----------+
| mycontainer | RUNNING | 10.10.50.7 (eth0) | CONTAINER |
+-------------+---------+-------------------+-----------+
debian@incus-compile:~$

Getting a shell in a container in a VM

We get a shell in a container in a virtual machine.

debian@incus-compile:~$ incus exec mycontainer -- sudo --login --user debian
debian@mycontainer:~$ 

Closing up

You can shut down the server by terminating the server process. For example, hit Ctrl+C in the terminal window where Incus is running.

Conclusion

It’s quite easy to build Incus and test it out. The process is effortless once you set up a VM. As Incus development continues, you can run git pull to receive the source code updates. Then, you can compile again and get the updated version running.
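In practice, that update cycle is just the following; re-running make deps is a safe precaution in case the bundled dependencies changed.

```shell
cd ~/incus
git pull
make deps   # optional, but safe to re-run after pulling
make
```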

Troubleshooting

Error: VM agent isn't currently running

You have launched a virtual machine with Incus and you try to get a shell in it. But instead you got this message.

The virtual machine is still starting up and you were very fast with the keyboard. Try to connect again in a few moments.

Error: sudo: unknown user debian

Error: sudo: error initializing audit plugin sudoers_audit

You started a virtual machine with Incus and you try to get a shell in it as a non-root account using sudo. You get these two error messages.

It is possible that you are launching an image that does not create a non-root account (here, debian for the Debian image). Or, you are trying to get access so fast that the VM has not yet completed booting. The /cloud images have instructions that create a non-root account. You can get a shell with incus shell mycontainer and investigate.

Error: make: *** [Makefile:37: build] Error 2

You are compiling Incus by running make and the compilation fails with this output: a sudden failure without much information.

...
github.com/lxc/incus/internal/server/db/generate/file
github.com/lxc/incus/internal/server/db/generate/db
github.com/lxc/incus/internal/server/db/generate
github.com/lxc/incus/test/dev_incus-client
github.com/lxc/incus/test/syscall/sysinfo
make: *** [Makefile:37: build] Error 2
debian@incus-compile:~/incus$

You have not enabled a recent version of Go and are using your distro’s Go package. You may have to edit the $PATH so that the fresh Go takes precedence. You can verify that this is the issue by running make again. In that case, the error message would be different, similar to the following. You need a newer Go compiler.

debian@incus-compile:~/incus$ make
CC="cc" CGO_LDFLAGS_ALLOW="(-Wl,-wrap,pthread_create)|(-Wl,-z,now)" go install -v -tags "libsqlite3"  ./...
github.com/lxc/incus/shared/idmap
# github.com/lxc/incus/shared/idmap
shared/idmap/set_linux.go:142:23: undefined: filepath.SkipAll
note: module requires Go 1.20
github.com/go-acme/lego/v4/certificate
# github.com/go-acme/lego/v4/certificate
../go/pkg/mod/github.com/go-acme/lego/v4@v4.14.2/certificate/errors.go:31:16: undefined: errors.Join
note: module requires Go 1.20
make: *** [Makefile:37: build] Error 2
debian@incus-compile:~/incus$

Error: AppArmor support has been disabled because 'apparmor_parser' couldn't be found

You have compiled Incus and you try to run the Incus server (incusd). You get an error related to AppArmor even though your Linux distribution does have AppArmor support. The apparmor_parser binary is in /sbin/.

You need to add /sbin to the $PATH when you run incusd.

Error: Firewall failed to detect any compatible driver, falling back to "xtables"

You have compiled Incus and you try to run the Incus server (incusd). You get an error related to the Linux kernel firewall even though your Linux distribution does have firewall support. The relevant binaries are in /sbin/.

You need to add /sbin to the $PATH when you run incusd.

Error: QEMU command not available for CPU architecture

You have compiled Incus and you try to run the Incus server (incusd). You get an error related to QEMU support for running virtual machines with Incus. The QEMU packages are missing.

You need to install the package qemu-system to get support for multiple architectures. Or, just qemu-system-x86 if you want to save space. Or, ignore the message if you only plan to launch system containers. Or, ignore it if your system does not support VM nesting.
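Assuming a Debian or Ubuntu system, as in this walkthrough:

```shell
# Full multi-architecture QEMU support:
sudo apt install -y qemu-system

# Or, to save space, only the x86 system emulator:
# sudo apt install -y qemu-system-x86
```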


