---
title: "Incus Installation"
date: 2025-01-05T11:09:00+08:00
lastmod: 2025-01-05T11:09:00+08:00
tags: ["kvm", "虚拟化", "容器"]
categories: ["kvm", "container"]
---
## Standalone Environment
### Server
CPU | Memory | System Disk | Data Disk
---- | ---- | ---- | ----
4 cores | 8 GB | 30 GB | 30 GB
### OS Configuration
- Append the following to /etc/security/limits.conf:
```
* soft nofile 1048576
* hard nofile 1048576
root soft nofile 1048576
root hard nofile 1048576
* soft memlock unlimited
* hard memlock unlimited
root soft memlock unlimited
root hard memlock unlimited
```
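- These limits take effect on the next login; a quick check (a minimal sketch):
```BASH
ulimit -n   # should print 1048576
ulimit -l   # should print unlimited
```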
- Append the following to /etc/sysctl.conf:
```
fs.aio-max-nr = 524288
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
```
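- Apply the kernel parameters without rebooting:
```BASH
sysctl -p   # reloads /etc/sysctl.conf
```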
- Install chrony to keep system time synchronized; a minimal sketch follows (service names differ slightly per distro):
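```BASH
apt install chrony               # debian
#dnf install chrony              # centos/rocky/fedora
systemctl enable --now chrony    # the service is named "chronyd" on centos/rocky/fedora
chronyc tracking                 # verify time synchronization status
```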
#### debian
- Install curl and gpg:
```BASH
apt install curl gpg
```
#### centos/rocky/fedora
- Disable selinux
- Stop and disable the firewalld firewall; a minimal sketch of both steps:
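```BASH
setenforce 0                                                          # disable selinux immediately
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist across reboots
systemctl disable --now firewalld                                     # stop and disable firewalld
```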
- Install EPEL:
```BASH
dnf install epel-release
dnf makecache
```
- Configure subordinate UID/GID ranges:
```BASH
echo root:1000000:1000000000 > /etc/subuid
echo root:1000000:1000000000 > /etc/subgid
```
- Reboot the operating system
### Installing the Incus Environment
#### debian
- Reference: [zabbly/incus](https://github.com/zabbly/incus)
- Import the public key used to verify package integrity:
```BASH
curl -fsSL https://pkgs.zabbly.com/key.asc | gpg --show-keys --fingerprint
mkdir -p /etc/apt/keyrings/
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
```
- Configure the package repository. The LTS release is quite old, so the latest stable channel is used here:
```BASH
sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF'
```
- Install the incus packages:
```BASH
apt update
apt install incus qemu-system
```
- Configure incus:
```BASH
echo 'INCUS_EDK2_PATH=/usr/share/ovmf' >> /etc/default/incus
```
- Restart incus:
```BASH
systemctl restart incus
```
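- Verify the daemon is up (optional check; `incus version` prints both client and server versions):
```BASH
systemctl status incus
incus version
```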
#### centos/rocky
- Install the incus packages. **In current testing, starting VMs with qemu fails here.**
```BASH
dnf -y copr enable ligenix/enterprise-qemu-wider
dnf install lvm2 incus incus-tools
# install the qemu-system package if you want to try virtual machines
#dnf install qemu-system
```
- ~~Modify the incus service file~~
```BASH
sed -i 's/INCUS_OVMF_PATH/INCUS_EDK2_PATH/' /usr/lib/systemd/system/incus.service
systemctl daemon-reload
```
- Start the incus service:
```BASH
systemctl start incus
```
#### fedora
- Install the incus packages:
```BASH
dnf install lvm2 incus incus-tools qemu-system
```
### Initializing the Standalone Incus Environment
- Initialize incus:
```BASH
incus admin init
```
- Answer the interactive prompts; pressing Enter to accept the default works for most of them. The answers look roughly like this:
```
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm) [default=dir]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, auto or none) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, auto or none) [default=auto]:
Would you like the server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
```
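- The same answers can also be supplied non-interactively. A minimal preseed sketch matching the defaults above (untested here; adjust to taste):
```BASH
cat <<EOF | incus admin init --preseed
networks:
- name: incusbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: incusbr0
      type: nic
EOF
```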
---
## Cluster Environment
### Servers
Hostname | Server NIC IP | Cluster NIC IP | Data Disk | /etc/hosts
---- | ---- | ---- | ---- | ----
incus1 | eth0: 192.168.1.1 | 10.10.10.1 | /dev/sdb | 10.10.10.1 incus1
incus2 | eth0: 192.168.1.2 | 10.10.10.2 | /dev/sdb | 10.10.10.2 incus2
incus3 | eth0: 192.168.1.3 | 10.10.10.3 | /dev/sdb | 10.10.10.3 incus3
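- Assuming every node should resolve all three cluster names, append the entries to /etc/hosts on each server (a sketch):
```BASH
cat >> /etc/hosts <<EOF
10.10.10.1 incus1
10.10.10.2 incus2
10.10.10.3 incus3
EOF
```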
### OS Configuration
- [The steps on each server are identical to the standalone environment](#os-configuration)
### Installing the Incus Environment
- [The steps on each server are identical to the standalone environment](#installing-the-incus-environment)
### Creating the Bridge
#### debian
- **Run the following on every server**
- Create the bridge incusbr:
```BASH
apt install bridge-utils
brctl addbr incusbr
```
- Edit /etc/network/interfaces, replacing the eth0 configuration with the following bridge configuration:
```
iface eth0 inet manual
auto incusbr
iface incusbr inet static
address ${eth0_ip}/24
gateway 192.168.1.254
bridge-ports eth0
bridge-stp off
bridge-fd 0
#dns-nameservers 223.5.5.5
# replace ${eth0_ip} with this server's eth0 IP
```
- Restart the networking service. **Note: this may temporarily cut off the server's network access.**
```BASH
systemctl restart networking
```
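- Verify the bridge (optional):
```BASH
brctl show incusbr    # eth0 should be listed as a port
ip addr show incusbr  # the bridge should now hold the eth0 IP
```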
#### centos/rocky/fedora
- **Run the following on every server**
- Create the bridge incusbr and attach the server NIC eth0 to it. **Note: this may temporarily cut off the server's network access.**
```BASH
nmcli c add \
type bridge stp no \
ifname incusbr \
con-name incusbr \
autoconnect yes \
  ipv4.addresses ${eth0_ip}/24 \
ipv4.gateway 192.168.1.254 \
ipv4.method manual
# replace ${eth0_ip} with this server's eth0 IP
nmcli c add type bridge-slave con-name incusbr-eth0 ifname eth0 master incusbr
```
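- Activate and verify the connections (a quick check):
```BASH
nmcli c up incusbr
nmcli c show --active
ip addr show incusbr
```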
### Creating the LVM Volume Group
- **Run the following on every server**
- Create the LVM volume group incusvg on the data disk:
```BASH
pvcreate /dev/sdb
vgcreate incusvg /dev/sdb
```
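- Verify (optional):
```BASH
pvs /dev/sdb   # the physical volume
vgs incusvg    # the volume group
```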
### Creating the Cluster
- **Run the following on incus1**
- Initialize incus:
```BASH
incus admin init
```
- Answer the interactive prompts roughly as follows:
```
Would you like to use clustering? (yes/no) [default=no]: yes # use clustering
What IP address or DNS name should be used to reach this server? [default=10.10.10.1]: # the cluster IP
Are you joining an existing cluster? (yes/no) [default=no]: # creating a new cluster here, not joining one
What member name should be used to identify this server in the cluster? [default=incus1]:
Do you want to configure a new local storage pool? (yes/no) [default=yes]: no # no local storage pool
Do you want to configure a new remote storage pool? (yes/no) [default=no]: # no remote storage pool
Would you like to use an existing bridge or host interface? (yes/no) [default=no]: # no network
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
```
- **In testing, the storage pool and a managed network must be created manually, otherwise other incus nodes will fail to join the cluster later**
- Create the storage pool pool1:
```BASH
incus storage create pool1 lvm source=incusvg
```
- Create the managed network incusbr99:
```BASH
incus network create incusbr99
```
### Joining the Cluster
- Initialize incus **on incus2**:
```BASH
incus admin init
```
- Generate a join token for incus2 **on incus1**:
```BASH
incus cluster add incus2
# copy the token string printed here; it answers the join-token prompt on incus2
```
- **Back on incus2**, answer the interactive prompts roughly as follows:
```
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.10.10.2]:
Are you joining an existing cluster? (yes/no) [default=no]: yes # join the existing cluster
Please provide join token: xxxxxxxx # the token generated on incus1
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "lvm.thinpool_name" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Choose "lvm.vg_name" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Choose "source" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
```
- Initialize incus **on incus3**:
```BASH
incus admin init
```
- Generate a join token for incus3 **on incus1**:
```BASH
incus cluster add incus3
# copy the token string printed here; it answers the join-token prompt on incus3
```
- **Back on incus3**, answer the interactive prompts roughly as follows:
```
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.10.10.3]:
Are you joining an existing cluster? (yes/no) [default=no]: yes # join the existing cluster
Please provide join token: xxxxxxxx # the token generated on incus1
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "lvm.thinpool_name" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Choose "lvm.vg_name" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Choose "source" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
```
- **Although the managed network incusbr99 created earlier is never used, deleting it is not recommended, otherwise adding more incus nodes to the cluster will fail again**
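- Verify the cluster from any node:
```BASH
incus cluster list   # all three members should report ONLINE
```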
---
## Basic Usage
### Configuring an Image Mirror
- Add the Tsinghua (TUNA) mirror:
```BASH
incus remote add tuna https://mirrors.tuna.tsinghua.edu.cn/lxc-images/ \
--protocol=simplestreams --public
incus remote list # list the configured remotes
```
### LXC Containers
- Pull an Alpine LXC image:
```BASH
incus image list tuna: alpine amd64 # list the alpine amd64 images on the TUNA mirror
incus image copy tuna:alpine/3.21 local:
incus image alias create local:alpine-3.21 81f0ad86761e
```
- Launch a system container named alpine-lxc (using the local:alpine-3.21 alias created above):
```BASH
# standalone environment
incus launch local:alpine-3.21 alpine-lxc \
  -c limits.cpu=2 -c limits.memory=4GiB -d root,size=5GiB
# on node incus2 in a cluster environment
incus launch local:alpine-3.21 alpine-lxc \
  -c limits.cpu=2 -c limits.memory=4GiB -d root,size=5GiB \
  --network incusbr --storage pool1 --target incus2
```
- Enter the alpine-lxc container:
```BASH
incus shell alpine-lxc
# in the standalone environment incus manages the network, so the container
# already has an IP and can reach the internet
# in a cluster environment the container also gets an IP if the server network runs DHCP
# without DHCP, configure a temporary IP manually:
ip a add 192.168.1.123/24 dev eth0
ping 192.168.1.254 # the gateway should normally be reachable
```
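- Commands can also be run, and files copied, from the host without an interactive shell (app.conf is a hypothetical file):
```BASH
incus exec alpine-lxc -- apk add curl               # run a command in the container
incus file push ./app.conf alpine-lxc/etc/app.conf  # copy a local file into the container
incus list                                          # list instances with state and IPs
```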
### QEMU Virtual Machines
- Install [virt-viewer](https://releases.pagure.org/virt-viewer/virt-viewer-x64-11.0-1.0.msi) on the client machine
- Log in to the Debian server where incus is installed
- Download the RockyLinux 8 installation image Rocky-8.10-x86_64-minimal.iso
- Create an ISO storage volume:
```BASH
incus storage volume import pool1 \
  /root/Rocky-8.10-x86_64-minimal.iso \
  rocky8-iso-volume --type=iso
# pool1: the storage pool
# /root/Rocky-8.10-x86_64-minimal.iso: the local iso image file
# rocky8-iso-volume: the name of the new iso storage volume
```
- Create an empty VM, setting its CPU count, memory, system disk size, and boot priority:
```BASH
incus create vm1 --empty --vm -c limits.cpu=2 -c limits.memory=4GiB -d root,size=6GiB -s pool1
# vm1: the VM name
# limits.cpu=2: the VM gets 2 CPU cores
# limits.memory: the VM gets 4 GiB of memory
# root,size=6GiB: the system disk device is named root and sized 6 GiB
# pool1: the storage pool
incus config device set vm1 root boot.priority=20
# boot.priority=20: boot priority; higher numbers boot first
# to modify the VM configuration later:
#incus config set vm1 limits.cpu=4
#incus config edit vm1
```
- Attach the ISO storage volume to the VM and set its boot priority:
```BASH
incus config device add vm1 iso-cd disk \
pool=pool1 source=rocky8-iso-volume boot.priority=10
# vm1: the VM name
# iso-cd: the read-only iso disk device name in the VM
# pool1: the storage pool
# rocky8-iso-volume: the iso storage volume created above
# boot.priority=10: boot priority; higher numbers boot first
```
- **On the aarch64 architecture, disable the VM's secure boot**
```BASH
incus config set vm1 security.secureboot=false
```
- Start the VM:
```BASH
incus start vm1
```
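- Check the VM state (optional):
```BASH
incus list vm1   # the state should be RUNNING
incus info vm1   # detailed resource and device information
```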
- Open the console of the running VM:
```BASH
incus console vm1 --type=vga
# remote-viewer is not installed on the server, so this command prints the SPICE socket info below:
The client automatically uses either spicy or remote-viewer when present.
As neither could be found, the raw SPICE socket can be found at:
spice+unix:///root/.config/incus/sockets/xxxx.spice
```
- Use ssh to forward the socket file to a TCP port ([see here for the /etc/ssh/sshd_config settings](/post/ssh)):
```BASH
ssh -N -g -L 5555:/root/.config/incus/sockets/xxxx.spice 127.0.0.1
```
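- Alternatively (untested here), socat can expose the socket on a TCP port directly:
```BASH
socat TCP-LISTEN:5555,fork,reuseaddr UNIX-CONNECT:/root/.config/incus/sockets/xxxx.spice
```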
- On the client machine, open virt-viewer and connect to the address `spice://{debian server ip}:5555`
- Begin installing RockyLinux 8 in the window that opens
- After the OS installation finishes, the VM no longer needs the read-only iso device, which can be removed:
```BASH
incus config device remove vm1 iso-cd
```