---
title: "Incus Installation"
date: 2025-01-05T11:09:00+08:00
lastmod: 2025-01-05T11:09:00+08:00
tags: ["kvm", "virtualization", "container"]
categories: ["kvm", "container"]
---
## Standalone Environment
### Server
CPU | Memory | System Disk | Data Disk | OS
---- | ---- | ---- | ---- | ----
4 cores | 8GB | 30GB | 30GB | Rocky9
<a id="os"></a>
### OS Configuration
- Disable SELinux
```BASH
sed -i '/^SELINUX=/cSELINUX=disabled' /etc/selinux/config
```
- Stop and disable the firewalld firewall
```BASH
systemctl stop firewalld
systemctl disable firewalld
```
- Synchronize the clock; you can replace ntp.tencent.com here with your own internal time server
```BASH
sed -i '/^pool/d' /etc/chrony.conf
echo 'pool ntp.tencent.com iburst' >> /etc/chrony.conf
systemctl restart chronyd
```
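- A quick way to confirm chrony picked up the new server (verification only, not required):
```BASH
chronyc sources   # the configured pool should be listed
chronyc tracking  # shows current sync status and offset
```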
- Install EPEL
```BASH
dnf install epel-release
dnf makecache
```
- Append the following to /etc/security/limits.conf
```
* soft nofile 1048576
* hard nofile 1048576
root soft nofile 1048576
root hard nofile 1048576
* soft memlock unlimited
* hard memlock unlimited
root soft memlock unlimited
root hard memlock unlimited
```
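- These limits only apply to new logins; after the reboot below you can verify them like this:
```BASH
ulimit -n   # expect 1048576
ulimit -l   # expect unlimited
```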
- Append the following to /etc/sysctl.conf
```
fs.aio-max-nr = 524288
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
```
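- To apply these kernel parameters immediately instead of waiting for the reboot:
```BASH
sysctl -p   # load /etc/sysctl.conf into the running kernel
```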
- Configure subordinate UID/GID ranges (subuid/subgid)
```BASH
echo root:1000000:1000000000 > /etc/subuid
echo root:1000000:1000000000 > /etc/subgid
```
- Reboot the operating system
<a id="incus"></a>
### Install the Incus Environment
- Install the incus packages
```BASH
dnf copr enable neil/incus
dnf config-manager --enable crb
dnf install lvm2 incus incus-tools
```
- This environment is already enough to run LXC containers; to run KVM virtual machines you also need to install the QEMU packages
```BASH
# remove the distribution's bundled qemu packages
dnf remove qemu-img qemu-kvm-common qemu-kvm-core
# install the full qemu environment from the GhettoForge repo
curl -LO https://mirror.ghettoforge.org/distributions/gf/el/9/gf/x86_64/gf-release-9-13.gf.el9.noarch.rpm
rpm -ivh gf-release-9-13.gf.el9.noarch.rpm
dnf --enablerepo=gf install qemu-system-x86
```
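- A quick sanity check that the full QEMU build is installed (verification only):
```BASH
qemu-system-x86_64 --version
```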
- Update the incus service file
```BASH
sed -i 's/INCUS_OVMF_PATH/INCUS_EDK2_PATH/' /etc/systemd/system/incus.service
systemctl daemon-reload
```
- Start the incus service
```BASH
systemctl start incus
```
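- Optionally enable the service at boot and confirm the daemon is reachable:
```BASH
systemctl enable incus
incus info | head   # prints server environment info once the daemon is up
```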
- Add the Tsinghua (TUNA) LXC image mirror
```BASH
incus remote add tuna https://mirrors.tuna.tsinghua.edu.cn/lxc-images/ --protocol=simplestreams --public
incus remote list # list the configured image remotes
```
### Initialize the Standalone incus Environment
- Initialize incus
```BASH
incus admin init
```
- Answer the interactive prompts; for most of them just press Enter. The answers look roughly like this:
```
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm) [default=dir]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, auto or none) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, auto or none) [default=auto]:
Would you like the server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
```
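- Alternatively, the same answers can be fed in non-interactively. Below is a preseed sketch matching the choices above (standard Incus preseed format; adjust the names and drivers to your setup):
```BASH
cat <<'EOF' | incus admin init --preseed
networks:
- name: incusbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: incusbr0
      type: nic
EOF
```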
### Basic Usage
- Pull the Alpine LXC image
```BASH
incus image list tuna: alpine amd64 # list the alpine amd64 images available on the TUNA mirror
incus image copy tuna:alpine/3.21 local:
```
- Launch a system container named alpine-lxc
```BASH
incus launch local:alpine/3.21 alpine-lxc
```
- Enter alpine-lxc
```BASH
incus shell alpine-lxc
# inside the container you will find it already has an IP and can reach the internet
```
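- From the host, you can check each instance's state and address at any time:
```BASH
incus list   # shows state, IPv4/IPv6 and type of every instance
```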
## Cluster Environment
### Servers
Hostname | Server NIC IP | Cluster NIC IP | OS | Data Disk | /etc/hosts
---- | ---- | ---- | ---- | ---- | ----
incus1 | eth0: 192.168.1.1 | 10.10.10.1 | Rocky9 | /dev/sdb | 10.10.10.1 incus1
incus2 | eth0: 192.168.1.2 | 10.10.10.2 | Rocky9 | /dev/sdb | 10.10.10.2 incus2
incus3 | eth0: 192.168.1.3 | 10.10.10.3 | Rocky9 | /dev/sdb | 10.10.10.3 incus3
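- For example, the hostname mappings from the table can be appended on each server like this (adding all three entries everywhere keeps name resolution consistent):
```BASH
cat >> /etc/hosts <<'EOF'
10.10.10.1 incus1
10.10.10.2 incus2
10.10.10.3 incus3
EOF
```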
### OS Configuration
- [On every server, the steps are identical to the standalone environment](#os)
### Install the Incus Environment
- [On every server, the steps are identical to the standalone environment](#incus)
### Create the Bridge and LVM Volume Group
- **Run the following steps on every server**
- Create the bridge incusbr and attach the server NIC eth0 to it. **Note: this operation may disconnect the server from the network**
```BASH
nmcli c add \
    type bridge stp no \
    ifname incusbr \
    con-name incusbr \
    autoconnect yes \
    ipv4.addresses ${eth0_ip}/24 \
    ipv4.gateway 192.168.1.254 \
    ipv4.method manual
# replace ${eth0_ip} with the eth0 IP of the corresponding server
nmcli c add type bridge-slave con-name incusbr-eth0 ifname eth0 master incusbr
```
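- Verify that the bridge is up and eth0 is enslaved to it (verification only):
```BASH
nmcli -f NAME,DEVICE,STATE c show --active
ip addr show incusbr   # should carry the server's eth0 IP
```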
- Create the LVM volume group incusvg on the data disk
```BASH
pvcreate /dev/sdb
vgcreate incusvg /dev/sdb
```
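- Confirm the volume group exists:
```BASH
vgs incusvg   # shows size and free space of the new volume group
```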
### Create the Cluster
- **Run the following steps on incus1**
- Initialize incus
```BASH
incus admin init
```
- Answer the interactive prompts roughly as follows:
```
Would you like to use clustering? (yes/no) [default=no]: yes # enable clustering
What IP address or DNS name should be used to reach this server? [default=10.10.10.1]: # cluster IP
Are you joining an existing cluster? (yes/no) [default=no]: # creating a new cluster, not joining an existing one
What member name should be used to identify this server in the cluster? [default=incus1]:
Do you want to configure a new local storage pool? (yes/no) [default=yes]: no # do not create a local storage pool
Do you want to configure a new remote storage pool? (yes/no) [default=no]: # do not create a remote storage pool
Would you like to use an existing bridge or host interface? (yes/no) [default=no]: # do not create a network
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
```
- **In testing, the storage pool and managed network must be created manually; otherwise the other incus nodes fail to join the cluster later**
- Create the storage pool pool1
```BASH
incus storage create pool1 lvm source=incusvg
```
- Create the managed network incusbr99
```BASH
incus network create incusbr99
```
### Join the Cluster
- Initialize incus **on incus2**
```BASH
incus admin init
```
- Generate a join token for incus2 **on incus1**
```BASH
incus cluster add incus2
# copy the token string printed here; it answers the join-token prompt on incus2
```
- **Back on incus2**, answer the interactive prompts roughly as follows:
```
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.10.10.2]:
Are you joining an existing cluster? (yes/no) [default=no]: yes # join the existing cluster
Please provide join token: xxxxxxxx # the token generated on incus1
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "lvm.thinpool_name" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Choose "lvm.vg_name" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Choose "source" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
```
- Initialize incus **on incus3**
```BASH
incus admin init
```
- Generate a join token for incus3 **on incus1**
```BASH
incus cluster add incus3
# copy the token string printed here; it answers the join-token prompt on incus3
```
- **Back on incus3**, answer the interactive prompts roughly as follows:
```
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.10.10.3]:
Are you joining an existing cluster? (yes/no) [default=no]: yes # join the existing cluster
Please provide join token: xxxxxxxx # the token generated on incus1
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "lvm.thinpool_name" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Choose "lvm.vg_name" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Choose "source" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
```
- **The managed network incusbr99 created earlier goes unused, but do not delete it; otherwise adding further incus nodes to the cluster will fail again**
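- The cluster is now complete; you can verify all three members from any node:
```BASH
incus cluster list   # all members should report status ONLINE
```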
### Basic Usage
- Pull the Alpine LXC image, same as in the standalone environment
- Launch a system container alpine-lxc on the incus2 node
```BASH
incus launch local:alpine/3.21 alpine-lxc --network incusbr --storage pool1 --target incus2
```
- Enter alpine-lxc and configure its network
```BASH
incus shell alpine-lxc
ip a add 192.168.1.123/24 dev eth0
ping 192.168.1.254 # the gateway should be reachable
```
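- The address set above is lost when the container restarts; below is a sketch for persisting it inside Alpine (assumes Alpine's default ifupdown-ng networking):
```BASH
# inside the container: write a static config and restart networking
cat > /etc/network/interfaces <<'EOF'
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.1.123/24
    gateway 192.168.1.254
EOF
rc-service networking restart
```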