---
title: "Incus Installation"
date: 2025-01-05T11:09:00+08:00
lastmod: 2025-01-05T11:09:00+08:00
tags: ["kvm", "virtualization", "containers"]
categories: ["kvm", "container"]
---

## Standalone Environment
### Server
CPU | Memory | System Disk | Data Disk | OS
---- | ---- | ---- | ---- | ----
4 cores | 8 GB | 30 GB | 30 GB | Rocky 9

### OS Configuration
- Disable SELinux
```BASH
sed -i '/^SELINUX=/cSELINUX=disabled' /etc/selinux/config
```

- Stop and disable the firewall (firewalld)
```BASH
systemctl stop firewalld
systemctl disable firewalld
```

- Synchronize the clock; ntp.tencent.com can be replaced with your own internal NTP server
```BASH
sed -i '/^pool/d' /etc/chrony.conf
echo 'pool ntp.tencent.com iburst' >> /etc/chrony.conf
systemctl restart chronyd
```

- Install EPEL
```BASH
dnf install epel-release
dnf makecache
```

- Append the following to /etc/security/limits.conf
```
* soft nofile 1048576
* hard nofile 1048576
root soft nofile 1048576
root hard nofile 1048576
* soft memlock unlimited
* hard memlock unlimited
root soft memlock unlimited
root hard memlock unlimited
```

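- After the reboot at the end of this section, you can spot-check the new limits in a fresh shell:
```BASH
ulimit -n   # expect 1048576
ulimit -l   # expect unlimited
```
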
- Append the following to /etc/sysctl.conf
```
fs.aio-max-nr = 524288
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
```

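- These values are loaded on reboot; to apply them immediately as well:
```BASH
sysctl -p   # reload /etc/sysctl.conf so the new values take effect right away
```
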
- Configure subordinate IDs
```BASH
echo root:1000000:1000000000 > /etc/subuid
echo root:1000000:1000000000 > /etc/subgid
```

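- Each entry has the form `user:first_id:count`, so this grants root one billion subordinate IDs starting at 1000000, which incus uses to map container UIDs/GIDs; a quick check:
```BASH
grep root /etc/subuid /etc/subgid   # both files should contain root:1000000:1000000000
```
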
- Reboot the operating system

### Install Incus
- Install the incus packages
```BASH
dnf -y copr enable ligenix/enterprise-qemu-wider
dnf install lvm2 incus incus-tools
# To run KVM virtual machines, the following qemu package is also required
dnf install qemu-system-x86
```

- Patch the incus service file
```BASH
sed -i 's/INCUS_OVMF_PATH/INCUS_EDK2_PATH/' /usr/lib/systemd/system/incus.service
systemctl daemon-reload
```

- Start the incus service
```BASH
systemctl start incus
```

- Add the Tsinghua (TUNA) LXC image mirror
```BASH
incus remote add tuna https://mirrors.tuna.tsinghua.edu.cn/lxc-images/ \
    --protocol=simplestreams --public
incus remote list # list the configured remotes
```

### Initialize the Standalone Incus Environment
- Initialize incus
```BASH
incus admin init
```

- Answer the interactive prompts; accepting the default with Enter works for most of them. The answers look roughly like this:
```
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm) [default=dir]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, auto or none) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, auto or none) [default=auto]:
Would you like the server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
```

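- Optionally, this step can be scripted: `incus admin init --preseed` accepts the same answers as YAML on stdin. A minimal sketch mirroring the defaults above (the pool, bridge, and profile names match what the interactive run creates):
```BASH
# Non-interactive initialization; equivalent to accepting the answers above
cat <<'EOF' | incus admin init --preseed
storage_pools:
- name: default
  driver: dir
networks:
- name: incusbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: incusbr0
      type: nic
EOF
```
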
---

## Cluster Environment
### Servers
Hostname | Server NIC IP | Cluster NIC IP | OS | Data Disk | /etc/hosts
---- | ---- | ---- | ---- | ---- | ----
incus1 | eth0: 192.168.1.1 | 10.10.10.1 | Rocky 9 | /dev/sdb | 10.10.10.1 incus1
incus2 | eth0: 192.168.1.2 | 10.10.10.2 | Rocky 9 | /dev/sdb | 10.10.10.2 incus2
incus3 | eth0: 192.168.1.3 | 10.10.10.3 | Rocky 9 | /dev/sdb | 10.10.10.3 incus3

### OS Configuration
- [The steps on each server are identical to the standalone environment](#os-configuration)

### Install Incus
- [The steps on each server are identical to the standalone environment](#install-incus)

### Create the Bridge and LVM Volume Group
- **Run the following steps on every server**
- Create the bridge incusbr and attach the server NIC eth0 to it; **note that this operation may disconnect the server from the network**
```BASH
nmcli c add \
    type bridge stp no \
    ifname incusbr \
    con-name incusbr \
    autoconnect yes \
    ipv4.addresses ${eth0_ip}/24 \
    ipv4.gateway 192.168.1.254 \
    ipv4.method manual
# Replace ${eth0_ip} with the eth0 IP of the server in question

nmcli c add type bridge-slave con-name incusbr-eth0 ifname eth0 master incusbr
```

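- Once the bridge connection comes up, it should hold the server IP with eth0 enslaved to it; a quick check:
```BASH
ip -br addr show incusbr   # the bridge should carry the server IP
bridge link                # eth0 should be listed with "master incusbr"
```
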
- Create the LVM volume group incusvg on the data disk
```BASH
pvcreate /dev/sdb
vgcreate incusvg /dev/sdb
```

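- You can confirm the volume group exists before initializing the cluster:
```BASH
vgs incusvg   # should report the new volume group and its free space
```
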
### Create the Cluster
- **Run the following steps on incus1**
- Initialize incus
```BASH
incus admin init
```

- Answer the interactive prompts roughly as follows:
```
Would you like to use clustering? (yes/no) [default=no]: yes # enable clustering
What IP address or DNS name should be used to reach this server? [default=10.10.10.1]: # cluster IP
Are you joining an existing cluster? (yes/no) [default=no]: # we are creating a new cluster, not joining one
What member name should be used to identify this server in the cluster? [default=incus1]:
Do you want to configure a new local storage pool? (yes/no) [default=yes]: no # no local storage pool
Do you want to configure a new remote storage pool? (yes/no) [default=no]: # no remote storage pool
Would you like to use an existing bridge or host interface? (yes/no) [default=no]: # no network
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
```

- **In testing, the storage pool and a managed network had to be created manually; otherwise the other incus nodes fail to join the cluster later**
- Create the storage pool pool1
```BASH
incus storage create pool1 lvm source=incusvg
```

- Create the managed network incusbr99
```BASH
incus network create incusbr99
```

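- Both can be verified before the other nodes join:
```BASH
incus storage list   # pool1 should be listed with the lvm driver
incus network list   # incusbr99 should be listed as a managed network
```
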
### Join the Cluster
- **On incus2**, initialize incus
```BASH
incus admin init
```

- **On incus1**, generate a join token for incus2
```BASH
incus cluster add incus2
# Copy the token string printed here; it answers the join-token prompt on incus2
```

- **Back on incus2**, answer the interactive prompts roughly as follows:
```
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.10.10.2]:
Are you joining an existing cluster? (yes/no) [default=no]: yes # join the existing cluster
Please provide join token: xxxxxxxx # the token generated on incus1
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "lvm.thinpool_name" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Choose "lvm.vg_name" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Choose "source" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
```

- **On incus3**, initialize incus
```BASH
incus admin init
```

- **On incus1**, generate a join token for incus3
```BASH
incus cluster add incus3
# Copy the token string printed here; it answers the join-token prompt on incus3
```

- **Back on incus3**, answer the interactive prompts roughly as follows:
```
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=10.10.10.3]:
Are you joining an existing cluster? (yes/no) [default=no]: yes # join the existing cluster
Please provide join token: xxxxxxxx # the token generated on incus1
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "lvm.thinpool_name" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Choose "lvm.vg_name" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Choose "source" property for storage pool "pool1": incusvg # use the LVM volume group incusvg
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
```

- **The managed network incusbr99 created earlier is unused, but deleting it is not recommended; otherwise adding further incus nodes to this cluster will fail again**

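- From any member, confirm that all three nodes have joined:
```BASH
incus cluster list   # incus1, incus2, and incus3 should all show ONLINE
```
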
---

## Basic Usage
### LXC Containers
- Pull an Alpine LXC image
```BASH
incus image list tuna: alpine amd64 # list the alpine amd64 images available in the TUNA mirror
incus image copy tuna:alpine/3.21 local:
```

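- The copied image should now appear locally:
```BASH
incus image list local:   # the alpine/3.21 image should be listed
```
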
- Launch a system container named alpine-lxc
```BASH
# Standalone environment
incus launch local:alpine/3.21 alpine-lxc

# Cluster environment, placing the container on node incus2
incus launch local:alpine/3.21 alpine-lxc \
    --network incusbr --storage pool1 --target incus2
```

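- You can check the container's state and address (in a cluster, the LOCATION column shows which node it runs on):
```BASH
incus list alpine-lxc
```
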
- Enter the alpine-lxc container
```BASH
incus shell alpine-lxc

# In the standalone environment the network is managed by incus, so the container
# already has an IP and can reach the internet.
# In the cluster environment, the container also gets an IP if the server network
# runs a DHCP service; without DHCP, configure a temporary IP by hand:
ip a add 192.168.1.123/24 dev eth0
ping 192.168.1.254 # the gateway should normally be reachable
```

### QEMU Virtual Machines
- On the Windows client, install Xmanager and Xshell
- Connect to the incus server with Xshell
- Download the Rocky Linux 8 installation image: Rocky-8.10-x86_64-minimal.iso
- Create an ISO storage volume
```BASH
incus storage volume import pool1 \
    /root/Rocky-8.10-x86_64-minimal.iso \
    rocky8-iso-volume --type=iso
# pool1: the storage pool
# /root/Rocky-8.10-x86_64-minimal.iso: the local ISO image file
# rocky8-iso-volume: the name of the new ISO storage volume
```

- Create an empty virtual machine, then set the system disk size and boot priority
```BASH
incus create vm1 --empty --vm -d root,size=6GiB -s pool1
# vm1: the VM name
# root,size=6GiB: the VM's system disk device is named root, sized 6 GiB
# pool1: the storage pool

incus config device set vm1 root boot.priority=20
# boot.priority=20: boot priority; higher numbers boot first
```

- Attach the ISO storage volume to the VM and set its boot priority
```BASH
incus config device add vm1 iso-cd disk \
    pool=pool1 source=rocky8-iso-volume boot.priority=10
# vm1: the VM name
# iso-cd: the name of the VM's read-only ISO disk device
# pool1: the storage pool
# rocky8-iso-volume: the name of the ISO storage volume
# boot.priority=10: boot priority; higher numbers boot first
```

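- Before starting the VM, you can confirm both disk devices and their boot priorities; with these values the empty root disk is tried first, and the firmware falls back to the ISO until an OS is installed:
```BASH
incus config device show vm1   # should show root (boot.priority 20) and iso-cd (boot.priority 10)
```
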
- Start the VM; this opens a VGA console window via Xmanager on the Windows client
```BASH
incus start vm1 --console=vga
```

- Install Rocky Linux 8 in the window that pops up
- When the installation finishes (after clicking "Reboot"), the window closes automatically while the VM restarts
- Open the console of the running VM
```BASH
incus console vm1 --type=vga
```

- Once the OS is installed, the VM no longer needs the read-only ISO device, so it can be detached
```BASH
incus config device remove vm1 iso-cd
```
