---
title: "Installing Ceph on Rocky Linux"
date: 2019-10-30T11:09:30+08:00
lastmod: 2021-09-29T12:25:00+08:00
keywords: []
tags: ["ceph"]
categories: ["storage"]
---

## Environment

OS | Hostname | Public address | Cluster address | Raw data disks | Ceph version
---- | ---- | ---- | ---- | ---- | ----
Rocky Linux 8.4 | ceph41 | 10.0.4.41 | 192.168.4.41 | /dev/sdb, /dev/sdc | 15.2.14
Rocky Linux 8.4 | ceph42 | 10.0.4.42 | 192.168.4.42 | /dev/sdb, /dev/sdc | 15.2.14
Rocky Linux 8.4 | ceph43 | 10.0.4.43 | 192.168.4.43 | /dev/sdb, /dev/sdc | 15.2.14

## Disable the firewall and configure the hosts file
- Perform the following steps on all nodes
- Stop firewalld and disable SELinux
```bash
systemctl stop firewalld
systemctl disable firewalld
sed -i '/^SELINUX=/cSELINUX=disabled' /etc/selinux/config # takes effect after the OS is rebooted
```

- Configure hostname resolution for all nodes
```bash
echo "10.0.4.41 ceph41" >> /etc/hosts
echo "10.0.4.42 ceph42" >> /etc/hosts
echo "10.0.4.43 ceph43" >> /etc/hosts
```

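- Optional sanity check before moving on, assuming the nodes can already reach each other over the public network (SELinux still reports Enforcing until the reboot mentioned above)
```bash
systemctl is-active firewalld   # expect "inactive"
getenforce                      # "Enforcing" until reboot, "Disabled" afterwards
ping -c 1 ceph42                # resolves via /etc/hosts
ping -c 1 ceph43
```
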
## Configure yum repositories
- Perform the following steps on all nodes
- Move the default repo files into a backup directory
```bash
cd /etc/yum.repos.d
mkdir bak
mv Rocky-*.repo bak/
```

- Create a new system repo file /etc/yum.repos.d/rocky-nju.repo using the Nanjing University mirror, with the following content
```
[appstream]
name=Rocky Linux $releasever - AppStream
baseurl=https://mirrors.nju.edu.cn/$contentdir/$releasever/AppStream/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial

[baseos]
name=Rocky Linux $releasever - BaseOS
baseurl=https://mirrors.nju.edu.cn/$contentdir/$releasever/BaseOS/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
```

- Create an EPEL repo file /etc/yum.repos.d/epel-tsinghua.repo using the Tsinghua University mirror, with the following content
```
[epel]
name=Extra Packages for Enterprise Linux $releasever - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/$releasever/Everything/$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-$releasever
```

- Download RPM-GPG-KEY-EPEL-8
```bash
cd /etc/pki/rpm-gpg/
curl -LO https://mirrors.nju.edu.cn/epel/RPM-GPG-KEY-EPEL-8
```

- Create a Ceph repo file /etc/yum.repos.d/ceph-tsinghua.repo using the Tsinghua University mirror, with the following content
```
[ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-15.2.14/el8/$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-15.2.14/el8/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
```

- Import release.asc
```bash
rpm --import 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc'
```

- Rebuild the yum cache
```bash
dnf clean all
dnf makecache
```

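- Optional: confirm the new repos are all enabled before installing anything
```bash
dnf repolist | grep -Ei 'appstream|baseos|epel|ceph'
```
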
## Configure time synchronization
- Perform the following steps on all nodes
- Install chrony
```bash
dnf install chrony
```

- If there is no NTP server on the internal network, ceph41 can act as the time server for the cluster; edit /etc/chrony.conf on it
```
...
pool ntp.aliyun.com iburst
...
allow 10.0.4.0/24
...
local stratum 10
...
```

- Make ceph42 and ceph43 sync time from ceph41; edit /etc/chrony.conf on them
```
...
pool ceph41 iburst
...
```

- Start the chronyd service on all nodes and enable it at boot
```bash
systemctl start chronyd
systemctl enable chronyd
```

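- Optional: verify on ceph42 and ceph43 that they actually follow ceph41 (the `^*` prefix marks the currently selected source)
```bash
chronyc sources
chronyc tracking
```
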
## Install ceph
- Perform the following on all nodes
```bash
dnf install leveldb gdisk gperftools-libs python3-ceph-argparse nvme-cli
dnf install ceph
```

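- Optional: confirm every node ended up on the same release
```bash
ceph --version   # should report 15.2.14 (octopus)
```
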
- Create the ceph configuration file /etc/ceph/ceph.conf with the following content
```
[global]
fsid = aaaa0000-bbbb-1111-cccc-2222dddd3333
mon_initial_members = ceph41, ceph42, ceph43
mon_host = 10.0.4.41, 10.0.4.42, 10.0.4.43
public_network = 10.0.4.0/24
cluster_network = 192.168.4.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2 # the official default of 3 is recommended
osd_pool_default_min_size = 2
```

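- The fsid above is only a placeholder; generate a fresh UUID and use the same value here and in the monmaptool command below
```bash
uuidgen
```
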
## Deploy mon
- Perform the following steps on ceph41
- Create the keyrings and the initial monmap, lifted straight from the official docs: the mon. key, the client.admin key, and the OSD bootstrap key are generated, then the admin and bootstrap keys are merged into the mon keyring
```bash
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
chown ceph:ceph /tmp/ceph.mon.keyring
monmaptool --create --add ceph41 10.0.4.41 --add ceph42 10.0.4.42 --add ceph43 10.0.4.43 --fsid aaaa0000-bbbb-1111-cccc-2222dddd3333 /tmp/monmap
```

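- Optional: list the merged keyring to see the three identities and their caps
```bash
ceph-authtool /tmp/ceph.mon.keyring --list
```
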
- Initialize the mon data directory
```bash
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph41
sudo -u ceph ceph-mon --mkfs -i ceph41 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
```

- Start the mon service and enable it at boot
```bash
systemctl start ceph-mon@ceph41
systemctl enable ceph-mon@ceph41
```

- Copy the keyrings and monmap to ceph42 and ceph43
```bash
scp /tmp/{ceph.mon.keyring,monmap} ceph42:/tmp/
scp /etc/ceph/ceph.client.admin.keyring ceph42:/etc/ceph/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring ceph42:/var/lib/ceph/bootstrap-osd/

scp /tmp/{ceph.mon.keyring,monmap} ceph43:/tmp/
scp /etc/ceph/ceph.client.admin.keyring ceph43:/etc/ceph/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring ceph43:/var/lib/ceph/bootstrap-osd/
```

- Perform the following steps on **ceph42**
- Initialize the mon data directory
```bash
chown ceph:ceph /tmp/ceph.mon.keyring
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph42
sudo -u ceph ceph-mon --mkfs -i ceph42 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
```

- Start the mon service and enable it at boot
```bash
systemctl start ceph-mon@ceph42
systemctl enable ceph-mon@ceph42
```

- Perform the following steps on **ceph43**
- Initialize the mon data directory
```bash
chown ceph:ceph /tmp/ceph.mon.keyring
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph43
sudo -u ceph ceph-mon --mkfs -i ceph43 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
```

- Start the mon service and enable it at boot
```bash
systemctl start ceph-mon@ceph43
systemctl enable ceph-mon@ceph43
```

- Perform the following on any node
- Additional mon configuration
```bash
# enable msgr2, which listens on tcp port 3300
ceph mon enable-msgr2

# disable auth_allow_insecure_global_id_reclaim
ceph config set mon auth_allow_insecure_global_id_reclaim false
```

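- Optional: confirm the three mons have formed a quorum and that each one now advertises both a v2 (3300) and a v1 (6789) address
```bash
ceph mon dump
ceph quorum_status -f json-pretty
```
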
### Check the cluster status
- Perform the following on any node
```bash
ceph -s
```

- The cluster status looks like this
```
  cluster:
    id:     aaaa0000-bbbb-1111-cccc-2222dddd3333
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph41,ceph42,ceph43 (age ...)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```

## Deploy mgr
- Perform the following steps on ceph41
- Create the mgr keyring, again following the official docs
```bash
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-ceph41
ceph auth get-or-create mgr.ceph41 mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-ceph41/keyring
chown ceph:ceph /var/lib/ceph/mgr/ceph-ceph41/keyring
```

- Start the mgr service and enable it at boot
```bash
systemctl start ceph-mgr@ceph41
systemctl enable ceph-mgr@ceph41
```

- Perform the following steps on ceph42
- Create the mgr keyring
```bash
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-ceph42
ceph auth get-or-create mgr.ceph42 mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-ceph42/keyring
chown ceph:ceph /var/lib/ceph/mgr/ceph-ceph42/keyring
```

- Start the mgr service and enable it at boot
```bash
systemctl start ceph-mgr@ceph42
systemctl enable ceph-mgr@ceph42
```

- Perform the following steps on ceph43
- Create the mgr keyring
```bash
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-ceph43
ceph auth get-or-create mgr.ceph43 mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-ceph43/keyring
chown ceph:ceph /var/lib/ceph/mgr/ceph-ceph43/keyring
```

- Start the mgr service and enable it at boot
```bash
systemctl start ceph-mgr@ceph43
systemctl enable ceph-mgr@ceph43
```

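- Optional: a compact view of the mgr role assignment, without reading the full status output
```bash
ceph mgr stat   # prints the active mgr name and whether it is available
```
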
### Check the cluster status
- Perform the following on any node
```bash
ceph -s
```

- The cluster status looks like this; the warning is expected since there are no OSDs yet
```
  cluster:
    id:     aaaa0000-bbbb-1111-cccc-2222dddd3333
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 2

  services:
    mon: 3 daemons, quorum ceph41,ceph42,ceph43 (age ...)
    mgr: ceph41(active, since ...), standbys: ceph42, ceph43
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```

## Deploy osd
### Logical-volume osd
- Simple to operate; recommended
- Create and start the logical-volume osds in one step (run this on every node)
```bash
ceph-volume lvm create --bluestore --data /dev/sdb
ceph-volume lvm create --bluestore --data /dev/sdc
```

- Once the previous step succeeds, every ceph-osd service is already running and enabled at boot

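- Optional: confirm the osds registered and came up
```bash
ceph osd tree          # all 6 osds should show as "up"
ceph-volume lvm list   # shows the logical volume behind each osd
```
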
### Raw-device osd
- More work to operate; not recommended
- Perform the following steps on all nodes
- Prepare the osds
```bash
ceph-volume raw prepare --bluestore --data /dev/sdb
ceph-volume raw prepare --bluestore --data /dev/sdc
```

- Look up the osd ids
```bash
ceph-volume raw list
```

- Activate the osds. **Osds created from raw devices do not support systemd activation, so starting at boot has to be configured separately**
```bash
ceph-volume raw activate --device /dev/sdb --no-systemd
ceph-volume raw activate --device /dev/sdc --no-systemd
```

- Start the osd services on ceph41
```bash
systemctl start ceph-osd@0
systemctl start ceph-osd@1
```

- Configure starting at boot; systemd's rc-local unit runs /etc/rc.d/rc.local at startup once the file is executable
```bash
chmod 0755 /etc/rc.d/rc.local
echo 'ceph-volume raw activate --device /dev/sdb --no-systemd
ceph-volume raw activate --device /dev/sdc --no-systemd
systemctl start ceph-osd@0
systemctl start ceph-osd@1
' >> /etc/rc.d/rc.local
```

- Start the osd services on ceph42
```bash
systemctl start ceph-osd@2
systemctl start ceph-osd@3
```

- Configure starting at boot
```bash
chmod 0755 /etc/rc.d/rc.local
echo 'ceph-volume raw activate --device /dev/sdb --no-systemd
ceph-volume raw activate --device /dev/sdc --no-systemd
systemctl start ceph-osd@2
systemctl start ceph-osd@3
' >> /etc/rc.d/rc.local
```

- Start the osd services on ceph43
```bash
systemctl start ceph-osd@4
systemctl start ceph-osd@5
```

- Configure starting at boot
```bash
chmod 0755 /etc/rc.d/rc.local
echo 'ceph-volume raw activate --device /dev/sdb --no-systemd
ceph-volume raw activate --device /dev/sdc --no-systemd
systemctl start ceph-osd@4
systemctl start ceph-osd@5
' >> /etc/rc.d/rc.local
```

### Check the cluster status
- Perform the following on any node
```bash
ceph -s
```

- The cluster status looks like this
```
  cluster:
    id:     aaaa0000-bbbb-1111-cccc-2222dddd3333
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph41,ceph42,ceph43 (age ...)
    mgr: ceph41(active, since ...), standbys: ceph42, ceph43
    osd: 6 osds: 6 up (since ...), 6 in (since ...)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   ... GiB used, ... GiB / ... GiB avail
    pgs:     1 active+clean
```

## Deploy mds
- Only cephfs uses mds
- Perform the following steps on ceph41
- Create the mds keyring, following the official docs
```bash
sudo -u ceph mkdir -p /var/lib/ceph/mds/ceph-ceph41
sudo -u ceph ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-ceph41/keyring --gen-key -n mds.ceph41
ceph auth add mds.ceph41 osd "allow rwx" mds "allow *" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-ceph41/keyring
```

- Start the mds service and enable it at boot
```bash
systemctl start ceph-mds@ceph41
systemctl enable ceph-mds@ceph41
```

- Perform the following steps on ceph42
- Create the mds keyring
```bash
sudo -u ceph mkdir -p /var/lib/ceph/mds/ceph-ceph42
sudo -u ceph ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-ceph42/keyring --gen-key -n mds.ceph42
ceph auth add mds.ceph42 osd "allow rwx" mds "allow *" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-ceph42/keyring
```

- Start the mds service and enable it at boot
```bash
systemctl start ceph-mds@ceph42
systemctl enable ceph-mds@ceph42
```

- Perform the following steps on ceph43
- Create the mds keyring
```bash
sudo -u ceph mkdir -p /var/lib/ceph/mds/ceph-ceph43
sudo -u ceph ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-ceph43/keyring --gen-key -n mds.ceph43
ceph auth add mds.ceph43 osd "allow rwx" mds "allow *" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-ceph43/keyring
```

- Start the mds service and enable it at boot
```bash
systemctl start ceph-mds@ceph43
systemctl enable ceph-mds@ceph43
```

### Check the cluster status
- Perform the following on any node
```bash
ceph -s
```

- The cluster status looks like this (the mds line shows an active rank once the cephfs filesystem from the next section has been created; until then all three mds daemons are standby)
```
  cluster:
    id:     aaaa0000-bbbb-1111-cccc-2222dddd3333
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph41,ceph42,ceph43 (age ...)
    mgr: ceph41(active, since ...), standbys: ceph42, ceph43
    mds: cephfs:1 {0=ceph43=up:active} 2 up:standby
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   ... GiB used, ... GiB / ... GiB avail
    pgs:     1 active+clean
```

## Basic usage
### rbd
- Create an rbd pool
```bash
ceph osd pool create rbd 128 128
ceph osd pool application enable rbd rbd
```

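- Optional: create a throwaway image in the pool and map it on a ceph node; the image name is arbitrary, and mapping assumes the kernel rbd module supports the image's features
```bash
rbd create test --size 1G   # if the kernel rejects default features, add: --image-feature layering
rbd ls
rbd map test                # prints a device such as /dev/rbd0
rbd unmap test
```
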
### cephfs
- Create the cephfs pools
```bash
# metadata pool: the pg count does not need to be large; use 3 replicas
ceph osd pool create cephfs_metadata 8 8
ceph osd pool set cephfs_metadata size 3

# data pool: size the pg count according to the expected data volume
ceph osd pool create cephfs_data 128 128
```

- Create the cephfs filesystem
```bash
ceph fs new cephfs cephfs_metadata cephfs_data
```

- Check the mds status
```bash
ceph mds stat
```

- Read the admin user's key on any ceph node
```bash
cat /etc/ceph/ceph.client.admin.keyring | grep key | awk '{print $2}'
```

- Mount cephfs on another server
```bash
mount -t ceph 10.0.4.41:6789,10.0.4.42:6789,10.0.4.43:6789:/ /mnt -o name=admin,secret={admin key}
```
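
- Optional: to make the mount survive reboots, a secret file plus an fstab entry works; the paths here are illustrative
```bash
mkdir -p /etc/ceph
echo '{admin key}' > /etc/ceph/admin.secret   # replace {admin key} with the key read above
chmod 600 /etc/ceph/admin.secret
echo '10.0.4.41:6789,10.0.4.42:6789,10.0.4.43:6789:/ /mnt ceph name=admin,secretfile=/etc/ceph/admin.secret,_netdev 0 0' >> /etc/fstab
```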