Containerizing a Ceph Cluster with the Home-grown Container Engine iSula
I. Environment Preparation
1. Virtual machine preparation
Three openEuler virtual machines are used to deploy the Ceph cluster. Hardware: 2 vCPUs and 4 GB of RAM each; in addition, each machine needs at least three attached disks (5 GB each).
192.168.132.135 ceph01
192.168.132.136 ceph02
192.168.132.137 ceph03
2. Run on all three Ceph nodes
(1) Disable the firewall
systemctl disable --now firewalld
(2) Disable SELinux
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
(3) Add hostname-to-IP mappings:
vim /etc/hosts
192.168.132.135 ceph01
192.168.132.136 ceph02
192.168.132.137 ceph03
(4) Set the hostnames:
# first host
hostnamectl set-hostname ceph01
# second host
hostnamectl set-hostname ceph02
# third host
hostnamectl set-hostname ceph03
(5) Synchronize time over the network and set the time zone
systemctl restart chronyd.service && systemctl enable chronyd.service
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
(6) Set up passwordless SSH from ceph01 to the other nodes
ssh-copy-id root@ceph02
ssh-copy-id root@ceph03
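The two ssh-copy-id commands above assume ceph01 already has a key pair. A minimal sketch that generates one first (no passphrase, acceptable only for a lab setup) and pushes it to both nodes:

```shell
# Generate an RSA key pair on ceph01 if one does not exist yet
[ -f /root/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

# Push the public key to the other nodes (each prompts for the root password once)
for node in ceph02 ceph03; do
    ssh-copy-id root@"$node"
done
```

Afterwards, `ssh ceph02 hostname` should return without a password prompt.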
(7) Install iSulad on every VM with yum
yum install -y iSulad
systemctl start isulad
systemctl status isulad
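The commands above start isulad for the current boot only; enabling the unit ensures the container daemon comes back after a reboot (run on every node):

```shell
# Make isulad start automatically at boot
systemctl enable isulad
```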
[root@ceph01 ~]# isula version
Client:
Version: 2.0.18
Git commit: cbbf3711bc84e5f3ef3147b4e15d85888f33cb39
Built: 2024-03-19T12:14:14.058337789+00:00
Server:
Version: 2.0.18
Git commit: cbbf3711bc84e5f3ef3147b4e15d85888f33cb39
Built: 2024-03-19T12:14:14.058337789+00:00
OCI config:
Version: 1.0.1
Default file: /etc/default/isulad/config.json
(8) Edit the isulad configuration file
[root@ceph01 ~]# cat /etc/isulad/daemon.json
{
    "group": "isula",
    "default-runtime": "lcr",
    "graph": "/var/lib/isulad",
    "state": "/var/run/isulad",
    "engine": "lcr",
    "log-level": "ERROR",
    "pidfile": "/var/run/isulad.pid",
    "log-opts": {
        "log-file-mode": "0600",
        "log-path": "/var/lib/isulad",
        "max-file": "1",
        "max-size": "30KB"
    },
    "log-driver": "stdout",
    "container-log": {
        "driver": "json-file"
    },
    "hook-spec": "/etc/default/isulad/hooks/default.json",
    "start-timeout": "2m",
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ],
    "registry-mirrors": [
        "https://gcctk8ld.mirror.aliyuncs.com"
    ],
    "insecure-registries": [],
    "pod-sandbox-image": "",
    "native.umask": "normal",
    "network-plugin": "",
    "cni-bin-dir": "",
    "cni-conf-dir": "",
    "image-layer-check": false,
    "use-decrypted-key": true,
    "insecure-skip-verify-enforce": false,
    "cri-runtimes": {
        "kata": "io.containerd.kata.v2"
    },
    "tcp-address": "0.0.0.0:2375"
}
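Edits to daemon.json (here, the registry-mirrors entry) take effect only after the daemon is restarted:

```shell
# Reload isulad so the edited daemon.json takes effect, then confirm it is running
systemctl restart isulad
systemctl status isulad --no-pager
```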
II. Deployment
1. Pull the image
Pull the image on every VM; ceph01 is shown as the example.
[root@ceph01 ~]# isula pull ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64
Image "ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64" pulling
Image "9a4fcaf4e9d7fcd9b9ad744b247a4d0d30a2d377d610fa8e4e42ae524d8e6b51" pulled
[root@ceph01 ~]# isula images
REPOSITORY TAG IMAGE ID CREATED SIZE
ceph/daemon v3.0.5-stable-3.0-luminous-centos-7-x86_64 9a4fcaf4e9d7 2018-05-11 08:38:53 729.660MB
2. Start the mon on the primary node
Run on ceph01 first; note that MON_IP must be set to the machine's own IP address.
[root@ceph01 ~]# isula run -d --net=host --name=mon -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -e MON_IP=192.168.132.135 -e CEPH_PUBLIC_NETWORK=192.168.132.0/24 ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64 mon
26a2c3725b8e76592dbfff66facd4fc3998d7270257da1e41f2cb5baa721bfc5
[root@ceph01 ~]# isula ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
26a2c3725b8e ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64 "/entrypoint.sh mon" 5 seconds ago Up 5 seconds mon
[root@ceph01 ~]# isula exec mon ceph -s
  cluster:
    id:     7f2fab25-e03b-4712-9728-f9a4b19ec69c
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum compute
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:
3. Copy the configuration and data files to the other two nodes
scp -r /etc/ceph ceph02:/etc/
scp -r /etc/ceph ceph03:/etc/
scp -r /var/lib/ceph ceph02:/var/lib/
scp -r /var/lib/ceph ceph03:/var/lib/
4. Start the mon on the other nodes, adjusting the IP for each
[root@ceph02 ~]# isula run -d --net=host --name=mon -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -e MON_IP=192.168.132.136 -e CEPH_PUBLIC_NETWORK=192.168.132.0/24 ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64 mon
[root@ceph03 ~]# isula run -d --net=host --name=mon -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -e MON_IP=192.168.132.137 -e CEPH_PUBLIC_NETWORK=192.168.132.0/24 ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64 mon
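Once all three mons are up, the cluster status should report three daemons in quorum; this can be checked from any node that runs a mon container:

```shell
# Expect "mon: 3 daemons" under services once the mons have joined quorum
isula exec mon ceph -s
# Detailed quorum membership, in JSON
isula exec mon ceph quorum_status --format json-pretty
```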
5. Mount the OSD disk
Note: check carefully which disk is the spare one on your machine; here it is /dev/sdc.
[root@ceph01 ~]# fdisk -l
Disk /dev/sdc: 5 GiB, 5368709120 bytes, 10485760 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@ceph01 ~]# mkfs.xfs /dev/sdc
meta-data=/dev/sdc isize=512 agcount=4, agsize=327680 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=0 inobtcount=0
data = bsize=4096 blocks=1310720, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@ceph01 ~]# mkdir /osd0
[root@ceph01 ~]# mount /dev/sdc /osd0
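The mount above does not survive a reboot. A sketch that records it in /etc/fstab (the device name is this setup's assumption; adjust it to your disk):

```shell
# Persist the OSD mount across reboots; adjust the device to match your system
disk=/dev/sdc
mnt=/osd0
# Append an fstab entry only if one for this mount point is not already present
grep -qs " $mnt " /etc/fstab || echo "$disk $mnt xfs defaults 0 0" >> /etc/fstab
```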
6. Start the OSD service
[root@ceph01 ~]# isula run -d --net=host --name=osd1 -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -v /dev:/dev -v /osd0:/var/lib/ceph/osd --privileged=true ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64 osd_directory
cc472c45d76037d3b904de830eda2354d3d96fec9c69324869dee56597023556
[root@ceph01 ~]# isula ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cc472c45d760 ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64 "/entrypoint.sh os..." 4 seconds ago Up 4 seconds osd1
fba0bacf7517 ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64 "/entrypoint.sh mon" 2 minutes ago Up 2 minutes mon
7. Start OSDs on the other nodes by repeating steps 5 and 6 above
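Steps 5 and 6 can be condensed into one run sheet per remaining node. The disk and directory names below are assumptions from this walkthrough, and mkfs.xfs destroys everything on the disk, so adjust before running:

```shell
# Run on ceph02 and ceph03; this DESTROYS all data on $disk
disk=/dev/sdc    # the spare disk on this node
dir=/osd0        # host directory backing the OSD

mkfs.xfs "$disk"      # format the spare disk with XFS
mkdir -p "$dir"
mount "$disk" "$dir"

# Start the containerized OSD in directory mode
isula run -d --net=host --name=osd1 \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  -v /dev:/dev \
  -v "$dir":/var/lib/ceph/osd \
  --privileged=true \
  ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64 osd_directory
```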
8. Start the mgr on ceph01
[root@ceph01 ~]# isula run -d --net=host -v /etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64 mgr
14f678b91af65acbb3d2e1fa47e93e38c7fc98930d2d03b02c48fd9ddc195b4d
[root@ceph01 ~]# isula ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
80556d315be5 ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64 "/entrypoint.sh mgr" 3 seconds ago Up 3 seconds 80556d315be5160311c5fac8ec48ad5e76db679013a4b29d224fe3bfd8a3f5aa
cc472c45d760 ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64 "/entrypoint.sh os..." About a minute ago Up About a minute osd1
fba0bacf7517 ceph/daemon:v3.0.5-stable-3.0-luminous-centos-7-x86_64 "/entrypoint.sh mon" 4 minutes ago Up 4 minutes mon
[root@ceph01 ~]# isula exec mon ceph -s
  cluster:
    id:     9fbaf183-ece3-4568-801f-cb3b2a63f477
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum compute
    mgr: compute(active)
    osd: 1 osds: 1 up, 1 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   242 MB used, 4867 MB / 5110 MB avail
    pgs:
9. Create a pool in Ceph
isula exec mon ceph osd pool create rbd 64
Check the OSD tree:
isula exec mon ceph osd tree
Check the cluster status:
isula exec mon ceph -s
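On Luminous, a newly created pool leaves the cluster in HEALTH_WARN until an application tag is assigned to it. If `ceph -s` reports "application not enabled on pool", tagging the pool for RBD clears the warning:

```shell
# Tag the pool for RBD use so Luminous stops warning about it
isula exec mon ceph osd pool application enable rbd rbd
# Confirm the placement-group count requested at creation
isula exec mon ceph osd pool get rbd pg_num
```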