Ceph Distributed Storage in Practice

1. Lab Preparation

Host environment: macOS, Apple M1 (ARM), 16 GB RAM

Virtualization platform: Parallels Desktop 17

Guest OS: CentOS Linux release 7.9.2009 (AltArch)

Kernel: Linux version 5.11.12-300.el7.aarch64

1.1 Topology

Host     IP             Role
ceph1    10.211.55.51   admin, osd, mon, mgr
ceph2    10.211.55.52   osd, mds
ceph3    10.211.55.53   osd, mds
client   10.211.55.54   client

Add one extra 8 GB disk to each of ceph1, ceph2 and ceph3 for OSD storage.

1.2 Configure hostnames and passwordless SSH login

hostnamectl set-hostname ceph1

hostnamectl set-hostname ceph2

hostnamectl set-hostname ceph3

hostnamectl set-hostname client
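
Before the keys can be merged below, each node needs to resolve the other hostnames and have its own SSH key pair, and the public keys of ceph2, ceph3 and client have to be copied to ceph1 (saved here as 2.pub, 3.pub and 4.pub). A minimal sketch of that preparation, using the IPs from the topology above:

# on every node: add name resolution and generate a key pair
cat >> /etc/hosts <<EOF
10.211.55.51 ceph1
10.211.55.52 ceph2
10.211.55.53 ceph3
10.211.55.54 client
EOF
ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa

# on ceph2 / ceph3 / client: copy the public key to ceph1 (as 2.pub, 3.pub, 4.pub respectively)
scp /root/.ssh/id_rsa.pub root@ceph1:/root/.ssh/2.pub

ceph1 then merges all public keys into authorized_keys and pushes the file back out to every node: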

[root@ceph1 ~]# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
[root@ceph1 ~]# cat /root/.ssh/2.pub >> /root/.ssh/authorized_keys
[root@ceph1 ~]# cat /root/.ssh/3.pub >> /root/.ssh/authorized_keys
[root@ceph1 ~]# cat /root/.ssh/4.pub >> /root/.ssh/authorized_keys
[root@ceph1 ~]# scp /root/.ssh/authorized_keys root@ceph2:/root/.ssh/authorized_keys
The authenticity of host 'ceph2 (10.211.55.52)' can't be established.
ECDSA key fingerprint is SHA256:cSU3ZIBsZtRjKZUYhntWnQ8cDg2Ogd0D04S5Hf1cQ5o.
ECDSA key fingerprint is MD5:cd:06:70:58:af:bb:82:21:b3:3e:ed:71:71:50:c8:ad.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph2' (ECDSA) to the list of known hosts.
root@ceph2's password:
authorized_keys                               100% 1569     1.1MB/s   00:00
[root@ceph1 ~]# scp /root/.ssh/authorized_keys root@ceph3:/root/.ssh/authorized_keys
The authenticity of host 'ceph3 (10.211.55.53)' can't be established.
ECDSA key fingerprint is SHA256:cSU3ZIBsZtRjKZUYhntWnQ8cDg2Ogd0D04S5Hf1cQ5o.
ECDSA key fingerprint is MD5:cd:06:70:58:af:bb:82:21:b3:3e:ed:71:71:50:c8:ad.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph3' (ECDSA) to the list of known hosts.
root@ceph3's password:
authorized_keys                               100% 1569     1.0MB/s   00:00
[root@ceph1 ~]# scp /root/.ssh/authorized_keys root@client:/root/.ssh/authorized_keys
The authenticity of host 'client (10.211.55.54)' can't be established.
ECDSA key fingerprint is SHA256:cSU3ZIBsZtRjKZUYhntWnQ8cDg2Ogd0D04S5Hf1cQ5o.
ECDSA key fingerprint is MD5:cd:06:70:58:af:bb:82:21:b3:3e:ed:71:71:50:c8:ad.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'client' (ECDSA) to the list of known hosts.
root@client's password:
authorized_keys                               100% 1569     1.1MB/s   00:00
[root@ceph1 ~]# ssh root@client
Last login: Wed Jul 27 22:39:03 2022 from 10.211.55.2
[root@client ~]# exit
登出
Connection to client closed.
[root@ceph1 ~]#

 

 

1.3 Synchronize time and disable the firewall and SELinux

systemctl enable chronyd && timedatectl set-ntp true && chronyc -a makestep
systemctl stop firewalld && systemctl disable firewalld 
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
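
The sed command only takes effect after a reboot; to also disable SELinux immediately in the running system (a common companion step, not shown in the original transcript):

setenforce 0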

1.4 Configure yum repositories

1.4.1 Configure the Aliyun mirror

wget http://mirrors.aliyun.com/repo/Centos-altarch-7.repo -O /etc/yum.repos.d/CentOS-Base.repo
yum clean all && yum makecache

1.4.2 Configure the Ceph repository

vim /etc/yum.repos.d/ceph.repo

[ceph]
name=ceph
baseurl=https://mirrors.163.com/ceph/rpm-luminous/el7/aarch64/
gpgcheck=0
[ceph-noarch]
name=ceph-noarch
baseurl=https://mirrors.163.com/ceph/rpm-luminous/el7/noarch/
gpgcheck=0

1.4.3 Copy the repo file to the other three hosts with scp

[root@ceph1 ~]# scp /etc/yum.repos.d/ceph.repo root@ceph2:/etc/yum.repos.d/ceph.repo
ceph.repo                                     100%  195   279.4KB/s   00:00
[root@ceph1 ~]# scp /etc/yum.repos.d/ceph.repo root@ceph3:/etc/yum.repos.d/ceph.repo
ceph.repo                                     100%  195   102.6KB/s   00:00
[root@ceph1 ~]# scp /etc/yum.repos.d/ceph.repo root@client:/etc/yum.repos.d/ceph.repo
ceph.repo                                     100%  195   136.5KB/s   00:00

1.4.4 Refresh the yum cache

yum clean all && yum makecache

 

1.5 Install epel-release (all nodes)

yum -y install epel-release yum-plugin-priorities yum-utils ntpdate

1.6 Install the Ceph packages (all nodes)

Ceph provides ceph-deploy, a tool written in Python that greatly simplifies configuring a Ceph cluster. Install the packages on ceph1, ceph2 and ceph3: ceph-deploy is the cluster deployment tool, the remaining packages are its dependencies.

yum install -y ceph-deploy ceph ceph-radosgw snappy leveldb gdisk python-argparse gperftools-libs yum-plugin-priorities yum-utils ntpdate

 

2. Deploy services on the admin node (ceph1)

2.1 Create a new cluster

mon can also be deployed on ceph2 and ceph3 at the same time for high availability; a production environment should run at least three independent mons.
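
For reference, a multi-mon bootstrap simply lists all intended monitor hosts in one command, roughly like this (not used in this single-mon walkthrough):

ceph-deploy new ceph1 ceph2 ceph3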

Work in the /etc/ceph/ directory:

[root@ceph1 ceph]# ceph-deploy new ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new ceph1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7fa097ced8>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa08b6758>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph1']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO  ] Running command: /usr/sbin/ip link show
[ceph1][INFO  ] Running command: /usr/sbin/ip addr show
[ceph1][DEBUG ] IP addresses found: [u'10.211.55.51', u'fdb2:2c26:f4e4:0:21c:42ff:fee3:fe85']
[ceph_deploy.new][DEBUG ] Resolving host ceph1
[ceph_deploy.new][DEBUG ] Monitor ceph1 at 10.211.55.51
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.211.55.51']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

At this point three files have been generated in the directory: ceph.conf (the cluster configuration file, in which various parameters can be tuned), ceph.mon.keyring (the monitor keyring), and the ceph-deploy log.

Note: once the OSD daemons have been created and are in use, configuration changes must be applied with the command-line tools; editing the configuration file alone has no effect, so plan the tuning parameters ahead of time.

[root@ceph1 ceph]# ls -al
总用量 28
drwxr-xr-x   2 root root   89 7月  27 23:20 .
drwxr-xr-x. 85 root root 8192 7月  27 23:15 ..
-rw-r--r--   1 root root  195 7月  27 23:20 ceph.conf
-rw-r--r--   1 root root 2950 7月  27 23:20 ceph-deploy-ceph.log
-rw-------   1 root root   73 7月  27 23:20 ceph.mon.keyring
-rw-r--r--   1 root root   92 5月  16 2020 rbdmap
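
If a setting does need to change after the OSDs are already running, it has to be injected at runtime rather than edited in the file; a minimal example (the option name is only illustrative):

ceph tell osd.* injectargs '--osd_max_backfills 2'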

2.2 Change the replica count

Set the default replica count to 2 in the configuration file, so that the cluster can reach the active+clean state with only two OSDs.

[root@ceph1 ceph]# vim ceph.conf

[global]
fsid = 375b0966-62de-4388-9613-90c72846a466
mon_initial_members = ceph1
mon_host = 10.211.55.51
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# append the following line at the end
osd_pool_default_size=2
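
Note that osd_pool_default_size only applies to pools created afterwards; for a pool that already exists, the replica count is changed at runtime with something like:

ceph osd pool set <pool-name> size 2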

2.3 Install the Ceph monitor

[root@ceph1 ceph]# ceph-deploy mon create ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create ceph1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f831d3320>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph1']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7f83257500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph1
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph1 ...
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.9.2009 AltArch
[ceph1][DEBUG ] determining if provided host has same hostname in remote
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] deploying mon to ceph1
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] remote hostname: ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][DEBUG ] create the mon path if it does not exist
[ceph1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph1/done
[ceph1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph1/done
[ceph1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph1.mon.keyring
[ceph1][DEBUG ] create the monitor keyring file
[ceph1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i ceph1 --keyring /var/lib/ceph/tmp/ceph-ceph1.mon.keyring --setuser 167 --setgroup 167
[ceph1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph1.mon.keyring
[ceph1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph1][DEBUG ] create the init path if it does not exist
[ceph1][INFO  ] Running command: systemctl enable ceph.target
[ceph1][INFO  ] Running command: systemctl enable ceph-mon@ceph1
[ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph1.service to /usr/lib/systemd/system/ceph-mon@.service.
[ceph1][INFO  ] Running command: systemctl start ceph-mon@ceph1
[ceph1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status
[ceph1][DEBUG ] ********************************************************************************
[ceph1][DEBUG ] status for monitor: mon.ceph1
[ceph1][DEBUG ] {
[ceph1][DEBUG ]   "election_epoch": 3,
[ceph1][DEBUG ]   "extra_probe_peers": [],
[ceph1][DEBUG ]   "feature_map": {
[ceph1][DEBUG ]     "mon": {
[ceph1][DEBUG ]       "group": {
[ceph1][DEBUG ]         "features": "0x3ffddff8eeacfffb",
[ceph1][DEBUG ]         "num": 1,
[ceph1][DEBUG ]         "release": "luminous"
[ceph1][DEBUG ]       }
[ceph1][DEBUG ]     }
[ceph1][DEBUG ]   },
[ceph1][DEBUG ]   "features": {
[ceph1][DEBUG ]     "quorum_con": "4611087853746454523",
[ceph1][DEBUG ]     "quorum_mon": [
[ceph1][DEBUG ]       "kraken",
[ceph1][DEBUG ]       "luminous"
[ceph1][DEBUG ]     ],
[ceph1][DEBUG ]     "required_con": "153140804152475648",
[ceph1][DEBUG ]     "required_mon": [
[ceph1][DEBUG ]       "kraken",
[ceph1][DEBUG ]       "luminous"
[ceph1][DEBUG ]     ]
[ceph1][DEBUG ]   },
[ceph1][DEBUG ]   "monmap": {
[ceph1][DEBUG ]     "created": "2022-07-27 23:28:41.589973",
[ceph1][DEBUG ]     "epoch": 1,
[ceph1][DEBUG ]     "features": {
[ceph1][DEBUG ]       "optional": [],
[ceph1][DEBUG ]       "persistent": [
[ceph1][DEBUG ]         "kraken",
[ceph1][DEBUG ]         "luminous"
[ceph1][DEBUG ]       ]
[ceph1][DEBUG ]     },
[ceph1][DEBUG ]     "fsid": "375b0966-62de-4388-9613-90c72846a466",
[ceph1][DEBUG ]     "modified": "2022-07-27 23:28:41.589973",
[ceph1][DEBUG ]     "mons": [
[ceph1][DEBUG ]       {
[ceph1][DEBUG ]         "addr": "10.211.55.51:6789/0",
[ceph1][DEBUG ]         "name": "ceph1",
[ceph1][DEBUG ]         "public_addr": "10.211.55.51:6789/0",
[ceph1][DEBUG ]         "rank": 0
[ceph1][DEBUG ]       }
[ceph1][DEBUG ]     ]
[ceph1][DEBUG ]   },
[ceph1][DEBUG ]   "name": "ceph1",
[ceph1][DEBUG ]   "outside_quorum": [],
[ceph1][DEBUG ]   "quorum": [
[ceph1][DEBUG ]     0
[ceph1][DEBUG ]   ],
[ceph1][DEBUG ]   "rank": 0,
[ceph1][DEBUG ]   "state": "leader",
[ceph1][DEBUG ]   "sync_provider": []
[ceph1][DEBUG ] }
[ceph1][DEBUG ] ********************************************************************************
[ceph1][INFO  ] monitor: mon.ceph1 is running
[ceph1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status

2.4 Gather the node keyring files

Collect the Ceph cluster keyring files:

[root@ceph1 ceph]# ceph-deploy  gatherkeys ceph1
[root@ceph1 ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.client.admin.keyring   ceph.mon.keyring
ceph.bootstrap-osd.keyring  ceph.conf                   rbdmap
[root@ceph1 ceph]# cat ceph.client.admin.keyring
[client.admin]
	key = AQAkW+FiJ4uZKRAAZ6WLy8Abp0zFq9wXKG9bCA==

This is the admin key used to connect to the Ceph cluster.

3. Deploy the OSD service

In Ceph 12 (Luminous) the OSD provisioning commands differ from earlier releases: use the newly added disk directly, do not partition it.

3.1 Let Ceph prepare the disks automatically

Use the following command to wipe the partition table on each node's data disk so that it can be used by Ceph:

[root@ceph1 ceph]# ceph-deploy disk zap ceph1 /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph1 /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fb3ea5758>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ceph1
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7fb3f13a28>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdb']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph1
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.9.2009 AltArch
[ceph1][DEBUG ] zeroing last few blocks of device
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdb
[ceph1][DEBUG ] --> Zapping: /dev/sdb
[ceph1][DEBUG ] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph1][DEBUG ] Running command: wipefs --all /dev/sdb
[ceph1][DEBUG ] Running command: dd if=/dev/zero of=/dev/sdb bs=1M count=10
[ceph1][DEBUG ] --> Zapping successful for: <Raw Device: /dev/sdb>

Do the same for ceph2 and ceph3:

[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph2
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.9.2009 AltArch
[ceph2][DEBUG ] zeroing last few blocks of device
[ceph2][DEBUG ] find the location of an executable
[ceph2][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdb
[ceph2][DEBUG ] --> Zapping: /dev/sdb
[ceph2][DEBUG ] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph2][DEBUG ] Running command: wipefs --all /dev/sdb
[ceph2][DEBUG ] Running command: dd if=/dev/zero of=/dev/sdb bs=1M count=10
[ceph2][DEBUG ]  stderr: 记录了10+0 的读入
[ceph2][DEBUG ] 记录了10+0 的写出
[ceph2][DEBUG ] 10485760字节(10 MB)已复制,0.00456629 秒,2.3 GB/秒
[ceph2][DEBUG ] --> Zapping successful for: <Raw Device: /dev/sdb>
[root@ceph1 ceph]# ceph-deploy disk zap ceph3 /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph3 /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f98f4d758>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ceph3
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f98fbba28>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdb']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph3
[ceph3][DEBUG ] connected to host: ceph3
[ceph3][DEBUG ] detect platform information from remote host
[ceph3][DEBUG ] detect machine type
[ceph3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.9.2009 AltArch
[ceph3][DEBUG ] zeroing last few blocks of device
[ceph3][DEBUG ] find the location of an executable
[ceph3][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdb
[ceph3][DEBUG ] --> Zapping: /dev/sdb
[ceph3][DEBUG ] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph3][DEBUG ] Running command: wipefs --all /dev/sdb
[ceph3][DEBUG ] Running command: dd if=/dev/zero of=/dev/sdb bs=1M count=10
[ceph3][DEBUG ]  stderr: 记录了10+0 的读入
[ceph3][DEBUG ] 记录了10+0 的写出
[ceph3][DEBUG ] 10485760字节(10 MB)已复制,0.00515049 秒,2.0 GB/秒
[ceph3][DEBUG ] --> Zapping successful for: <Raw Device: /dev/sdb>

3.2 Add the OSD nodes

[root@ceph1 ceph]# ceph-deploy osd create ceph1 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph1 --data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f86f6f878>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph1
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f86fd89b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.9.2009 AltArch
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][WARNIN] osd keyring does not exist yet, creating one
[ceph1][DEBUG ] create a keyring file
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph1][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ceph1][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c367ee2f-50a1-45b6-a89a-489de61bd64a
[ceph1][DEBUG ] Running command: vgcreate --force --yes ceph-326e18b2-2680-4d1e-84f5-69534fd5b236 /dev/sdb
[ceph1][DEBUG ]  stdout: Physical volume "/dev/sdb" successfully created.
[ceph1][DEBUG ]  stdout: Volume group "ceph-326e18b2-2680-4d1e-84f5-69534fd5b236" successfully created
[ceph1][DEBUG ] Running command: lvcreate --yes -l 100%FREE -n osd-block-c367ee2f-50a1-45b6-a89a-489de61bd64a ceph-326e18b2-2680-4d1e-84f5-69534fd5b236
[ceph1][DEBUG ]  stdout: Logical volume "osd-block-c367ee2f-50a1-45b6-a89a-489de61bd64a" created.
[ceph1][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ceph1][DEBUG ] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph1][DEBUG ] Running command: restorecon /var/lib/ceph/osd/ceph-0
[ceph1][DEBUG ] Running command: chown -h ceph:ceph /dev/ceph-326e18b2-2680-4d1e-84f5-69534fd5b236/osd-block-c367ee2f-50a1-45b6-a89a-489de61bd64a
[ceph1][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-2
[ceph1][DEBUG ] Running command: ln -s /dev/ceph-326e18b2-2680-4d1e-84f5-69534fd5b236/osd-block-c367ee2f-50a1-45b6-a89a-489de61bd64a /var/lib/ceph/osd/ceph-0/block
[ceph1][DEBUG ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph1][DEBUG ]  stderr: got monmap epoch 1
[ceph1][DEBUG ] Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQCEXeFiEY7LMBAAnNGr3GIHIU5nZ8UkRxeSQw==
[ceph1][DEBUG ]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph1][DEBUG ] added entity osd.0 auth auth(auid = 18446744073709551615 key=AQCEXeFiEY7LMBAAnNGr3GIHIU5nZ8UkRxeSQw== with 0 caps)
[ceph1][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph1][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph1][DEBUG ] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid c367ee2f-50a1-45b6-a89a-489de61bd64a --setuser ceph --setgroup ceph
[ceph1][DEBUG ] --> ceph-volume lvm prepare successful for: /dev/sdb
[ceph1][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph1][DEBUG ] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-326e18b2-2680-4d1e-84f5-69534fd5b236/osd-block-c367ee2f-50a1-45b6-a89a-489de61bd64a --path /var/lib/ceph/osd/ceph-0
[ceph1][DEBUG ] Running command: ln -snf /dev/ceph-326e18b2-2680-4d1e-84f5-69534fd5b236/osd-block-c367ee2f-50a1-45b6-a89a-489de61bd64a /var/lib/ceph/osd/ceph-0/block
[ceph1][DEBUG ] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph1][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-2
[ceph1][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph1][DEBUG ] Running command: systemctl enable ceph-volume@lvm-0-c367ee2f-50a1-45b6-a89a-489de61bd64a
[ceph1][DEBUG ]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-c367ee2f-50a1-45b6-a89a-489de61bd64a.service to /usr/lib/systemd/system/ceph-volume@.service.
[ceph1][DEBUG ] Running command: systemctl enable --runtime ceph-osd@0
[ceph1][DEBUG ]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph1][DEBUG ] Running command: systemctl start ceph-osd@0
[ceph1][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 0
[ceph1][DEBUG ] --> ceph-volume lvm create successful for: /dev/sdb
[ceph1][INFO  ] checking OSD status...
[ceph1][DEBUG ] find the location of an executable
[ceph1][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph1 is now ready for osd use.

Check the result, for example with ceph -s: the OSD on ceph1 was created successfully.

Run the same command for the other nodes; the creation may appear to hang for a while, just wait it out.

[root@ceph1 ceph]# ceph-deploy osd create ceph2 --data /dev/sdb
[root@ceph1 ceph]# ceph-deploy osd create ceph3 --data /dev/sdb

3.3 Check the OSD status

[root@ceph1 ceph]# ceph-deploy osd list ceph1 ceph2 ceph3
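
The same information can also be checked directly with the cluster tools, for example:

ceph osd stat
ceph osd tree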

 


4. Deploy the mgr service

In production, mgr can also be deployed on ceph2 and ceph3 at the same time for high availability.

[root@ceph1 ceph]# ceph-deploy mgr create ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph1', 'ceph1')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa10fab90>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7fa115b230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph1:ceph1
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.9.2009 AltArch
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][WARNIN] mgr keyring does not exist yet, creating one
[ceph1][DEBUG ] create a keyring file
[ceph1][DEBUG ] create path recursively if it doesn't exist
[ceph1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph1/keyring
[ceph1][INFO  ] Running command: systemctl enable ceph-mgr@ceph1
[ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph1][INFO  ] Running command: systemctl start ceph-mgr@ceph1
[ceph1][INFO  ] Running command: systemctl enable ceph.target

The log above shows that the mgr was deployed successfully.

4.1 Distribute the cluster configuration

This relies on the passwordless SSH login configured earlier.

Use ceph-deploy to push the configuration file and the admin key to all nodes, so that the Ceph command line no longer needs the monitor address and ceph.client.admin.keyring specified on every invocation.

[root@ceph1 ceph]# ceph-deploy admin ceph1 ceph2 ceph3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph1 ceph2 ceph3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8828a878>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph1', 'ceph2', 'ceph3']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f8838e320>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph1
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph2
[ceph2][DEBUG ] connected to host: ceph2
[ceph2][DEBUG ] detect platform information from remote host
[ceph2][DEBUG ] detect machine type
[ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph3
[ceph3][DEBUG ] connected to host: ceph3
[ceph3][DEBUG ] detect platform information from remote host
[ceph3][DEBUG ] detect machine type
[ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

4.2 Fix the permissions of ceph.client.admin.keyring on each node

[root@ceph1 ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring
[root@ceph2 ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring
[root@ceph3 ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring

5. Deploy the mds service

The MDS is the metadata server of a Ceph cluster. It is usually optional, because it is only needed when CephFS is used; the other two storage interfaces are more widely used in cloud computing.

Although the MDS is called a metadata server, it does not store the metadata itself: the metadata is also split into objects and stored on the OSD nodes.

When creating a CephFS, at least two pools must be created, one for data and one for metadata. The MDS only serves clients' metadata requests; it fetches the metadata from the OSDs and maps it into its own memory for clients to access. In effect the MDS works like a caching proxy that offloads metadata traffic from the OSDs.

5.1 Install the mds

[root@ceph1 ceph]# ceph-deploy mds create ceph2 ceph3

5.2 Check the mds service

[root@ceph1 ceph]# ceph mds stat
, 2 up:standby

5.3 Check the cluster status

[root@ceph1 ceph]# ceph -s
  cluster:
    id:     375b0966-62de-4388-9613-90c72846a466
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph1
    mgr: ceph1(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   3.00GiB used, 21.0GiB / 24.0GiB avail
    pgs:

6. Create a Ceph filesystem

6.1 List the filesystems

[root@ceph1 ceph]# ceph fs ls
No filesystems enabled

As expected, no filesystem exists before one is created.

6.2 Create the storage pools

Commands:

ceph osd pool create cephfs_data <pg_num>
ceph osd pool create cephfs_metadata <pg_num>

Here <pg_num> = 128.

About creating the pools:

Picking a pg_num value is mandatory, because it cannot be calculated automatically. Some commonly used values (a worked example follows below):

* With fewer than 5 OSDs, set pg_num to 128

* With 5 to 10 OSDs, set pg_num to 512

* With 10 to 50 OSDs, set pg_num to 4096

* With more than 50 OSDs, you need to understand the trade-offs and calculate pg_num yourself

* The pgcalc tool can help with the calculation

https://ceph.com/pgcalc/

As the number of OSDs grows, choosing the right pg_num becomes increasingly important, because it significantly affects cluster behaviour and data durability in failure scenarios (i.e. the probability that a catastrophic event causes data loss).
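
As a rough worked example, using the rule of thumb that pgcalc is based on, pg_num ≈ (number of OSDs × 100) / replica count, rounded to a power of two: with the 3 OSDs and pool size 2 used in this lab, 3 × 100 / 2 = 150, which sits between 128 and 256; 128 matches the "fewer than 5 OSDs" guideline above and is the value used below.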

[root@ceph1 ceph]# ceph fs ls
No filesystems enabled
[root@ceph1 ceph]# ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
[root@ceph1 ceph]# ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created

6.3 Create the filesystem

Once the pools exist, create the filesystem with the fs new command.

Command:

ceph fs new <fs_name> cephfs_metadata cephfs_data

where <fs_name> = cephfs (any name can be used). This creates the filesystem on top of the two pools created above.

[root@ceph1 ceph]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1

6.4 List the Ceph filesystems again

[root@ceph1 ceph]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

6.5 Check the mds status again

[root@ceph1 ceph]# ceph mds stat
cephfs-1/1/1 up  {0=ceph3=up:active}, 1 up:standby


One mds is active and the other is in standby (hot backup).

7. Mount the Ceph filesystem on the client

To mount a Ceph filesystem, use the mount command with a monitor IP address, or use the mount.ceph helper to resolve the monitor address automatically.

7.1 Create the mount point on the client

mkdir -p /date/aa

Store the key (needed if the ceph configuration and keyring were not pushed to the client with ceph-deploy from the admin node):

[root@ceph1 ceph]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQDl5eFiLeXVFRAAA+bjTlPG+nENUEbJe9K0Ng==

Copy the value of the key and save it on the client in the file /etc/ceph/admin.secret.
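
One way to create that file without copying the key by hand, as a sketch (run on ceph1; paths are the ones used in this walkthrough):

awk '/key = / {print $3}' /etc/ceph/ceph.client.admin.keyring > /tmp/admin.secret
scp /tmp/admin.secret root@client:/etc/ceph/admin.secret
ssh root@client "chmod 600 /etc/ceph/admin.secret"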

7.2 Mount using the key on the command line

[root@client ~]# mount -t ceph 10.211.55.51:6789:/ /date/aa -o name=admin,secret=AQDl5eFiLeXVFRAAA+bjTlPG+nENUEbJe9K0Ng==
[root@client ~]# df -Th
文件系统              类型      容量  已用  可用 已用% 挂载点
devtmpfs              devtmpfs  472M     0  472M    0% /dev
tmpfs                 tmpfs     484M     0  484M    0% /dev/shm
tmpfs                 tmpfs     484M  6.5M  477M    2% /run
tmpfs                 tmpfs     484M     0  484M    0% /sys/fs/cgroup
/dev/mapper/cl_7-root xfs        17G  2.5G   14G   15% /
/dev/sda2             xfs      1014M  139M  876M   14% /boot
/dev/sda1             vfat      599M  9.0M  590M    2% /boot/efi
tmpfs                 tmpfs      97M     0   97M    0% /run/user/0
10.211.55.51:6789:/   ceph      9.9G     0  9.9G    0% /date/aa

7.3 Mount using a secret file

[root@client ~]# umount /date/aa/
[root@client ~]# mount -t ceph 10.211.55.51:6789:/ /date/aa -o name=admin,secretfile=/etc/ceph/admin.secret
mount: 文件系统类型错误、选项错误、10.211.55.51:6789:/ 上有坏超级块、
       缺少代码页或助手程序,或其他错误

       有些情况下在 syslog 中可以找到一些有用信息- 请尝试
       dmesg | tail  这样的命令看看。

The error occurs because mounting with a secret file requires ceph-common, and its version must match the cluster's Ceph version; otherwise the secret-file mount does not work.
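
A quick way to compare the versions before installing (standard commands, not part of the original transcript):

ceph --version          # on a cluster node such as ceph1
rpm -q ceph-common      # on the client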

[root@client ~]# yum install -y ceph-common-12.2.12

Mount again:

[root@client ~]# mount -t ceph 10.211.55.51:6789:/ /date/aa -o name=admin,secretfile=/etc/ceph/admin.secret
unable to read secretfile: No such file or directory
error reading secret file
failed to parse ceph_options
[root@client ~]# cd /etc/ceph/
[root@client ceph]# ls
admin.secert  rbdmap
[root@client ceph]# mv admin.secert admin.secret

I made a typo in the file name when saving it, so it is renamed above.

[root@client ceph]# mount -t ceph 10.211.55.51:6789:/ /date/aa -o name=admin,secretfile=/etc/ceph/admin.secret
[root@client ceph]# df -Th
文件系统              类型      容量  已用  可用 已用% 挂载点
devtmpfs              devtmpfs  472M     0  472M    0% /dev
tmpfs                 tmpfs     484M     0  484M    0% /dev/shm
tmpfs                 tmpfs     484M  6.5M  477M    2% /run
tmpfs                 tmpfs     484M     0  484M    0% /sys/fs/cgroup
/dev/mapper/cl_7-root xfs        17G  2.6G   14G   16% /
/dev/sda2             xfs      1014M  139M  876M   14% /boot
/dev/sda1             vfat      599M  9.0M  590M    2% /boot/efi
tmpfs                 tmpfs      97M     0   97M    0% /run/user/0
10.211.55.51:6789:/   ceph      9.9G     0  9.9G    0% /date/aa
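
To make this mount persistent across reboots, an /etc/fstab entry roughly like the following could be added on the client (an optional extra, not part of the original walkthrough; it reuses the path and secret file from above):

10.211.55.51:6789:/  /date/aa  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0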

7.4 Mount the Ceph filesystem from user space (FUSE)

7.4.1 Unmount the previous mount

[root@client ceph]# umount /date/aa/

7.4.2 Install ceph-fuse

[root@client ~]# yum install -y ceph-fuse
[root@ceph1 ~]# scp /etc/ceph/ceph.client.admin.keyring root@client:/etc/ceph/
ceph.client.admin.keyring                     100%   63   106.3KB/s   00:00
[root@ceph1 ~]# scp /etc/ceph/ceph.conf root@client:/etc/ceph/
ceph.conf                                     100%  220   380.7KB/s   00:00

Mount:

[root@client ~]# ceph-fuse -m 10.211.55.51:6789 /date/aa/
ceph-fuse[2250]: starting ceph client
2022-07-28 09:55:24.951812 7fa84be010 -1 init, newargv = 0x55826627e0 newargc=9
ceph-fuse[2250]: starting fuse
[root@client ~]# df -Th
文件系统              类型            容量  已用  可用 已用% 挂载点
devtmpfs              devtmpfs        472M     0  472M    0% /dev
tmpfs                 tmpfs           484M     0  484M    0% /dev/shm
tmpfs                 tmpfs           484M  6.5M  477M    2% /run
tmpfs                 tmpfs           484M     0  484M    0% /sys/fs/cgroup
/dev/mapper/cl_7-root xfs              17G  2.6G   14G   16% /
/dev/sda2             xfs            1014M  139M  876M   14% /boot
/dev/sda1             vfat            599M  9.0M  590M    2% /boot/efi
tmpfs                 tmpfs            97M     0   97M    0% /run/user/0
ceph-fuse             fuse.ceph-fuse  9.9G     0  9.9G    0% /date/aa

8. Ceph visualization: ceph-dash

ceph-dash is extremely simple; it shows roughly the same information as ceph -s, plus real-time IO rates.

8.1 Download ceph-dash

[root@ceph1 ~]# cd /ceph-dash/
[root@ceph1 ceph-dash]# git clone https://github.com/Crapworks/ceph-dash.git
-bash: git: 未找到命令
[root@ceph1 ceph-dash]# yum install -y git

Install it on every mgr node; in this lab only ceph1 needs it.

[root@ceph1 ceph-dash]# git clone https://github.com/Crapworks/ceph-dash.git
正克隆到 'ceph-dash'...
remote: Enumerating objects: 1085, done.
remote: Counting objects: 100% (53/53), done.
remote: Compressing objects: 100% (38/38), done.
remote: Total 1085 (delta 21), reused 29 (delta 12), pack-reused 1032
接收对象中: 100% (1085/1085), 4.68 MiB | 1.58 MiB/s, done.
处理 delta 中: 100% (511/511), done.

8.2 Install python-pip

[root@ceph1 ceph-dash]# yum -y install python-pip
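
python-pip is only needed to pull in ceph-dash's Python dependencies; presumably something like the following, run inside the cloned directory (the repo ships a requirements.txt, visible in the listing below):

pip install -r requirements.txt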

8.3 Start ceph-dash

[root@ceph1 ceph-dash]# ls
ceph-dash
[root@ceph1 ceph-dash]# cd ./ceph-dash/
[root@ceph1 ceph-dash]# ls
app                   contrib           screenshots
ceph-dash.py          debian            test-requirements.txt
ChangeLog             Dockerfile        tests
config.grahite.json   LICENSE           tox.ini
config.influxdb.json  README.md
config.json           requirements.txt
[root@ceph1 ceph-dash]# ./ceph-dash.py

8.4 Visit 10.211.55.51:5000 in a browser


8.5 Change the default port

[root@ceph1 ceph-dash]# vim ./ceph-dash.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from app import app

app.run(host='0.0.0.0',port=6666, debug=True)

Start it again and check:

[root@ceph1 ceph-dash]# ./ceph-dash.py
 * Running on http://0.0.0.0:6666/
 * Restarting with reloader
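
Running it this way ties the dashboard to the terminal; to keep it running after logging out, one simple option (not shown in the original) is:

nohup ./ceph-dash.py > /var/log/ceph-dash.log 2>&1 &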

 
