
Ceph: Principles, Installation, and Maintenance on CentOS 7

Where Ceph sits in the storage stack

Layer 1: physical storage media.

a. LUN: a virtual disk presented by hardware is usually called a LUN, for example a virtual disk created by a RAID card.

b. Volume: a virtual disk created in software is usually called a volume, for example a logical volume created by LVM (see the sketch after this list).

c. Disk: the physical disk itself.
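To make the volume/disk distinction concrete, a minimal LVM sketch; the device name /dev/sdb is an assumption for illustration:

#pvcreate /dev/sdb                    # register a physical disk with LVM
#vgcreate vg_data /dev/sdb            # pool it into a volume group
#lvcreate -L 10G -n lv_demo vg_data   # carve out a 10 GB logical volume, i.e. a software-defined "volume"
#lvs                                  # list logical volumes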

Layer 2: the kernel-level file system, which maintains the mapping from files to the underlying disks (users generally do not need to manage this layer).

Layer 3: application-level file systems (the user installs the application and starts its processes manually).

Layer 4: network file access systems such as NFS and CIFS (install the server component on the server side and the client on the client side, then mount directories for remote access).

Ceph principles

1. Logical structure of the Ceph storage system
2. Logical structure of the RADOS system
3. The Ceph addressing flow (see the sketch after this list)
4. Network topology of a Ceph deployment
(The diagrams for items 1-4 are not reproduced in this copy.)
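On the addressing flow in item 3: Ceph clients compute an object's location rather than looking it up in a table. The object name is hashed to a placement group (PG), and the CRUSH algorithm then maps that PG to a set of OSDs. On a running cluster the mapping can be inspected directly; a small sketch, where the pool name rbd and the object name myobject are illustrative assumptions (the object need not exist):

#ceph osd map rbd myobject   # prints the PG id and the up/acting OSD set chosen by CRUSH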

Note on the deployment topology in item 4: the Cluster Network is optional, but strongly recommended; it carries the back-end traffic when data is moved between OSDs, for example during OSD expansion. I have seen this first-hand in production: with only a public network, an OSD expansion forces Ceph to re-shuffle its data over that network, and the upgrade took as long as five hours.

With a dedicated cluster network (a 10 GbE switch plus fiber), the same upgrade finished in a few minutes.
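Separating the two networks is a matter of ceph.conf configuration. A minimal sketch: the public subnet matches the deployment used later in this document, while the 192.168.12.0/24 cluster subnet is an assumed example, not part of this installation:

[global]
public_network = 192.168.11.0/24     # client and monitor traffic
cluster_network = 192.168.12.0/24    # assumed back-end subnet for OSD replication and recovery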

Ceph installation (ceph-deploy)

1. Environment preparation and initialization of each Ceph node

➢ Deployment layout
(The node / installed-components / notes table from the original is not reproduced here.) The operations in this chapter are all executed as root and must be run on every Ceph node.

➢ Modify /etc/hostname
#vi /etc/hostname            # on other nodes, use that node's name
ceph{number}                 # e.g. ceph1
#hostname -F /etc/hostname   # takes effect immediately; disconnect the shell and log in again

➢ Create the install user irteam, who must not require a tty
#useradd -d /home/irteam -k /etc/skel -m irteam
#sudo passwd irteam
#echo "irteam ALL = (root) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/irteam
#chmod 0440 /etc/sudoers.d/irteam

Modify /etc/sudoers so that irteam does not require a tty:
#chmod 755 /etc/sudoers
#vi /etc/sudoers   # add the following line; do not comment out the existing "Defaults requiretty"
Defaults:irteam !requiretty
#chmod 440 /etc/sudoers

➢ Configure the yum and Ceph repositories
#yum clean all
#rm -rf /etc/yum.repos.d/*.repo
#wget -O /etc/yum.repos.d/CentOS-Base.repo /repo/Centos-7.repo
#wget -O /etc/yum.repos.d/epel.repo /repo/epel-7.repo
#sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
#sed -i 's/$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo
#vi /etc/yum.repos.d/ceph.repo   # add the Ceph repository
[ceph]
name=ceph
baseurl=/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
(The wget URLs and repo baseurls above lost their hostnames in the source; substitute your mirror's address.)

➢ Install Ceph
#yum makecache
#yum install -y ceph
#ceph --version   # check the version
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)

➢ Disable SELinux & firewalld
#sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
#setenforce 0
#systemctl stop firewalld
#systemctl disable firewalld

➢ Synchronize node clocks (rdate and ntp both work)
#timedatectl set-timezone Asia/Shanghai   # set the timezone
#yum install -y rdate
#rdate -s    # pick an available, authoritative time server (hostname elided in the source)
#echo "00 0 1 * * root rdate -s " >> /etc/crontab   # add to the cron schedule (00:00 on the 1st of each month)

2. Deploy the Ceph cluster

Note: all of the following operations are executed on the admin-node. In this document the admin-node is shared with ceph1, so running them on ceph1 is sufficient; use the irteam user throughout.

➢ Modify /etc/hosts
#sudo vi /etc/hosts
192.168.11.119 ceph1
192.168.11.124 ceph2
192.168.11.112 ceph3

➢ Generate a key pair and copy the public key to every node (passwordless login, so deployment never prompts for a password)
#sudo su - irteam
#ssh-keygen
Generating public/private key pair.
Enter file in which to save the key (/irteam/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /irteam/.ssh/id_rsa.
Your public key has been saved in /irteam/.ssh/id_rsa.pub.
#ssh-copy-id irteam@ceph1
#ssh-copy-id irteam@ceph2
#ssh-copy-id irteam@ceph3

➢ SSH user configuration, so deployment never prompts for a username
#sudo su - irteam   # skip if you are already logged in as irteam
#vi ~/.ssh/config
StrictHostKeyChecking no
Host ceph1
    Hostname ceph1
    User irteam
Host ceph2
    Hostname ceph2
    User irteam
Host ceph3
    Hostname ceph3
    User irteam
#chmod 600 ~/.ssh/config

➢ Install the deployment tool
#sudo yum -y install ceph-deploy
#ceph-deploy --version
1.5.34

➢ Create the cluster
#sudo su - irteam   # skip if you are already logged in as irteam
#mkdir ~/my-cluster && cd ~/my-cluster
#ceph-deploy new ceph1 ceph2 ceph3   # generates ceph.conf and ceph.mon.keyring in the current directory
#ls ~/my-cluster   # list the generated files
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring

Edit the cluster's ceph.conf: add public_network, raise the allowed clock drift between monitors (default 0.05 s, here 2 s), and set the default replica count to 2:
#vi ceph.conf
[global]
fsid = 7cec0691-c713-46d0-bce8-5cb1d57f051f
mon_initial_members = ceph1, ceph2, ceph3   # IPs also work; hostnames are best
mon_host = 192.168.11.119,192.168.11.124,192.168.11.112
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.11.0/24
mon_clock_drift_allowed = 2
osd_pool_default_size = 2

➢ Deploy the monitors
#ceph-deploy mon create-initial
#ll ~/my-cluster
ceph.bootstrap-mds.keyring
ceph.bootstrap-rgw.keyring
ceph.conf
ceph.mon.keyring
ceph.bootstrap-osd.keyring
ceph.client.admin.keyring
ceph-deploy-ceph.log
#sudo ceph -s   # check cluster status
cluster 7cec0691-c713-46d0-bce8-5cb1d57f051f
health HEALTH_ERR
no osds
monmap e1: 3 mons at {ceph1=192.168.11.119:6789/0,ceph2=192.168.11.124:6789/0,ceph3=192.168.11.112:6789/0}
election epoch 4, quorum 0,1,2 ceph3,ceph1,ceph2
osdmap e1: 0 osds: 0 up, 0 in
flags sortbitwise
pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating
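HEALTH_ERR is expected at this point: the monitors have formed a quorum, but no OSDs exist yet. Before moving on, you can optionally confirm the quorum with a standard monitor command (this check is not part of the original walkthrough):

#sudo ceph quorum_status --format json-pretty   # lists the monitors in quorum and the elected leader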
➢ Deploy the OSDs

Since there are not enough spare disks here (for whole disks, see the maintenance chapter), directories are used instead.

Create and open up the directories; run this on ceph1, ceph2, and ceph3:
#sudo mkdir /var/local/osd1 && sudo chmod 777 -R /var/local/osd1

Prepare and activate the OSDs:
#ceph-deploy osd prepare ceph1:/var/local/osd1 ceph2:/var/local/osd1 ceph3:/var/local/osd1
#ceph-deploy osd activate ceph1:/var/local/osd1 ceph2:/var/local/osd1 ceph3:/var/local/osd1

Notes:
a. If you have enough disks, you can also operate on the disks directly:
#ceph-deploy osd prepare ceph1:sdb
#ceph-deploy osd activate ceph1:sdb
b. The osd prepare & osd activate steps above can also be done in one step:
#ceph-deploy osd create ceph1:sdb

Check the cluster status:
#sudo ceph -s
cluster 7cec0691-c713-46d0-bce8-5cb1d57f051f
health HEALTH_OK
monmap e1: 3 mons at {ceph1=192.168.11.119:6789/0,ceph2=192.168.11.124:6789/0,ceph3=192.168.11.112:6789/0}
election epoch 4, quorum 0,1,2 ceph3,ceph1,ceph2
osdmap e15: 3 osds: 3 up, 3 in
flags sortbitwise
pgmap v26: 64 pgs, 1 pools, 0 bytes data, 0 objects
29590 MB used, 113 GB / 142 GB avail
64 active+clean
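With the cluster at HEALTH_OK, a quick smoke test confirms end-to-end reads and writes. The pool name test and the object name hello-object below are arbitrary illustrative choices, not part of the original walkthrough:

#sudo ceph osd pool create test 64                         # create a pool with 64 PGs
#echo hello > /tmp/hello.txt
#sudo rados put hello-object /tmp/hello.txt --pool=test    # write an object
#sudo rados ls --pool=test                                 # should list hello-object
#sudo rados get hello-object /tmp/hello.out --pool=test    # read it back
#sudo ceph osd pool delete test test --yes-i-really-really-mean-it   # remove the test pool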
Ceph installation (kolla)

Besides the officially recommended ceph-deploy method, you can also install with ansible: log in to each node remotely and run the mon, osd, and rgw daemons in docker containers. Note: since our purpose in using docker is to deploy openstack, the openstack-specific parts are not covered here.