MFS distributed file system: installation and configuration

How the topology above works, in brief: the management server (master) is the core of the MFS distributed file system and coordinates all communication between clients and chunkservers.
Chunkservers provide the storage space, usually partitions, whole disks, or RAID devices.

Server roles:
Management server (master): manages the data storage servers (chunkservers), schedules file reads and writes, reclaims and recovers file space, and maintains multiple copies of data.
Metadata log server (metalogger): backs up the master's changelog files (named changelog_ml.*.mfs) so that it can take over if the master fails.
Data storage server (chunkserver): provides storage space, follows the master's scheduling, and transfers data to and from clients.
Client: mounts the file system; the mount point targets the master's VIP, not the IP of any chunkserver.
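A quick way to verify this topology is to probe each role's port on the master. The helper below is a minimal sketch using bash's built-in /dev/tcp and the coreutils `timeout` command; the VIP 192.168.1.254 and the default ports 9419 (metalogger), 9420 (chunkserver), and 9421 (client) match the configuration used later in this document, so substitute your own addresses.

```shell
# check_port HOST PORT -> prints "open" or "closed"
# (sketch only; relies on bash's /dev/tcp and coreutils `timeout`)
check_port() {
    if timeout 1 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

# Example probes against the master's VIP (example address from this setup):
# check_port 192.168.1.254 9419   # metalogger connections
# check_port 192.168.1.254 9420   # chunkserver connections
# check_port 192.168.1.254 9421   # client mounts
```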
MFS startup order:
1. Start the management server (master).
2. Start all data storage servers (chunkservers).
3. Start the metadata log server (metalogger), if there is one.
4. Once all chunkservers have connected to the master, users can mount the file system.
(To confirm that the chunkservers have connected to the master, check the master's log or the web management interface.)

MFS safe shutdown order:
1. Unmount all mounted directories on the clients.
2. Stop the chunkservers.
3. Stop the metalogger, if there is one.
4. Finally, stop the master.

Backing up the metadata server:
1. The main metadata files are metadata.mfs and metadata.mfs.back; while the master is running, the metalogger synchronizes with it every 24 hours by default.
2. Metadata changelogs are kept on disk for a number of hours defined by the BACK_LOGS setting.
The main metadata file needs regular backups, with the frequency depending on how many hourly changelogs are stored. Metadata changelogs should be automatically replicated in real time. Since MooseFS 1.6.5, both tasks are done by the mfsmetalogger daemon.

Master disaster recovery: once the master crashes, the metadata changelogs must be merged back into the main metadata file (metadata.mfs).
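The safe shutdown order above can be collapsed into one script on a single-host test install (all daemons local, using the /usr/local/mfs prefix and /mnt/client1 mount point from this document); in the real multi-host topology each step runs on its own machine. MFS_PREFIX and MFS_MOUNT are hypothetical override variables for this sketch, not MFS settings.

```shell
# Sketch: stop an all-in-one MFS test install in the documented safe order.
mfs_safe_stop() {
    prefix=${MFS_PREFIX:-/usr/local/mfs}
    mnt=${MFS_MOUNT:-/mnt/client1}
    umount -l "$mnt" 2>/dev/null || true       # 1. unmount client directories
    "$prefix/sbin/mfschunkserver" -s || true   # 2. stop the chunkserver
    "$prefix/sbin/mfsmetalogger" -s || true    # 3. stop the metalogger, if any
    "$prefix/sbin/mfsmaster" -s                # 4. stop the master last
}

# mfs_safe_stop    # run on the test host
```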
This merge is done with the mfsmetarestore tool.
The simplest invocation:
    /usr/local/mfs/bin/mfsmetarestore -a
If the metadata is stored somewhere other than the directory chosen at compile time, pass its location explicitly:
    /usr/local/mfs/bin/mfsmetarestore -a -d /storage/mfsmaster

Test environment overview:
Chunkservers: each has an extra 2GB disk for data storage.
Replicas: one chunkserver can be thought of as one replica, so there can be as many replicas as there are chunkservers.
Clients: every file keeps 2 replicas (on 2 chunkservers).

Resources:
1. Both chunkservers export the same data directory to clients: /mnt/chunk1
2. Client mount point: /mnt/client1

If both chunkservers go down at the same time: the directory mounted on the client does not lose its connection (in VM tests it really did stay connected, oddly enough), and existing files can still be read, but nothing can be written. At least one chunkserver must be up to provide service.

======================================
Master (management server) installation and configuration (identical on both machines)

Add the user and group:
    groupadd mfs
    useradd -g mfs -s /sbin/nologin mfs

Install:
    tar zxvf mfs-1.6.20-2.tar.gz
    cd mfs-1.6.20-2
    ./configure --prefix=/usr/local/mfs --with-default-user=mfs \
        --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
    make
    make install

Configure. Enter /usr/local/mfs/etc:
    cp mfsexports.cfg.dist mfsexports.cfg
    cp mfsmaster.cfg.dist mfsmaster.cfg
mfsmaster.cfg works without any modification. Its contents:
    # WORKING_USER = mfs
    # WORKING_GROUP = mfs
    # SYSLOG_IDENT = mfsmaster
    # LOCK_MEMORY = 0
    # NICE_LEVEL = -19
    # EXPORTS_FILENAME = /usr/local/mfs/etc/mfsexports.cfg
    # DATA_PATH = /usr/local/mfs/var/mfs
    # BACK_LOGS = 50
    # REPLICATIONS_DELAY_INIT = 300
    # REPLICATIONS_DELAY_DISCONNECT = 3600
    # MATOML_LISTEN_HOST = *      <-- IP address to listen on for metalogger connections
    # MATOML_LISTEN_PORT = 9419   <-- port to listen on for metalogger connections
    # MATOCS_LISTEN_HOST = *      <-- IP address to listen on for chunkserver connections
    # MATOCS_LISTEN_PORT = 9420   <-- port to listen on for chunkserver connections
    # MATOCU_LISTEN_HOST = *      <-- IP address to listen on for client connections
    # MATOCU_LISTEN_PORT = 9421   <-- port to listen on for client connections
    # CHUNKS_LOOP_TIME = 300
    # CHUNKS_DEL_LIMIT = 100
    # CHUNKS_WRITE_REP_LIMIT = 1
    # CHUNKS_READ_REP_LIMIT = 5
    # REJECT_OLD_CLIENTS = 0
    # LOCK_FILE = /var/run/mfs/mfsmaster.lock
mfsexports.cfg specifies which clients are allowed to mount.

Enter /usr/local/mfs/var/mfs, the directory that holds the metadata and changelog data:
    mv metadata.mfs.empty metadata.mfs

Start:
    /usr/local/mfs/sbin/mfsmaster start
After startup, metadata.mfs.back is generated in /usr/local/mfs/var/mfs.

Stop (never use kill on the process):
    /usr/local/mfs/sbin/mfsmaster -s

Enable the web monitor so the whole MFS system can be watched from a browser:
    /usr/local/mfs/sbin/mfscgiserv
Access it at http://ip:9425

==============================================================================
Metalogger (metadata log server) installation and configuration

Add the user and group:
    groupadd mfs
    useradd -g mfs -s /sbin/nologin mfs

Install:
    tar zxvf mfs-1.6.20-2.tar.gz
    cd mfs-1.6.20-2
    ./configure \
        --prefix=/usr/local/mfs --with-default-user=mfs \
        --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
    make
    make install

Configure. Enter /usr/local/mfs/etc:
    cp mfsmetalogger.cfg.dist mfsmetalogger.cfg
Edit mfsmetalogger.cfg:
    # WORKING_USER = mfs
    # WORKING_GROUP = mfs
    # SYSLOG_IDENT = mfsmetalogger
    # LOCK_MEMORY = 0
    # NICE_LEVEL = -19
    # DATA_PATH = /usr/local/mfs/var/mfs
    # BACK_LOGS = 50
    META_DOWNLOAD_FREQ = 1        <-- download the metadata file and changelogs from the master every hour
    # MASTER_RECONNECTION_DELAY = 5
    MASTER_HOST = 192.168.1.254   <-- the master's VIP address
    # MASTER_PORT = 9419
    # MASTER_TIMEOUT = 60

Start:
    /usr/local/mfs/sbin/mfsmetalogger start
After startup, changelog_ml_back.N.mfs and metadata_ml.mfs.back (downloaded from the master) appear in /usr/local/mfs/var/mfs.

Stop (never use kill on the process):
    /usr/local/mfs/sbin/mfsmetalogger -s

==============================================================================
Chunkserver (data storage server) installation and configuration
(the disk must be larger than 1GB, otherwise clients cannot write data)

Add the user and group:
    groupadd mfs
    useradd -g mfs -s /sbin/nologin mfs

Install:
    tar zxvf mfs-1.6.20-2.tar.gz
    cd mfs-1.6.20-2
    ./configure --prefix=/usr/local/mfs --with-default-user=mfs \
        --with-default-group=mfs --disable-mfsmaster
    make
    make install

Configure. Enter /usr/local/mfs/etc:
    cp mfschunkserver.cfg.dist mfschunkserver.cfg
    cp mfshdd.cfg.dist mfshdd.cfg
Edit mfschunkserver.cfg:
    # WORKING_USER = mfs
    # WORKING_GROUP = mfs
    # SYSLOG_IDENT = mfschunkserver
    # LOCK_MEMORY = 0
    # NICE_LEVEL = -19
    # DATA_PATH = /usr/local/mfs/var/mfs
    # MASTER_RECONNECTION_DELAY = 5
    # BIND_HOST = *
    MASTER_HOST = 192.168.1.254   <-- the master's VIP address
    # MASTER_PORT = 9420
    # MASTER_TIMEOUT = 60
    # CSSERV_LISTEN_HOST = *
    # CSSERV_LISTEN_PORT = 9422
    # HDD_CONF_FILENAME = /usr/local/mfs/etc/mfshdd.cfg
    # HDD_TEST_FREQ = 10
    # LOCK_FILE = /var/run/mfs/mfschunkserver.lock
    # BACK_LOGS = 50
    # CSSERV_TIMEOUT = 5
Edit mfshdd.cfg and list the storage directory:
    /mnt/chunk1   <-- the directory served to clients
Hand the directory to the mfs user:
    chown -R mfs: /mnt/chunk1

Start:
    /usr/local/mfs/sbin/mfschunkserver start
Stop (never use kill on the process):
    /usr/local/mfs/sbin/mfschunkserver -s

==============================================================================
Client installation and configuration

Install fuse (version 2.6 or later is required):
    tar zxvf fuse-2.7.4.tar.gz
    cd fuse-2.7.4
    ./configure && make && make \
        install
    modprobe fuse

Add the user and group:
    groupadd mfs
    useradd -g mfs -s /sbin/nologin mfs

Install mfsmount:
    tar zxvf mfs-1.6.20-2.tar.gz
    cd mfs-1.6.20-2
    ./configure --prefix=/usr/local/mfs --with-default-user=mfs \
        --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver
    make
    make install

Create a directory for the mount:
    mkdir /mnt/client1
Mount the chunkserver directory (/mnt/chunk1):
    /usr/local/mfs/bin/mfsmount /mnt/client1 -H 192.168.1.254
Note: the address to mount is the master's VIP, not a chunkserver's IP.
Unmount:
    umount -l /mnt/client1

==============================================================================
Keepalived: master/backup failover between the two management servers

Install keepalived:
    tar zxvf keepalived-1.1.17.tar.gz
    cd keepalived-1.1.17
    ./configure --prefix=/usr/local/keepalived
    make && make install
    mkdir /etc/keepalived
    cp /usr/local/src/keepalived-1.1.17/keepalived/etc/keepalived/keepalived.conf /etc/keepalived
    cp /usr/local/src/keepalived-1.1.17/keepalived/etc/init.d/keepalived.sysconfig /etc/sysconfig/keepalived
    cp /usr/local/src/keepalived-1.1.17/keepalived/etc/init.d/keepalived.init /etc/init.d/keepalived
    chmod 700 /etc/init.d/keepalived
    chkconfig --add keepalived
    chkconfig keepalived on

Start:
    /etc/init.d/keepalived start
Stop:
    /etc/init.d/keepalived stop

MASTER node configuration:
    vrrp_script chk_mfs {
        script "/opt/scripts/mfs_check.sh"   <-- checks the mfsmaster service
        interval 1
        weight -100
    }
    vrrp_instance external {
        state MASTER
        interface eth0
        lvs_sync_daemon_interface eth0
        virtual_router_id 51
        priority 150
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.1.254
        }
        track_script {
            chk_mfs
        }
    }

The mfs_check.sh script:
    #!/bin/sh
    # Needs the nmap package
    ip="192.168.1.243"
    port_9419=9419
    port_9420=9420
    port_9421=9421
    logfile="/var/log/mfs_check.log"
    date=`date +%Y-%m-%d`
    status_9419=`nmap -p$port_9419 $ip | grep closed | awk '{print $2}'`
    if [ "$status_9419" = "closed" ]; then
        mfs_status_9419=0
    else
        mfs_status_9419=1
    fi
    status_9420=`nmap -p$port_9420 $ip | grep closed | awk '{print $2}'`
    if [ "$status_9420" = "closed" ]; then
        mfs_status_9420=0
    else
        mfs_status_9420=1
    fi
    status_9421=`nmap -p$port_9421 $ip | grep closed | awk '{print $2}'`
    if [ "$status_9421" = \
    "closed" ]; then
        mfs_status_9421=0
    else
        mfs_status_9421=1
    fi
    if [ $mfs_status_9419 -eq 0 ] || [ $mfs_status_9420 -eq 0 ] || [ $mfs_status_9421 -eq 0 ]; then
        echo "stop keepalived" | tee -a $logfile
        /etc/init.d/keepalived stop
        exit 0
    fi

BACKUP node configuration:
    vrrp_instance external {
        state BACKUP
        interface eth0
        lvs_sync_daemon_interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.1.254
        }
        notify_master /opt/scripts/mfs_start.sh   <-- on entering MASTER state, start MFS
        notify_backup /opt/scripts/mfs_stop.sh    <-- on entering BACKUP state, stop MFS
    }

The mfs_start.sh script:
    #!/bin/sh
    logfile="/var/log/mfs_status.log"
    date=`date +%Y-%m-%d`
    cmd="/usr/local/mfs/sbin/mfsmaster"
    echo "${date}: start mfsmaster process" | tee -a $logfile
    $cmd start

The mfs_stop.sh script:
    #!/bin/sh
    logfile="/var/log/mfs_status.log"
    date=`date +%Y-%m-%d`
    cmd="/usr/local/mfs/sbin/mfsmaster"
    echo "${date}: stop mfsmaster process" | tee -a $logfile
    $cmd -s

=======================================
MFS operations:

1. Give every newly written file 2 replicas (one replica lives on one chunkserver). Run on the client:
    /usr/local/mfs/bin/mfssetgoal 2 /mnt/client1/
Files and subdirectories written under /mnt/client1 from now on keep two replicas.
Note: this does not change the replica count of files that already exist; it only takes effect for files written afterwards.

2. After adding a third chunkserver, raise existing and future files to 3 replicas. Run on the client:
    /usr/local/mfs/bin/mfssetgoal -r 3 /mnt/client1
All existing files and subdirectories now have 3 replicas, and so do newly written ones.
Note: the real replica count is limited by the number of chunkservers; with only two chunkservers, a goal of 3 still yields 2 actual replicas.

3. Check replica goals of files and directories with mfsgetgoal. Run on the client:
    [root@lvs-backup t5]# /usr/local/mfs/bin/mfsgetgoal -r /mnt/client1/
    /mnt/client1/:
    files with goal 2 : 6         <-- 6 files, each with 2 replicas
    directories with goal 2 : 7   <-- 7 directories, each with 2 replicas

4. Check a file's actual replicas with mfsfileinfo. Run on the client:
    [root@lvs-backup client1]# /usr/local/mfs/bin/mfsfileinfo /mnt/client1/passwd
    /mnt/client1/passwd:
    chunk 0: 0000000000000008_00000001 / (id:8 ver:1)
        copy 1: 192.168.1.243:9422
        copy 2: 192.168.1.251:9422

5. Show summary information for an entire directory tree. Run on the client:
    [root@lvs-backup client1]# /usr/local/mfs/bin/mfsdirinfo -h /mnt/client1/
    /mnt/client1/:
    inodes: 14
    directories: 7
    files: 7
    chunks: 7
    length: 15MiB
    size: 15MiB
    realsize: 30MiB

MFS troubleshooting
1. The master was shut down uncleanly and now fails to start, logging errors such as the ones below.
Error log 1:
    can't open
    metadata file
    if this is new instalation then rename
    /usr/local/mfs/var/mfs/metadata.mfs.empty as
    /usr/local/mfs/var/mfs/metadata.mfs
The message asks for metadata.mfs.empty to be renamed to metadata.mfs.
Fix:
    cd /usr/local/mfs/var/mfs
    mv metadata.mfs.empty metadata.mfs

Error log 2, produced when the process is started again after the commands above:
    backup file is newer than current file - please check it manually - propably you should run metarestore
    init: file system manager failed !!!
    error occured during initialization - exiting
Fix: repair with the mfsmetarestore command. The metadata path is normally $(prefix)/var/mfs.
If the metadata really lives in $(prefix)/var/mfs, run:
    /usr/local/mfs/sbin/mfsmetarestore -a   <-- repair the metadata files automatically
If the metadata files are stored outside $(prefix)/var/mfs, run:
    /usr/local/mfs/sbin/mfsmetarestore -a -d /storage/mfs
To repair the metadata files by hand:
    /usr/local/mfs/sbin/mfsmetarestore -m metadata.mfs.back -o metadata.mfs changelog.*.mfs
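As noted earlier, the main metadata file needs regular backups. The sketch below could run from cron on the master (or metalogger) to snapshot metadata.mfs.back with a timestamp; the data path matches this document's layout, while the MFS_DATA override variable, the destination argument, and the 7-copy retention are assumptions for illustration only.

```shell
# Sketch: timestamped backups of the MFS metadata snapshot, keeping the
# 7 newest copies. Example cron line:  0 * * * * backup_mfs_metadata /backup/mfs
backup_mfs_metadata() {
    src=${MFS_DATA:-/usr/local/mfs/var/mfs}        # document's default data path
    dest=${1:?usage: backup_mfs_metadata DEST_DIR}
    stamp=$(date +%Y%m%d-%H%M%S)
    mkdir -p "$dest"
    cp "$src/metadata.mfs.back" "$dest/metadata.mfs.back.$stamp"
    cp "$src"/changelog*.mfs "$dest/" 2>/dev/null || true   # changelogs, if present
    # drop everything but the 7 newest snapshots
    ls -1t "$dest"/metadata.mfs.back.* | tail -n +8 | xargs -r rm -f
}
```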