
Hadoop Cluster Setup (2): HDFS (2017)

HDFS is the most basic Hadoop service; many other services are built on top of it.

Deploying an HDFS cluster is therefore a core task, and the starting point of a big-data platform.

Installing a Hadoop cluster requires Zookeeper; without it the installation cannot complete.

If you do not have a Zookeeper ensemble yet, deploy one first.

You will also need the JDK and a few settings on the physical hosts.

For all of those, refer to the previous article: Hadoop Cluster Setup (1): Zookeeper.

Now the HDFS installation begins.

HDFS host assignment:

192.168.67.101  c6701  -- NameNode + DataNode
192.168.67.102  c6702  -- DataNode
192.168.67.103  c6703  -- DataNode

1. Install HDFS: extract hadoop-2.6.0-EDH-0u2.tar.gz

I downloaded both the 2.6 and 2.7 releases; 2.6 is installed first, and the 2.6-to-2.7 upgrade steps are performed later.

# as root
useradd hdfs
echo "hdfs:hdfs" | chpasswd
mkdir -p /data/hadoop/temp
mkdir -p /data/hadoop/journal
mkdir -p /data/hadoop/hdfs/name
mkdir -p /data/hadoop/hdfs/data
chown -R hdfs:hdfs /data/hadoop    # recursive, so it covers the four subdirectories
# as hdfs
su - hdfs
cd /tmp/software
tar -zxvf hadoop-2.6.0-EDH-0u2.tar.gz -C /home/hdfs/

The configuration files edited below live in:

$ pwd
/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop

2. Edit the parameters in core-site.xml

$ cat core-site.xml
<configuration>
  <!-- Set the HDFS nameservice to "ns" -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns</value>
  </property>
  <!-- Directory for Hadoop's temporary data -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/temp</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
  <!-- Zookeeper quorum addresses -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>c6701:2181,c6702:2181,c6703:2181</value>
  </property>
</configuration>
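The two XML files in steps 2 and 3 must agree on the nameservice name. As a quick sanity check — a sketch I am adding, not part of the original procedure — the helper below pulls a property value out of a Hadoop `*-site.xml` with sed; on a running cluster, `hdfs getconf -confKey fs.defaultFS` is the proper tool for the same question.

```shell
#!/usr/bin/env bash
# Sketch: extract a property value from a Hadoop *-site.xml. Assumes the
# pretty-printed layout used in this article, i.e. <name> and <value> on
# consecutive lines.
set -euo pipefail

get_prop() {   # get_prop <property-name> <file>
  sed -n "\#<name>$1</name>#{n;s#.*<value>\(.*\)</value>.*#\1#p;}" "$2"
}

# demo against a minimal core-site.xml fragment
f=$(mktemp)
cat > "$f" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns</value>
  </property>
</configuration>
EOF
get_prop fs.defaultFS "$f"    # prints: hdfs://ns
rm -f "$f"
```

If the value printed here does not match `dfs.nameservices` in hdfs-site.xml, the NameNodes will not come up as an HA pair.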
3. Edit the parameters in hdfs-site.xml

$ cat hdfs-site.xml
<configuration>
  <!-- The nameservice is "ns"; it must match core-site.xml. If you rename it,
       rename every occurrence of "ns" in this file so they stay consistent. -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns</value>
  </property>
  <!-- ns has two NameNodes: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.ns</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.ns.nn1</name>
    <value>c6701:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.ns.nn1</name>
    <value>c6701:50070</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.ns.nn2</name>
    <value>c6702:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.ns.nn2</name>
    <value>c6702:50070</value>
  </property>
  <!-- Where the NameNode metadata is kept on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://c6701:8485;c6702:8485;c6703:8485/ns</value>
  </property>
  <!-- Where the JournalNode keeps its data on local disk -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/hadoop/journal</value>
  </property>
  <!-- Enable automatic failover when a NameNode fails -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Failover implementation -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing mechanism -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <!-- sshfence requires passwordless ssh -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hdfs/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- Enable WebHDFS (REST API) on the NN and DNs; optional -->
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>

4. Add the slaves file

$ more slaves
c6701
c6702
c6703

--- Install HDFS on c6702 ---

5. Create the hdfs user on c6702 and set up passwordless ssh for it

ssh c6702 "useradd hdfs"
ssh c6702 'echo "hdfs:hdfs" | chpasswd'
ssh-copy-id hdfs@c6702

6. Copy the software

scp /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz root@c6702:/tmp/software/
ssh c6702 "chmod 777 /tmp/software/*"

7. Create directories and extract the software

ssh hdfs@c6702 "tar -zxvf /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz -C /home/hdfs"
ssh hdfs@c6702 "ls -al /home/hdfs"
ssh hdfs@c6702 "ls -al /home/hdfs/hadoop*"

Copy the configuration files:

ssh hdfs@c6702 "rm -f /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml"
ssh hdfs@c6702 "rm -f /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml"
scp /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml hdfs@c6702:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml
scp /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml hdfs@c6702:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml
scp /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves hdfs@c6702:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves

Create the directories HDFS needs:

ssh root@c6702 "mkdir -p /data/hadoop"
ssh root@c6702 "chown -R hdfs:hdfs /data/hadoop"
ssh hdfs@c6702 "mkdir -p /data/hadoop/temp"
ssh hdfs@c6702 "mkdir -p /data/hadoop/journal"
ssh hdfs@c6702 "mkdir -p /data/hadoop/hdfs/name"
ssh hdfs@c6702 "mkdir -p /data/hadoop/hdfs/data"

--- Install HDFS on c6703 ---

8. Create the hdfs user on c6703 and set up passwordless ssh for it

ssh c6703 "useradd hdfs"
ssh c6703 'echo "hdfs:hdfs" | chpasswd'
ssh-copy-id hdfs@c6703

9. Copy the software

scp /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz root@c6703:/tmp/software/
ssh c6703 "chmod 777 /tmp/software/*"

10. Create directories and extract the software

ssh hdfs@c6703 "tar -zxvf /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz -C /home/hdfs"
ssh hdfs@c6703 "ls -al /home/hdfs"
ssh hdfs@c6703 "ls -al /home/hdfs/hadoop*"

Copy the configuration files:

ssh hdfs@c6703 "rm -f /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml"
ssh hdfs@c6703 "rm -f /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml"
scp /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml hdfs@c6703:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml
scp /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml hdfs@c6703:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml
scp /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves hdfs@c6703:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves

Create the directories HDFS needs:

ssh root@c6703 "mkdir -p /data/hadoop"
ssh root@c6703 "chown -R hdfs:hdfs /data/hadoop"
ssh hdfs@c6703 "mkdir -p /data/hadoop/temp"
ssh hdfs@c6703 "mkdir -p /data/hadoop/journal"
ssh hdfs@c6703 "mkdir -p /data/hadoop/hdfs/name"
ssh hdfs@c6703 "mkdir -p /data/hadoop/hdfs/data"

11. Start HDFS: first start the JournalNode on all three nodes

/home/hdfs/hadoop-2.6.0-EDH-0u2/sbin/hadoop-daemon.sh start journalnode

Check the status:

$ jps
3958 Jps
3868 JournalNode
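Steps 5 through 10 run the same commands for c6702 and for c6703. As a sketch — my addition, using this article's paths and hostnames — a loop keeps the two nodes identical; the DRY_RUN switch (also my addition) only prints the commands until you set it to 0.

```shell
#!/usr/bin/env bash
# Sketch of steps 5-10 as a loop over the worker nodes. DRY_RUN=1 (the
# default here) only prints each command; set DRY_RUN=0 to execute them.
set -euo pipefail

NODES=(c6702 c6703)
TARBALL=/tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz
CONF=/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop

run() {   # print (dry run) or execute one command
  if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else eval "$*"; fi
}

for node in "${NODES[@]}"; do
  run "scp $TARBALL root@$node:/tmp/software/"
  run "ssh $node 'useradd hdfs; echo hdfs:hdfs | chpasswd'"
  run "ssh-copy-id hdfs@$node"
  run "ssh hdfs@$node 'tar -zxf $TARBALL -C /home/hdfs'"
  for f in core-site.xml hdfs-site.xml slaves; do
    run "scp $CONF/$f hdfs@$node:$CONF/$f"
  done
  run "ssh root@$node 'mkdir -p /data/hadoop && chown -R hdfs:hdfs /data/hadoop'"
  run "ssh hdfs@$node 'mkdir -p /data/hadoop/temp /data/hadoop/journal /data/hadoop/hdfs/name /data/hadoop/hdfs/data'"
done
```

Previewing with the dry run first makes it easy to spot a wrong path or hostname before anything touches the remote machines.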
12. Next start the NameNodes. Before the very first start, format the NameNode metadata on one node (the master). The metadata is written to the path given by dfs.namenode.name.dir:

<name>dfs.namenode.name.dir</name>
<value>/data/hadoop/hdfs/name</value>

$ ./hdfs namenode -format
17/09/26 07:52:17 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = /192.168.67.101
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0-EDH-0u2
STARTUP_MSG:   classpath = /home/hdfs/hadoop-2.6.0-EDHxxxxxxxxxx
STARTUP_MSG:   build = http://gitlab-xxxxx
STARTUP_MSG:   java = 1.8.0_144
************************************************************/
17/09/26 07:52:17 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/09/26 07:52:17 INFO namenode.NameNode: createNameNode [-format]
17/09/26 07:52:18 WARN common.Util: Path /data/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/09/26 07:52:18 WARN common.Util: Path /data/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-b2f01411-862f-44b2-a6dc-7d17bd48d522
17/09/26 07:52:18 INFO namenode.FSNamesystem: No KeyProvider found.
17/09/26 07:52:18 INFO namenode.FSNamesystem: fsLock is fair: true
[... INFO lines listing the default FSNamesystem/BlockManager/GSet parameters omitted ...]
17/09/26 07:52:18 INFO namenode.FSNamesystem: Determined nameservice ID: ns
17/09/26 07:52:18 INFO namenode.FSNamesystem: HA Enabled: true
17/09/26 07:52:19 INFO namenode.FSImage: Allocated new BlockPoolId: BP-144216011-192.168.67.101-1506412339757
17/09/26 07:52:19 INFO common.Storage: Storage directory /data/hadoop/hdfs/name has been successfully formatted.
17/09/26 07:52:20 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/09/26 07:52:20 INFO util.ExitUtil: Exiting with status 0
17/09/26 07:52:20 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at /192.168.67.101
************************************************************/

Then start the namenode on the master. It must be running before the next step, because the standby fetches its image from the active NameNode's HTTP address:

[hdfs@c6701 sbin]$ ./hadoop-daemon.sh start namenode

13. The standby namenode must first run bootstrapStandby; the output looks like this:

[hdfs@c6702 sbin]$ ../bin/hdfs namenode -bootstrapStandby
17/09/26 09:44:58 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = /192.168.67.102
STARTUP_MSG:   args = [-bootstrapStandby]
STARTUP_MSG:   version = 2.6.0-EDH-0u2
STARTUP_MSG:   classpath = /home/hdfs/haxxx
STARTUP_MSG:   build = http://gitlab-xxxx
STARTUP_MSG:   java = 1.8.0_144
************************************************************/
17/09/26 09:44:58 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/09/26 09:44:58 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
17/09/26 09:44:59 WARN common.Util: Path /data/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/09/26 09:44:59 WARN common.Util: Path /data/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
=====================================================
About to bootstrap Standby ID nn2 from:
           Nameservice ID: ns
        Other Namenode ID: nn1
  Other NN's HTTP address: http://c6701:50070
  Other NN's IPC  address: c6701/192.168.67.101:9000
             Namespace ID: 793662207
            Block pool ID: BP-144216011-192.168.67.101-1506412339757
               Cluster ID: CID-b2f01411-862f-44b2-a6dc-7d17bd48d522
           Layout version: -60
=====================================================
Re-format filesystem in Storage Directory /data/hadoop/hdfs/name ? (Y or N) y
17/09/26 09:45:16 INFO common.Storage: Storage directory /data/hadoop/hdfs/name has been successfully formatted.
17/09/26 09:45:16 WARN common.Util: Path /data/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/09/26 09:45:16 WARN common.Util: Path /data/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/09/26 09:45:17 INFO namenode.TransferFsImage: Opening connection to http://c6701:50070/imagetransfer?getimage=1&txid=0&storageInfo=-60:793662207:0:CID-b2f01411-862f-44b2-a6dc-7d17bd48d522
17/09/26 09:45:17 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
17/09/26 09:45:17 INFO namenode.TransferFsImage: Transfer took 0.01s at 0.00 KB/s
17/09/26 09:45:17 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 351 bytes.
17/09/26 09:45:17 INFO util.ExitUtil: Exiting with status 0
17/09/26 09:45:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at /192.168.67.102
************************************************************/

14. Check the status: the namenode is not started yet

[hdfs@c6702 sbin]$ jps
4539 Jps
3868 JournalNode

15. Start the standby namenode; the command is the same as on the master

[hdfs@c6702 sbin]$ ./hadoop-daemon.sh start namenode
starting namenode, logging to /home/hdfs/hadoop-2.6.0-EDH-0u2/logs/had.out

16. Check again: the namenode is now running

[hdfs@c6702 sbin]$ jps
4640 Jps
4570 NameNode
3868 JournalNode
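The ordering of this first boot matters: the JournalNodes must be up before the format, and the active NameNode must be up before bootstrapStandby, which fetches the fsimage over HTTP. As a compact reference — a sketch I am adding, using this article's hosts and paths — the sequence is:

```shell
#!/usr/bin/env bash
# First-boot ordering for an HA HDFS cluster, condensed from the steps in
# this article. This script only prints the checklist; run each command on
# the host named in its line.
SBIN=/home/hdfs/hadoop-2.6.0-EDH-0u2/sbin
BIN=/home/hdfs/hadoop-2.6.0-EDH-0u2/bin

steps=(
  "all nodes  : $SBIN/hadoop-daemon.sh start journalnode"
  "c6701      : $BIN/hdfs namenode -format"
  "c6701      : $SBIN/hadoop-daemon.sh start namenode"
  "c6702      : $BIN/hdfs namenode -bootstrapStandby"
  "c6702      : $SBIN/hadoop-daemon.sh start namenode"
  "c6701      : $BIN/hdfs zkfc -formatZK"
  "c6701+c6702: $SBIN/hadoop-daemon.sh start zkfc"
)
printf '%s\n' "${steps[@]}"
```

Running a later step before its predecessor (for example, formatZK before the Zookeeper ensemble is up) is the most common way a first boot fails.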
17. Format zkfc, which creates the HA node in zookeeper. Run the following on the master to complete the format:

[hdfs@c6701 bin]$ ./hdfs zkfc -formatZK
17/09/26 09:59:20 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at c6701/192.168.67.101:9000
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
[... further Client environment INFO lines (java.version=1.8.0_144, java.home=/usr/local/jdk1.8.0_144/jre, os.arch=amd64, os.version=2.6.32-573.el6.x86_64, user.home=/home/hdfs, etc.) omitted ...]
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=c6701:2181,c6702:2181,c6703:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@20deea7f
17/09/26 09:59:20 INFO zookeeper.ClientCnxn: Opening socket connection to server /192.168.67.103:2181. Will not attempt to authenticate using SASL (unknown error)
17/09/26 09:59:20 INFO zookeeper.ClientCnxn: Socket connection established to /192.168.67.103:2181, initiating session
17/09/26 09:59:20 INFO zookeeper.ClientCnxn: Session establishment complete on server /192.168.67.103:2181, sessionid = 0x35ebc5163710000, negotiated timeout = 5000
17/09/26 09:59:20 INFO ha.ActiveStandbyElector: Session connected.
17/09/26 09:59:20 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/ns in ZK.
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Session: 0x35ebc5163710000 closed
17/09/26 09:59:20 INFO zookeeper.ClientCnxn: EventThread shut down

18. Check the result: after a successful format, the HA node is visible in zookeeper (note: I did not re-verify this command's output):

[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha

19. Start zkfc; this daemon serves the namenode

./hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/hdfs/hadoop-2.6.0-EDH-0u2/logs/hadoop-hd.out
$ jps
4272 DataNode
4402 JournalNode
6339 Jps
6277 DFSZKFailoverController
4952 NameNode

20. Start zkfc on the other node

ssh hdfs@c6702
/home/hdfs/hadoop-2.6.0-EDH-0u2/sbin/hadoop-daemon.sh start zkfc
$ jps
4981 Jps
4935 DFSZKFailoverController
4570 NameNode
3868 JournalNode

21. Note: when you perform the initialization, the zk cluster must already be running.
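At this point each host should show a specific daemon set in jps. The helper below is an illustration I am adding; the expected sets mirror the jps listings in steps 19-20 of this article (the remaining DataNodes are started in a later part), so adjust them as your cluster grows.

```shell
#!/usr/bin/env bash
# Sketch: compare a host's jps listing against the daemon set it should be
# running according to this article's layout.
set -euo pipefail

expected_for() {   # expected daemons per host, per steps 19-20
  case "$1" in
    c6701) echo "NameNode DataNode JournalNode DFSZKFailoverController" ;;
    c6702) echo "NameNode JournalNode DFSZKFailoverController" ;;
    c6703) echo "JournalNode" ;;
    *)     echo "" ;;
  esac
}

check_jps() {   # check_jps <host> <jps output>: report missing daemons
  local missing=""
  for d in $(expected_for "$1"); do
    printf '%s\n' "$2" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then echo "$1: OK"; else echo "$1: missing$missing"; fi
}

check_jps c6703 "$(printf '3958 Jps\n3868 JournalNode')"   # prints: c6703: OK
```

Feeding it `ssh hdfs@<host> jps` output from each node gives a quick end-of-install health check.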
