
Hadoop + ZooKeeper + Hive + HBase Installation Notes

Pseudo-distributed installation and configuration of Hadoop + ZooKeeper + Hive + HBase

1. Install the JDK and configure the Java environment variables

export JAVA_HOME=/usr/lib/java-1.6.0/jdk1.6.0_37
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_INSTALL=/usr/hadoop/hadoop-1.0.3
export PATH=$PATH:$HADOOP_INSTALL/bin

export JAVA_HOME=/user/local/jdk1.6.0_27
export JRE_HOME=/user/local/jdk1.6.0_27/jre
export ANT_HOME=/user/local/apache-ant-1.8.2
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH

(Two sets of paths appear above; use the JDK and Hadoop paths that match the local installation.)

2. Install Hadoop-1.0.3

2.1. Download the hadoop-1.0.3.tar.gz release archive from the Apache Hadoop download site, then extract it:

sudo tar -xzf hadoop-1.0.3.tar.gz

2.2. Configure the Hadoop environment variables:

export HADOOP_INSTALL=/user/local/hadoop-1.0.3
export PATH=$PATH:$HADOOP_INSTALL/bin

Make the profile file take effect:

[root@localhost etc]# chmod +x profile
[root@localhost etc]# source profile
[root@localhost etc]# hadoop version

2.3. Check the Hadoop version

[root@localhost ~]# hadoop version
Hadoop 1.0.3
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192
Compiled by hortonfo on Tue May 8 20:31:25 UTC 2012
From source with checksum e6b0c1e23dcf76907c5fecb4b832f3be

If hadoop version prints output like the above, Hadoop is installed correctly.

2.4. Modify the configuration files

a) Unpack hadoop-1.0.3/hadoop-core-1.0.3.jar.
b) From the unpacked hadoop-core-1.0.3 directory, copy the three files core-default.xml, hdfs-default.xml and mapred-default.xml into hadoop-1.0.3/conf/, delete the existing core-site.xml, hdfs-site.xml and mapred-site.xml there, and rename the copied files by changing "default" to "site" in each file name.
c) Create a folder named hadoop at the same level as the hadoop-1.0.3 folder. In core-site.xml, set the value of the property named hadoop.tmp.dir to /user/local/${}/hadoop/hadoop-${}, so that the files Hadoop generates are placed under this folder, and set the value of the property named fs.default.name to hdfs://localhost:9000/. In mapred-site.xml, set the value of the property named mapred.job.tracker to localhost:9001. (A sketch consolidating these settings appears at the end of this document.)

3. Install ssh

1. Install ssh:

sudo apt-get install ssh

2. Create a new SSH key with an empty passphrase to enable passwordless login:

a) ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
b) cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

3. Test with ssh localhost: answer yes at the first prompt, then run ssh localhost again; once it succeeds, no password is required.

4. Format the HDFS filesystem

Run:

hadoop namenode -format

Note that the option must use a plain hyphen. In the session below it was typed with an en dash (–format), so the NameNode did not recognize it and only printed its usage message:

[root@localhost ~]# hadoop namenode –format
13/07/17 14:26:41 INFO namenode.NameNode: STARTUP_MSG:
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [–format]
STARTUP_MSG:   version = 1.0.3
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:31:25 UTC 2012
Usage: java NameNode [-format] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint]
13/07/17 14:26:41 INFO namenode.NameNode: SHUTDOWN_MSG:
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1

5. Start and stop the daemons

Start the HDFS and MapReduce daemons with:

start-all.sh   (or start-dfs.sh and start-mapred.sh separately)

If this fails with "JAVA_HOME is not set", open hadoop-1.0.3/conf/hadoop-env.sh, uncomment the JAVA_HOME line and set it to the local JAVA_HOME, then run start-all.sh again.

Stop the daemons with:

stop-all.sh   (or stop-dfs.sh and stop-mapred.sh)

At this point the Hadoop installation is complete.

6. The Hadoop filesystem

6.1. List all block files in HDFS

Run:

hadoop fsck / -files -blocks

Here the result shows that there are no files in the Hadoop filesystem to display; the command fails on this machine because the DataNode was not started (see the Evernote notes for details). The full output is reproduced below.
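A quick way to confirm which daemons are actually running before rerunning fsck is the JDK's jps tool. A minimal check, assuming a standard Hadoop 1.x pseudo-distributed setup:

[root@localhost ~]# jps

In a healthy setup the list should include NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker (plus Jps itself). If DataNode or any other daemon is missing, check its log under hadoop-1.0.3/logs/ and restart the daemons with stop-all.sh followed by start-all.sh.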

[root@localhost ~]# hadoop fsck / -files -blocks
13/07/17 14:44:15 ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:java.net.ConnectException: Connection refused
Exception in thread "main" java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
	at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:211)
	at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
	at java.net.Socket.connect(Socket.java:529)
	at java.net.Socket.connect(Socket.java:478)
	at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:388)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:523)
	at sun.net.www.http.HttpClient.<init>(HttpClient.java:227)
	at sun.net.www.http.HttpClient.New(HttpClient.java:300)
	at sun.net.www.http.HttpClient.New(HttpClient.java:317)
	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:970)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:911)
	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:836)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1172)
	at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:141)
	at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:110)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:110)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
	at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:182)
[root@localhost ~]#

6.2. Copy files into the Hadoop filesystem

a) Create a directory in the Hadoop filesystem:

hadoop fs -mkdir docs

b) Copy a local file into the Hadoop filesystem:

hadoop fs -copyFromLocal docs/test.txt hdfs://localhost:9000/user/docs/test.txt

c) Copy the file from the Hadoop filesystem back to the local machine and check that the two copies are identical.

Copy back:
hadoop fs -copyToLocal docs/test.txt docs/test.txt.bat

Check:
md5sum docs/test.txt docs/test.txt.bat

If the two MD5 checksums are identical, the file contents are the same.
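The round trip in 6.2 can also be run end to end as a short script. A minimal sketch, assuming the daemons from section 5 are running; the local test file and its content are hypothetical, and relative HDFS paths resolve against the current user's HDFS home directory:

mkdir -p docs
echo "hello hdfs" > docs/test.txt                       # hypothetical local test file
hadoop fs -mkdir docs                                   # creates docs under the HDFS home directory
hadoop fs -copyFromLocal docs/test.txt docs/test.txt    # local -> HDFS
hadoop fs -copyToLocal docs/test.txt docs/test.txt.bat  # HDFS -> local copy for comparison
md5sum docs/test.txt docs/test.txt.bat                  # identical checksums mean identical contents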

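For reference, this is roughly what the relevant parts of the three conf/*-site.xml files from section 2.4 look like once edited; each snippet goes inside the file's <configuration> element. This is only a sketch: the document builds the site files by renaming the copied *-default.xml files, so the real files contain many more properties; ${user.name} is an assumption standing in for the empty ${} placeholder in section 2.4, and the dfs.replication setting is a usual single-node choice that the text does not spell out.

core-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/user/local/${user.name}/hadoop/hadoop-${user.name}</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000/</value>
</property>

hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

mapred-site.xml:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>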