With Zookeeper and HDFS installed in the previous posts, this article walks through the HBase installation step by step, followed by a summary of the problems hit along the way.
Series articles:
Hadoop Cluster (1): Zookeeper Setup
Hadoop Cluster (2): HDFS Setup
Hadoop Cluster (4): Hadoop Upgrade
Now, on to the HBase installation.
HBase server layout
192.168.67.101  c6701  -- Master + RegionServer
192.168.67.102  c6702  -- Master (standby) + RegionServer
192.168.67.103  c6703  -- RegionServer
--- Installing HBase on c6701
1. Create the hbase user and the related directories
su - root
useradd hbase
echo "hbase:hbase" | chpasswd
mkdir -p /data/zookeeper
mkdir -p /data/hbase/tmp
mkdir -p /data/hbase/logs
chown -R hbase:hbase /data/hbase
chmod -R a+rx /home/hdfs     # give others read/execute on the hadoop directory -- important, HBase must be able to read it
2. Unpack the software
su - hbase
cd /tmp/software
tar -zxvf hbase-1.1.3.tar.gz -C /home/hbase/
3. Configure hbase-site.xml
[hbase@c6701 conf]$ more hbase-site.xml
<configuration>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/data/hbase/tmp</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <!-- Watch this value: "ns" is the HDFS nameservice name. HBase can point at
         any number of HDFS clusters, and naming the nameservice here pins it to
         this one. In HDFS you will only ever see a /hbase directory; "ns" itself
         never appears as a path. -->
    <value>hdfs://ns/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>c6701,c6702,c6703</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/data/zookeeper</value>
  </property>
</configuration>
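Note that the authority part of hbase.rootdir ("ns") is not a hostname; it has to match the HA nameservice declared when HDFS was set up in part two of this series. A minimal, hypothetical fragment of that cluster's hdfs-site.xml (the property value is assumed to match, not taken from this post):

```xml
<!-- Hypothetical hdfs-site.xml fragment: "ns" must already be declared
     as the HDFS nameservice, or hdfs://ns/hbase will not resolve. -->
<property>
  <name>dfs.nameservices</name>
  <value>ns</value>
</property>
```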
4. Configure hbase-env.sh
[hbase@c6701 conf]$ cat hbase-env.sh | grep -v "^#"
export JAVA_HOME=/usr/local/jdk1.8.0_144
export HBASE_CLASSPATH=$HADOOP_HOME/etc/hadoop/
export HBASE_HEAPSIZE=500M
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_REGIONSERVER_OPTS="-Xmx1g -Xms400m -Xmn128m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx1g -Xms400m -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_LOG_DIR=/data/hbase/logs
export HBASE_PID_DIR=/data/hbase/hadoopPidDir
export HBASE_MANAGES_ZK=true
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HBASE_HOME/lib/:/usr/lib64/
export HBASE_LIBRARY_PATH=$HBASE_LIBRARY_PATH:$HBASE_HOME/lib/:/usr/lib64/

Two caveats about this file: since this cluster runs the external ZooKeeper ensemble built in part one, HBASE_MANAGES_ZK would normally be set to false (true tells start-hbase.sh to manage its own ZooKeeper; it is harmless here only because the daemons are started individually with hbase-daemon.sh). Also, PermSize/MaxPermSize are ignored on JDK 8, as the startup warnings later show.
5. Watch the memory settings. Since this is a test environment, values that were too large left the machine short of memory and HBase failed to start:
export HBASE_REGIONSERVER_OPTS="-Xmx1g -Xms400m -Xmn128m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx1g -Xms400m -XX:PermSize=128m -XX:MaxPermSize=128m"
The out-of-memory error:
[hbase@c6701 bin]$ ./hbase-daemon.sh start master
starting master, logging to /data/hbase/logs/hbase-hbase-master-c6701.python279.org.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006c5330000, 2060255232, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 2060255232 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/hbase/hbase-1.1.3/bin/hs_err_pid7507.log
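A quick sanity check on the log above: the JVM was asked to commit 2060255232 bytes, far more than this small test VM had free:

```shell
# Convert the failed mmap request from the error above into megabytes.
echo $((2060255232 / 1024 / 1024))   # → 1964 MB, i.e. roughly 2 GB
```

Lowering -Xmx/-Xms as in step 5 keeps the commit within what the VM actually has.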
6. Add the Hadoop variables to /etc/profile; HBase needs HADOOP_HOME later at runtime.
export JAVA_HOME=/usr/local/jdk1.8.0_144
export JRE_HOME=/usr/local/jdk1.8.0_144/jre
export PATH=$JAVA_HOME/bin:$PATH:/home/hbase/hbase-1.1.3/bin
export HADOOP_HOME=/home/hdfs/hadoop-2.6.0-EDH-0u2
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
--- Installing HBase on c6702
7. Create the hbase user
ssh c6702 "useradd hbase; echo 'hbase:hbase' | chpasswd"
8. Set up passwordless ssh for the hbase user
ssh-copy-id hbase@c6702
9. Copy the software, create the directories, and unpack
scp -r /tmp/software/hbase-1.1.3.tar.gz root@c6702:/tmp/software/.
ssh c6702 "chmod 777 /tmp/software/*; mkdir -p /data/zookeeper; mkdir -p /data/hbase/tmp; mkdir -p /data/hbase/logs; chown -R hbase:hbase /data/hbase"
ssh c6702 "chmod -R a+rx /home/hdfs"
ssh hbase@c6702 "tar -zxvf /tmp/software/hbase-1.1.3.tar.gz -C /home/hbase"
10. Copy the configuration files
scp -r /etc/profile root@c6702:/etc/profile
scp -r /home/hbase/hbase-1.1.3/conf/hbase-site.xml hbase@c6702:/home/hbase/hbase-1.1.3/conf/.
scp -r /home/hbase/hbase-1.1.3/conf/hbase-env.sh hbase@c6702:/home/hbase/hbase-1.1.3/conf/.
--- Installing HBase on c6703
11. Create the hbase user
ssh c6703 "useradd hbase; echo 'hbase:hbase' | chpasswd"
12. Set up passwordless ssh for the hbase user
ssh-copy-id hbase@c6703
13. Copy the software, create the directories, and unpack
scp -r /tmp/software/hbase-1.1.3.tar.gz root@c6703:/tmp/software/.
ssh c6703 "chmod 777 /tmp/software/*; mkdir -p /data/zookeeper; mkdir -p /data/hbase/tmp; mkdir -p /data/hbase/logs; chown -R hbase:hbase /data/hbase"
ssh c6703 "chmod -R a+rx /home/hdfs"
ssh hbase@c6703 "tar -zxvf /tmp/software/hbase-1.1.3.tar.gz -C /home/hbase"
14. Copy the configuration files
scp -r /etc/profile root@c6703:/etc/profile
scp -r /home/hbase/hbase-1.1.3/conf/hbase-site.xml hbase@c6703:/home/hbase/hbase-1.1.3/conf/.
scp -r /home/hbase/hbase-1.1.3/conf/hbase-env.sh hbase@c6703:/home/hbase/hbase-1.1.3/conf/.
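Since the per-node steps for c6702 and c6703 are identical, they can also be scripted instead of repeated by hand. A minimal sketch (a dry run that only prints the commands; remove the echo, or pipe each line to sh, to execute them; paths as above):

```shell
# Dry run: print the per-node setup commands for each remaining node.
for host in c6702 c6703; do
  echo "scp /tmp/software/hbase-1.1.3.tar.gz root@${host}:/tmp/software/"
  echo "ssh ${host} 'mkdir -p /data/hbase/tmp /data/hbase/logs; chown -R hbase:hbase /data/hbase'"
  echo "ssh hbase@${host} 'tar -zxvf /tmp/software/hbase-1.1.3.tar.gz -C /home/hbase'"
done
```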
15. Start the HBase masters (on c6701 and c6702)
hbase-daemon.sh start master
ssh -t -q c6702 sudo su -l hbase -c "/home/hbase/hbase-1.1.3/bin/hbase-daemon.sh\ start\ master"
16. Start the HBase regionservers (on c6701, c6702, and c6703)
hbase-daemon.sh start regionserver
ssh -t -q c6702 sudo su -l hbase -c "/home/hbase/hbase-1.1.3/bin/hbase-daemon.sh\ start\ regionserver"
ssh -t -q c6703 sudo su -l hbase -c "/home/hbase/hbase-1.1.3/bin/hbase-daemon.sh\ start\ regionserver"
Problems encountered during the HBase installation
1. The first mistake was in hbase-site.xml: ns is the HDFS nameservice name. HBase can point at any number of HDFS clusters, and naming the nameservice here is what pins it to this one. In HDFS you will only ever see a /hbase directory; ns itself never shows up as a path.
<name>hbase.rootdir</name>
<value>hdfs://ns/hbase</value>   <!-- watch this value -->
2. Accessing HDFS through the hbase user initially hit problems; HDFS operations did work, but with a warning.
[root@c6701 home]# su - hbase
$ hdfs dfs -mkdir /hbase
$ hdfs dfs -ls /
Found 1 items
drwxrwx---   - hdfs hadoop          0 2017-10-25 10:18 /hbase

The ownership is wrong; hbase must own the directory before it can use the path normally:

$ hadoop fs -chown hbase:hbase /hbase
$ hdfs dfs -ls /
Found 1 items
drwxrwx---   - hbase hbase          0 2017-10-25 10:18 /hbase

Ownership fixed. The warning, however, remained:

[hbase@c6701 ~]$ hdfs dfs -ls /hbase/test
17/09/27 07:45:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Commenting out the CLASSPATH= line in /etc/profile finally cleared this warning:
export JAVA_HOME=/usr/local/jdk1.8.0_144
export JRE_HOME=/usr/local/jdk1.8.0_144/jre
#export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH:/home/hbase/hbase-1.1.3/bin
3. Starting HBase also hit a permission problem: it could not find hadoop. First check, as the hbase user, whether hadoop is reachable:
su - hbase
/home/hdfs/hadoop-2.6.0-EDH-0u2/bin/hadoop version
If the permissions are wrong, grant the hbase user read/execute on the hadoop files (e.g. the chmod -R a+rx /home/hdfs from step 1).
4. Startup also threw errors because some jars shipped in /home/hbase/hbase-1.1.3/lib/ differ in version from Hadoop's and conflict with them. Deleting them fixes it; HBase will pick the jars up from Hadoop instead.
[hbase@c6702 ~]$ hbase-daemon.sh start regionserver
starting regionserver, logging to /data/hbase/logs/hbase-hbase-regionserver-c6702.python279.org.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hbase/hbase-1.1.3/lib/kylin-jdbc-1.5.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hbase/hbase-1.1.3/lib/kylin-job-1.5.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hbase/hbase-1.1.3/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hdfs/hadoop-2.6.0-EDH-0u2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
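The offending jars can be listed by name before removing them. A hypothetical sketch, demonstrated on a throwaway directory that mimics the lib listing from the log above (on the real node, libdir would be /home/hbase/hbase-1.1.3/lib, and moving the jars aside is safer than deleting them outright):

```shell
# Throwaway dir that mimics a few entries of the HBase lib directory.
libdir=$(mktemp -d)
touch "$libdir"/kylin-jdbc-1.5.2.jar "$libdir"/slf4j-log4j12-1.7.5.jar "$libdir"/hbase-common-1.1.3.jar
# List jars that carry their own SLF4J binding -- the candidates for removal.
ls "$libdir" | grep -E 'slf4j-log4j12|kylin'
```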
That completes the Zookeeper + HDFS + HBase installation. The first two parts went fairly smoothly, but the HBase install ran into several problems at the seam with Hadoop that took some time to analyze and resolve.
A later post will cover testing the Hadoop upgrade procedure.
Article title: Hadoop Cluster (3): HBase Setup