HBase 0.96 Distributed Installation

This is a fully distributed HBase installation on three machines:

192.168.80.101 hadoop1 (HBase master node)

192.168.80.102 hadoop2 (HBase RegionServer node)

192.168.80.103 hadoop3 (HBase RegionServer node)
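For the hostnames above to resolve on every node without a DNS server, each machine's /etc/hosts would need entries like the following (a sketch based on the IP/hostname pairs in this guide):

```
192.168.80.101 hadoop1
192.168.80.102 hadoop2
192.168.80.103 hadoop3
```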

Extract hbase-0.94.2-security.tar.gz and rename the directory:

# cd /usr/local

# tar -zxvf hbase-0.94.2-security.tar.gz

# mv hbase-0.94.2-security hbase

Edit /etc/profile:

# vi /etc/profile

Add: export HBASE_HOME=/usr/local/hbase

Change the PATH line to: export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$PATH

Save and exit, then reload the profile:

# source /etc/profile
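The two /etc/profile lines can be tried in the current shell first and checked before editing the file (paths taken from this guide):

```shell
# Apply the two environment changes in the current shell.
export HBASE_HOME=/usr/local/hbase
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$PATH

# Verify that HBase's bin directory is now on the PATH.
echo "$PATH" | grep -q "$HBASE_HOME/bin" && echo "HBASE_HOME is on PATH"
```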

Edit $HBASE_HOME/conf/hbase-env.sh:

export JAVA_HOME=/usr/local/jdk

export HBASE_MANAGES_ZK=false

(Setting HBASE_MANAGES_ZK to false tells HBase to use the externally managed ZooKeeper cluster instead of starting its own.) Save and exit.

Edit $HBASE_HOME/conf/hbase-site.xml:

<configuration>

<property>

<name>hbase.rootdir</name>

<value>hdfs://hadoop1:9000/hbase</value>

</property>

<property>

<name>hbase.cluster.distributed</name>

<value>true</value>

</property>

<property>

<name>hbase.zookeeper.quorum</name>

<value>hadoop1,hadoop2,hadoop3</value>

</property>

<property>

<name>dfs.replication</name>

<value>1</value>

</property>

</configuration>
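Note that hbase.rootdir must use the same NameNode URI (hdfs://hadoop1:9000) as Hadoop's own configuration; a mismatch is a common cause of HBase failing to start. The sketch below recreates the file above under /tmp (path chosen for illustration) and reads back the ZooKeeper quorum with sed as a quick sanity check before distributing the file:

```shell
# Recreate the hbase-site.xml shown above (illustrative path).
cat > /tmp/hbase-site.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop1:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop1,hadoop2,hadoop3</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

# Read back the quorum: find the <name> line, advance to its <value> line,
# and strip the tags.
sed -n '/hbase.zookeeper.quorum/{n;s:.*<value>\(.*\)</value>.*:\1:p;}' /tmp/hbase-site.xml
```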

Edit the $HBASE_HOME/conf/regionservers file (one hostname per line, lowercase to match the actual hostnames):

hadoop2

hadoop3
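start-hbase.sh reads conf/regionservers and starts one RegionServer on each listed host over ssh, so passwordless ssh from the master to every listed host is required. A dry-run sketch of that loop (echo stands in for the actual ssh call; /tmp path is illustrative):

```shell
# Recreate the regionservers file from this guide.
printf 'hadoop2\nhadoop3\n' > /tmp/regionservers

# One RegionServer per listed host; echo instead of ssh for a safe dry run.
while read -r host; do
  echo "would ssh $host to start a regionserver"
done < /tmp/regionservers
```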

After editing, copy the hbase directory to the other two nodes, hadoop2 and hadoop3, with the following commands:

scp -r hbase root@hadoop2:/usr/local/

scp -r hbase root@hadoop3:/usr/local/
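The two scp commands above can be generalized to a loop over the slave nodes (hostnames from this guide; passwordless ssh as root assumed). The echo makes this a dry run that only prints the commands; drop it to actually copy:

```shell
# Dry-run distribution of the hbase directory to each slave node.
for host in hadoop2 hadoop3; do
  echo scp -r /usr/local/hbase root@$host:/usr/local/
done
```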

With the configuration above in place, the HBase cluster can be started with the bin/start-hbase.sh command. Note: the Hadoop and ZooKeeper clusters must be started before HBase.

Once started successfully, the HBase web UI is available at http://hadoop1:60010.

Error symptom:

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j

This is a jar conflict between the following two SLF4J bindings:

file:/usr/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class

file:/usr/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class

Removing either one of the two jars resolves the conflict.
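The fix can be sketched as follows; it is simulated here on dummy files under /tmp so it can be dry-run safely, while on a real node the directories would be /usr/hbase/lib and /usr/hadoop/share/hadoop/common/lib from the log above. This version keeps Hadoop's newer 1.7.5 binding and removes HBase's older 1.6.4 copy:

```shell
# Simulate the two lib directories and the conflicting jars.
mkdir -p /tmp/demo/hbase-lib /tmp/demo/hadoop-lib
touch /tmp/demo/hbase-lib/slf4j-log4j12-1.6.4.jar
touch /tmp/demo/hadoop-lib/slf4j-log4j12-1.7.5.jar

# Remove the older binding shipped with HBase, keeping Hadoop's.
rm /tmp/demo/hbase-lib/slf4j-log4j12-1.6.4.jar

ls /tmp/demo/hbase-lib /tmp/demo/hadoop-lib
```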

After the HBase installation completes, creating a table fails with the following error:

hbase(main):004:0> create 't1', {NAME => 'f1', VERSIONS => 5}

ERROR: Can't get master address from ZooKeeper; znode data == null

The log shows:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcServerException):

Unknown out of band call #-2147483647

Solution:

The lib directory of HBase 0.96.0 ships Hadoop jars from the hadoop-2.1.0-beta release, which are incompatible with the running Hadoop cluster, so make the following changes:

<1> On every node, delete all jars whose names start with hadoop from hbase-0.96.0's lib directory:

rm -rf hadoop-*.jar

<2> Copy the hadoop-2.2.0 jars into HBase's lib directory; the specific jars are:

hadoop-annotations-2.2.0.jar

hadoop-auth-2.2.0.jar

hadoop-common-2.2.0.jar

hadoop-hdfs-2.2.0.jar

hadoop-hdfs-2.2.0-tests.jar

hadoop-mapreduce-client-app-2.2.0.jar

hadoop-mapreduce-client-common-2.2.0.jar

hadoop-mapreduce-client-core-2.2.0.jar

hadoop-mapreduce-client-jobclient-2.2.0.jar

hadoop-mapreduce-client-jobclient-2.2.0-tests.jar

hadoop-mapreduce-client-shuffle-2.2.0.jar

hadoop-yarn-api-2.2.0.jar

hadoop-yarn-client-2.2.0.jar

hadoop-yarn-common-2.2.0.jar

hadoop-yarn-server-common-2.2.0.jar

hadoop-yarn-server-nodemanager-2.2.0.jar
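The per-jar copy in step <2> can be scripted with find, since the jars live in several subdirectories of $HADOOP_HOME/share/hadoop in the Hadoop 2.2.0 binary distribution. The sketch below runs against a dummy tree under /tmp so it can be dry-run; on a real node, replace the dummy paths with $HADOOP_HOME/share/hadoop and $HBASE_HOME/lib:

```shell
# Simulate a fragment of the Hadoop install tree and an empty HBase lib dir.
mkdir -p /tmp/hadoop/share/hadoop/common /tmp/hbase/lib
touch /tmp/hadoop/share/hadoop/common/hadoop-common-2.2.0.jar

# Copy every hadoop-*-2.2.0 jar found anywhere under share/hadoop into lib.
find /tmp/hadoop/share/hadoop -name 'hadoop-*2.2.0*.jar' \
  -exec cp {} /tmp/hbase/lib/ \;

ls /tmp/hbase/lib
```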