An introduction to the fully distributed installation and deployment of HBase
A fully distributed HBase installation, like the pseudo-distributed one, sits on top of Hadoop's HDFS. In other words, before you can deploy HBase in fully distributed mode, your Hadoop environment must itself be fully distributed, with one HBase instance deployed alongside each Hadoop node. The fully distributed installation of Hadoop was covered in 散仙's earlier blog posts, so it will not be repeated here. Now let's get to the point: if you already know how to set up a pseudo-distributed HBase environment, the fully distributed deployment will feel much easier; and even if you are going straight to a fully distributed HBase setup, that is fine too. As long as you are familiar with a fully distributed Hadoop environment, 散仙 believes deploying this fully distributed HBase cluster could hardly be simpler.
The environment is still hadoop 1.2.0, hbase 0.94.8 and zookeeper 3.4.5. The only difference from the pseudo-distributed setup is the two extra nodes; see the table below for details.
| IP address   | Node name |
|--------------|-----------|
| 10.2.143.5   | Master    |
| 10.2.143.36  | Slave     |
| 10.2.143.37  | Slave2    |
The fully distributed configuration (a cluster based on the built-in ZooKeeper managed by HBase) takes three steps, as shown in the table below:
| Step | Configuration file                 |
|------|------------------------------------|
| 1    | Configure the hbase-env.sh file    |
| 2    | Configure the hbase-site.xml file  |
| 3    | Configure the regionservers file   |
Below are the settings required for each step. Let's first look at what needs to be configured in step one; the relevant contents of hbase-env.sh are as follows:
    # The java implementation to use. Java 1.6 required.
    export JAVA_HOME=/root/jdk1.6.0_45
    # Extra Java CLASSPATH elements. Optional.
    export HBASE_CLASSPATH=/root/hadoop-1.2.0/conf
    # The maximum amount of heap to use, in MB. Default is 1000.
    # export HBASE_HEAPSIZE=1000
    export HBASE_MANAGES_ZK=true
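Note that HBASE_MANAGES_ZK=true tells HBase to start and stop the bundled ZooKeeper itself, which is why this setup relies on the built-in ZooKeeper rather than a separately installed quorum.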
The configuration for step two, hbase-site.xml, is as follows:
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
    /**
     * Copyright 2010 The Apache Software Foundation
     *
     * Licensed to the Apache Software Foundation (ASF) under one
     * or more contributor license agreements.  See the NOTICE file
     * distributed with this work for additional information
     * regarding copyright ownership.  The ASF licenses this file
     * to you under the Apache License, Version 2.0 (the
     * "License"); you may not use this file except in compliance
     * with the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */
    -->
    <configuration>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
      </property>
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://10.2.143.5:9090/hbase</value>
      </property>
      <property>
        <name>hbase.master</name>
        <value>10.2.143.5:60000</value>
      </property>
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>10.2.143.5,10.2.143.36,10.2.143.37</value>
      </property>
    </configuration>
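One thing worth pointing out: the hdfs://host:port portion of hbase.rootdir must match the fs.default.name configured for the Hadoop cluster (here hdfs://10.2.143.5:9090); otherwise HBase will not be able to locate its root directory on HDFS.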
Next, let's look at step three. The regionservers file simply lists, one per line, the nodes that will run a RegionServer; its contents are as follows:
    10.2.143.5
    10.2.143.36
    10.2.143.37
Next, we can distribute the configured HBase directory to the slave nodes with scp -r hbase <slave-node>:/<target-directory>.
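A minimal sketch of the copy, assuming HBase was unpacked under /root/hbase-0.94.8 and that you are running as root (both the path and the user are assumptions; adjust them to your actual installation):

    # hypothetical installation path -- replace with your actual HBase directory
    scp -r /root/hbase-0.94.8 root@10.2.143.36:/root/
    scp -r /root/hbase-0.94.8 root@10.2.143.37:/root/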
Then we can disable the firewall on each node and start the cluster. Note that the Hadoop cluster must be started first and the HBase cluster second; the order must not be reversed.
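A sketch of the start-up sequence on the Master node, assuming the Hadoop and HBase bin directories are on the PATH and the nodes use an iptables-based firewall (both assumptions):

    # disable the firewall on each node (assumes an iptables-based system)
    service iptables stop
    # start the Hadoop cluster (HDFS/MapReduce) first, on the Master
    start-all.sh
    # then start HBase; with HBASE_MANAGES_ZK=true this also starts the ZooKeeper quorum
    start-hbase.sh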
At this point the cluster has started successfully. 散仙 then opens the HBase master web UI on port 60010 (http://10.2.143.5:60010), where the cluster information can be seen.
Note that in order to reach the HBase web pages from a Windows machine, you need to disable the firewall and add the host mappings to the hosts file on the Windows side.
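For example, the entries in C:\Windows\System32\drivers\etc\hosts might look like the following (the hostnames master/slave/slave2 are assumptions; use the actual hostnames of your nodes):

    10.2.143.5   master
    10.2.143.36  slave
    10.2.143.37  slave2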
With that, the configuration is complete. Finally, when shutting the cluster down, stop the HBase cluster first and only then stop the Hadoop cluster.
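Mirroring the start-up order, a sketch of the shutdown sequence (again assuming the bin directories are on the PATH):

    # stop HBase first
    stop-hbase.sh
    # then stop Hadoop
    stop-all.sh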