LZO Compression for Hadoop and HBase

Configuring Hadoop LZO

1. Download, unpack, and build the LZO library

[wyp@master ~]$ wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.06.tar.gz
[wyp@master ~]$ tar -zxvf lzo-2.06.tar.gz
[wyp@master ~]$ cd lzo-2.06
[wyp@master lzo-2.06]$ export CFLAGS=-m64
[wyp@master lzo-2.06]$ ./configure --enable-shared --prefix=/usr/local/hadoop/lzo/
[wyp@master lzo-2.06]$ make && sudo make install

After the build finishes, /usr/local/hadoop/lzo/ contains the following layout:

[wyp@master /usr/local/hadoop/lzo]$ ls -l
total 12
drwxr-xr-x 3 root root 4096 Mar 21 17:23 include
drwxr-xr-x 2 root root 4096 Mar 21 17:23 lib
drwxr-xr-x 3 root root 4096 Mar 21 17:23 share

Package everything under /usr/local/hadoop/lzo and sync it to every machine in the cluster. (I did not do this myself.)

Building the LZO library requires a build toolchain. If the configure/make step above fails, install the prerequisites first:

[wyp@master ~]$ sudo yum -y install lzo-devel \
               zlib-devel gcc autoconf automake libtool

2. Install hadoop-lzo

We use Twitter's hadoop-lzo here, which can be built with Maven (for installing Maven, see this blog's earlier post on installing and configuring Maven from the Linux command line).

[wyp@master ~]$ wget https://github.com/twitter/hadoop-lzo/archive/master.zip

The downloaded file is named master.zip, a zip archive; unpack it:

[wyp@master ~]$ unzip master.zip

hadoop-lzo's pom.xml depends on Hadoop 2.1.0-beta. Since we are running Hadoop 2.2.0 here, change the Hadoop version in pom.xml accordingly (I built against 2.3; 2.4 and 2.5 also work):

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <hadoop.current.version>2.2.0</hadoop.current.version>
    <hadoop.old.version>1.0.4</hadoop.old.version>
</properties>

Then enter the hadoop-lzo-master directory and run the following commands in order:

[wyp@master hadoop-lzo-master]$ export CFLAGS=-m64
[wyp@master hadoop-lzo-master]$ export CXXFLAGS=-m64
[wyp@master hadoop-lzo-master]$ export C_INCLUDE_PATH=/usr/local/hadoop/lzo/include
[wyp@master hadoop-lzo-master]$ export LIBRARY_PATH=/usr/local/hadoop/lzo/lib
[wyp@master hadoop-lzo-master]$ mvn clean package -Dmaven.test.skip=true
[wyp@master hadoop-lzo-master]$ cd target/native/Linux-amd64-64
[wyp@master Linux-amd64-64]$ tar -cBf - -C lib . | tar -xBvf - -C ~
[wyp@master Linux-amd64-64]$ cd ../../..
[wyp@master hadoop-lzo-master]$ cp ~/libgplcompression* $HADOOP_HOME/lib/native/
[wyp@master hadoop-lzo-master]$ cp target/hadoop-lzo-0.4.18-SNAPSHOT.jar \
                                   $HADOOP_HOME/share/hadoop/common/

Of the files now in ~, libgplcompression.so and libgplcompression.so.0 are symlinks pointing to libgplcompression.so.0.0.0. Sync the libgplcompression* libraries and target/hadoop-lzo-0.4.18-SNAPSHOT.jar to the corresponding directories on every machine in the cluster. This sync is mandatory; otherwise jobs fail because the jar or the native library cannot be found.
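The sync step can be sketched as a small loop. The hostnames slave1/slave2 are placeholders for your own worker nodes, and the leading echo makes this a dry run that only prints the commands; delete the echo to actually copy:

```shell
# Dry-run sync of the LZO artifacts to each worker node.
# slave1/slave2 are placeholder hostnames; remove the "echo" to really copy.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}
for host in slave1 slave2; do
  echo rsync -av /usr/local/hadoop/lzo/ "$host":/usr/local/hadoop/lzo/
  echo rsync -av "$HADOOP_HOME/lib/native/" "$host":"$HADOOP_HOME/lib/native/"
  echo rsync -av "$HADOOP_HOME/share/hadoop/common/hadoop-lzo-0.4.18-SNAPSHOT.jar" \
       "$host":"$HADOOP_HOME/share/hadoop/common/"
done
```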

Configuring Hadoop

Add the following to mapred-site.xml:

<property>
    <name>mapred.compress.map.output</name>
    <value>true</value>
</property>
<property>
    <name>mapred.map.output.compression.codec</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
<property>
    <name>mapred.child.env</name>
    <value>LD_LIBRARY_PATH=/usr/local/hadoop/lzo/lib</value>
</property>
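Note that the mapred.* names above are the Hadoop 1.x spellings; Hadoop 2.x still accepts them through its deprecation layer, but the current equivalents of the two compression properties are:

```xml
<property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
</property>
<property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
```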

Add the following to core-site.xml:

<property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzopCodec</value>
</property>
<property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>

Using LZO Compression

1. Compressing MapReduce job output:

Configuration conf = new Configuration();
// Hadoop 1.x property names; in Hadoop 2.x the equivalents are
// mapreduce.output.fileoutputformat.compress(.codec).
conf.set("mapred.output.compression.codec", "com.hadoop.compression.lzo.LzopCodec");
conf.set("mapred.output.compress", "true");

2. Creating an LZO-compressed Hive table:

CREATE EXTERNAL TABLE test (
  key string,
  params string
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  COLLECTION ITEMS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS
  INPUTFORMAT 'com.hadoop.mapred.DeprecatedLzoTextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION '/user/hive/warehouse/dirk.db/test';
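One caveat the DDL above relies on: DeprecatedLzoTextInputFormat can only split a .lzo file if an index sits next to it; without one, each file is processed by a single mapper. A sketch of building the indexes with the indexer class that ships in the hadoop-lzo jar built earlier (the leading echo makes it a dry run; drop it on a real cluster):

```shell
# Dry run: print the indexing command (remove the "echo" to execute it).
# DistributedLzoIndexer walks the directory and writes a .index file
# beside every .lzo file so MapReduce can split it.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}
LZO_JAR="$HADOOP_HOME/share/hadoop/common/hadoop-lzo-0.4.18-SNAPSHOT.jar"
echo hadoop jar "$LZO_JAR" com.hadoop.compression.lzo.DistributedLzoIndexer \
     /user/hive/warehouse/dirk.db/test
```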

Configuring HBase LZO

Copy hadoop-lzo-0.4.18-SNAPSHOT.jar into $HBASE_HOME/lib; HBase must be restarted for it to take effect.
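Before creating LZO tables, you can check that HBase can actually load the codec with its built-in CompressionTest utility, which writes and re-reads a small file with the given codec. A sketch (the leading echo makes it a dry run, and the file path is arbitrary):

```shell
# Dry run: print the check command (remove the "echo" to execute it on a
# node with HBase installed). It should report success when LZO loads.
echo hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/lzo-test lzo
```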

Create an HBase table with LZO compression:

create 'test', {NAME => 'f', COMPRESSION => 'LZO'}
alter 'dirktt', {NAME => 'f', COMPRESSION => 'LZO'}

The first command creates a new table with LZO enabled on column family f; the second enables LZO on an existing table (here named 'dirktt'). If your HBase version does not support online schema changes, disable the table before the alter and enable it afterwards. describe 'test' shows whether the compression setting took effect.

That's it.

Reposted from http://my.oschina.net/u/1169079/blog/225070
