(3) Big Data Environment Preparation: Installing Hive (Depends on Hadoop)

1. Unpack the tarball:

[root@hadoop0 opt]# tar -zxvf hive-0.9.0.tar.gz

2. Rename the directory:

[root@hadoop0 opt]# mv hive-0.9.0 hive

3. Configure the environment variables: edit the global profile /etc/profile so that /opt/hive/bin ends up on the PATH:

JAVA_HOME=/opt/jdk1.6.0_24

HADOOP_HOME=/opt/hadoop

HBASE_HOME=/opt/hbase

HIVE_HOME=/opt/hive

PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$PATH

export JAVA_HOME HADOOP_HOME HBASE_HOME HIVE_HOME PATH

Log in again so that the updated profile is picked up:

[root@hadoop0 bin]# su -
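A quick check in the new login shell confirms the variables took effect (output assumes the layout above):

[root@hadoop0 ~]# echo $HIVE_HOME
/opt/hive
[root@hadoop0 ~]# which hive
/opt/hive/bin/hive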

4. Test-run Hive to see whether the installation succeeded:

[root@hadoop0 ~]# hive

WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.

Logging initialized using configuration in jar:file:/opt/hive/lib/hive-common-0.9.0.jar!/hive-log4j.properties

Hive history file=/tmp/root/hive_job_log_root_201509250619_148272494.txt

hive> show tables;

FAILED: Error in metadata: MetaException(message:Got exception: java.net.ConnectException Call to hadoop0/192.168.46.129:9000 failed on connection exception: java.net.ConnectException: Connection refused)

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

-- Solution: Hive stores its data on HDFS, so make sure Hadoop is started first:

[root@hadoop0 ~]# start-all.sh

Warning: $HADOOP_HOME is deprecated.

starting namenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-namenode-hadoop0.out

localhost: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-hadoop0.out

localhost: starting secondarynamenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-secondarynamenode-hadoop0.out

starting jobtracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-jobtracker-hadoop0.out

localhost: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-hadoop0.out
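To confirm that all five Hadoop 1.x daemons actually came up, run jps (shipped with the JDK); the process IDs below are only illustrative:

[root@hadoop0 ~]# jps
2257 NameNode
2385 DataNode
2516 SecondaryNameNode
2597 JobTracker
2724 TaskTracker
2893 Jps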

-- With that, a minimal Hive environment is in place.

5. Create a table:

hive> show tables;

OK

Time taken: 5.619 seconds

hive> create table stu (name String, age int);

FAILED: Error in metadata: MetaException(message:Got exception: org.apache.hadoop.ipc.RemoteException

org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/hive/warehouse/stu.

Name node is in safe mode.

The reported blocks 18 has reached the threshold 0.9990 of total blocks 17. Safe mode will be turned off automatically in 15 seconds.

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2204)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2178)

at org.apache.hadoop.hdfs.server.namenode.NameNode.mkdirs(NameNode.java:857)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:396)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

)

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

-- Solution: as the error itself states, the NameNode is still in safe mode shortly after startup and will leave it automatically within seconds, after which the statement succeeds on retry. (Creating a local /user/hive/warehouse directory does not help: the warehouse lives on HDFS.) To leave safe mode immediately instead of waiting:

[root@hadoop0 ~]# hadoop dfsadmin -safemode leave
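The current state can be checked with dfsadmin as well:

[root@hadoop0 ~]# hadoop dfsadmin -safemode get
Safe mode is OFF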

hive> create table stu (name String, age int);

OK

Time taken: 0.229 seconds

6. Try inserting data. Hive 0.9 does not support the INSERT ... VALUES statement (it was only added in much later Hive releases):

hive> insert into stu values ('MengMeng', 24);

FAILED: Parse Error: line 1:12 mismatched input 'stu' expecting TABLE near 'into' in insert clause

hive> show tables;

OK

stu

Time taken: 0.078 seconds

hive> desc stu;

OK

name    string

age     int

Time taken: 0.255 seconds

-- Solution: Hive does not support that statement; use LOAD DATA to load the rows from a file instead (a sample input file is sketched below).
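For reference, an assumed /opt/stu.txt that would produce the rows queried in step 7, with a tab separating the two fields on each line:

[root@hadoop0 ~]# cat /opt/stu.txt
JieJie	26
MM	24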

hive> LOAD DATA LOCAL INPATH '/opt/stu.txt' OVERWRITE INTO TABLE stu;

Copying data from file:/opt/stu.txt

Copying file: file:/opt/stu.txt

Loading data to table default.stu

Deleted hdfs://hadoop0:9000/user/hive/warehouse/stu

OK

Time taken: 0.643 seconds

7. Query the data that was just loaded:

hive> select name, age from stu;

Total MapReduce jobs = 1

Launching Job 1 out of 1

Number of reduce tasks is set to 0 since there's no reduce operator

Starting Job = job_201509250620_0001, Tracking URL = http://hadoop0:50030/jobdetails.jsp?jobid=job_201509250620_0001

Kill Command = /opt/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=hadoop0:9001 -kill job_201509250620_0001

Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0

2015-09-25 06:37:55,535 Stage-1 map = 0%, reduce = 0%

2015-09-25 06:37:58,565 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.59 sec

2015-09-25 06:37:59,595 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.59 sec

2015-09-25 06:38:00,647 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 0.59 sec

MapReduce Total cumulative CPU time: 590 msec

Ended Job = job_201509250620_0001

MapReduce Jobs Launched:

Job 0: Map: 1   Cumulative CPU: 0.59 sec   HDFS Read: 221 HDFS Write: 22 SUCCESS

Total MapReduce CPU Time Spent: 590 msec

OK

-- The query results are displayed:

JieJie 26	NULL

MM 24	NULL

Time taken: 12.812 seconds

Question: why does each row end with a NULL? A likely explanation and fix are sketched below.
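A likely cause, assuming the tab-delimited stu.txt sketched in step 6: the table was created without a ROW FORMAT clause, so Hive falls back to its default field delimiter, the Ctrl-A character ('\001'). Since no '\001' appears in the file, the whole line "JieJie<tab>26" is parsed into the name column and age comes out NULL. Declaring the delimiter at table-creation time and reloading should fix it:

hive> drop table stu;
hive> create table stu (name String, age int)
    > ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
hive> LOAD DATA LOCAL INPATH '/opt/stu.txt' OVERWRITE INTO TABLE stu;
hive> select name, age from stu;
-- expected output, if the delimiter assumption holds:
-- JieJie	26
-- MM	24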
