Setting Up a Hadoop Client: Accessing Hadoop from a Host Outside the Cluster
1. Add the host mapping (the same mapping the namenode uses)
Append the last line shown below:
[root@localhost ~]# su - root
[root@localhost ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.48.129 hadoop-master
[root@localhost ~]#
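A quick way to confirm the new mapping resolves (an extra check, not part of the original steps):
[root@localhost ~]# ping -c 1 hadoop-master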
2. Create the hadoop user
Create the hadoop group.
Create the user: useradd -d /usr/hadoop -g hadoop -m hadoop (creates the user hadoop with home directory /usr/hadoop and primary group hadoop).
Set the password: passwd hadoop (here the password is set to hadoop).
[root@localhost ~]# groupadd hadoop
[root@localhost ~]# useradd -d /usr/hadoop -g hadoop -m hadoop
[root@localhost ~]# passwd hadoop
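To verify that the user, home directory, and group came out as intended:
[root@localhost ~]# id hadoop
[root@localhost ~]# ls -ld /usr/hadoop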
3. Set up the JDK
This installation uses hadoop-2.7.5, which requires JDK 7 or later; skip this step if a suitable JDK is already installed.
Alternatively, copying the JDK files straight from the master makes it easier to keep the versions consistent.
[root@localhost java]# su - root
[root@localhost java]# mkdir -p /usr/java
[root@localhost java]# scp -r hadoop@hadoop-master:/usr/java/jdk1.7.0_79 /usr/java
[root@localhost java]# ll
total 12
drwxr-xr-x. 8 root root 4096 Feb 13 01:34 default
drwxr-xr-x. 8 root root 4096 Feb 13 01:34 jdk1.7.0_79
drwxr-xr-x. 8 root root 4096 Feb 13 01:34 latest
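Before wiring the copied JDK into /etc/profile, it is worth confirming that it runs from its full path:
[root@localhost java]# /usr/java/jdk1.7.0_79/bin/java -version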
Set the Java and Hadoop environment variables
Make sure /usr/java/jdk1.7.0_79 exists.
su - root
vi /etc/profile
Append the following at the end of the file (the two unset lines below mark the tail of the stock /etc/profile):
unset i
unset -f pathmunge
JAVA_HOME=/usr/java/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=/usr/hadoop/hadoop-2.7.5/bin:$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH  # export so child processes (including the hadoop scripts) see them
Apply the changes (important):
[root@localhost ~]# source /etc/profile
[root@localhost ~]#
Verify the JDK after installation:
[hadoop@localhost ~]$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
[hadoop@localhost ~]$
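Note that /etc/profile is only read by login shells, so other already-open sessions also need source /etc/profile or a re-login. An optional further check that both the JDK and the Hadoop bin directory are on the PATH:
[hadoop@localhost ~]$ echo $JAVA_HOME
[hadoop@localhost ~]$ echo $PATH | tr ':' '\n' | grep hadoop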
4. Install the Hadoop files
Copy the already-configured hadoop directory from the namenode to this host:
[root@localhost ~]# su - hadoop
Last login: Sat Feb 24 14:04:55 CST 2018 on pts/1
[hadoop@localhost ~]$ pwd
/usr/hadoop
[hadoop@localhost ~]$ scp -r hadoop@hadoop-master:/usr/hadoop/hadoop-2.7.5 .
The authenticity of host 'hadoop-master (192.168.48.129)' can't be established.
ECDSA key fingerprint is 1e:cd:d1:3d:b0:5b:62:45:a3:63:df:c7:7a:0f:b8:7c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop-master,192.168.48.129' (ECDSA) to the list of known hosts.
hadoop@hadoop-master's password:
[hadoop@localhost ~]$ ll
total 0
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Desktop
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Documents
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Downloads
drwxr-xr-x 10 hadoop hadoop 150 Feb 24 14:30 hadoop-2.7.5
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Music
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Pictures
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Public
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Templates
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Videos
[hadoop@localhost ~]$
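What actually makes the copied directory work as a client is the configuration it carries: fs.defaultFS in etc/hadoop/core-site.xml has to point at the namenode (hadoop-master here). The exact value depends on how the master was configured; it can be inspected with:
[hadoop@localhost ~]$ grep -A 1 'fs.defaultFS' hadoop-2.7.5/etc/hadoop/core-site.xml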
At this point the Hadoop client installation is complete and the client is ready to use.
Running the hadoop command with no arguments produces the following:
[hadoop@localhost ~]$ hadoop
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
  CLASSNAME            run the class named CLASSNAME
 or
  where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                       note: please use "yarn jar" to launch
                             YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings

Most commands print help when invoked w/o parameters.
[hadoop@localhost ~]$
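As a first sanity check, hadoop version prints the client's version, which should match the 2.7.5 build copied from the master:
[hadoop@localhost ~]$ hadoop version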
5. Using hadoop
Create a local file:
[hadoop@localhost ~]$ hdfs dfs -ls
Found 1 items
drwxr-xr-x - hadoop supergroup 0 2018-02-22 23:41 output
[hadoop@localhost ~]$ vi my-local.txt
hello boy!
yehyeh
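The same file can also be created without an editor, for example:
[hadoop@localhost ~]$ printf 'hello boy!\nyehyeh\n' > my-local.txt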
Upload the local file to the cluster:
[hadoop@localhost ~]$ hdfs dfs -mkdir upload
[hadoop@localhost ~]$ hdfs dfs -ls upload
[hadoop@localhost ~]$ hdfs dfs -ls
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2018-02-22 23:41 output
drwxr-xr-x - hadoop supergroup 0 2018-02-23 22:38 upload
[hadoop@localhost ~]$ hdfs dfs -ls upload
[hadoop@localhost ~]$ hdfs dfs -put my-local.txt upload
[hadoop@localhost ~]$ hdfs dfs -ls upload
Found 1 items
-rw-r--r-- 3 hadoop supergroup 18 2018-02-23 22:45 upload/my-local.txt
[hadoop@localhost ~]$ hdfs dfs -cat upload/my-local.txt
hello boy!
yehyeh
[hadoop@localhost ~]$
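To round-trip the file, hdfs dfs -get downloads it back from the cluster; diff should then print nothing (my-local-copy.txt is just an illustrative name):
[hadoop@localhost ~]$ hdfs dfs -get upload/my-local.txt my-local-copy.txt
[hadoop@localhost ~]$ diff my-local.txt my-local-copy.txt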
PS: It has not been verified whether the local Java version must match the JAVA_HOME configured in etc/hadoop/hadoop-env.sh of the directory copied from the master; in this walkthrough the two are kept consistent.
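If in doubt, the JAVA_HOME that the copied distribution will actually use can be checked directly:
[hadoop@localhost ~]$ grep 'export JAVA_HOME' hadoop-2.7.5/etc/hadoop/hadoop-env.sh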