Hadoop Installation Process

Writing down the Hadoop installation process, since I end up looking it up every time. The version used is Cloudera's hadoop-0.20.2-cdh3u1.

1. Configure /etc/hosts

The master and slaves share the same hosts entries:

10.0.10.24 hadoop1
10.0.10.25 hadoop2
10.0.10.26 hadoop3
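
To confirm name resolution before going further, something like the following can be run on each node (a quick sketch using the hostnames above):

for h in hadoop1 hadoop2 hadoop3; do ping -c 1 $h; done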

2. Create the hadoop user

groupadd hadoop
useradd -g hadoop hadoop
passwd hadoop
cd /data/
mkdir hadoop
mkdir hadoopdata
chown hadoop:hadoop -R hadoop
chown hadoop:hadoop -R hadoopdata
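
A quick check that the user and the directory ownership came out as intended:

id hadoop
ls -ld /data/hadoop /data/hadoopdata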

3. SSH configuration

Generate a key pair as the hadoop user on every node:

su - hadoop
ssh-keygen -t rsa
cd $HOME/.ssh
cp id_rsa.pub authorized_keys

Exchange public keys between the master and the slaves (the leading hostname shows where each command is run):

hadoop1# ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@hadoop2
hadoop1# ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@hadoop3
hadoop2# ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@hadoop1
hadoop3# ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@hadoop1

Log in to each node once so the host keys get recorded:

ssh -o StrictHostKeyChecking=no hadoop1
ssh -o StrictHostKeyChecking=no hadoop2
ssh -o StrictHostKeyChecking=no hadoop3

4. Environment variables

vi /etc/profile
export HADOOP_HOME=/data/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
source /etc/profile
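
To confirm the variables took effect (assuming the CDH3 tarball has already been unpacked into /data/hadoop on this node):

echo $HADOOP_HOME
hadoop version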

5. Hadoop configuration

cd $HADOOP_HOME/conf

hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.6.0_22/
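
A quick sanity check that the JDK path configured above actually exists:

/usr/java/jdk1.6.0_22/bin/java -version
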
core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop1:9000</value>
    </property>
</configuration>

hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/data/hadoopdata/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/data/hadoopdata/data</value>
    </property>
    <property>
        <name>dfs.block.size</name>
        <value>33554432</value>
    </property>
</configuration>
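
Note: 33554432 bytes = 32 * 1024 * 1024, i.e. a 32 MB HDFS block size (the 0.20.x default is 64 MB).
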
mapred-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>hadoop1:9001</value>
    </property>
</configuration>

masters
hadoop1

slaves
hadoop1
hadoop2
hadoop3
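
The same conf/ directory has to end up on every node. One way to push it from hadoop1 (a sketch, assuming Hadoop is already unpacked at /data/hadoop on hadoop2 and hadoop3):

scp $HADOOP_HOME/conf/* hadoop@hadoop2:$HADOOP_HOME/conf/
scp $HADOOP_HOME/conf/* hadoop@hadoop3:$HADOOP_HOME/conf/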
 
6. Format the NameNode

cd $HADOOP_HOME/bin
./hadoop namenode -format

7. Start the cluster

./start-all.sh
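
Once the daemons are up, a quick health check (ports are the 0.20 defaults):

jps
hadoop dfsadmin -report

On hadoop1, jps should list NameNode, SecondaryNameNode, JobTracker, DataNode and TaskTracker; the NameNode web UI is at http://hadoop1:50070 and the JobTracker UI at http://hadoop1:50030.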
 
