Basic Hadoop configuration and building it (ant behind a proxy) (repost)
Most writeups online still use the old hadoop-site.xml.
The 0.20 release split it into three configuration files: core-site.xml, hdfs-site.xml, and mapred-site.xml.
The defaults are packaged inside core.jar; the source files are at:
hadoop-0.20.2\src\core\core-default.xml
hadoop-0.20.2\src\mapred\mapred-default.xml
hadoop-0.20.2\src\hdfs\hdfs-default.xml
core.jar also bundles a hadoop-metrics.properties; that one configures how the daemons report their metrics, so it can be left alone for a basic setup.
bin.tgz is just the contents of the bin directory packed up, so presumably it can be used as-is?
Two more files, slaves and masters, are also relevant.
The most basic and important properties are these six:
1.fs.default.name
2.hadoop.tmp.dir
3.mapred.job.tracker
4.dfs.name.dir
5.dfs.data.dir
6.dfs.http.address
In core-site.xml, hadoop.tmp.dir is the temporary directory; if hdfs-site.xml does not configure where the namenode and datanode keep their data, it all goes under this hadoop.tmp.dir by default:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.200.12:8888</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/Administrator/tmp</value>
  </property>
</configuration>
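A quick way to check that fs.default.name took effect once the daemons are up (a sketch; bin/start-all.sh and the fs shell are the stock 0.20 scripts, run from the hadoop-0.20.2 directory):

bin/hadoop fs -ls /   # should list HDFS at 192.168.200.12:8888, not the local disk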
In hdfs-site.xml:
dfs.http.address is the address and port of the web UI; the default port is 50070 and the IP is the namenode's.
dfs.data.dir is where a datanode stores its data; if unset, it falls back to the tmp directory from core-site.xml.
dfs.name.dir is where the namenode stores its name data; if unset, it also falls back to the tmp directory from core-site.xml.
<property>
  <name>dfs.hosts.exclude</name>
  <value>conf/excludes</value>
</property>

dfs.hosts.exclude points at a file listing datanodes to exclude (it is used for decommissioning); with the property set it threw an error at startup, presumably because the conf/excludes file didn't exist, so I took it out. The hdfs-site.xml I ended up with:

<configuration>
  <property>
    <name>dfs.http.address</name>
    <value>192.168.200.12:50071</value>
  </property>
  <property>
    <name>dfs.balance.bandwidthPerSec</name>
    <value>12582912</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/Administrator/data/</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>1073741824</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.handler.count</name>
    <value>10</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/Administrator/name/</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>64</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>true</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
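For reference, a minimal sketch of how dfs.hosts.exclude is normally used (the datanode IP here is made up; dfsadmin -refreshNodes is the standard decommission step):

echo "192.168.200.13" > conf/excludes   # one datanode hostname/IP per line
bin/hadoop dfsadmin -refreshNodes       # namenode re-reads the file and starts decommissioning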
mapred-site.xml looks like this; mapred.job.tracker is the jobtracker's IP and port:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.200.12:9999</value>
  </property>
</configuration>
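With all three files in place, a quick sanity check (a sketch for a single-node setup like the one above; jps ships with the JDK):

bin/start-all.sh
jps   # expect NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker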
Common problems:
1. While learning, just turn the firewall off completely; check with iptables -L. Network problems come down to two things: 1. SSH mutual trust between nodes; 2. iptables.
2. Before calling ./bin/hadoop namenode -format it is best to empty /tmp and the directories from the basic configuration above (otherwise dfsadmin -report may show all zeros after startup); see the command sketch after this list.
3. The logs matter a lot; look at them whenever you can. The jobtracker log is where problems keep showing up.
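The clean-format-verify cycle from item 2, spelled out (a sketch; the paths are the ones configured above, so adjust to yours):

bin/stop-all.sh
# the post suggests emptying /tmp; the hadoop-* entries are the relevant ones
rm -rf /tmp/hadoop-* /home/Administrator/tmp /home/Administrator/name /home/Administrator/data
bin/hadoop namenode -format
bin/start-all.sh
bin/hadoop dfsadmin -report   # should now show non-zero capacity for each datanode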
I noticed hadoop uses ivy, so I downloaded apache-ivy-2.2.0.
You need to put ivy-2.2.0.jar into ant's lib directory, e.g. D:\oracle\apache-ant-1.8.1\lib.
You can then run ant under apache-ivy-2.2.0\src\example\hello-ivy to see whether ivy works.
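The check spelled out (a sketch; assumes ANT_HOME points at the ant install, e.g. D:\oracle\apache-ant-1.8.1, and that the jar sits at the top of the unpacked ivy distribution):

cd apache-ivy-2.2.0
cp ivy-2.2.0.jar "$ANT_HOME/lib/"
cd src/example/hello-ivy
ant   # BUILD SUCCESSFUL means ivy can resolve and retrieve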
----------------★★---------------
Because I'm behind a proxy, ant plus ivy ran into problems.
My first round of changes didn't help.
Then I found https://issues.apache.org/jira/browse/IVY-529.
Using apache-ivy-2.2.0\src\example\hello-ivy as the example:
Add a proxy target before <ivy:retrieve/> and hang depends="proxy" on the resolve target to solve it:

<target name="proxy">
  <property name="proxy.host" value="your proxy's IP address"/>
  <property name="proxy.port" value="8080"/>
  <input message="Please enter proxy username" addproperty="proxy.user"/>
  <input message="Please enter proxy password - NOTE: CLEAR TEXT" addproperty="proxy.pass"/>
  <setproxy proxyhost="${proxy.host}" proxyport="${proxy.port}"
            proxyuser="${proxy.user}" proxypassword="${proxy.pass}"/>
</target>
<target name="resolve" depends="proxy" description="--> retrieve dependencies with ivy">
  <ivy:retrieve/>
</target>
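For comparison, the usual first attempt (and possibly the change above that didn't help) is to pass the standard JVM proxy properties through ANT_OPTS; with an authenticating proxy this often isn't enough, which seems to be what IVY-529 is about:

export ANT_OPTS="-Dhttp.proxyHost=your.proxy.ip -Dhttp.proxyPort=8080"

The <setproxy> task above is the more reliable fix because it configures the proxy, credentials included, inside the running build.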
In hadoop-0.20.2\build.xml,
find <target name="ivy-download"...
and change it to:

<target name="proxy">
  <property name="proxy.host" value="your proxy's IP address"/>
  <property name="proxy.port" value="8080"/>
  <input message="Please enter proxy username" addproperty="proxy.user"/>
  <input message="Please enter proxy password - NOTE: CLEAR TEXT" addproperty="proxy.pass"/>
  <setproxy proxyhost="${proxy.host}" proxyport="${proxy.port}"
            proxyuser="${proxy.user}" proxypassword="${proxy.pass}"/>
</target>
<target name="ivy-download" depends="proxy" description="To download ivy" unless="offline">
  <get src="${ivy_repo_url}" dest="${ivy.jar}" usetimestamp="true"/>
</target>
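With the proxy target in place, kicking off the build is just (a sketch; in 0.20 the jar target is what produces the core jar under build/):

cd hadoop-0.20.2
ant jar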
---------------★★------------------
After running ant:
all the code under hadoop-0.20.2\src\hdfs, hadoop-0.20.2\src\mapred, and hadoop-0.20.2\src\core is built into hadoop-0.20.2\build\hadoop-0.20.3-dev-core.jar; just replace the original hadoop-0.20.2-core.jar with it.
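One way to do the swap while keeping a backup (a sketch, run from the hadoop-0.20.2 directory; note the dev build names the jar 0.20.3-dev):

mv hadoop-0.20.2-core.jar hadoop-0.20.2-core.jar.bak
cp build/hadoop-0.20.3-dev-core.jar hadoop-0.20.2-core.jar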
If you create an Eclipse project: under Build Path -> Source -> Link Source, add the three directories above (add the examples directory too if you want to look at the examples); under Libraries, add all the jars in hadoop-0.20.2\lib and the top-level ones except core.jar; you also need an ant.jar.
-------------★★----------------------
Hadoop also uses this thing, The JR Concurrent Programming Language:
http://www.cs.ucdavis.edu/~olsson/research/jr/
Installation: http://www.cs.ucdavis.edu/~olsson/research/jr/versions/2.00602/install.html
Here's a hello world:
http://developer.51cto.com/art/201006/208197.htm
Giving JR a try: running it on Windows needs perl, so I tested on Red Hat 5 instead, unpacking jr.zip to /usr/local/hadoop/.
Set the environment variables:

export CLASSPATH=$CLASSPATH:/usr/local/hadoop/jr/classes/jrt.jar:/usr/local/hadoop/jr/classes/jrx.jar:.
export JR_HOME=/usr/local/hadoop/jr
export PATH=$PATH:/usr/local/hadoop/jr/bin:/usr/local/hadoop/jr/jrv
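Before the full verification suite, the hello world from the link above can be smoke-tested; a sketch assuming the tutorial source is saved as HelloWorld.jr (jrc and jrrun are the compile/run wrappers visible in the jrv output below):

jrc HelloWorld.jr    # translate JR to Java and compile it
jrrun HelloWorld     # run the result on the JR runtime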
cd /usr/local/hadoop/jr/vsuite
[root@122226 vsuite]# jrv quick
Starting JRV
JR_HOME= /usr/local/hadoop/jr
JRC= perl "/usr/local/hadoop/jr/bin/jrc"
JRRUN= perl "/usr/local/hadoop/jr/bin/jrrun"
JAVAC= "/usr/java/jdk1.6.0_23/bin/javac"
JAVA= "/usr/java/jdk1.6.0_23/bin/java"
ccr2jr= perl "/usr/local/hadoop/jr/bin/ccr2jr"
csp2jr= perl "/usr/local/hadoop/jr/bin/csp2jr"
m2jr= perl "/usr/local/hadoop/jr/bin/m2jr"
WHICH= /usr/bin/which
CMP= perl "/usr/local/hadoop/jr/bin/cmp.pl"
GREP= perl "/usr/local/hadoop/jr/bin/grep.pl"
SORT= perl "/usr/local/hadoop/jr/bin/sort.pl"
TAIL= perl "/usr/local/hadoop/jr/bin/tail.pl"
jr compiler version "2.00602 (Mon Jun 1 10:59:20 PDT 2009)"
jr rts version "2.00602 (Mon Jun 1 10:59:25 PDT 2009)"
HOST= 122226
Start Directory= /usr/local/hadoop/jr/vsuite
JR.JRT = /usr/local/hadoop/jr/classes/jrt.jar
-rw-r--r-- 1 root root 2090324 Jun 2 2009 /usr/local/hadoop/jr/classes/jrt.jar
JR.JRX = /usr/local/hadoop/jr/classes/jrx.jar
-rw-r--r-- 1 root root 227198 Jun 2 2009 /usr/local/hadoop/jr/classes/jrx.jar
Operating System=
original CLASSPATH= /usr/java/jdk1.6.0_23/lib:/usr/local/hadoop/jr/classes/jrt.jar:/usr/local/hadoop/jr/classes/jrx.jar:.
jrv sets CLASSPATH= .:/usr/local/hadoop/jr/classes/jrt.jar:/usr/local/hadoop/jr/classes/jrx.jar
DATE= Thu Oct 14 20:14:21 2010
quick/baby: expected 0, got 1 from jrrun < null
quick/fact_2: expected 0, got 1 from jrrun < null
quick/misc_invocation_count_st_by_0: expected 0, got 1 from jrrun < null
DATE= Thu Oct 14 20:14:34 2010
Elapsed time (hh:mm:ss)= 00:00:13
[root@122226 vsuite]# pwd
/usr/local/hadoop/jr/vsuite