Accessing HDFS through Hadoop's C API
While accessing HDFS through Hadoop's C API (libhdfs), I ran into quite a few problems compiling and running the code, so here is a summary.
Environment: Ubuntu 11.04, Hadoop-0.20.203.0
The sample code is the one provided in the official documentation:
```c
#include "hdfs.h"

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {
    /* Connect to the default filesystem configured for this Hadoop installation. */
    hdfsFS fs = hdfsConnect("default", 0);
    const char* writePath = "/tmp/testfile.txt";

    /* Open the file for writing, creating it if it does not exist. */
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
    if (!writeFile) {
        fprintf(stderr, "Failed to open %s for writing!\n", writePath);
        exit(-1);
    }

    /* Write the string (including its terminating '\0') and flush it to HDFS. */
    char* buffer = "Hello, World!";
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer)+1);
    if (hdfsFlush(fs, writeFile)) {
        fprintf(stderr, "Failed to 'flush' %s\n", writePath);
        exit(-1);
    }
    hdfsCloseFile(fs, writeFile);
}
```
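To confirm that the write really reached HDFS, the file can be read back with the corresponding read calls. This is only a rough sketch (it is not part of the official sample): it opens the same path with O_RDONLY, reads the contents with hdfsRead, and prints them.

```c
#include "hdfs.h"

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    hdfsFS fs = hdfsConnect("default", 0);
    const char* readPath = "/tmp/testfile.txt";   /* the path written by the sample above */

    /* Open the file for reading. */
    hdfsFile readFile = hdfsOpenFile(fs, readPath, O_RDONLY, 0, 0, 0);
    if (!readFile) {
        fprintf(stderr, "Failed to open %s for reading!\n", readPath);
        exit(-1);
    }

    /* Read up to 255 bytes from the start of the file and print them. */
    char buffer[256];
    tSize num_read_bytes = hdfsRead(fs, readFile, (void*)buffer, sizeof(buffer) - 1);
    if (num_read_bytes < 0) {
        fprintf(stderr, "Failed to read %s!\n", readPath);
        exit(-1);
    }
    buffer[num_read_bytes] = '\0';
    printf("Read %d bytes: %s\n", (int)num_read_bytes, buffer);

    hdfsCloseFile(fs, readFile);
    hdfsDisconnect(fs);
    return 0;
}
```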
Compiling: the official documentation describes it like this:

See the Makefile for hdfs_test.c in the libhdfs source directory (${HADOOP_HOME}/src/c++/libhdfs/Makefile) or something like:

```sh
gcc above_sample.c -I${HADOOP_HOME}/src/c++/libhdfs -L${HADOOP_HOME}/libhdfs -lhdfs -o above_sample
```
I tried both of those, and neither worked. It turned out the link step was missing the path to the prebuilt libhdfs and the libjvm.so library:

```makefile
LIB = -L$(HADOOP_INSTALL)/c++/Linux-i386-32/lib/
libjvm=/usr/lib/jvm/java-6-openjdk/jre/lib/i386/client/libjvm.so
```
So the complete Makefile is:
```makefile
HADOOP_INSTALL=/home/fzuir/hadoop-0.20.203.0
PLATFORM=Linux-i386-32
JAVA_HOME=/usr/lib/jvm/java-6-openjdk/
CPPFLAGS= -I$(HADOOP_INSTALL)/src/c++/libhdfs
LIB = -L$(HADOOP_INSTALL)/c++/$(PLATFORM)/lib/
libjvm=/usr/lib/jvm/java-6-openjdk/jre/lib/i386/client/libjvm.so
LDFLAGS += -lhdfs

testHdfs: testHdfs.c
	gcc testHdfs.c $(CPPFLAGS) $(LIB) $(LDFLAGS) $(libjvm) -o testHdfs

clean:
	rm testHdfs
```
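Note that these paths are specific to this machine (32-bit Ubuntu with OpenJDK 6). On a 64-bit system the prebuilt libhdfs and libjvm.so normally sit in differently named directories; the values below are only a guess and should be checked against what actually exists under $(HADOOP_INSTALL)/c++/ and the JRE's lib directory.

```makefile
# Hypothetical 64-bit equivalents -- verify the directory names on your own system.
PLATFORM=Linux-amd64-64
libjvm=/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server/libjvm.so
```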
With that it compiles, but running the program then produced errors like the following:

```
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
...
Call to org.apache.hadoop.fs.Filesystem::get(URI, Configuration) failed!
Exception in thread "main" java.lang.NullPointerException
Call to get configuration object from filesystem failed!
```
Solution: copy all the jar files under HADOOP_HOME and HADOOP_HOME/lib into /usr/lib/jvm/java-6-openjdk/jre/lib/ext/. (Strictly speaking only the jars that are actually needed have to be added, but I don't know which ones those are.)
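Copying everything into jre/lib/ext/ works, but it modifies the JRE installation. An alternative, which I have not verified on this exact setup, is to leave the jars in place and export a CLASSPATH containing them before running the program, since libhdfs passes the CLASSPATH environment variable to the JVM it launches. A rough sketch:

```sh
# Sketch only: build CLASSPATH from the Hadoop conf directory and jars
# instead of copying them into the JRE's lib/ext directory.
HADOOP_INSTALL=/home/fzuir/hadoop-0.20.203.0
CLASSPATH=$HADOOP_INSTALL/conf
for jar in $HADOOP_INSTALL/*.jar $HADOOP_INSTALL/lib/*.jar; do
    CLASSPATH=$CLASSPATH:$jar
done
export CLASSPATH
# libhdfs.so also has to be found at run time if it is not in a standard location:
export LD_LIBRARY_PATH=$HADOOP_INSTALL/c++/Linux-i386-32/lib:$LD_LIBRARY_PATH
./testHdfs
```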
And with that, congratulations, the problem is solved.