Problems with creating HTable from multiple threads in HBase

I have recently been writing the HBase plugin for wormhole, which requires implementing an hbase reader and an hbase writer separately.

During testing, the following error was reported:

2013-07-08 09:30:02,568 [pool-2-thread-1] org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1631) WARN  client.HConnectionManager$HConnectionImplementation - Failed all from region=t1,,1373246892580.877bb26da1e4aed541915870fa924224., hostname=test89.hadoop, port=60020
java.util.concurrent.ExecutionException: java.io.IOException: Call to test89.hadoop/10.1.77.89:60020 failed on local exception: java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/10.1.77.84:51032 remote=test89.hadoop/10.1.77.89:60020]. 59999 millis timeout left.
 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
 at java.util.concurrent.FutureTask.get(FutureTask.java:83)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1453)
 at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:936)
 at org.apache.hadoop.hbase.client.HTable.put(HTable.java:783)
 at com.dp.nebula.wormhole.plugins.common.HBaseClient.flush(HBaseClient.java:121)
 at com.dp.nebula.wormhole.plugins.writer.hbasewriter.HBaseWriter.commit(HBaseWriter.java:112)
 at com.dp.nebula.wormhole.engine.core.WriterThread.call(WriterThread.java:52)
 at com.dp.nebula.wormhole.engine.core.WriterThread.call(WriterThread.java:1)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
 at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Call to test89.hadoop/10.1.77.89:60020 failed on local exception: java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/10.1.77.84:51032 remote=test89.hadoop/10.1.77.89:60020]. 59999 millis timeout left.
 at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:1030)
 at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:999)
 at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:104)
 at com.sun.proxy.$Proxy5.multi(Unknown Source)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1430)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1428)
 at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:215)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1437)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1425)
 ... 5 more
Caused by: java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/10.1.77.84:51032 remote=test89.hadoop/10.1.77.89:60020]. 59999 millis timeout left.
2013-07-08 09:30:03,579 [pool-2-thread-6] com.dp.nebula.wormhole.engine.core.WriterThread.call(WriterThread.java:56) ERROR core.WriterThread - Exception occurs in writer thread!
com.dp.nebula.wormhole.common.WormholeException: java.io.IOException: org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@b7c96a9 closed
 at com.dp.nebula.wormhole.plugins.writer.hbasewriter.HBaseWriter.commit(HBaseWriter.java:114)
 at com.dp.nebula.wormhole.engine.core.WriterThread.call(WriterThread.java:52)
 at com.dp.nebula.wormhole.engine.core.WriterThread.call(WriterThread.java:1)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
 at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@b7c96a9 closed
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:877)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:857)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1568)
 at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1453)
 at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:936)
 at org.apache.hadoop.hbase.client.HTable.put(HTable.java:783)
 at com.dp.nebula.wormhole.plugins.common.HBaseClient.flush(HBaseClient.java:121)
 at com.dp.nebula.wormhole.plugins.writer.hbasewriter.HBaseWriter.commit(HBaseWriter.java:112)
 ... 7 more

wormhole's reader and writer each start their own ThreadPoolExecutor, and the error occurs in the writer's flush stage, i.e. the final batch insert. My reader gives each thread its own HTable instance and has no problem; the writer, however, shares a singleton HBaseClient and uses a ThreadLocal to give each thread its own local HTable object, and the problem may lie there. The simplest workaround is to drop the singleton HBaseClient on the writer side, which should make the error go away, but without understanding the root cause that would be unsatisfying. The writer-side pattern is sketched below.
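To make the failing setup concrete, here is a minimal sketch of the writer-side pattern just described, assuming a table named t1; the class internals are illustrative, not the actual wormhole code:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

// Hypothetical sketch: one singleton client, one HTable per writer thread.
// Note that every HTable below is built from the same Configuration.
public class SingletonHBaseClient {
  private static final SingletonHBaseClient INSTANCE = new SingletonHBaseClient();
  private final Configuration conf = HBaseConfiguration.create();

  private final ThreadLocal<HTable> localTable = new ThreadLocal<HTable>() {
    @Override
    protected HTable initialValue() {
      try {
        return new HTable(conf, "t1");  // same conf in every thread
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  };

  public static SingletonHBaseClient getInstance() {
    return INSTANCE;
  }

  public void flush() throws IOException {
    localTable.get().flushCommits();    // where the stack trace above blows up
  }
}

Each thread does get its own HTable, so no single HTable instance is accessed concurrently; the sharing happens one level lower, as the source code below shows.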

Later, reading the source code of HTable and HBaseAdmin gave me a clue:

public HTable(Configuration conf, final byte [] tableName)
  throws IOException {
    this.tableName = tableName;
    this.cleanupPoolOnClose = this.cleanupConnectionOnClose = true;
    if (conf == null) {
      this.connection = null;
      return;
    }
    this.connection = HConnectionManager.getConnection(conf);
    this.configuration = conf;
    int maxThreads = conf.getInt("hbase.htable.threads.max", Integer.MAX_VALUE);
    if (maxThreads == 0) {
      maxThreads = 1; // is there a better default?
    }
    long keepAliveTime = conf.getLong("hbase.htable.threads.keepalivetime", 60);
    // the pool construction was dropped from the original excerpt; in the
    // 0.94.x source the per-table pool is created here, roughly:
    this.pool = new ThreadPoolExecutor(1, maxThreads, keepAliveTime,
        TimeUnit.SECONDS, new SynchronousQueue<Runnable>(),
        new DaemonThreadFactory());
    ((ThreadPoolExecutor)this.pool).allowCoreThreadTimeOut(true);
    this.finishSetup();
  }

Every HTable instance holds an HConnection object, which is responsible for the connection to ZooKeeper and from there to the HBase cluster (locating regions, caching their locations, and re-resolving them when a region moves). HConnections are managed by HConnectionManager:

  public static HConnection getConnection(Configuration conf)
  throws ZooKeeperConnectionException {
    HConnectionKey connectionKey = new HConnectionKey(conf);
    synchronized (HBASE_INSTANCES) {
      HConnectionImplementation connection = HBASE_INSTANCES.get(connectionKey);
      if (connection == null) {
        connection = new HConnectionImplementation(conf, true);
        HBASE_INSTANCES.put(connectionKey, connection);
      }
      connection.incCount();
      return connection;
    }
  }

Inside HConnectionManager there is a static LRU map, HBASE_INSTANCES, serving as a cache. Its key is an HConnectionKey, which captures the username plus a fixed set of properties extracted from the conf that was passed in; its value is HConnectionImplementation, the concrete implementation of HConnection. Because every HTable here is constructed with an identical conf, they all map to the same HConnectionImplementation, and getConnection() finishes by calling connection.incCount() to raise the client reference count by one. The sketch below demonstrates the sharing.
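A quick way to see the sharing (a sketch against the 0.94 client API, where HTable still exposes getConnection(); the table name t1 is just an example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class ConnectionSharingDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable t1 = new HTable(conf, "t1");  // reference count 1
    HTable t2 = new HTable(conf, "t1");  // equal HConnectionKey -> same connection, count 2
    System.out.println(t1.getConnection() == t2.getConnection());  // prints true
    t1.close();  // count back to 1
    t2.close();  // count 0 -> the connection is actually closed
  }
}

The flip side of the reference count shows up in HTable.close():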

public void close() throws IOException {
    if (this.closed) {
      return;
    }
    flushCommits();
    if (cleanupPoolOnClose) {
      this.pool.shutdown();
    }
    if (cleanupConnectionOnClose) {
      if (this.connection != null) {
        this.connection.close();
      }
    }
    this.closed = true;
  }
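So close() first flushes any pending commits and then calls connection.close(), which decrements the reference count that getConnection() incremented; only when the count reaches zero is the shared HConnectionImplementation really closed and evicted from HBASE_INSTANCES. That fits the stack trace at the top: all of my reader and writer threads resolved to one shared connection, and once some code path drove its reference count to zero, whichever writer threads were still inside flushCommits() found the connection closed underneath them. A hedged sketch of that race (the table and column family names are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class SharedConnectionRace {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    final HTable table = new HTable(conf, "t1");   // shared connection, count 1

    // Any code path that drives the shared connection's reference count to
    // zero (here, simply closing the table from another thread) races with
    // the flush below.
    new Thread(new Runnable() {
      public void run() {
        try {
          table.close();  // count -> 0: the connection is really closed
        } catch (java.io.IOException ignored) {
        }
      }
    }).start();

    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    table.put(put);        // may only buffer client-side
    table.flushCommits();  // can now fail with "...HConnectionImplementation@... closed"
  }
}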
