opentsdb / asynchbase

A fully asynchronous, non-blocking, thread-safe, high-performance HBase client.

Home Page: https://github.com/OpenTSDB/asynchbase

License: BSD 3-Clause "New" or "Revised" License


asynchbase's Introduction

       ___                 _____ ____  ____  ____
      / _ \ _ __   ___ _ _|_   _/ ___||  _ \| __ )
     | | | | '_ \ / _ \ '_ \| | \___ \| | | |  _ \
     | |_| | |_) |  __/ | | | |  ___) | |_| | |_) |
      \___/| .__/ \___|_| |_|_| |____/|____/|____/
           |_|    The modern time series database.

OpenTSDB is a distributed, scalable Time Series Database (TSDB) written on
top of HBase.  OpenTSDB was written to address a common need: store, index
and serve metrics collected from computer systems (network gear, operating
systems, applications) at a large scale, and make this data easily accessible
and graphable.

Thanks to HBase's scalability, OpenTSDB allows you to collect thousands of
metrics from tens of thousands of hosts and applications, at a high rate
(every few seconds). OpenTSDB will never delete or downsample data and can
easily store hundreds of billions of data points.

OpenTSDB is free software and is available under both LGPLv2.1+ and GPLv3+.
Find more about OpenTSDB at http://opentsdb.net

asynchbase's People

Contributors

adityakishore, b4hand, bdd, cm-cnnxty, danburkert, dvdreddy, gabebenjamin, heartsavior, james-at-dataxu, jasonculverhouse, jesse5e, jmaloney10, jnfang, junegunn, karanmehta93, manolama, mboyanov, mbrukman, octo47, phs, qiubz, saintstack, shrijeetrf, spiwn, stiga-huang, tsuna, xunl


asynchbase's Issues

MultiAction fails on 0.92.0

Using the HEAD revision, I get a deserialization exception on the server side (HBase 0.92.0) for MultiAction operations containing DeleteRequests. E.g. (a guess at the cause follows the trace below):

2012-02-19 16:21:30,799 [IPC Reader 8 on port 57026] WARN  org.apache.hadoop.ipc.HBaseServer - Unable to read call parameters for client 192.168.1.115
java.io.IOException: Error in readFields
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:586)
    at org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1238)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1167)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:680)
Caused by: java.io.IOException: Error in readFields
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:586)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:503)
    at org.apache.hadoop.hbase.client.MultiAction.readFields(MultiAction.java:116)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:583)
    ... 9 more
Caused by: java.io.IOException: Error in readFields
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:586)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:503)
    at org.apache.hadoop.hbase.client.Action.readFields(Action.java:101)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:583)
    ... 12 more
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at org.apache.hadoop.io.WritableUtils.readString(WritableUtils.java:113)
    at org.apache.hadoop.hbase.client.OperationWithAttributes.readAttributes(OperationWithAttributes.java:101)
    at org.apache.hadoop.hbase.client.Delete.readFields(Delete.java:262)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:583)
    ... 15 more
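
The innermost frames (EOF inside OperationWithAttributes.readAttributes) suggest the serialized Delete is missing the trailing attributes map that 0.92's readFields expects. That is a guess, not a confirmed diagnosis; here is a minimal sketch of the kind of write the client side would need, mirroring the GetRequest fix reported further down:

    // Hypothetical addition to the client's Delete serialization path;
    // the constant name mirrors the GetRequest fix reported further down.
    if (server_version >= RegionClient.SERVER_VERSION_092_OR_ABOVE) {
      buf.writeInt(0);  // Attributes map: number of elements (none).
    }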

Client does not throttle in case of complete server crash

Consider a situation where the HBase cluster gets nuked[1] and we want our client application to feel minimal impact. I wrote some sample code[3] to test this situation[2]. Here are my observations.

After nuking the cluster
  1. The errback gets called with a RemoteException for only a few RPC requests. These must be the ones that were sent from the client just before the backend went down; they hit an exception at the other end.
  2. Soon the client log starts spewing [Hashed wheel timer #1-EventThread] HBaseClient: The znode for the -ROOT- region doesn't exist!
  3. The client keeps going; nothing blocks even while the backend is down. This was no surprise, as it's asynchbase.
  4. No throttling happens. I guess throttling is associated with the NSRE case only as of today.
After bringing the cluster back
  1. The client recovers smoothly.
  2. I toString()-ed a RegionClient and found all the RPCs sent while the backend was down were buffered in the 'inflight' state. Here is a sample: RegionClient@2066231378(chan=[id: 0x372c9557, /10.241.94.172:54017 => /172.22.0.31:60020], #pending_rpcs=0, #batched=0, #rpcs_inflight=16099). For a while this count goes up, but as the backend stabilizes, the inflight count goes down.
  3. The app finishes, and a row count on the server side confirms no puts were lost.

While it is great that no write was lost, there is a danger of OOMing the client here. Do you think it would make sense to throttle the app, or even throw an error if, for example, too many consecutive attempts to discover -ROOT- occur? (A sketch of one possible client-side workaround follows the footnotes below.)

[1] All region servers died, the data center lost connectivity from the outside world, etc.
[2] The staging cluster has one master and one region server. I start the app and let it run for some time, then kill the one and only region server. Later I bring that region server back.
[3] https://gist.github.com/3153858
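
In the meantime, here is a minimal client-side backpressure sketch, assuming the application can tolerate blocking its producer threads. MAX_INFLIGHT is a made-up tuning knob, not an asynchbase setting:

    import java.util.concurrent.Semaphore;

    import org.hbase.async.HBaseClient;
    import org.hbase.async.PutRequest;

    import com.stumbleupon.async.Callback;

    public final class ThrottledWriter {
      private static final int MAX_INFLIGHT = 10000;
      private final Semaphore inflight = new Semaphore(MAX_INFLIGHT);
      private final HBaseClient client;

      public ThrottledWriter(final HBaseClient client) {
        this.client = client;
      }

      public void put(final PutRequest put) throws InterruptedException {
        inflight.acquire();  // Blocks once too many RPCs are outstanding.
        client.put(put).addBoth(new Callback<Object, Object>() {
          public Object call(final Object arg) {
            inflight.release();  // Success or failure, free the slot.
            return arg;
          }
        });
      }
    }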

Client fails to recover on full HBase cluster restarts

We keep running into this issue at StumbleUpon: whenever we do a full cluster restart, clients get stuck, especially ones that are experiencing high write throughput (1000 QPS and more). Essentially the client knows that every region is NSRE'd or down and eventually everything is waiting on -ROOT- to come back online. But at some point the client "forgets" to retry finding -ROOT- and thus remains stuck forever.

Set multiple qualifiers on scanners

For now scanners can restrict the key values to only a single qualifier. It would be nice to be able to add a set of qualifiers, as the regular client offers.
Not sure, though, how this could be implemented without breaking the existing code.
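
Something along these lines, perhaps. setQualifiers does not exist today; the name and signature are assumptions that simply mirror the existing single-qualifier setter:

    final Scanner scanner = client.newScanner("mytable");
    scanner.setFamily("f");
    // Hypothetical API, mirroring setQualifier(byte[]):
    scanner.setQualifiers(new byte[][] { "q1".getBytes(), "q2".getBytes() });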

ArrayIndexOutOfBoundsException when serializing AtomicIncrementRequests

Similar to Issue #31, I occasionally see ArrayIndexOutOfBoundsExceptions when AtomicIncrementRequests are serialized. I believe the problem is again in the calculation of the buffer size: the 3rd and 4th parameters (family and qualifier) do not allocate a byte for the parameter type. The writeHBaseByteArray method always writes a byte with value 11 before writing the contents of the byte array, so any byte array serialized with this method needs an extra byte allocated.

So this is what I think the predictSerializedSize() method should look like in AtomicIncrementRequest:

  private int predictSerializedSize() {
    int size = 0;
    size += 4;  // int:  Number of parameters.
    size += 1;  // byte: Type of the 1st parameter.
    size += 3;  // vint: region name length (3 bytes => max length = 32768).
    size += region.name().length;  // The region name.
    size += 1;  // byte: Type of the 2nd parameter.
    size += 3;  // vint: row key length (3 bytes => max length = 32768).
    size += key.length;  // The row key.
    size += 1;  // byte: Type of the 3rd parameter.
    size += 1;  // vint: Family length (guaranteed on 1 byte).
    size += family.length;  // The family.
    size += 1;  // byte: Type of the 4th parameter.
    size += 3;  // vint: Qualifier length.
    size += qualifier.length;  // The qualifier.
    size += 8;  // long: Amount.
    size += 1;  // bool: Whether or not to write to the WAL.
    return size;
  }

All of my unit tests pass with this change, and I haven't seen any AIOOBEs while executing ICVs since pushing out this change.

Prefetch or transfer META cache?

We're using asynchbase to drive the OutputFormat of some Hadoop MR1 map-only jobs (among other things). It is freaky fast and we love it.

As our cluster grows, we've noticed that stress on our .META. region is growing, presumably because of the increasing number of clients with cold caches coming online.

It seems like I should be able to solve my issue if, during my MR job submission, I could aggressively prefetch the region metadata for my table(s), publish the cache in some opaque-but-serializable form as job metadata, and construct my numerous clients with this serialized pre-warmed cache.

There doesn't appear to be any support in HBaseClient for accomplishing this, or even just prefetching the metadata without the transfer (say, by enumerating regions). Is there another way I should be doing this?
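
One crude workaround that comes to mind (an assumption, not an endorsed pattern): enumerate the region start keys at job-submission time with the regular client (HTable#getStartKeys()), ship them as job metadata, and have each task warm its cache by issuing one tiny GET per start key:

    // Each GET forces a META lookup for its region, populating the
    // client's region cache as a side effect; the results themselves
    // are discarded.
    for (final byte[] startKey : startKeys) {
      client.get(new GetRequest(table, startKey));
    }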

Support Netty 4.x

Netty 4.x is now out and it would be good to have asynchbase support it. The Netty APIs have changed significantly, so some plan will need to be put in place if we want to support both Netty 4.x and 3.x.

NPE in hbase 0.92-rc4

While testing asynchbase 1.2.0 from tsuna's branch I encountered an NPE. Is this an HBase bug, or perhaps an asynchbase bug?

2012-01-24 10:51:11,392 INFO  [main] UniqueId: Creating an ID for kind='metrics' name='...'
2012-01-24 10:51:41,441 ERROR [main] UniqueId: Put failed, attempts left=5 (retrying in 800 ms), put=PutRequest(table="tsdb-uid", key="\x00", family="id", qualifier="metrics", value=[0, 0, 0, 0, 0, 0, 73, 83], lockid=-1, durable=true, bufferable=false, attempt=0, region=RegionInfo(table="tsdb-uid", region_name="tsdb-uid,,1327007356551.d9c1e832ff11b6fa3b9e7aabd79d731f.", stop_key=""))
org.hbase.async.RemoteException: java.io.IOException: java.lang.NullPointerException
        at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1076)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1065)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.put(HRegionServer.java:1815)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.lang.NullPointerException
        at java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:922)
        at org.apache.hadoop.hbase.regionserver.HRegion.releaseRowLock(HRegion.java:2639)
        at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1658)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.put(HRegionServer.java:1813)
        ... 6 more

        at org.hbase.async.RegionClient.deserializeException(RegionClient.java:1185) ~[asynchbase-1.1.0.jar:f0225c7]
        at org.hbase.async.RegionClient.deserialize(RegionClient.java:1160) ~[asynchbase-1.1.0.jar:f0225c7]
        at org.hbase.async.RegionClient.decode(RegionClient.java:1081) ~[asynchbase-1.1.0.jar:f0225c7]
        at org.hbase.async.RegionClient.decode(RegionClient.java:82) ~[asynchbase-1.1.0.jar:f0225c7]
        at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:470) ~[netty-3.2.7.Final.jar:na]
        at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:443) ~[netty-3.2.7.Final.jar:na]
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80) ~[netty-3.2.7.Final.jar:na]
        at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:936) ~[asynchbase-1.1.0.jar:f0225c7]
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.2.7.Final.jar:na]
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) ~[netty-3.2.7.Final.jar:na]
        at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:1980) ~[asynchbase-1.1.0.jar:f0225c7]
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274) ~[netty-3.2.7.Final.jar:na]
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261) ~[netty-3.2.7.Final.jar:na]
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:351) ~[netty-3.2.7.Final.jar:na]
        at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:282) ~[netty-3.2.7.Final.jar:na]
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:202) ~[netty-3.2.7.Final.jar:na]
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[netty-3.2.7.Final.jar:na]
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44) ~[netty-3.2.7.Final.jar:na]
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) ~[na:1.6.0_27]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) ~[na:1.6.0_27]
        at java.lang.Thread.run(Thread.java:662) ~[na:1.6.0_27]

Stuck on exists probe?

Running v1.4.1 and HBase 0.92.2. Today one of the region servers in a production cluster became briefly unavailable, which caused this error whenever a new Put was attempted by our asynchbase clients:

WARN  [2013-03-30 15:01:16,654] HBaseClient - Probe Exists(table="sessions", key="d3e19aa0996e11e29121f82812c84021", family=null, qualifiers=null, attempt=0, region=RegionInfo(table="sessions", region_name="sessions,,1354532863349.a248927137c5a207fb36d15d0f97f816.", stop_key="")) failed
org.hbase.async.NonRecoverableException: Too many attempts: Exists(table="sessions", key="d3e19aa0996e11e29121f82812c84021", family=null, qualifiers=null, attempt=11, region=RegionInfo(table="sessions", region_name="sessions,,1354532863349.a248927137c5a207fb36d15d0f97f816.", stop_key=""))                                             
        at org.hbase.async.HBaseClient.tooManyAttempts(HBaseClient.java:1593)
        at org.hbase.async.HBaseClient.sendRpcToRegion(HBaseClient.java:1466)
        at org.hbase.async.HBaseClient$1RetryRpc.call(HBaseClient.java:1479)
        at org.hbase.async.HBaseClient$1RetryRpc.call(HBaseClient.java:1473)
        at com.stumbleupon.async.Deferred.doCall(Deferred.java:1262)
        at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1241)
        at com.stumbleupon.async.Deferred.callback(Deferred.java:989)
        at org.hbase.async.HBaseRpc.callback(HBaseRpc.java:450)
        at org.hbase.async.RegionClient.decode(RegionClient.java:1185)
        at org.hbase.async.RegionClient.decode(RegionClient.java:82)
        at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:511)
        at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:441)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
        at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1008)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:2431)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:94)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:364)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:238)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)

This problem continued even after restarting one of the asynchbase clients - in the end the entire HBase cluster was rebooted and the problem went away.

Unfortunately I've been unable to confirm whether the non-asynchbase clients were still able to put data, so I can't be entirely sure asynchbase was at fault (though the fact that a full cluster reboot was needed makes that fairly unlikely).

Any ideas what the cause might be?

Support for DDL

I'd like to avoid using the normal HBaseAdmin client entirely. Mostly I would like to create new tables at runtime.

Need more APIs to access fields of RPCs

When one creates a PutRequest, say, there's no way to get back the key/family/qualifier etc. from the object. The accessors are package-private. This restriction is a bit drastic and makes unit testing harder than necessary. The accessors should be public.
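
For example, a unit test would like to be able to write something like this; the public accessors shown are the requested API, not the current one:

    import static org.junit.Assert.assertArrayEquals;

    final PutRequest put = new PutRequest("t", "k", "f", "q", "v");
    // These accessors exist but are package-private today:
    assertArrayEquals("k".getBytes(), put.key());
    assertArrayEquals("f".getBytes(), put.family());
    assertArrayEquals("q".getBytes(), put.qualifier());
    assertArrayEquals("v".getBytes(), put.value());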

Compatibility with 0.92

This issue is about finding what new wire incompatibilities are in HBase 0.92 and trying to make asynchbase work with (or around) them.

make of asynchbase and opentsdb fails with an error

# make
make: *** No rule to make target `.git/HEAD', needed by `build/manifest'.  Stop.

I didn't make any changes to the Makefile.

# gcc -v
Using built-in specs.
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-libgcj-multifile --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --enable-plugin --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic --host=x86_64-redhat-linux
Thread model: posix
gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)

# java -version
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

# uname -a
Linux phxrueidb04 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux

Intermittent IndexOutOfBoundsException

Hi,

Running CDH4 (HBase 0.92.1-cdh4.0.1) with Netty 3.5.2, I am getting an intermittent IndexOutOfBoundsException when reading from HBase:


2012-10-12 15:33:01.617 [New I/O  worker #3] DEBUG org.hbase.async.RegionClient - ------------------>> ENTERING DECODE >>------------------
2012-10-12 15:33:01.617 [New I/O  worker #3] DEBUG org.hbase.async.RegionClient - [id: 0x65adfc7b, /10.211.55.2:49790 => /10.211.55.11:60020] EXCEPTION: java.lang.IndexOutOfBoundsException: Invalid index: 1862, maximum: 3
2012-10-12 15:33:01.618 [New I/O  worker #3] ERROR org.hbase.async.RegionClient - Unexpected exception from downstream on [id: 0x65adfc7b, /10.211.55.2:49790 => /10.211.55.11:60020]
java.lang.IndexOutOfBoundsException: Invalid index: 1862, maximum: 3
        at org.jboss.netty.buffer.CompositeChannelBuffer.componentId(CompositeChannelBuffer.java:677) ~[netty-3.5.2.Final.jar:na]
        at org.jboss.netty.buffer.CompositeChannelBuffer.getBytes(CompositeChannelBuffer.java:231) ~[netty-3.5.2.Final.jar:na]
        at org.jboss.netty.buffer.AbstractChannelBuffer.readBytes(AbstractChannelBuffer.java:339) ~[netty-3.5.2.Final.jar:na]
        at org.jboss.netty.buffer.AbstractChannelBuffer.readBytes(AbstractChannelBuffer.java:344) ~[netty-3.5.2.Final.jar:na]
        at org.jboss.netty.handler.codec.replay.ReplayingDecoderBuffer.readBytes(ReplayingDecoderBuffer.java:323) ~[netty-3.5.2.Final.jar:na]
        at org.hbase.async.KeyValue.fromBuffer(KeyValue.java:265) ~[asynchbase-1.4.0-branch.jar:na]
        at org.hbase.async.RegionClient.parseResult(RegionClient.java:1256) ~[asynchbase-1.4.0-branch.jar:na]
        at org.hbase.async.RegionClient.deserializeObject(RegionClient.java:1210) ~[asynchbase-1.4.0-branch.jar:na]
        at org.hbase.async.RegionClient.deserialize(RegionClient.java:1164) ~[asynchbase-1.4.0-branch.jar:na]
        at org.hbase.async.RegionClient.decode(RegionClient.java:1082) ~[asynchbase-1.4.0-branch.jar:na]
        at org.hbase.async.RegionClient.decode(RegionClient.java:82) ~[asynchbase-1.4.0-branch.jar:na]
        at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:502) ~[netty-3.5.2.Final.jar:na]
        at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:487) ~[netty-3.5.2.Final.jar:na]
        at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:938) ~[asynchbase-1.4.0-branch.jar:na]
        at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:2408) ~[asynchbase-1.4.0-branch.jar:na]
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) ~[netty-3.5.2.Final.jar:na]
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) ~[netty-3.5.2.Final.jar:na]
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:91) ~[netty-3.5.2.Final.jar:na]
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373) ~[netty-3.5.2.Final.jar:na]
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247) ~[netty-3.5.2.Final.jar:na]
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35) ~[netty-3.5.2.Final.jar:na]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [na:1.7.0_07]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [na:1.7.0_07]
        at java.lang.Thread.run(Thread.java:722) [na:1.7.0_07]

I haven't been able to reproduce this in a test case yet. Are there any more debug logs I can provide to help you find where this issue lies?

Another MultiAction issue

I see the following issue with the tsuna/asynchbase fork. I did not test stumbleupon/asynchbase but it's probably the same. I'm on HBase 0.92.0.

2012-04-22 23:23:38,978 ERROR org.apache.hadoop.hbase.io.HbaseObjectWritable: Error in readFields
java.io.IOException: Error in readFields
        at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:586)
        at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:503)
        at org.apache.hadoop.hbase.client.MultiAction.readFields(MultiAction.java:116)
        at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:583)
        at org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1238)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1167)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Error in readFields
        at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:586)
        at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:503)
        at org.apache.hadoop.hbase.client.Action.readFields(Action.java:101)
        at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:583)
        ... 12 more
Caused by: java.lang.NegativeArraySizeException: -1
        at org.apache.hadoop.hbase.util.Bytes.readByteArray(Bytes.java:147)
        at org.apache.hadoop.hbase.client.OperationWithAttributes.readAttributes(OperationWithAttributes.java:102)
        at org.apache.hadoop.hbase.client.Delete.readFields(Delete.java:262)
        at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:583)
        ... 15 more

Problem calling hbaseclient.stats().gets()

Hello

I have just started using asynchbase and I guess I have run into a "bug". Given the code:
HBaseClient hbaseclient = new HBaseClient.... ;
when I call hbaseclient.stats().gets(), I get the following exception:

Exception in thread "pool-1-thread-1" java.lang.NoClassDefFoundError: com/google/common/cache/RemovalListener
    at org.hbase.async.HBaseClient.stats(HBaseClient.java:505)
    at com.tlt.daisygui.hbase.controller.HBaseController.loadAttachments(HBaseController.java:72)
    at com.tlt.daisygui.view.ViewModel$1.run(ViewModel.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.ClassNotFoundException: com.google.common.cache.RemovalListener
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
    ... 6 more

If I include guava 14.0.1 in my pom.xml it works well :)

        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>14.0.1</version>
        </dependency>

I noticed in my local .m2 that I have 13.0.1 and r09 versions of the guava library - there is no jar for 13.0.1, though, and r09 does not contain RemovalListener.
No compile errors are shown - the failure is only at runtime.

I am using asynchbase 1.4.1:

        <dependency>
            <groupId>org.hbase</groupId>
            <artifactId>asynchbase</artifactId>
            <version>1.4.1</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-api</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

slf4j is excluded as it conflicts with the version I am using, but I guess that has nothing to do with the guava library.

Library upgrades

Netty: 3.5.9 -> 3.5.11
SLF4J: 1.6.3 -> 1.7.2
Zookeeper: 3.3.4 -> 3.4.5 (also, see HBASE-7273)
Google Guava: 12 -> 13

Integration tests fail with an HBase 0.94.6-cdh4.4.0 cluster

When I run the integration tests against this cluster, 5 of them fail:

05:02:59.442 [main] INFO  org.hbase.async.test.TestIntegration - Ran 17 tests in 43595ms
05:02:59.442 [main] ERROR org.hbase.async.test.TestIntegration - 5 tests failed: [filterList(org.hbase.async.test.TestIntegration): IPC server unable to read call parameters: null, keyRegexpFilter(org.hbase.async.test.TestIntegration): IPC server unable to read call parameters: Could not initialize class org.apache.hadoop.hbase.util.Classes, columnPrefixFilter(org.hbase.async.test.TestIntegration): IPC server unable to read call parameters: Could not initialize class org.apache.hadoop.hbase.util.Classes, columnRangeFilter(org.hbase.async.test.TestIntegration): IPC server unable to read call parameters: Could not initialize class org.apache.hadoop.hbase.util.Classes, regression2(org.hbase.async.test.TestIntegration): expected:<1> but was:<0>]

Endless recursive call when the ZK quorum spec is invalid

Give asynchbase a single hostname that doesn't exist for a ZooKeeper quorum spec and observe this:

2011-05-11 23:09:52,631 ERROR [main] HBaseClient: Failed to connect to ZooKeeper
java.net.UnknownHostException: tmpzookeeper
        at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.6.0_24]
        at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:850) ~[na:1.6.0_24]
        at java.net.InetAddress.getAddressFromNameService(InetAddress.java:1201) ~[na:1.6.0_24]
        at java.net.InetAddress.getAllByName0(InetAddress.java:1154) ~[na:1.6.0_24]
        at java.net.InetAddress.getAllByName(InetAddress.java:1084) ~[na:1.6.0_24]
        at java.net.InetAddress.getAllByName(InetAddress.java:1020) ~[na:1.6.0_24]
        at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:383) ~[zookeeper-3.3.1.jar:3.3.1-942149]
        at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:328) ~[zookeeper-3.3.1.jar:3.3.1-942149]
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:377) ~[zookeeper-3.3.1.jar:3.3.1-942149]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2203) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.getDeferredRoot(HBaseClient.java:2120) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient.locateRegion(HBaseClient.java:1102) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient.sendRpcToRegion(HBaseClient.java:1005) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient.get(HBaseClient.java:700) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient.ensureTableFamilyExists(HBaseClient.java:653) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient.ensureTableExists(HBaseClient.java:673) [hbaseasync-1.0.jar:d1aff70]
        at net.opentsdb.tools.TSDMain.main(TSDMain.java:130) [tsdb-1.0.jar:b4a37c5]
[...]
2011-05-11 23:09:52,644 ERROR [main] HBaseClient: Failed to connect to ZooKeeper
java.net.UnknownHostException: tmpzookeeper
        at java.net.InetAddress.getAllByName0(InetAddress.java:1158) ~[na:1.6.0_24]
        at java.net.InetAddress.getAllByName(InetAddress.java:1084) ~[na:1.6.0_24]
        at java.net.InetAddress.getAllByName(InetAddress.java:1020) ~[na:1.6.0_24]
        at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:383) ~[zookeeper-3.3.1.jar:3.3.1-942149]
        at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:328) ~[zookeeper-3.3.1.jar:3.3.1-942149]
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:377) ~[zookeeper-3.3.1.jar:3.3.1-942149]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2203) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2209) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2209) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2209) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2209) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2209) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2209) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2209) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2209) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2209) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2209) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.connectZK(HBaseClient.java:2209) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient$ZKClient.getDeferredRoot(HBaseClient.java:2120) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient.locateRegion(HBaseClient.java:1102) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient.sendRpcToRegion(HBaseClient.java:1005) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient.get(HBaseClient.java:700) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient.ensureTableFamilyExists(HBaseClient.java:653) [hbaseasync-1.0.jar:d1aff70]
        at org.hbase.async.HBaseClient.ensureTableExists(HBaseClient.java:673) [hbaseasync-1.0.jar:d1aff70]
        at net.opentsdb.tools.TSDMain.main(TSDMain.java:130) [tsdb-1.0.jar:b4a37c5]

Notice how stack frames in connectZK keep adding up? Bad.
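
The usual fix is to schedule the retry on a timer instead of recursing, so each failed attempt unwinds the stack before the next one starts. A rough method-level sketch; the timer field and the retry delay are assumptions, and the real connectZK internals differ:

    private void connectZK() {
      try {
        zk = new ZooKeeper(quorum_spec, session_timeout, this);
      } catch (IOException e) {  // Includes UnknownHostException.
        LOG.error("Failed to connect to ZooKeeper", e);
        // Retry later from a fresh stack frame instead of calling
        // connectZK() recursively right here.
        timer.newTimeout(new TimerTask() {
          public void run(final Timeout timeout) {
            connectZK();
          }
        }, 1, TimeUnit.SECONDS);
      }
    }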

Serialization issue with MultiAction

When serializing a MultiAction containing only DeleteRequests, I get the exception below. I did not test it with other batchable requests.

2012-02-21 09:10:56,691 [Hashed wheel timer #1] ERROR org.hbase.async.RegionClient - Uncaught exception while serializing RPC: MultiAction(batch=[DeleteRequest(table="combined", key=[105, 40, 52, -7, 0, 20, 0, 40, 0, 0, 7, -37, 0, 0, 20, 0, 0, 17], family="", qualifiers=[""], attempt=1, region=RegionInfo(table="combined", region_name="combined,,1329815372745.c2db59a7b6d03d18ba5a6bb5b121e1ea.", stop_key="")), DeleteRequest(table="daily", key=[105, 40, 52, -7, 0, 20, 0, 0, 7, -37, 0, 0, 20], family="", qualifiers=[""], attempt=1, region=RegionInfo(table="daily", region_name="daily,,1329815370711.710c0961e6a4da8db3ac2aac6553bafa.", stop_key=""))])
java.lang.ArrayIndexOutOfBoundsException: -1
    at org.jboss.netty.buffer.BigEndianHeapChannelBuffer.setInt(BigEndianHeapChannelBuffer.java:103)
    at org.hbase.async.MultiAction.serialize(MultiAction.java:363)
    at org.hbase.async.RegionClient.encode(RegionClient.java:997)
    at org.hbase.async.RegionClient.sendRpc(RegionClient.java:750)
    at org.hbase.async.RegionClient.periodicFlush(RegionClient.java:236)
    at org.hbase.async.RegionClient.access$000(RegionClient.java:82)
    at org.hbase.async.RegionClient$1.run(RegionClient.java:174)
    at org.jboss.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:521)
    at org.jboss.netty.util.HashedWheelTimer$Worker.notifyExpiredTimeouts(HashedWheelTimer.java:440)
    at org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:379)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at java.lang.Thread.run(Thread.java:662)

Prevent buffered atomic increments from overflowing Deferred callback chains

Right now if one keeps incrementing the same counter over and over again in a tight loop with bufferAtomicIncrement(), it's easy to cause the Deferred to have more than 2^14 callbacks, which is the maximum allowed and triggers a StackOverflowError. Instead we need to gracefully handle that case, flush the buffered increments for that counter, and start buffering again with a new entry.
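
A minimal sketch of that bookkeeping follows; the 2^14 limit comes from the paragraph above, while the class and member names are assumptions:

    final class BufferedIncrement {
      static final int MAX_CALLBACKS = 16384;  // 2^14, the Deferred chain limit.
      private int callbacks = 0;

      /** Returns false once full; the caller flushes and starts a new entry. */
      synchronized boolean tryAddCallback() {
        if (callbacks >= MAX_CALLBACKS) {
          return false;
        }
        callbacks++;
        return true;
      }
    }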

Update to Netty 3.5.x (Maven is using an incompatible version of Netty)

Version: 1.3.1

Description
I encountered a java.lang.NoSuchFieldException: buffer error while running my application's unit tests. Later on I could reproduce the same error in asynchbase's own unit tests.

How to reproduce

$ make pom.xml
$ mvn test

Error message

Caused by: java.lang.RuntimeException: static initializer failed                
  at org.hbase.async.Bytes.<clinit>(Bytes.java:413)                             
  ... 36 more                                                                   
Caused by: java.lang.NoSuchFieldException: buffer                               
  at java.lang.Class.getDeclaredField(Class.java:1882)                          
  at org.hbase.async.Bytes.<clinit>(Bytes.java:410)                             
  ... 36 more

Reason
We are using version ranges while generating the pom. I think Netty's latest version (3.5.1.Final) is not compatible with asynchbase.

All include.mk files pin fixed, definitive versions of dependencies; why not generate the pom without version ranges as well?

Support Filters for Get and Scan requests.

The official client API supports adding "Filters" to Get and Scan requests, which is useful for pushing qualifier selection logic down to HBase.

Just wanted to get a sense from you on how much effort would be required to add this.
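
For reference, this is roughly what the regular client's filter API looks like in 0.92 (a sketch, not asynchbase code):

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.BinaryComparator;
    import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
    import org.apache.hadoop.hbase.filter.QualifierFilter;

    // Push qualifier selection down to the region server:
    final Scan scan = new Scan();
    scan.setFilter(new QualifierFilter(CompareOp.EQUAL,
                                       new BinaryComparator("q".getBytes())));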

DeleteRequest doesn't honor timestamp

I have some code to delete qualifiers or a whole family, like the following:

      DeleteRequest delReq;
      if (delReqInfo.hasQualifiers()) {
        delReq = new DeleteRequest(
            delReqInfo.table.getBytes(),
            userIdBytes,
            delReqInfo.family.getBytes(),
            delReqInfo.getQualifiers(),
            tsMs);
      } else {
        delReq = new DeleteRequest(
            delReqInfo.table.getBytes(),
            userIdBytes,
            delReqInfo.family.getBytes(),
            tsMs);
      }

I found that no matter what timestamp (tsMs) I pass to the DeleteRequest, it always uses the current timestamp when generating the deletion marker in HBase (confirmed with the hbase shell command: scan 'table', {RAW => true, VERSIONS => 10}).

I'm testing with asynchbase 1.4.1 and HBase 0.94.7.

MultiPutRequest.toString is too expensive

MultiPutRequest.toString is too expensive because it serializes every single edit in the MultiPutRequest. This method is called by the constructor of NotServingRegionException. In a high-throughput environment, this can waste a very significant amount of CPU cycles and cause a lot of unnecessary stress on the GC. We need to find a way to make MultiPutRequest.toString cheaper but still informative. I'm thinking of something based on the number and size of edits in the MultiPutRequest. Maybe just the size would be sufficient. If the MultiPutRequest contains less than 1k of data, render it as today, otherwise only show the first few items and then print how many items haven't been rendered in the string representation.
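
A rough sketch of that rendering policy; the edits field and the cut-off values here are assumptions:

    public String toString() {
      final StringBuilder buf = new StringBuilder("MultiPutRequest(#edits=")
          .append(edits.size());
      final int limit = Math.min(edits.size(), 5);  // Render only the first few.
      for (int i = 0; i < limit; i++) {
        buf.append(", ").append(edits.get(i));
      }
      if (edits.size() > limit) {
        buf.append(", ... ").append(edits.size() - limit).append(" more omitted");
      }
      return buf.append(')').toString();
    }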

We validate the timestamp of a KeyValue on receive but not during send

While writing a unit test I stumbled upon (no pun intended) a case where I was doing a write with the timestamp in the PutRequest set to zero. The put made it to the server without error. Here is the snippet:

final PutRequest onecolumn = new PutRequest(_tableName, key,
                        column_family, column,
                        modelValue != null ? Bytes.toBytes(modelValue) : null,
                        0);

However, during a read of the same key, the client complains "Negative timestamp: 0" while deserializing.

Looks like a bug?
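
If so, the fix would presumably be a symmetric check on the send path. A sketch, assuming the check belongs in the PutRequest constructor and that the predicate should match whatever the receive path enforces (which, per the log message above, rejects 0 as well):

    // Hypothetical constructor-time check in PutRequest:
    private static long checkTimestamp(final long timestamp) {
      if (timestamp <= 0) {  // The receive path apparently rejects 0 too.
        throw new IllegalArgumentException("Negative timestamp: " + timestamp);
      }
      return timestamp;
    }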

ArrayIndexOutOfBoundsException when serializing GetRequests

I noticed occasional ArrayIndexOutOfBoundsExceptions when sending get requests to the server, and it's happening here:

    if (server_version >= RegionClient.SERVER_VERSION_092_OR_ABOVE) {
      buf.writeInt(0);  // Attributes map: number of elements.
    }

I believe the cause of the problem is in the predictSerializedSize(byte) method above:

    if (server_version >= RegionClient.SERVER_VERSION_092_OR_ABOVE) {
      size += 1;  // int: Attributes map.  Always 0.
    }

It's only allocating an extra byte, but the ChannelBuffer is trying to write 4 bytes because it's an int:

    public void writeInt(int value) {
        setInt(writerIndex, value);
        writerIndex += 4;
    }

After making the following change I no longer see any ArrayIndexOutOfBoundsExceptions:

    if (server_version >= RegionClient.SERVER_VERSION_092_OR_ABOVE) {
      size += 4;  // int: Attributes map.  Always 0.
    }

I'll push the fix out to my fork and submit a pull request later.

Remote exceptions should embed the ip:port of the peer

When a RemoteException is thrown, it's often not obvious which server it came from. The exception's message needs to embed the ip:port of the peer RS to make debugging easier. Grepping logs on hundred-node clusters isn't fun.
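
A sketch of the change; RegionClient's actual decode internals and RemoteException's constructor differ, so treat the names here as assumptions:

    // Where the remote exception is deserialized, prefix its message
    // with the peer's address before throwing it:
    final InetSocketAddress peer =
        (InetSocketAddress) chan.getRemoteAddress();
    throw new RemoteException(type, "from " + peer + ": " + msg);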

Bug when delete followed by put

Client Version: 1.3.1
Server Version: 0.92.1
Hadoop version: cdh3u0

Description

A special case: when the client does a delete for a key followed by a put for the same key, AND it does so for two different tables (which results in a MultiAction batch of two delete-put pairs), the region server detects corruption in the RPC sent to it.

How to reproduce

import org.hbase.async.DeleteRequest;
import org.hbase.async.HBaseClient;
import org.hbase.async.PutRequest;

public class PutDeleteBug {

    public static void main(String args[]) throws Exception {
        final HBaseClient client = new HBaseClient("pww-30");

        for (int i = 0; i < 2; i++) {
            PutRequest put;
            if (i % 2 == 0) {
                put = new PutRequest("t1", String.valueOf(i), "f", "q", "v");
            } else {
                put = new PutRequest("t0", String.valueOf(i), "f", "q", "v");
            }
            final DeleteRequest delete = new DeleteRequest(put.table(), put.key());
            client.delete(delete);
            client.put(put);
        }
        client.shutdown().joinUninterruptibly();
    }

}

Logs with details of error

client log:

2012-07-25 11:50:36,378 DEBUG [main] AsyncPutRequestFileWriter: Set file called for /tmp/asyncputs.bin
2012-07-25 11:50:36,383 DEBUG [main] AsyncPutRequestFileWriter: setFile ended
2012-07-25 11:50:36,383 DEBUG [main] AsyncPutRequestFileWriter: Opening AsyncFileWriter.
2012-07-25 11:50:36,410 INFO  [main] ZooKeeper: Client environment:zookeeper.version=3.3.3-1203054, built on 11/17/2011 05:47 GMT
2012-07-25 11:50:36,410 INFO  [main] ZooKeeper: Client environment:host.name=10.241.94.172
2012-07-25 11:50:36,410 INFO  [main] ZooKeeper: Client environment:java.version=1.6.0_32
2012-07-25 11:50:36,410 INFO  [main] ZooKeeper: Client environment:java.vendor=Apple Inc.
2012-07-25 11:50:36,410 INFO  [main] ZooKeeper: Client environment:java.home=/Library/Java/JavaVirtualMachines/1.6.0_32-b05-420.jdk/Contents/Home
2012-07-25 11:50:36,410 INFO  [main] ZooKeeper: Client environment:java.class.path=/Users/shrijeet/Documents/workspace/asynchbase.experiments/bin:/Users/shrijeet/git_ws/grid/thirdparty/asynchbase/target/classes:/Users/shrijeet/.m2/repository/com/google/guava/guava/12.0/guava-12.0.jar:/Users/shrijeet/.m2/repository/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar:/Users/shrijeet/.m2/repository/io/netty/netty/3.5.1.Final/netty-3.5.1.Final.jar:/Users/shrijeet/.m2/repository/com/stumbleupon/async/1.2.0/async-1.2.0.jar:/Users/shrijeet/.m2/repository/org/slf4j/slf4j-api/1.6.4/slf4j-api-1.6.4.jar:/Users/shrijeet/.m2/repository/org/apache/zookeeper/zookeeper/3.3.4/zookeeper-3.3.4.jar:/Users/shrijeet/.m2/repository/org/slf4j/log4j-over-slf4j/1.6.4/log4j-over-slf4j-1.6.4.jar:/Users/shrijeet/.m2/repository/org/slf4j/jcl-over-slf4j/1.6.4/jcl-over-slf4j-1.6.4.jar:/Users/shrijeet/.m2/repository/junit/junit/4.10/junit-4.10.jar:/Users/shrijeet/.m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar:/Users/shrijeet/.m2/repository/org/javassist/javassist/3.15.0-GA/javassist-3.15.0-GA.jar:/Users/shrijeet/.m2/repository/org/mockito/mockito-all/1.9.0/mockito-all-1.9.0.jar:/Users/shrijeet/.m2/repository/org/powermock/powermock-module-junit4/1.4.10/powermock-module-junit4-1.4.10.jar:/Users/shrijeet/.m2/repository/org/powermock/powermock-module-junit4-common/1.4.10/powermock-module-junit4-common-1.4.10.jar:/Users/shrijeet/.m2/repository/org/powermock/powermock-core/1.4.10/powermock-core-1.4.10.jar:/Users/shrijeet/.m2/repository/org/powermock/powermock-reflect/1.4.10/powermock-reflect-1.4.10.jar:/Users/shrijeet/.m2/repository/org/objenesis/objenesis/1.2/objenesis-1.2.jar:/Users/shrijeet/.m2/repository/org/powermock/powermock-api-mockito/1.4.10/powermock-api-mockito-1.4.10.jar:/Users/shrijeet/.m2/repository/org/powermock/powermock-api-support/1.4.10/powermock-api-support-1.4.10.jar:/Users/shrijeet/github/opentsdb/third_party/logback/logback-classic-0.9.24.jar:/Users/shrijeet/github/opentsdb/third_party/logback/logback-core-0.9.24.jar:/Users/shrijeet/github/opentsdb/third_party/suasync/suasync-1.1.0.jar:/Users/shrijeet/github/opentsdb/third_party/netty/netty-3.2.3.Final.jar:/Users/shrijeet/github/opentsdb/third_party/slf4j/slf4j-api-1.6.1.jar
2012-07-25 11:50:36,411 INFO  [main] ZooKeeper: Client environment:java.library.path=.:/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java
2012-07-25 11:50:36,411 INFO  [main] ZooKeeper: Client environment:java.io.tmpdir=/var/folders/zw/qjtv8ln55pl13s1txf6241z80000gp/T/
2012-07-25 11:50:36,411 INFO  [main] ZooKeeper: Client environment:java.compiler=<NA>
2012-07-25 11:50:36,411 INFO  [main] ZooKeeper: Client environment:os.name=Mac OS X
2012-07-25 11:50:36,411 INFO  [main] ZooKeeper: Client environment:os.arch=x86_64
2012-07-25 11:50:36,411 INFO  [main] ZooKeeper: Client environment:os.version=10.7.4
2012-07-25 11:50:36,411 INFO  [main] ZooKeeper: Client environment:user.name=shrijeet
2012-07-25 11:50:36,411 INFO  [main] ZooKeeper: Client environment:user.home=/Users/shrijeet
2012-07-25 11:50:36,411 INFO  [main] ZooKeeper: Client environment:user.dir=/Users/shrijeet/Documents/workspace/asynchbase.experiments
2012-07-25 11:50:36,412 INFO  [main] ZooKeeper: Initiating client connection, connectString=pww-30 sessionTimeout=5000 watcher=org.hbase.async.HBaseClient$ZKClient@36f0b7f8
2012-07-25 11:50:36,414 DEBUG [main] ClientCnxn: zookeeper.disableAutoWatchReset is false
2012-07-25 11:50:36,429 INFO  [main] HBaseClient: Need to find the -ROOT- region
2012-07-25 11:50:36,429 INFO  [main-SendThread()] ClientCnxn: Opening socket connection to server pww-30/172.22.0.30:2181
2012-07-25 11:50:36,430 DEBUG [main] AsyncPutRequestFileWriter: Stopping AsyncFileWriter from mail thread.
2012-07-25 11:50:36,430 DEBUG [main] AsyncPutRequestFileWriter: Waiting for writer thread to join.
2012-07-25 11:50:36,430 DEBUG [AsyncWriter Thread] AsyncPutRequestFileWriter: Writer thread interrupted, exiting loop.
2012-07-25 11:50:36,430 INFO  [AsyncWriter Thread] AsyncPutRequestFileWriter: Writer has been stopped, flushing all pending items in queue before exiting, items to flush: 0
2012-07-25 11:50:36,431 DEBUG [AsyncWriter Thread] AsyncPutRequestFileWriter: Closing file /tmp/asyncputs.bin
2012-07-25 11:50:36,431 INFO  [main] AsyncPutRequestFileWriter: Total events saved in lifetime : 0
2012-07-25 11:50:36,461 INFO  [main-SendThread(pww-30:2181)] ClientCnxn: Socket connection established to pww-30/172.22.0.30:2181, initiating session
2012-07-25 11:50:36,463 DEBUG [main-SendThread(pww-30:2181)] ClientCnxn: Session establishment request sent on pww-30/172.22.0.30:2181
2012-07-25 11:50:36,491 INFO  [main-SendThread(pww-30:2181)] ClientCnxn: Session establishment complete on server pww-30/172.22.0.30:2181, sessionid = 0x135ca75f41b6469, negotiated timeout = 12000
2012-07-25 11:50:36,492 DEBUG [main-EventThread] HBaseClient: Got ZooKeeper event: WatchedEvent state:SyncConnected type:None path:null
2012-07-25 11:50:36,494 DEBUG [main-EventThread] HBaseClient: Finding the -ROOT- region in ZooKeeper
2012-07-25 11:50:36,498 DEBUG [main-EventThread] HBaseClient: Done handling ZooKeeper event: WatchedEvent state:SyncConnected type:None path:null
2012-07-25 11:50:36,524 DEBUG [main-SendThread(pww-30:2181)] ClientCnxn: Reading reply sessionid:0x135ca75f41b6469, packet:: clientPath:/hbase/root-region-server serverPath:/hbase/root-region-server finished:false header:: 1,4  replyHeader:: 1,-1,0  request:: '/hbase/root-region-server,T  response:: #ffffffff000193138323837407077772d33312e72666973657276652e6e65747077772d33312e72666973657276652e6e65742c36303032302c31333433313733333537333532,s{25784830233,25784830233,1343173363725,1343173363725,0,0,0,0,69,0,25784830233} 
2012-07-25 11:50:36,528 INFO  [main-EventThread] HBaseClient: Connecting to -ROOT- region @ 172.22.0.31:60020
2012-07-25 11:50:36,555 DEBUG [main-EventThread] RegionClient: [id: 0x6ab30913] OPEN
2012-07-25 11:50:36,562 DEBUG [main-EventThread] RegionClient: RPC queued: HBaseRpc(method="getClosestRowBefore", table="-ROOT-", key=".META.,t1,0,:,:", region=RegionInfo(table="-ROOT-", region_name="-ROOT-,,0", stop_key=""), attempt=0)
2012-07-25 11:50:36,563 DEBUG [main-EventThread] Deferred: callback=retry RPC@121398849 returned Deferred@240990493(state=PENDING, result=null, callback=type getClosestRowBefore response -> locateRegion in ROOT -> passthrough -> retry RPC -> (continuation of Deferred@173057609 after retry RPC@121398849), errback=passthrough -> passthrough -> locateRegion errback -> retry RPC -> (continuation of Deferred@173057609 after retry RPC@121398849)), so the following Deferred is getting paused: Deferred@173057609(state=PAUSED, result=Deferred@240990493, callback=<none>, errback=<none>)
2012-07-25 11:50:36,563 DEBUG [main-EventThread] RegionClient: RPC queued: HBaseRpc(method="getClosestRowBefore", table="-ROOT-", key=".META.,t1,0,:,:", region=RegionInfo(table="-ROOT-", region_name="-ROOT-,,0", stop_key=""), attempt=0)
2012-07-25 11:50:36,563 DEBUG [main-EventThread] Deferred: callback=retry RPC@886376425 returned Deferred@234515736(state=PENDING, result=null, callback=type getClosestRowBefore response -> locateRegion in ROOT -> passthrough -> retry RPC -> (continuation of Deferred@873482707 after retry RPC@886376425), errback=passthrough -> passthrough -> locateRegion errback -> retry RPC -> (continuation of Deferred@873482707 after retry RPC@886376425)), so the following Deferred is getting paused: Deferred@873482707(state=PAUSED, result=Deferred@234515736, callback=<none>, errback=<none>)
2012-07-25 11:50:36,563 DEBUG [main-EventThread] RegionClient: RPC queued: HBaseRpc(method="getClosestRowBefore", table="-ROOT-", key=".META.,t0,1,:,:", region=RegionInfo(table="-ROOT-", region_name="-ROOT-,,0", stop_key=""), attempt=0)
2012-07-25 11:50:36,563 DEBUG [main-EventThread] Deferred: callback=retry RPC@425645657 returned Deferred@558529042(state=PENDING, result=null, callback=type getClosestRowBefore response -> locateRegion in ROOT -> passthrough -> retry RPC -> (continuation of Deferred@546587857 after retry RPC@425645657), errback=passthrough -> passthrough -> locateRegion errback -> retry RPC -> (continuation of Deferred@546587857 after retry RPC@425645657)), so the following Deferred is getting paused: Deferred@546587857(state=PAUSED, result=Deferred@558529042, callback=<none>, errback=<none>)
2012-07-25 11:50:36,563 DEBUG [main-EventThread] RegionClient: RPC queued: HBaseRpc(method="getClosestRowBefore", table="-ROOT-", key=".META.,t0,1,:,:", region=RegionInfo(table="-ROOT-", region_name="-ROOT-,,0", stop_key=""), attempt=0)
2012-07-25 11:50:36,564 DEBUG [main-EventThread] Deferred: callback=retry RPC@1714554380 returned Deferred@1579889214(state=PENDING, result=null, callback=type getClosestRowBefore response -> locateRegion in ROOT -> passthrough -> retry RPC -> (continuation of Deferred@1710208190 after retry RPC@1714554380), errback=passthrough -> passthrough -> locateRegion errback -> retry RPC -> (continuation of Deferred@1710208190 after retry RPC@1714554380)), so the following Deferred is getting paused: Deferred@1710208190(state=PAUSED, result=Deferred@1579889214, callback=<none>, errback=<none>)
2012-07-25 11:50:36,567 DEBUG [main-EventThread] Deferred: callback=disconnect callback@1223313598 returned Deferred@223205208(state=PENDING, result=null, callback=wait 2 RegionClient.shutdown() -> release resources callback -> (continuation of Deferred@1903483194 after disconnect callback@1223313598), errback=passthrough -> passthrough -> (continuation of Deferred@1903483194 after disconnect callback@1223313598)), so the following Deferred is getting paused: Deferred@1903483194(state=PAUSED, result=Deferred@223205208, callback=<none>, errback=<none>)
2012-07-25 11:50:36,568 DEBUG [main-EventThread] Deferred: callback=retry shutdown@778152199 returned Deferred@1903483194(state=PAUSED, result=Deferred@223205208, callback=(continuation of Deferred@228598363 after retry shutdown@778152199), errback=(continuation of Deferred@228598363 after retry shutdown@778152199)), so the following Deferred is getting paused: Deferred@228598363(state=PAUSED, result=Deferred@1903483194, callback=wakeup thread main, errback=wakeup thread main)
2012-07-25 11:50:36,568 DEBUG [main-EventThread] HBaseClient: Ignore any DEBUG exception from ZooKeeper
2012-07-25 11:50:36,568 DEBUG [main-EventThread] ZooKeeper: Closing session: 0x135ca75f41b6469
2012-07-25 11:50:36,568 DEBUG [main-EventThread] ClientCnxn: Closing client for session: 0x135ca75f41b6469
2012-07-25 11:50:36,586 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] BOUND: /10.241.94.172:61070
2012-07-25 11:50:36,586 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] CONNECTED: /172.22.0.31:60020
2012-07-25 11:50:36,599 DEBUG [main-SendThread(pww-30:2181)] ClientCnxn: Reading reply sessionid:0x135ca75f41b6469, packet:: clientPath:null serverPath:null finished:false header:: 2,-11  replyHeader:: 2,25784858193,0  request:: null response:: null
2012-07-25 11:50:36,599 DEBUG [main-SendThread(pww-30:2181)] ClientCnxn: An exception was thrown while closing send thread for session 0x135ca75f41b6469 : Unable to read additional data from server sessionid 0x135ca75f41b6469, likely server has closed socket
2012-07-25 11:50:36,602 DEBUG [main-EventThread] ClientCnxn: Disconnecting client for session: 0x135ca75f41b6469
2012-07-25 11:50:36,602 INFO  [main-EventThread] ZooKeeper: Session: 0x135ca75f41b6469 closed
2012-07-25 11:50:36,603 DEBUG [main-EventThread] HBaseClient: ZooKeeper#close completed in 34564000ns
2012-07-25 11:50:36,603 INFO  [main-EventThread] ClientCnxn: EventThread shut down
2012-07-25 11:50:36,607 DEBUG [New I/O  worker #1] RegionClient: Sending RPC #0, payload=BigEndianHeapChannelBuffer(ridx=0, widx=87, cap=87) "\x00\x00\x00S\x00\x00\x00\x00\x00\x12getProtocolVersion\x00\x00\x00\x02\n,org.apache.hadoop.hbase.ipc.HRegionInterface\x06\x00\x00\x00\x00\x00\x00\x00\x18"
2012-07-25 11:50:36,609 DEBUG [main-SendThread(pww-30:2181)] ClientCnxn: Ignoring exception during shutdown input
java.net.SocketException: Socket is not connected
    at sun.nio.ch.SocketChannelImpl.shutdown(Native Method) ~[na:1.6.0_32]
    at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:665) ~[na:1.6.0_32]
    at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:430) ~[na:1.6.0_32]
    at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1232) [zookeeper-3.3.4.jar:3.3.3-1203054]
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1211) [zookeeper-3.3.4.jar:3.3.3-1203054]
2012-07-25 11:50:36,613 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] WRITTEN_AMOUNT: 226
2012-07-25 11:50:36,641 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=22, cap=22)
2012-07-25 11:50:36,641 DEBUG [New I/O  worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2012-07-25 11:50:36,642 DEBUG [New I/O  worker #1] RegionClient: rpcid=0, response size=22 bytes, 0 readable bytes left, rpc=HBaseRpc(method="getProtocolVersion", table=null, key=null, region=null, attempt=0)
2012-07-25 11:50:36,642 DEBUG [New I/O  worker #1] RegionClient: Executing RPC queued: HBaseRpc(method="getClosestRowBefore", table="-ROOT-", key=".META.,t1,0,:,:", region=RegionInfo(table="-ROOT-", region_name="-ROOT-,,0", stop_key=""), attempt=0)
2012-07-25 11:50:36,643 DEBUG [New I/O  worker #1] RegionClient: Sending RPC #1, payload=BigEndianHeapChannelBuffer(ridx=0, widx=80, cap=84) "\x00\x00\x00L\x00\x00\x00\x01\x01\x00\x13getClosestRowBefore\x00\x00\x00\x00\x00\x00\x00\x1D\x00\x00\x00\x00\x00\x00\x00\x03\x0B\t-ROOT-,,0\x0B\x0F.META.,t1,0,:,:\x0B\x04info\x00\x00\x00\x00"
2012-07-25 11:50:36,643 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] WRITTEN_AMOUNT: 80
2012-07-25 11:50:36,643 DEBUG [New I/O  worker #1] RegionClient: Executing RPC queued: HBaseRpc(method="getClosestRowBefore", table="-ROOT-", key=".META.,t1,0,:,:", region=RegionInfo(table="-ROOT-", region_name="-ROOT-,,0", stop_key=""), attempt=0)
2012-07-25 11:50:36,643 DEBUG [New I/O  worker #1] RegionClient: Sending RPC #2, payload=BigEndianHeapChannelBuffer(ridx=0, widx=80, cap=84) "\x00\x00\x00L\x00\x00\x00\x02\x01\x00\x13getClosestRowBefore\x00\x00\x00\x00\x00\x00\x00\x1D\x00\x00\x00\x00\x00\x00\x00\x03\x0B\t-ROOT-,,0\x0B\x0F.META.,t1,0,:,:\x0B\x04info\x00\x00\x00\x00"
2012-07-25 11:50:36,643 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] WRITTEN_AMOUNT: 80
2012-07-25 11:50:36,643 DEBUG [New I/O  worker #1] RegionClient: Executing RPC queued: HBaseRpc(method="getClosestRowBefore", table="-ROOT-", key=".META.,t0,1,:,:", region=RegionInfo(table="-ROOT-", region_name="-ROOT-,,0", stop_key=""), attempt=0)
2012-07-25 11:50:36,643 DEBUG [New I/O  worker #1] RegionClient: Sending RPC #3, payload=BigEndianHeapChannelBuffer(ridx=0, widx=80, cap=84) "\x00\x00\x00L\x00\x00\x00\x03\x01\x00\x13getClosestRowBefore\x00\x00\x00\x00\x00\x00\x00\x1D\x00\x00\x00\x00\x00\x00\x00\x03\x0B\t-ROOT-,,0\x0B\x0F.META.,t0,1,:,:\x0B\x04info\x00\x00\x00\x00"
2012-07-25 11:50:36,643 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] WRITTEN_AMOUNT: 80
2012-07-25 11:50:36,643 DEBUG [New I/O  worker #1] RegionClient: Executing RPC queued: HBaseRpc(method="getClosestRowBefore", table="-ROOT-", key=".META.,t0,1,:,:", region=RegionInfo(table="-ROOT-", region_name="-ROOT-,,0", stop_key=""), attempt=0)
2012-07-25 11:50:36,644 DEBUG [New I/O  worker #1] RegionClient: Sending RPC #4, payload=BigEndianHeapChannelBuffer(ridx=0, widx=80, cap=84) "\x00\x00\x00L\x00\x00\x00\x04\x01\x00\x13getClosestRowBefore\x00\x00\x00\x00\x00\x00\x00\x1D\x00\x00\x00\x00\x00\x00\x00\x03\x0B\t-ROOT-,,0\x0B\x0F.META.,t0,1,:,:\x0B\x04info\x00\x00\x00\x00"
2012-07-25 11:50:36,644 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] WRITTEN_AMOUNT: 80
2012-07-25 11:50:36,644 DEBUG [New I/O  worker #1] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 2463us
2012-07-25 11:50:36,667 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=268, cap=268)
2012-07-25 11:50:36,667 DEBUG [New I/O  worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2012-07-25 11:50:36,667 DEBUG [New I/O  worker #1] RegionClient: rpcid=1, response size=268 bytes, 0 readable bytes left, rpc=HBaseRpc(method="getClosestRowBefore", table="-ROOT-", key=".META.,t1,0,:,:", region=RegionInfo(table="-ROOT-", region_name="-ROOT-,,0", stop_key=""), attempt=0)
2012-07-25 11:50:36,667 DEBUG [New I/O  worker #1] RegionInfo: Got ".META."'s region [""-"") offline=false, region_id=1, region_name=".META.,,1", split=false
2012-07-25 11:50:36,668 INFO  [New I/O  worker #1] HBaseClient: Added client for region RegionInfo(table=".META.", region_name=".META.,,1", stop_key=""), which was added to the regions cache.  Now we know that RegionClient@2128040981(chan=[id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020], #pending_rpcs=0, #batched=0, #rpcs_inflight=3) is hosting 1 region.
2012-07-25 11:50:36,668 DEBUG [New I/O  worker #1] RegionClient: Sending RPC #5, payload=BigEndianHeapChannelBuffer(ridx=0, widx=71, cap=75) "\x00\x00\x00C\x00\x00\x00\x05\x01\x00\x13getClosestRowBefore\x00\x00\x00\x00\x00\x00\x00\x1D\x00\x00\x00\x00\x00\x00\x00\x03\x0B\t.META.,,1\x0B\x06t1,0,:\x0B\x04info\x00\x00\x00\x00"
2012-07-25 11:50:36,668 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] WRITTEN_AMOUNT: 71
2012-07-25 11:50:36,669 DEBUG [New I/O  worker #1] Deferred: callback=retry RPC@199156736 returned Deferred@1447739926(state=PENDING, result=null, callback=type getClosestRowBefore response -> locateRegion in META -> release .META. lookup permit -> passthrough -> retry RPC -> (continuation of Deferred@240990493 after retry RPC@199156736), errback=passthrough -> passthrough -> release .META. lookup permit -> locateRegion errback -> retry RPC -> (continuation of Deferred@240990493 after retry RPC@199156736)), so the following Deferred is getting paused: Deferred@240990493(state=PAUSED, result=Deferred@1447739926, callback=(continuation of Deferred@173057609 after retry RPC@121398849) -> notify DeferredGroup@1175215772 -> notify DeferredGroup@1444378545, errback=(continuation of Deferred@173057609 after retry RPC@121398849) -> notify DeferredGroup@1175215772 -> notify DeferredGroup@1444378545)
2012-07-25 11:50:36,669 DEBUG [New I/O  worker #1] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 2006us
2012-07-25 11:50:36,669 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=536, cap=536)
2012-07-25 11:50:36,669 DEBUG [New I/O  worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2012-07-25 11:50:36,669 DEBUG [New I/O  worker #1] RegionClient: rpcid=2, response size=268 bytes, 268 readable bytes left, rpc=HBaseRpc(method="getClosestRowBefore", table="-ROOT-", key=".META.,t1,0,:,:", region=RegionInfo(table="-ROOT-", region_name="-ROOT-,,0", stop_key=""), attempt=0)
2012-07-25 11:50:36,669 DEBUG [New I/O  worker #1] RegionInfo: Got ".META."'s region [""-"") offline=false, region_id=1, region_name=".META.,,1", split=false
2012-07-25 11:50:36,670 DEBUG [New I/O  worker #1] RegionClient: Sending RPC #6, payload=BigEndianHeapChannelBuffer(ridx=0, widx=71, cap=75) "\x00\x00\x00C\x00\x00\x00\x06\x01\x00\x13getClosestRowBefore\x00\x00\x00\x00\x00\x00\x00\x1D\x00\x00\x00\x00\x00\x00\x00\x03\x0B\t.META.,,1\x0B\x06t1,0,:\x0B\x04info\x00\x00\x00\x00"
2012-07-25 11:50:36,670 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] WRITTEN_AMOUNT: 71
2012-07-25 11:50:36,670 DEBUG [New I/O  worker #1] Deferred: callback=retry RPC@515060127 returned Deferred@108973432(state=PENDING, result=null, callback=type getClosestRowBefore response -> locateRegion in META -> release .META. lookup permit -> passthrough -> retry RPC -> (continuation of Deferred@234515736 after retry RPC@515060127), errback=passthrough -> passthrough -> release .META. lookup permit -> locateRegion errback -> retry RPC -> (continuation of Deferred@234515736 after retry RPC@515060127)), so the following Deferred is getting paused: Deferred@234515736(state=PAUSED, result=Deferred@108973432, callback=(continuation of Deferred@873482707 after retry RPC@886376425) -> notify DeferredGroup@1175215772 -> notify DeferredGroup@1444378545, errback=(continuation of Deferred@873482707 after retry RPC@886376425) -> notify DeferredGroup@1175215772 -> notify DeferredGroup@1444378545)
2012-07-25 11:50:36,670 DEBUG [New I/O  worker #1] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 720us
2012-07-25 11:50:36,670 DEBUG [New I/O  worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2012-07-25 11:50:36,670 DEBUG [New I/O  worker #1] RegionClient: rpcid=3, response size=268 bytes, 0 readable bytes left, rpc=HBaseRpc(method="getClosestRowBefore", table="-ROOT-", key=".META.,t0,1,:,:", region=RegionInfo(table="-ROOT-", region_name="-ROOT-,,0", stop_key=""), attempt=0)
2012-07-25 11:50:36,670 DEBUG [New I/O  worker #1] RegionInfo: Got ".META."'s region [""-"") offline=false, region_id=1, region_name=".META.,,1", split=false
2012-07-25 11:50:36,670 DEBUG [New I/O  worker #1] RegionClient: Sending RPC #7, payload=BigEndianHeapChannelBuffer(ridx=0, widx=71, cap=75) "\x00\x00\x00C\x00\x00\x00\x07\x01\x00\x13getClosestRowBefore\x00\x00\x00\x00\x00\x00\x00\x1D\x00\x00\x00\x00\x00\x00\x00\x03\x0B\t.META.,,1\x0B\x06t0,1,:\x0B\x04info\x00\x00\x00\x00"
2012-07-25 11:50:36,670 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] WRITTEN_AMOUNT: 71
2012-07-25 11:50:36,671 DEBUG [New I/O  worker #1] Deferred: callback=retry RPC@1860404380 returned Deferred@454514340(state=PENDING, result=null, callback=type getClosestRowBefore response -> locateRegion in META -> release .META. lookup permit -> passthrough -> retry RPC -> (continuation of Deferred@558529042 after retry RPC@1860404380), errback=passthrough -> passthrough -> release .META. lookup permit -> locateRegion errback -> retry RPC -> (continuation of Deferred@558529042 after retry RPC@1860404380)), so the following Deferred is getting paused: Deferred@558529042(state=PAUSED, result=Deferred@454514340, callback=(continuation of Deferred@546587857 after retry RPC@425645657) -> notify DeferredGroup@1175215772 -> notify DeferredGroup@1444378545, errback=(continuation of Deferred@546587857 after retry RPC@425645657) -> notify DeferredGroup@1175215772 -> notify DeferredGroup@1444378545)
2012-07-25 11:50:36,671 DEBUG [New I/O  worker #1] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 684us
2012-07-25 11:50:36,686 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=268, cap=268)
2012-07-25 11:50:36,686 DEBUG [New I/O  worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2012-07-25 11:50:36,686 DEBUG [New I/O  worker #1] RegionClient: rpcid=4, response size=268 bytes, 0 readable bytes left, rpc=HBaseRpc(method="getClosestRowBefore", table="-ROOT-", key=".META.,t0,1,:,:", region=RegionInfo(table="-ROOT-", region_name="-ROOT-,,0", stop_key=""), attempt=0)
2012-07-25 11:50:36,686 DEBUG [New I/O  worker #1] RegionInfo: Got ".META."'s region [""-"") offline=false, region_id=1, region_name=".META.,,1", split=false
2012-07-25 11:50:36,686 DEBUG [New I/O  worker #1] RegionClient: Sending RPC #8, payload=BigEndianHeapChannelBuffer(ridx=0, widx=71, cap=75) "\x00\x00\x00C\x00\x00\x00\x08\x01\x00\x13getClosestRowBefore\x00\x00\x00\x00\x00\x00\x00\x1D\x00\x00\x00\x00\x00\x00\x00\x03\x0B\t.META.,,1\x0B\x06t0,1,:\x0B\x04info\x00\x00\x00\x00"
2012-07-25 11:50:36,687 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] WRITTEN_AMOUNT: 71
2012-07-25 11:50:36,687 DEBUG [New I/O  worker #1] Deferred: callback=retry RPC@423180001 returned Deferred@2067921590(state=PENDING, result=null, callback=type getClosestRowBefore response -> locateRegion in META -> release .META. lookup permit -> passthrough -> retry RPC -> (continuation of Deferred@1579889214 after retry RPC@423180001), errback=passthrough -> passthrough -> release .META. lookup permit -> locateRegion errback -> retry RPC -> (continuation of Deferred@1579889214 after retry RPC@423180001)), so the following Deferred is getting paused: Deferred@1579889214(state=PAUSED, result=Deferred@2067921590, callback=(continuation of Deferred@1710208190 after retry RPC@1714554380) -> notify DeferredGroup@1175215772 -> notify DeferredGroup@1444378545, errback=(continuation of Deferred@1710208190 after retry RPC@1714554380) -> notify DeferredGroup@1175215772 -> notify DeferredGroup@1444378545)
2012-07-25 11:50:36,687 DEBUG [New I/O  worker #1] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 852us
2012-07-25 11:50:36,696 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=392, cap=392)
2012-07-25 11:50:36,696 DEBUG [New I/O  worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2012-07-25 11:50:36,696 DEBUG [New I/O  worker #1] RegionClient: rpcid=5, response size=392 bytes, 0 readable bytes left, rpc=HBaseRpc(method="getClosestRowBefore", table=".META.", key="t1,0,:", region=RegionInfo(table=".META.", region_name=".META.,,1", stop_key=""), attempt=0)
2012-07-25 11:50:36,696 DEBUG [New I/O  worker #1] RegionInfo: Got "t1"'s region [""-"") offline=false, region_id=1343241605007, region_name="t1,,1343241605007.89bcf35b2666c0a5a6187372da617e0b.", split=false
2012-07-25 11:50:36,697 INFO  [New I/O  worker #1] HBaseClient: Added client for region RegionInfo(table="t1", region_name="t1,,1343241605007.89bcf35b2666c0a5a6187372da617e0b.", stop_key=""), which was added to the regions cache.  Now we know that RegionClient@2128040981(chan=[id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020], #pending_rpcs=0, #batched=0, #rpcs_inflight=3) is hosting 2 regions.
2012-07-25 11:50:36,701 DEBUG [New I/O  worker #1] Deferred: callback=retry RPC@1505888432 returned Deferred@1372096729(state=PENDING, result=null, callback=(continuation of Deferred@1447739926 after retry RPC@1505888432), errback=(continuation of Deferred@1447739926 after retry RPC@1505888432)), so the following Deferred is getting paused: Deferred@1447739926(state=PAUSED, result=Deferred@1372096729, callback=(continuation of Deferred@240990493 after retry RPC@199156736), errback=(continuation of Deferred@240990493 after retry RPC@199156736))
2012-07-25 11:50:36,701 DEBUG [New I/O  worker #1] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 5409us
2012-07-25 11:50:36,702 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=784, cap=784)
2012-07-25 11:50:36,702 DEBUG [New I/O  worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2012-07-25 11:50:36,702 DEBUG [New I/O  worker #1] RegionClient: rpcid=6, response size=392 bytes, 392 readable bytes left, rpc=HBaseRpc(method="getClosestRowBefore", table=".META.", key="t1,0,:", region=RegionInfo(table=".META.", region_name=".META.,,1", stop_key=""), attempt=0)
2012-07-25 11:50:36,702 DEBUG [New I/O  worker #1] RegionInfo: Got "t1"'s region [""-"") offline=false, region_id=1343241605007, region_name="t1,,1343241605007.89bcf35b2666c0a5a6187372da617e0b.", split=false
2012-07-25 11:50:36,702 DEBUG [New I/O  worker #1] Deferred: callback=retry RPC@1150322817 returned Deferred@1827795025(state=PENDING, result=null, callback=(continuation of Deferred@108973432 after retry RPC@1150322817), errback=(continuation of Deferred@108973432 after retry RPC@1150322817)), so the following Deferred is getting paused: Deferred@108973432(state=PAUSED, result=Deferred@1827795025, callback=(continuation of Deferred@234515736 after retry RPC@515060127), errback=(continuation of Deferred@234515736 after retry RPC@515060127))
2012-07-25 11:50:36,702 DEBUG [New I/O  worker #1] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 471us
2012-07-25 11:50:36,702 DEBUG [New I/O  worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2012-07-25 11:50:36,702 DEBUG [New I/O  worker #1] RegionClient: rpcid=7, response size=392 bytes, 0 readable bytes left, rpc=HBaseRpc(method="getClosestRowBefore", table=".META.", key="t0,1,:", region=RegionInfo(table=".META.", region_name=".META.,,1", stop_key=""), attempt=0)
2012-07-25 11:50:36,702 DEBUG [New I/O  worker #1] RegionInfo: Got "t0"'s region [""-"") offline=false, region_id=1342830171798, region_name="t0,,1342830171798.892d1127298fd308960c5c8173264038.", split=false
2012-07-25 11:50:36,702 INFO  [New I/O  worker #1] HBaseClient: Added client for region RegionInfo(table="t0", region_name="t0,,1342830171798.892d1127298fd308960c5c8173264038.", stop_key=""), which was added to the regions cache.  Now we know that RegionClient@2128040981(chan=[id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020], #pending_rpcs=0, #batched=2, #rpcs_inflight=1) is hosting 3 regions.
2012-07-25 11:50:36,703 DEBUG [New I/O  worker #1] Deferred: callback=retry RPC@2144898487 returned Deferred@1648372467(state=PENDING, result=null, callback=(continuation of Deferred@454514340 after retry RPC@2144898487), errback=(continuation of Deferred@454514340 after retry RPC@2144898487)), so the following Deferred is getting paused: Deferred@454514340(state=PAUSED, result=Deferred@1648372467, callback=(continuation of Deferred@558529042 after retry RPC@1860404380), errback=(continuation of Deferred@558529042 after retry RPC@1860404380))
2012-07-25 11:50:36,703 DEBUG [New I/O  worker #1] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 540us
2012-07-25 11:50:36,705 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=392, cap=392)
2012-07-25 11:50:36,706 DEBUG [New I/O  worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2012-07-25 11:50:36,706 DEBUG [New I/O  worker #1] RegionClient: rpcid=8, response size=392 bytes, 0 readable bytes left, rpc=HBaseRpc(method="getClosestRowBefore", table=".META.", key="t0,1,:", region=RegionInfo(table=".META.", region_name=".META.,,1", stop_key=""), attempt=0)
2012-07-25 11:50:36,706 DEBUG [New I/O  worker #1] RegionInfo: Got "t0"'s region [""-"") offline=false, region_id=1342830171798, region_name="t0,,1342830171798.892d1127298fd308960c5c8173264038.", split=false
2012-07-25 11:50:36,706 DEBUG [New I/O  worker #1] Deferred: callback=retry RPC@1657006569 returned Deferred@741470887(state=PENDING, result=null, callback=(continuation of Deferred@2067921590 after retry RPC@1657006569), errback=(continuation of Deferred@2067921590 after retry RPC@1657006569)), so the following Deferred is getting paused: Deferred@2067921590(state=PAUSED, result=Deferred@741470887, callback=(continuation of Deferred@1579889214 after retry RPC@423180001), errback=(continuation of Deferred@1579889214 after retry RPC@423180001))
2012-07-25 11:50:36,706 DEBUG [New I/O  worker #1] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 600us
2012-07-25 11:50:37,601 DEBUG [Hashed wheel timer #1] RegionClient: Periodic flush timer: flushing RPCs for RegionClient@2128040981(chan=[id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020], #pending_rpcs=0, #batched=4, #rpcs_inflight=0)
2012-07-25 11:50:37,602 DEBUG [Hashed wheel timer #1] RegionClient: Sending RPC #9, payload=BigEndianHeapChannelBuffer(ridx=0, widx=382, cap=404) [0, 0, 1, 122, 0, 0, 0, 9, 1, 0, 5, 109, 117, 108, 116, 105, 0, 0, 0, 0, 0, 0, 0, 29, 0, 0, 0, 0, 0, 0, 0, 1, 66, 66, 0, 0, 0, 2, 51, 116, 48, 44, 44, 49, 51, 52, 50, 56, 51, 48, 49, 55, 49, 55, 57, 56, 46, 56, 57, 50, 100, 49, 49, 50, 55, 50, 57, 56, 102, 100, 51, 48, 56, 57, 54, 48, 99, 53, 99, 56, 49, 55, 51, 50, 54, 52, 48, 51, 56, 46, 0, 0, 0, 2, 65, 65, 64, 31, 3, 1, 49, 127, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 14, 17, 14, 65, 65, 64, 35, 2, 1, 49, 127, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 0, 0, 0, 1, 1, 102, 0, 0, 0, 0, 0, 0, 0, 28, 0, 0, 0, 24, 0, 0, 0, 15, 0, 0, 0, 1, 0, 1, 49, 1, 102, 113, 127, -1, -1, -1, -1, -1, -1, -1, 4, 118, 0, 0, 0, 0, 0, 0, 0, 1, 14, 17, 14, 51, 116, 49, 44, 44, 49, 51, 52, 51, 50, 52, 49, 54, 48, 53, 48, 48, 55, 46, 56, 57, 98, 99, 102, 51, 53, 98, 50, 54, 54, 54, 99, 48, 97, 53, 97, 54, 49, 56, 55, 51, 55, 50, 100, 97, 54, 49, 55, 101, 48, 98, 46, 0, 0, 0, 2, 65, 65, 64, 31, 3, 1, 48, 127, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 14, 17, 14, 65, 65, 64, 35, 2, 1, 48, 127, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 0, 0, 0, 1, 1, 102, 0, 0, 0, 1, 0, 0, 0, 28, 0, 0, 0, 24, 0, 0, 0, 15, 0, 0, 0, 1, 0, 1, 48, 1, 102, 113, 127, -1, -1, -1, -1, -1, -1, -1, 4, 118, 0, 0, 0, 0, 0, 0, 0, 1, 14, 17, 14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
2012-07-25 11:50:37,603 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] WRITTEN_AMOUNT: 382
2012-07-25 11:50:37,633 DEBUG [New I/O  worker #1] RegionClient: [id: 0x6ab30913, /10.241.94.172:61070 => /172.22.0.31:60020] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=102, cap=102)
2012-07-25 11:50:37,633 DEBUG [New I/O  worker #1] RegionClient: ------------------>> ENTERING DECODE >>------------------
2012-07-25 11:50:37,634 DEBUG [New I/O  worker #1] RegionClient: rpcid=9, response size=102 bytes, 0 readable bytes left, rpc=MultiAction(batch=[DeleteRequest(table="t0", key="1", family="", qualifiers=[""], attempt=4, region=RegionInfo(table="t0", region_name="t0,,1342830171798.892d1127298fd308960c5c8173264038.", stop_key="")), PutRequest(table="t0", key="1", family="f", qualifiers=["q"], values=["v"], timestamp=9223372036854775807, lockid=-1, durable=true, bufferable=true, attempt=4, region=RegionInfo(table="t0", region_name="t0,,1342830171798.892d1127298fd308960c5c8173264038.", stop_key="")), DeleteRequest(table="t1", key="0", family="", qualifiers=[""], attempt=4, region=RegionInfo(table="t1", region_name="t1,,1343241605007.89bcf35b2666c0a5a6187372da617e0b.", stop_key="")), PutRequest(table="t1", key="0", family="f", qualifiers=["q"], values=["v"], timestamp=9223372036854775807, lockid=-1, durable=true, bufferable=true, attempt=4, region=RegionInfo(table="t1", region_name="t1,,1343241605007.89bcf35b2666c0a5a6187372da617e0b.", stop_key=""))])
2012-07-25 11:50:37,634 DEBUG [New I/O  worker #1] RegionClient: ------------------<< LEAVING  DECODE <<------------------ time elapsed: 1434us
Exception in thread "main" com.stumbleupon.async.DeferredGroupException: At least one of the Deferreds failed, first exception:
    at com.stumbleupon.async.DeferredGroup.recordCompletion(DeferredGroup.java:107)
    at com.stumbleupon.async.DeferredGroup.access$000(DeferredGroup.java:36)
    at com.stumbleupon.async.DeferredGroup$1Notify.call(DeferredGroup.java:69)
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1261)
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1240)
    at com.stumbleupon.async.Deferred.callback(Deferred.java:988)
    at com.stumbleupon.async.DeferredGroup.recordCompletion(DeferredGroup.java:107)
    at com.stumbleupon.async.DeferredGroup.access$000(DeferredGroup.java:36)
    at com.stumbleupon.async.DeferredGroup$1Notify.call(DeferredGroup.java:69)
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1261)
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1240)
    at com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
    at com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1349)
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1261)
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1240)
    at com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
    at com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1349)
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1261)
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1240)
    at com.stumbleupon.async.Deferred.callback(Deferred.java:988)
    at org.hbase.async.HBaseRpc.callback(HBaseRpc.java:443)
    at org.hbase.async.RegionClient$1MultiActionCallback.handleException(RegionClient.java:663)
    at org.hbase.async.RegionClient$1MultiActionCallback.call(RegionClient.java:630)
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1261)
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1240)
    at com.stumbleupon.async.Deferred.callback(Deferred.java:988)
    at org.hbase.async.HBaseRpc.callback(HBaseRpc.java:443)
    at org.hbase.async.RegionClient.decode(RegionClient.java:1114)
    at org.hbase.async.RegionClient.decode(RegionClient.java:82)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:502)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:437)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:938)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:2425)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:91)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:680)
Caused by: com.stumbleupon.async.DeferredGroupException: At least one of the Deferreds failed, first exception:
    ... 41 more
Caused by: org.hbase.async.RemoteException: IPC server unable to read call parameters: Error in readFields
    at org.hbase.async.RegionClient.deserializeException(RegionClient.java:1186)
    at org.hbase.async.RegionClient.deserialize(RegionClient.java:1161)
    at org.hbase.async.RegionClient.decode(RegionClient.java:1082)
    ... 19 more

server log:

2012-07-25 14:50:34,946 WARN org.apache.hadoop.ipc.HBaseServer: Unable to read call parameters for client 192.168.100.252
java.io.IOException: Error in readFields
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:655)
    at org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1238)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1167)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
Caused by: java.io.IOException: Error in readFields
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:655)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:565)
    at org.apache.hadoop.hbase.client.MultiAction.readFields(MultiAction.java:116)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:652)
    ... 9 more
Caused by: java.io.IOException: Error in readFields
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:655)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:565)
    at org.apache.hadoop.hbase.client.Action.readFields(Action.java:101)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:652)
    ... 12 more
Caused by: java.lang.NegativeArraySizeException: -1
    at org.apache.hadoop.hbase.util.Bytes.readByteArray(Bytes.java:147)
    at org.apache.hadoop.hbase.client.OperationWithAttributes.readAttributes(OperationWithAttributes.java:102)
    at org.apache.hadoop.hbase.client.Put.readFields(Put.java:399)
    at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:652)
    ... 15 more

Batch Gets

I couldn't find support for sending multiple get requests in a single batch. HTable supports it, and I wonder why this was left out of asynchbase, since it looks like a commonly useful operation.
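
One common workaround, sketched below under the assumption of the stock asynchbase API (the table name, row keys and ZooKeeper quorum are made up), is to issue the gets individually and join them with Deferred.group(). This is only a client-side pattern, not a true batched RPC: each get still goes out as its own round-trip.

import java.util.ArrayList;
import com.stumbleupon.async.Deferred;
import org.hbase.async.GetRequest;
import org.hbase.async.HBaseClient;
import org.hbase.async.KeyValue;

public class BatchGets {
  public static void main(final String[] args) throws Exception {
    final HBaseClient client = new HBaseClient("zkhost.example.com");  // made-up quorum
    final ArrayList<Deferred<ArrayList<KeyValue>>> gets =
        new ArrayList<Deferred<ArrayList<KeyValue>>>();
    for (final String key : new String[] { "row1", "row2", "row3" }) {
      gets.add(client.get(new GetRequest("mytable", key)));  // one RPC per row
    }
    // Fires once every get has completed; each element holds one row's cells.
    // If any get fails, the group fails with a DeferredGroupException.
    final ArrayList<ArrayList<KeyValue>> rows = Deferred.group(gets).join();
    System.out.println(rows);
    client.shutdown().join();
  }
}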

Is asynchbase using the Deferred concept in the right way?

Apologies first, because this question should really be raised in a discussion group. But since the project has no group or mailing list, I have to resort to a GitHub issue.

What I wonder is whether asynchbase is using the Deferred concept in the right way. I issue a GetRequest first, then get the Deferred object, then bind the callback to it. However, I guess sometimes this won't work. Take the following extreme case as an example.

val deferred = client.get(request)
// wait or hang for a LONG time.
deferred.addCallback { result =>
  // process.
}

Since we add the callback to the chain too late, I guess the asynchbase engine will not execute the callback — or will it?

I think the right form should be the following instead of the above:

client.get(request, new Callback() {
});

so that before launching the operation, the engine can make sure the callback is already bound to it.

Recently I learned about Twitter's Finagle, which brings up the concept of a Future, much like a Deferred. However, the biggest difference is that a Future binds to a service, while a Deferred binds to a request.
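
For what it's worth, com.stumbleupon.async.Deferred keeps its result once it has fired, so a callback added after the RPC has already completed is not dropped: it runs immediately, in the thread that adds it. A minimal sketch against that API (the quorum, table and key below are made up):

import java.util.ArrayList;
import com.stumbleupon.async.Callback;
import com.stumbleupon.async.Deferred;
import org.hbase.async.GetRequest;
import org.hbase.async.HBaseClient;
import org.hbase.async.KeyValue;

public class LateCallback {
  public static void main(final String[] args) throws Exception {
    final HBaseClient client = new HBaseClient("zkhost.example.com");
    final Deferred<ArrayList<KeyValue>> d = client.get(new GetRequest("t1", "row"));
    Thread.sleep(60000);  // Wait or hang for a LONG time...
    d.addCallback(new Callback<Object, ArrayList<KeyValue>>() {
      public Object call(final ArrayList<KeyValue> row) {
        System.out.println("got " + row);  // ...yet this still runs, right away,
        return null;                       // with the result stored in the Deferred.
      }
    });
    client.shutdown().join();
  }
}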

META cache fails to find the right region under some circumstances

I've just uncovered at least one case where the client does a META lookup and locates the correct region, but then fails to find it in its META cache, which causes it to do a META lookup again, and again, until it gives up.

I'm writing a unit test and fixing the bug. I will add more info later once I really understand the nature of the problem. But from my initial analysis of the DEBUG-level traces, it definitely looks like a bug in the way regions are looked up in the META cache.

apparent multiput wire protocol incompatibility with 0.92.0-rc4

A multiPut from asynchbase causes an OOME in the receiving HBase server. I captured the following byte array at the receiving end:

[0, 0, 0, -67, 0, 0, 0, 28, 1, 0, 8, 109, 117, 108, 116, 105, 80, 117, 116, 0, 0, 0, 0, 0, 0, 0, 29, 1, 0, 0, 0, 0, 0, 0, 1, 57, 57, 0, 0, 0, 0, 53, 116, 115, 100, 98, 44, 44, 49, 51, 50, 54, 57, 51, 56, 48, 51, 51, 50, 50, 49, 46, 50, 48, 56, 56, 50, 51, 50, 101, 54, 98, 99, 48, 100, 57, 48, 55, 97, 55, 97, 101, 52, 50, 101, 99, 50, 51, 99, 101, 49, 102, 54, 55, 46, 0, 0, 0, 1, 1, 13, 0, 4, 1, 0, 18, -55, 96, 0, 0, 2, 0, 0, 4, 127, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 0, 0, 0, 1, 1, 116, 0, 0, 0, 1, 0, 0, 0, 48, 0, 0, 0, 44, 0, 0, 0, 28, 0, 0, 0, 8, 0, 13, 0, 4, 1, 0, 18, -55, 96, 0, 0, 2, 0, 0, 4, 1, 116, 2, 39, 127, -1, -1, -1, -1, -1, -1, -1, 4, 0, 0, 0, 0, 0, 0, 0, 12, 0, 0, 0, 0]

The cause of death is Array.newInstance with an array length somewhere over 900 million (it appeared to have decoded the array length from within the multiPut magic token).
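
For a sense of scale, here is a hypothetical illustration of this class of misalignment (the offset is invented for illustration, not the one actually hit): four ASCII bytes taken from the "multiPut" method name, read as a big-endian int, already come out above 1.8 billion.

public class BogusLength {
  public static void main(final String[] args) throws Exception {
    // If the decoder's read position drifts into the ASCII method name,
    // four of its bytes get interpreted as a big-endian array length.
    final byte[] magic = "multiPut".getBytes("US-ASCII");
    final int bogus = ((magic[0] & 0xFF) << 24)
                    | ((magic[1] & 0xFF) << 16)
                    | ((magic[2] & 0xFF) << 8)
                    |  (magic[3] & 0xFF);
    System.out.println(bogus);  // 1836412020 -- an OOME-sized allocation
  }
}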

Installing on FreeBSD

Hello,

I am trying to build on FreeBSD 9.1-RC2 and have hit the following. It's past two AM, so I am probably being stupid :)

I got asynchbase-1.4.1.jar from: https://code.google.com/p/opentsdb/downloads/detail?name=asynchbase-1.4.1.jar&can=2&q=

Thanks,

Andrew

root@machine:/root/opentsdb # ./build.sh
+ test -f configure
+ test -d build
+ cd build
+ test -f Makefile
+ exec make
set dummy "http://opentsdb.googlecode.com/files" "third_party/hbase/asynchbase-1.4.1.jar"; shift; ./build-aux/fetchdep.sh "third_party/hbase/asynchbase-1.4.1.jar"
./build-aux/fetchdep.sh: not found
*** [third_party/hbase/asynchbase-1.4.1.jar] Error code 127

Stop in /root/opentsdb/build.
root@machine:/root/opentsdb # ls -l third_party/hbase/asynchbase-1.4.1.jar
-rwxrwxrwx  1 root  wheel  170492 Feb 15 03:34 third_party/hbase/asynchbase-1.4.1.jar

Prevent concurrent -ROOT- lookups

HBase, unlike Bigtable, has only one region in the -ROOT- table, containing a single entry that points to the .META. table, which itself can likewise have only a single region. The -ROOT- table is thus currently useless to HBase, and I've heard rumors that it may go away in the future. But that's beside the point.

This issue is about making sure that we only allow one concurrent lookup in -ROOT-. Whenever we kick off a getClosestRowBefore RPC, we could stash the Deferred somewhere, and next time we attempt to look something up in -ROOT-, we just return the existing Deferred instead of kicking off another RPC. One of the first callbacks on the RPC should be nulling the reference to the Deferred.

This optimization should be harmless should HBase decide to allow .META. to be split in the future, because the code handling such lookups always verifies that it got the correct key-range. So in the unlikely event that this optimization is used against a cluster that has a split .META., the code should still behave correctly albeit with poorer performance. If there are too many (~5 or more) regions in .META. there could be spurious RPC failures due to the number of retries, but that seems extremely unlikely at this point.
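
A minimal sketch of the proposal, with a hypothetical sendGetClosestRowBefore() standing in for the real RPC (the actual client would hang this state off HBaseClient):

import java.util.concurrent.atomic.AtomicReference;
import com.stumbleupon.async.Callback;
import com.stumbleupon.async.Deferred;

final class RootLookupDedup {
  private final AtomicReference<Deferred<Object>> root_lookup =
      new AtomicReference<Deferred<Object>>();

  Deferred<Object> locateRoot() {
    final Deferred<Object> in_flight = root_lookup.get();
    if (in_flight != null) {
      return in_flight;  // Piggy-back on the -ROOT- lookup already in flight.
    }
    final Deferred<Object> lookup = sendGetClosestRowBefore();
    root_lookup.set(lookup);  // Benign race: at worst one redundant RPC goes out.
    return lookup.addBoth(new Callback<Object, Object>() {
      public Object call(final Object result) {
        root_lookup.set(null);  // The next lookup will issue a fresh RPC.
        return result;
      }
    });
  }

  private Deferred<Object> sendGetClosestRowBefore() {
    return new Deferred<Object>();  // Stand-in for the real getClosestRowBefore RPC.
  }
}

Note that every waiter gets back the same Deferred, exactly as proposed above, so their callbacks chain onto one another rather than each receiving an independent copy of the result.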

Allow custom ZooKeeper client ports

There appears to be no way to make the HBaseClient recognize a non-standard port when hbase.zookeeper.property.clientPort is specified in hbase-site.xml. Instead, asynchbase seems to always use the default port of 2181 when attempting to connect to ZooKeeper nodes.

NSRE logic may not be handling split regions correctly

I have seen more than a few instances where the client does not handle the splitting of a region correctly and gets stuck NSREing. Here are the client and server logs for one such instance:

Client log:
2013-07-29 12:36:20,967 [New I/O  worker #143] INFO  org.hbase.async.HBaseClient - Invalidated cache for RegionInfo(table="userprofile", region_name="userprofile,c823032832619135341,1375090287278.26f0768bd78b19a4b531477c9c965b4e.", stop_key="gCAESEAkzpFqZMGF3I8iOAO3JQIw") as RegionClient@687935391(chan=[id: 0x052cd074, /172.22.5.48:34054 => /172.22.4.69:60020], #pending_rpcs=0, #batched=0, #rpcs_inflight=471913) seems to be splitting or closing it.
2013-07-29 12:36:21,369 [New I/O  worker #130] INFO  org.hbase.async.HBaseClient - Added client for region RegionInfo(table="userprofile", region_name="userprofile,c823032832619135341,1375090287278.26f0768bd78b19a4b531477c9c965b4e.", stop_key="gCAESEAkzpFqZMGF3I8iOAO3JQIw"), which was added to the regions cache.  Now we know that RegionClient@687935391(chan=[id: 0x052cd074, /172.22.5.48:34054 => /172.22.4.69:60020], #pending_rpcs=0, #batched=0, #rpcs_inflight=471935) is hosting 209 regions.
2013-07-29 12:53:35,973 [New I/O  worker #143] INFO  org.hbase.async.HBaseClient - There are now 500 RPCs pending due to NSRE on "userprofile,c823032832619135341,1375090287278.26f0768bd78b19a4b531477c9c965b4e."

.. and the pending list continues to grow ..
HBase Master log:
2013-07-29 12:36:24,859 INFO org.apache.hadoop.hbase.master.handler.SplitRegionHandler: Handled SPLIT event; parent=userprofile,c823032832619135341,1375090287278.26f0768bd78b19a4b531477c9c965b4e. daughter a=userprofile,c823032832619135341,1375115771316.b2a8c792ede06899da69e55003798f0f.daughter b=userprofile,c830351181437816853,1375115771316.d5f4062e7184a8511347050c0fe9e859.

I'll continue to investigate and try to reproduce it, but have you seen this behavior before, or can you come up with a theory? Could this be a slight variation of the theory you proposed in #1?
