
tbg / cockroach


This project forked from cockroachdb/cockroach


A Scalable, Geo-Replicated, Transactional Datastore

License: Other

Languages: Go 89.71%, TypeScript 4.99%, Starlark 2.73%, Yacc 0.68%, Shell 0.50%, Stylus 0.25%, Tcl 0.25%, C 0.21%, Makefile 0.14%, SCSS 0.13%, C++ 0.10%, HCL 0.09%, Python 0.08%, JavaScript 0.06%, Dockerfile 0.03%, HTML 0.03%, Ruby 0.02%, Assembly 0.01%, Awk 0.01%, CSS 0.01%

cockroach's Introduction


CockroachDB is a cloud-native distributed SQL database designed for building, scaling, and managing modern, data-intensive applications.

What is CockroachDB?

CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. It scales horizontally; survives disk, machine, rack, and even datacenter failures with minimal latency disruption and no manual intervention; supports strongly-consistent ACID transactions; and provides a familiar SQL API for structuring, manipulating, and querying data.

For more details, see our FAQ or architecture document.

Docs

For guidance on installation, development, deployment, and administration, see our User Documentation.

Starting with CockroachCloud

We can run CockroachDB for you, so you don't have to run your own cluster.

See our online documentation: Quickstart with CockroachCloud

Starting with CockroachDB

  1. Install CockroachDB using a pre-built executable, or build it from source.
  2. Start a local cluster and connect to it via the built-in SQL client.
  3. Learn more about CockroachDB SQL.
  4. Use a PostgreSQL-compatible driver or ORM to build an app with CockroachDB.
  5. Explore core features, such as data replication, automatic rebalancing, and fault tolerance and recovery.
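The first two steps above can be sketched as shell commands. This assumes a recent `cockroach` binary on your PATH; exact command names and flags vary by version, so treat it as a sketch rather than a canonical recipe. The `--insecure` flag and port 26257 (CockroachDB's default SQL port) are for local experimentation only.

```shell
# Start a throwaway single-node cluster on the default SQL port:
#   cockroach start-single-node --insecure --listen-addr=localhost:26257

# Connect with the built-in SQL client:
#   cockroach sql --insecure --host=localhost:26257

# Drivers and ORMs (step 4) can reuse the same endpoint via a
# standard PostgreSQL connection URL:
COCKROACH_URL="postgresql://root@localhost:26257/defaultdb?sslmode=disable"
echo "$COCKROACH_URL"
```

For anything beyond local testing, run a secure cluster with certificates instead of `--insecure`.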

Client Drivers

CockroachDB supports the PostgreSQL wire protocol, so you can use any available PostgreSQL client drivers to connect from various languages.
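Because the wire protocol is PostgreSQL's, a client needs nothing CockroachDB-specific beyond a standard `postgres://` connection URL. A minimal Go sketch of assembling one (the host, database name, and `sslmode` values below are illustrative assumptions; any PostgreSQL driver such as lib/pq or pgx would accept the result):

```go
package main

import (
	"fmt"
	"net/url"
)

// connURL assembles a PostgreSQL-style connection URL of the kind
// a PostgreSQL driver would accept when pointed at CockroachDB.
func connURL(user, host, db string, insecure bool) string {
	u := url.URL{
		Scheme: "postgresql",
		User:   url.User(user),
		Host:   host, // e.g. "localhost:26257", CockroachDB's default SQL port
		Path:   "/" + db,
	}
	q := u.Query()
	if insecure {
		q.Set("sslmode", "disable") // local test clusters only
	} else {
		q.Set("sslmode", "verify-full")
	}
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(connURL("root", "localhost:26257", "defaultdb", true))
	// prints "postgresql://root@localhost:26257/defaultdb?sslmode=disable"
}
```

The same URL works across languages: pass it to `sql.Open("postgres", url)` in Go, `psycopg2.connect(url)` in Python, and so on.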

Deployment

  • CockroachCloud - Steps to create a free CockroachCloud cluster on your preferred cloud platform.
  • Manual - Steps to deploy a CockroachDB cluster manually on multiple machines.
  • Cloud - Guides for deploying CockroachDB on various cloud platforms.
  • Orchestration - Guides for running CockroachDB with popular open-source orchestration systems.

Need Help?

Building from source

See our wiki for more details.

Contributing

We welcome your contributions! If you're looking for issues to work on, try looking at the good first issue list. We do our best to tag issues suitable for new external contributors with that label, so it's a great way to find something you can help with!

See our wiki for more details.

Engineering discussions take place on our public mailing list, [email protected]. Also please join our Community Slack (there's a dedicated #contributors channel!) to ask questions, discuss your ideas, and connect with other contributors.

Design

For an in-depth discussion of the CockroachDB architecture, see our Architecture Guide. For the original design motivation, see our design doc.

Licensing

Current CockroachDB code is released under a combination of two licenses, the Business Source License (BSL) and the Cockroach Community License (CCL).

When contributing to a CockroachDB feature, you can find the relevant license in the comments at the top of each file.

For more information, see the Licensing FAQs.

Comparison with Other Databases

To see how key features of CockroachDB stack up against other databases, check out CockroachDB in Comparison.

See Also

cockroach's People

Contributors

a-robinson, ajwerner, andreimatei, andy-kimball, asubiotto, bdarnell, benesch, bramgruneir, couchand, danhhz, dt, irfansharif, jordanlewis, kkaneda, knz, koorosh, maddyblue, mgartner, nvanbenschoten, otan, petermattis, raduberinde, rafiss, rickystewart, rytaft, spencerkimball, tamird, tbg, vivekmenezes, yuzefovich


cockroach's Issues

teamcity: failed tests on release-banana: lint/TestLint, test/TestImportPgDump

The following tests appear to have failed:

#864629:

--- FAIL: test/TestImportPgDump/read_data_only (0.000s)
Test ended in panic.

------- Stdout: -------
I180827 20:41:54.053559 52667 storage/replica_command.go:298  [n1,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53/1/106 [r24]
I180827 20:41:54.062208 52385 storage/replica_range_lease.go:554  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] transferring lease to s2
I180827 20:41:54.063407 52385 storage/replica_range_lease.go:617  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] done transferring lease to s2: <nil>
I180827 20:41:54.063498 51617 storage/replica_proposal.go:210  [n2,s2,r23/3:/Table/5{2-3/1/106}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.062230012,0 epo=1 pro=1535402514.062232488,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.077824 52355 storage/replica_command.go:298  [n1,s1,r24/1:/{Table/53/1/1…-Max}] initiating a split of this range at key /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,
\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd [r25]
I180827 20:41:54.090278 52643 storage/replica_range_lease.go:554  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] transferring lease to s3
I180827 20:41:54.091255 52643 storage/replica_range_lease.go:617  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] done transferring lease to s3: <nil>
I180827 20:41:54.092073 51863 storage/replica_proposal.go:210  [n3,s3,r24/2:/Table/53/{1/106-2/"\x15…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.090298269,0 epo=1 pro=1535402514.090300825,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.094854 52702 storage/replica_command.go:298  [n1,s1,r25/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n
✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd [r26]
I180827 20:41:54.112580 52726 storage/replica_range_lease.go:554  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] transferring lease to s3
I180827 20:41:54.113319 52726 storage/replica_range_lease.go:617  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] done transferring lease to s3: <nil>
I180827 20:41:54.113657 51850 storage/replica_proposal.go:210  [n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.112607829,0 epo=1 pro=1535402514.112610518,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.117098 52757 storage/replica_command.go:298  [n1,s1,r26/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105 [r27]
I180827 20:41:54.137397 52716 storage/store_snapshot.go:615  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] sending Raft snapshot 547ab8d0 at applied index 21
I180827 20:41:54.140430 52716 storage/store_snapshot.go:657  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] streamed snapshot to (n2,s2):3: kv pairs: 14, log entries: 2, rate-limit: 8.0 MiB/sec, 22ms
I180827 20:41:54.140860 52705 storage/replica_raftstorage.go:784  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applying Raft snapshot at index 21 (id=547ab8d0, encoded size=31270, 1 rocksdb batches, 2 log entries)
I180827 20:41:54.162696 52705 storage/replica_raftstorage.go:790  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applied Raft snapshot in 22ms [clear=0ms batch=0ms entries=21ms commit=0ms]
I180827 20:41:54.166103 52791 storage/replica_range_lease.go:554  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] transferring lease to s3
I180827 20:41:54.167118 51903 storage/replica_proposal.go:210  [n3,s3,r26/2:/Table/53/{2/"\xc0…-3/";π,…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.166156675,0 epo=1 pro=1535402514.166159831,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.167216 52791 storage/replica_range_lease.go:617  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] done transferring lease to s3: <nil>
I180827 20:41:54.172711 52589 storage/replica_command.go:298  [n1,s1,r27/1:/{Table/53/3/"…-Max}] initiating a split of this range at key /Table/54 [r28]
I180827 20:41:54.182740 52807 storage/replica_range_lease.go:554  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] transferring lease to s2
I180827 20:41:54.183947 52807 storage/replica_range_lease.go:617  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] done transferring lease to s2: <nil>
I180827 20:41:54.184954 51646 storage/replica_proposal.go:210  [n2,s2,r27/3:/Table/5{3/3/";π…-4}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.182761052,0 epo=1 pro=1535402514.182764040,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
--- FAIL: test/TestImportPgDump (0.000s)
Test ended in panic.

------- Stdout: -------
W180827 20:41:52.746991 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:52.757923 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:52.758132 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.758156 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.761168 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:52.761191 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:52.761204 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180827 20:41:52.767725 50862 server/node.go:373  [n?] **** cluster d5e53e69-a109-4eb6-91bf-29e74ae744ba has been created
I180827 20:41:52.767752 50862 server/server.go:1401  [n?] **** add additional nodes by specifying --join=127.0.0.1:41477
I180827 20:41:52.767936 50862 gossip/gossip.go:382  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41477" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402512767856449 
I180827 20:41:52.770338 50862 storage/store.go:1541  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180827 20:41:52.770546 50862 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180827 20:41:52.770626 50862 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180827 20:41:52.770721 50862 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180827 20:41:52.770760 50862 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:52.770788 50862 server/node.go:546  [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I180827 20:41:52.771023 50862 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.771066 50862 server/server.go:1807  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:52.771159 50862 server/server.go:1538  [n1] starting https server at 127.0.0.1:42563 (use: 127.0.0.1:42563)
I180827 20:41:52.771188 50862 server/server.go:1540  [n1] starting grpc/postgres server at 127.0.0.1:41477
I180827 20:41:52.771209 50862 server/server.go:1541  [n1] advertising CockroachDB node at 127.0.0.1:41477
I180827 20:41:52.775258 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.776337 50925 storage/replica_command.go:298  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180827 20:41:52.788832 51094 storage/replica_command.go:298  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180827 20:41:52.790188 51128 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=ec083bbe key=/Table/SystemConfigSpan/Start rw=true pri=0.01126188 iso=SERIALIZABLE stat=PENDING epo=0 ts=1535402512.772758792,0 orig=1535402512.772758792,0 max=1535402512.772758792,0 wto=false rop=false seq=6
I180827 20:41:52.790695 51118 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180827 20:41:52.795125 51100 storage/replica_command.go:298  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180827 20:41:52.800783 51143 storage/replica_command.go:298  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180827 20:41:52.807906 51165 storage/replica_command.go:298  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180827 20:41:52.811784 51141 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.0-12 User:root}
I180827 20:41:52.818164 50799 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180827 20:41:52.821094 51188 storage/replica_command.go:298  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180827 20:41:52.830709 51176 storage/replica_command.go:298  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180827 20:41:52.839374 51187 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:045a1c98-219f-445b-bd6b-d481f04d6b0d User:root}
I180827 20:41:52.849534 51154 storage/replica_command.go:298  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180827 20:41:52.855898 51218 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180827 20:41:52.861462 51240 storage/replica_command.go:298  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180827 20:41:52.868342 51268 storage/replica_command.go:298  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180827 20:41:52.872706 51256 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180827 20:41:52.874819 51264 storage/replica_command.go:298  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180827 20:41:52.876403 50862 server/server.go:1594  [n1] done ensuring all necessary migrations have run
I180827 20:41:52.876433 50862 server/server.go:1597  [n1] serving sql connections
I180827 20:41:52.879108 51233 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180827 20:41:52.879639 51235 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:41477} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402512767856449 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402512767856449 LastUp:1535402512767856449}
I180827 20:41:52.880318 51302 storage/replica_command.go:298  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180827 20:41:52.927701 50819 storage/replica_command.go:298  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180827 20:41:52.940165 51323 storage/replica_command.go:298  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180827 20:41:52.948539 51355 storage/replica_command.go:298  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180827 20:41:52.953658 51380 storage/replica_command.go:298  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180827 20:41:52.961237 51137 storage/replica_command.go:298  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180827 20:41:52.966548 50832 storage/replica_command.go:298  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180827 20:41:52.977113 51362 storage/replica_command.go:298  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180827 20:41:53.041315 51440 storage/replica_command.go:298  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180827 20:41:53.047478 51414 storage/replica_command.go:298  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180827 20:41:53.081214 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.089127 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.089322 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.089338 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.102793 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.102863 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.102878 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.102953 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.103001 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.115344 51458 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.125579 51530 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:36113}
I180827 20:41:53.127987 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.128034 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.128397 51575 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.134920 51574 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.135628 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.136461 50862 server/node.go:428  [n?] new node allocated ID 2
I180827 20:41:53.136541 50862 gossip/gossip.go:382  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36113" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513136479434 
I180827 20:41:53.136591 50862 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180827 20:41:53.136624 50862 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180827 20:41:53.137485 51552 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180827 20:41:53.139442 50862 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180827 20:41:53.139577 50862 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180827 20:41:53.140140 50862 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.140166 50862 server/server.go:1807  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.140233 50862 server/server.go:1538  [n2] starting https server at 127.0.0.1:39947 (use: 127.0.0.1:39947)
I180827 20:41:53.140246 50862 server/server.go:1540  [n2] starting grpc/postgres server at 127.0.0.1:36113
I180827 20:41:53.140256 50862 server/server.go:1541  [n2] advertising CockroachDB node at 127.0.0.1:36113
I180827 20:41:53.140624 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.153945 50862 server/server.go:1594  [n2] done ensuring all necessary migrations have run
I180827 20:41:53.153974 50862 server/server.go:1597  [n2] serving sql connections
W180827 20:41:53.165268 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.185802 51467 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180827 20:41:53.186848 51469 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:36113} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513136479434 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513136479434 LastUp:1535402513136479434}
I180827 20:41:53.189622 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.189776 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.189808 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.207782 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.207807 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.207815 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.207911 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.207947 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.211471 51475 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.223653 51740 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.223954 51816 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46463}
I180827 20:41:53.224401 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.224432 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.224690 51837 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.225445 51836 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226030 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226699 50862 server/node.go:428  [n?] new node allocated ID 3
I180827 20:41:53.226763 50862 gossip/gossip.go:382  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46463" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513226706701 
I180827 20:41:53.226805 50862 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180827 20:41:53.226851 50862 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180827 20:41:53.227563 51809 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180827 20:41:53.227869 51810 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180827 20:41:53.228504 50862 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180827 20:41:53.229044 50862 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180827 20:41:53.229696 50862 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.229749 50862 server/server.go:1807  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.235251 50862 server/server.go:1538  [n3] starting https server at 127.0.0.1:43307 (use: 127.0.0.1:43307)
I180827 20:41:53.235271 50862 server/server.go:1540  [n3] starting grpc/postgres server at 127.0.0.1:46463
I180827 20:41:53.235283 50862 server/server.go:1541  [n3] advertising CockroachDB node at 127.0.0.1:46463
I180827 20:41:53.240284 50862 server/server.go:1594  [n3] done ensuring all necessary migrations have run
I180827 20:41:53.240307 50862 server/server.go:1597  [n3] serving sql connections
I180827 20:41:53.243124 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.248117 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot 59e1afc9 at applied index 16
I180827 20:41:53.249136 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.251012 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 6, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.251369 51983 storage/replica_raftstorage.go:784  [n2,s2,r20/?:{-}] applying preemptive snapshot at index 16 (id=59e1afc9, encoded size=2241, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.254056 51839 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180827 20:41:53.255122 51841 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:46463} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513226706701 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513226706701 LastUp:1535402513226706701}
I180827 20:41:53.256061 51983 storage/replica_raftstorage.go:790  [n2,s2,r20/?:/Table/{23-50}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.256605 50930 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.259565 50930 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.261627 51625 rpc/nodedialer/nodedialer.go:92  [n2] connection to n1 established
I180827 20:41:53.264544 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.286630 50930 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r21/1:/Table/5{0-1}] connection to n3 established
I180827 20:41:53.287245 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.287799 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r21/1:/Table/5{0-1}] sending preemptive snapshot de08568a at applied index 18
I180827 20:41:53.288157 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.288623 51959 storage/replica_raftstorage.go:784  [n3,s3,r21/?:{-}] applying preemptive snapshot at index 18 (id=de08568a, encoded size=2646, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.289814 51959 storage/replica_raftstorage.go:790  [n3,s3,r21/?:/Table/5{0-1}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.290329 50930 storage/replica_command.go:812  [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.293678 50930 storage/replica.go:3743  [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.294953 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r22/1:/{Table/51-Max}] sending preemptive snapshot a84e7278 at applied index 12
I180827 20:41:53.295229 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r22/1:/{Table/51-Max}] streamed snapshot to (n3,s3):?: kv pairs: 7, log entries: 2, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.295441 51883 rpc/nodedialer/nodedialer.go:92  [n3] connection to n1 established
I180827 20:41:53.295585 51953 storage/replica_raftstorage.go:784  [n3,s3,r22/?:{-}] applying preemptive snapshot at index 12 (id=a84e7278, encoded size=386, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.295717 51953 storage/replica_raftstorage.go:790  [n3,s3,r22/?:/{Table/51-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.295955 50930 storage/replica_command.go:812  [replicate,n1,s1,r22/1:/{Table/51-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, next=2, gen=0]
I180827 20:41:53.298097 50930 storage/replica.go:3743  [n1,s1,r22/1:/{Table/51-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.301122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r8/1:/Table/1{1-2}] sending preemptive snapshot 201bdccc at applied index 18
I180827 20:41:53.301565 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.306578 52088 storage/replica_raftstorage.go:784  [n3,s3,r8/?:{-}] applying preemptive snapshot at index 18 (id=201bdccc, encoded size=4352, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.306868 52088 storage/replica_raftstorage.go:790  [n3,s3,r8/?:/Table/1{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.307601 50930 storage/replica_command.go:812  [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.311873 50930 storage/replica.go:3743  [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.314134 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r17/1:/Table/2{0-1}] sending preemptive snapshot 53116eb2 at applied index 16
I180827 20:41:53.314317 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r17/1:/Table/2{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.314683 52103 storage/replica_raftstorage.go:784  [n3,s3,r17/?:{-}] applying preemptive snapshot at index 16 (id=53116eb2, encoded size=2105, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.314887 52103 storage/replica_raftstorage.go:790  [n3,s3,r17/?:/Table/2{0-1}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.315401 50930 storage/replica_command.go:812  [replicate,n1,s1,r17/1:/Table/2{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r17:/Table/2{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.318398 50930 storage/replica.go:3743  [n1,s1,r17/1:/Table/2{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.319436 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r16/1:/Table/{19-20}] sending preemptive snapshot e0be8540 at applied index 16
I180827 20:41:53.319691 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.320127 52072 storage/replica_raftstorage.go:784  [n2,s2,r16/?:{-}] applying preemptive snapshot at index 16 (id=e0be8540, encoded size=2109, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.320339 52072 storage/replica_raftstorage.go:790  [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.320816 50930 storage/replica_command.go:812  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.323849 50930 storage/replica.go:3743  [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.326208 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r15/1:/Table/1{8-9}] sending preemptive snapshot d259ae5c at applied index 16
I180827 20:41:53.326404 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.326731 52116 storage/replica_raftstorage.go:784  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 16 (id=d259ae5c, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.326923 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.326953 52116 storage/replica_raftstorage.go:790  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.334514 50930 storage/replica_command.go:812  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.337656 50930 storage/replica.go:3743  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.338767 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r14/1:/Table/1{7-8}] sending preemptive snapshot 9d0058d5 at applied index 16
I180827 20:41:53.339034 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.339612 52090 storage/replica_raftstorage.go:784  [n2,s2,r14/?:{-}] applying preemptive snapshot at index 16 (id=9d0058d5, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.339831 52090 storage/replica_raftstorage.go:790  [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.340173 50930 storage/replica_command.go:812  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.343121 50930 storage/replica.go:3743  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.345432 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r9/1:/Table/1{2-3}] sending preemptive snapshot 0eea2d20 at applied index 26
I180827 20:41:53.345859 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 53, log entries: 16, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.347137 52066 storage/replica_raftstorage.go:784  [n2,s2,r9/?:{-}] applying preemptive snapshot at index 26 (id=0eea2d20, encoded size=15139, 1 rocksdb batches, 16 log entries)
I180827 20:41:53.347467 52066 storage/replica_raftstorage.go:790  [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.348208 50930 storage/replica_command.go:812  [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.352166 50930 storage/replica.go:3743  [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.353188 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] sending preemptive snapshot 0cdee511 at applied index 39
I180827 20:41:53.353765 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 36, log entries: 29, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.354286 51723 storage/replica_raftstorage.go:784  [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=0cdee511, encoded size=98384, 1 rocksdb batches, 29 log entries)
I180827 20:41:53.354994 51723 storage/replica_raftstorage.go:790  [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.355529 50930 storage/replica_command.go:812  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.358523 50930 storage/replica.go:3743  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.360250 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] sending preemptive snapshot 965d58b1 at applied index 19
I180827 20:41:53.360436 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.360789 52150 storage/replica_raftstorage.go:784  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 19 (id=965d58b1, encoded size=4003, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.361043 52150 storage/replica_raftstorage.go:790  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.361522 50930 storage/replica_command.go:812  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.364392 50930 storage/replica.go:3743  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.366422 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r12/1:/Table/1{5-6}] sending preemptive snapshot 811af376 at applied index 16
I180827 20:41:53.366638 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.367089 52137 storage/replica_raftstorage.go:784  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 16 (id=811af376, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.367359 52137 storage/replica_raftstorage.go:790  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.368127 50930 storage/replica_command.go:812  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.371691 50930 storage/replica.go:3743  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.374563 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r19/1:/Table/2{2-3}] sending preemptive snapshot 9cd02555 at applied index 16
I180827 20:41:53.374760 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.375252 52080 storage/replica_raftstorage.go:784  [n3,s3,r19/?:{-}] applying preemptive snapshot at index 16 (id=9cd02555, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.375582 52080 storage/replica_raftstorage.go:790  [n3,s3,r19/?:/Table/2{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.375950 50930 storage/replica_command.go:812  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.381819 50930 storage/replica.go:3743  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.386461 52091 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180827 20:41:53.386637 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r10/1:/Table/1{3-4}] sending preemptive snapshot a16f4b15 at applied index 64
I180827 20:41:53.388005 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 204, log entries: 54, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.388536 52181 storage/replica_raftstorage.go:784  [n3,s3,r10/?:{-}] applying preemptive snapshot at index 64 (id=a16f4b15, encoded size=62836, 1 rocksdb batches, 54 log entries)
I180827 20:41:53.389154 52181 storage/replica_raftstorage.go:790  [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.389513 50930 storage/replica_command.go:812  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.392649 50930 storage/replica.go:3743  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.394122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] sending preemptive snapshot 69adabc1 at applied index 23
I180827 20:41:53.394365 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 13, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.394729 52213 storage/replica_raftstorage.go:784  [n2,s2,r2/?:{-}] applying preemptive snapshot at index 23 (id=69adabc1, encoded size=6277, 1 rocksdb batches, 13 log entries)
I180827 20:41:53.394981 52213 storage/replica_raftstorage.go:790  [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.395465 50930 storage/replica_command.go:812  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.398757 50930 storage/replica.go:3743  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.399709 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r18/1:/Table/2{1-2}] sending preemptive snapshot e9df2a4a at applied index 16
I180827 20:41:53.400036 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r18/1:/Table/2{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.400391 52185 storage/replica_raftstorage.go:784  [n3,s3,r18/?:{-}] applying preemptive snapshot at index 16 (id=e9df2a4a, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.400594 52185 storage/replica_raftstorage.go:790  [n3,s3,r18/?:/Table/2{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.400882 50930 storage/replica_command.go:812  [replicate,n1,s1,r18/1:/Table/2{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r18:/Table/2{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.407636 50930 storage/replica.go:3743  [n1,s1,r18/1:/Table/2{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.408861 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r13/1:/Table/1{6-7}] sending preemptive snapshot 6f914d55 at applied index 16
I180827 20:41:53.409071 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.409426 52218 storage/replica_raftstorage.go:784  [n2,s2,r13/?:{-}] applying preemptive snapshot at index 16 (id=6f914d55, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.409616 52218 storage/replica_raftstorage.go:790  [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.409970 50930 storage/replica_command.go:812  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.411262 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.412831 50930 storage/replica.go:3743  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.414081 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot cca961c1 at applied index 16
I180827 20:41:53.414277 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.414576 52199 storage/replica_raftstorage.go:784  [n3,s3,r11/?:{-}] applying preemptive snapshot at index 16 (id=cca961c1, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.414816 52199 storage/replica_raftstorage.go:790  [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.415293 50930 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.418111 50930 storage/replica.go:3743  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.419054 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 3c3a015f at applied index 27
I180827 20:41:53.423022 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1391, log entries: 2, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.423893 52201 storage/replica_raftstorage.go:784  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 27 (id=3c3a015f, encoded size=194658, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.429501 52201 storage/replica_raftstorage.go:790  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=2ms commit=4ms]
I180827 20:41:53.433500 50930 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.437580 50930 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.440575 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot cbd412df at applied index 21
I180827 20:41:53.440794 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 11, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.441181 52260 storage/replica_raftstorage.go:784  [n3,s3,r6/?:{-}] applying preemptive snapshot at index 21 (id=cbd412df, encoded size=4339, 1 rocksdb batches, 11 log entries)
I180827 20:41:53.441400 52260 storage/replica_raftstorage.go:790  [n3,s3,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.441676 50930 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.448564 52224 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.461587 50930 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.463345 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot 114f4385 at applied index 29
I180827 20:41:53.464896 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 59, log entries: 19, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.465343 52280 storage/replica_raftstorage.go:784  [n2,s2,r7/?:{-}] applying preemptive snapshot at index 29 (id=114f4385, encoded size=16646, 1 rocksdb batches, 19 log entries)
I180827 20:41:53.465821 52280 storage/replica_raftstorage.go:790  [n2,s2,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.466988 50930 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.472743 50930 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.474632 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 0a244018 at applied index 114
I180827 20:41:53.475250 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 73, log entries: 90, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.475827 52267 storage/replica_raftstorage.go:784  [n2,s2,r1/?:{-}] applying preemptive snapshot at index 114 (id=0a244018, encoded size=40271, 1 rocksdb batches, 90 log entries)
I180827 20:41:53.476525 52267 storage/replica_raftstorage.go:790  [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.476869 50930 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.482912 50930 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.483281 50930 storage/queue.go:873  [n1,replicate] purgatory is now empty
I180827 20:41:53.485684 52286 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot f1426c69 at applied index 19
I180827 20:41:53.487316 52286 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.487681 52252 storage/replica_raftstorage.go:784  [n3,s3,r20/?:{-}] applying preemptive snapshot at index 19 (id=f1426c69, encoded size=3273, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.487932 52252 storage/replica_raftstorage.go:790  [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.488311 52286 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.503580 52286 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.505707 52235 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 99036b07 at applied index 119
I180827 20:41:53.506514 52235 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 78, log entries: 95, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.507282 52188 storage/replica_raftstorage.go:784  [n3,s3,r1/?:{-}] applying preemptive snapshot at index 119 (id=99036b07, encoded size=42101, 1 rocksdb batches, 95 log entries)
I180827 20:41:53.508109 52188 storage/replica_raftstorage.go:790  [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.508641 52235 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.512524 52235 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.513999 52209 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot bb53109c at applied index 32
I180827 20:41:53.514379 52209 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 60, log entries: 22, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.514821 52292 storage/replica_raftstorage.go:784  [n3,s3,r7/?:{-}] applying preemptive snapshot at index 32 (id=bb53109c, encoded size=17687, 1 rocksdb batches, 22 log entries)
I180827 20:41:53.515905 52292 storage/replica_raftstorage.go:790  [n3,s3,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 1ms [clear=1ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.516367 52209 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.520158 52209 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.521958 52312 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot 2ca43612 at applied index 24
I180827 20:41:53.522776 52312 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 14, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.523128 52239 storage/replica_raftstorage.go:784  [n2,s2,r6/?:{-}] applying preemptive snapshot at index 24 (id=2ca43612, encoded size=5410, 1 rocksdb batches, 14 log entries)
I180827 20:41:53.523377 52239 storage/replica_raftstorage.go:790  [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.523701 52312 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.525176 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 19 underreplicated ranges
I180827 20:41:53.527482 52312 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.528875 52327 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 731be2ae at applied index 30
I180827 20:41:53.532860 52327 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 1392, log entries: 5, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.533361 52316 storage/replica_raftstorage.go:784  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 30 (id=731be2ae, encoded size=195741, 1 rocksdb batches, 5 log entries)
I180827 20:41:53.535834 52316 storage/replica_raftstorage.go:790  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=2ms]
I180827 20:41:53.536253 52327 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.540576 52327 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.545804 52341 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot 7497a95f at applied index 19
I180827 20:41:53.546108 52341 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.546590 52275 storage/replica_raftstorage.go:784  [n2,s2,r11/?:{-}] applying preemptive snapshot at index 19 (id=7497a95f, encoded size=3304, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.546960 52275 storage/replica_raftstorage.go:790  [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.547386 52341 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (

Please assign, take a look, and update the issue accordingly.

Test failure in CI build 506

The following test appears to have failed:

#506:

I0408 17:25:38.251064     277 multiraft.go:633] node 257: group 1 raft ready
I0408 17:25:38.251203     277 multiraft.go:650] Outgoing Message[0]: 257->514 MsgHeartbeat Term:6 Log:0/0
I0408 17:25:38.251293     277 multiraft.go:650] Outgoing Message[1]: 257->514 MsgHeartbeat Term:6 Log:0/0
I0408 17:25:38.251894     277 multiraft.go:820] node 514: connecting to new node 257
W0408 17:25:38.252041     277 multiraft.go:828] node 514 failed to send message to 257
--- FAIL: TestFailedReplicaChange (0.16s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:38.510655     277 multiraft.go:650] Outgoing Message[1]: 514->257 MsgAppResp Term:6 Log:0/18
I0408 17:25:38.510760     277 multiraft.go:650] Outgoing Message[2]: 514->257 MsgAppResp Term:6 Log:0/18
I0408 17:25:38.515438     277 client_raft_test.go:367] read value 39
I0408 17:25:38.515970     277 multiraft.go:633] node 257: group 1 raft ready
I0408 17:25:38.516108     277 multiraft.go:650] Outgoing Message[0]: 257->514 MsgHeartbeat Term:6 Log:0/0 Commit:18
--- FAIL: TestReplicateAfterTruncation (0.27s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:38.973793     277 multiraft.go:448] node 257: group 18446744073709551615 got message 514->257 MsgHeartbeat Term:0 Log:0/0
I0408 17:25:38.979372     277 multiraft.go:448] node 771: group 18446744073709551615 got message 514->0 MsgHeartbeatResp Term:0 Log:0/0
I0408 17:25:38.979482     277 multiraft.go:633] node 257: group 1 raft ready
I0408 17:25:38.979582     277 multiraft.go:650] Outgoing Message[0]: 257->771 MsgHeartbeat Term:6 Log:0/0 Commit:17
I0408 17:25:38.979659     277 multiraft.go:650] Outgoing Message[1]: 257->514 MsgHeartbeat Term:6 Log:0/0 Commit:17
--- FAIL: TestStoreRangeReplicate (0.47s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:332 +0x28d
=== RUN TestSetupRangeTree
I0408 17:25:39.073684     277 multiraft.go:407] node 257 starting
--- FAIL: TestSetupRangeTree (0.08s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:39.606680     277 retry.go:93] Get failed; retrying immediately
I0408 17:25:39.607066     277 multiraft.go:633] node 257: group 4 raft ready
I0408 17:25:39.607193     277 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:24 XXX_unrecognized:[]}
I0408 17:25:39.608144     277 multiraft.go:641] New Entry[0]: 6/24 EntryNormal 0000000000000000678877f0fa6cb0f2: raft_id:4 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:157 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000kc\000\001rtn-" 
I0408 17:25:39.608972     277 multiraft.go:644] Committed Entry[0]: 6/24 EntryNormal 0000000000000000678877f0fa6cb0f2: raft_id:4 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:157 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000kc\000\001rtn-" 
--- FAIL: TestInsertRight (0.55s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:40.164272     277 retry.go:93] Get failed; retrying immediately
I0408 17:25:40.164717     277 multiraft.go:633] node 257: group 5 raft ready
I0408 17:25:40.164822     277 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:17 XXX_unrecognized:[]}
I0408 17:25:40.165628     277 multiraft.go:641] New Entry[0]: 6/17 EntryNormal 00000000000000003c00c9f3537cf162: raft_id:5 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:166 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000kb\000\001rtn-" 
I0408 17:25:40.166374     277 multiraft.go:644] Committed Entry[0]: 6/17 EntryNormal 00000000000000003c00c9f3537cf162: raft_id:5 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:166 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000kb\000\001rtn-" 
--- FAIL: TestInsertLeft (0.56s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:332 +0x28d
=== RUN TestStoreRangeSplitAtIllegalKeys
I0408 17:25:40.262072     277 multiraft.go:407] node 257 starting
--- FAIL: TestStoreRangeSplitAtIllegalKeys (0.08s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:40.398630     277 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:21 XXX_unrecognized:[]}
I0408 17:25:40.399831     277 multiraft.go:641] New Entry[0]: 6/21 EntryNormal 13d319dc3548fbfe5acaa5c0c26b4e61: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:8 > cmd_id:<wall_time:1428513940384054270 random:6542223656023182945 > key:"a"
I0408 17:25:40.400863     277 multiraft.go:644] Committed Entry[0]: 6/21 EntryNormal 13d319dc3548fbfe5acaa5c0c26b4e61: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:8 > cmd_id:<wall_time:1428513940384054270 random:6542223656023182945 > key:"a"
I0408 17:25:40.405616     277 raft.go:390] raft: 101 became follower at term 5
I0408 17:25:40.405899     277 raft.go:207] raft: newRaft 101 [peers: [101], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
--- FAIL: TestStoreRangeSplitAtRangeBounds (0.15s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:40.566404     277 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:27 XXX_unrecognized:[]}
I0408 17:25:40.567090     277 multiraft.go:641] New Entry[0]: 6/26 EntryNormal 00000000000000004e25865f44fa13d1: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user:"
I0408 17:25:40.567856     277 multiraft.go:641] New Entry[1]: 6/27 EntryNormal 0000000000000000131339e37bcd83b5: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000range-tree-root" user:"ro
I0408 17:25:40.568653     277 multiraft.go:644] Committed Entry[0]: 6/26 EntryNormal 00000000000000004e25865f44fa13d1: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user:"
I0408 17:25:40.569305     277 multiraft.go:644] Committed Entry[1]: 6/27 EntryNormal 0000000000000000131339e37bcd83b5: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000range-tree-root" user:"ro
--- FAIL: TestStoreRangeSplitConcurrent (0.16s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:40.704305     277 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:25 XXX_unrecognized:[]}
I0408 17:25:40.705296     277 multiraft.go:641] New Entry[0]: 6/25 EntryNormal 13d319dc47c35c103940d667a3aa3ac2: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:12 > cmd_id:<wall_time:1428513940694064144 random:4125532999287192258 > key:"m
I0408 17:25:40.706198     277 multiraft.go:644] Committed Entry[0]: 6/25 EntryNormal 13d319dc47c35c103940d667a3aa3ac2: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:12 > cmd_id:<wall_time:1428513940694064144 random:4125532999287192258 > key:"m
I0408 17:25:40.710409     277 raft.go:390] raft: 101 became follower at term 5
I0408 17:25:40.710651     277 raft.go:207] raft: newRaft 101 [peers: [101], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
--- FAIL: TestStoreRangeSplit (0.16s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:41.232811     277 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:118 XXX_unrecognized:[]}
I0408 17:25:41.233943     277 multiraft.go:641] New Entry[0]: 6/118 EntryNormal 13d319dc672f30c879edf21af6548515: raft_id:2 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:239 > cmd_id:<wall_time:1428513941221224648 random:8785944645685511445 > key:
I0408 17:25:41.234941     277 multiraft.go:644] Committed Entry[0]: 6/118 EntryNormal 13d319dc672f30c879edf21af6548515: raft_id:2 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:239 > cmd_id:<wall_time:1428513941221224648 random:8785944645685511445 > key:
I0408 17:25:41.245152     277 raft.go:390] raft: 101 became follower at term 5
I0408 17:25:41.245371     277 raft.go:207] raft: newRaft 101 [peers: [101], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
--- FAIL: TestStoreRangeSplitStats (0.54s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:42.250743     277 multiraft.go:644] Committed Entry[3]: 6/268 EntryNormal 0000000000000000275799e550b6d0b6: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:495 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2testZHMtQoLrvc
I0408 17:25:42.251426     277 multiraft.go:644] Committed Entry[4]: 6/269 EntryNormal 0000000000000000249c4c120ad92b6f: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:495 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user
I0408 17:25:42.252172     277 multiraft.go:644] Committed Entry[5]: 6/270 EntryNormal 00000000000000003682bdd11f910f6a: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:495 > cmd_id:<wall_time:0 random:0 > key:"\000range-tree-root" user:"
I0408 17:25:42.257246     277 queue.go:207] processing range range=1 (""-"testZHMtQoLrvcssrtoMUpwiwLoiYLgAbUibFATwYLcrfzqyLotKmLcdRBEIZmDgeHbtygdWYtWlXDsXTwEYHjPgGFXiwfrGuOhjMCPH") from split queue...
I0408 17:25:42.257438     277 queue.go:211] processed range range=1 (""-"testZHMtQoLrvcssrtoMUpwiwLoiYLgAbUibFATwYLcrfzqyLotKmLcdRBEIZmDgeHbtygdWYtWlXDsXTwEYHjPgGFXiwfrGuOhjMCPH") from split queue in 200.682µs
--- FAIL: TestStoreZoneUpdateAndRangeSplit (1.00s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:42.947396     277 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:76 XXX_unrecognized:[]}
I0408 17:25:42.948127     277 multiraft.go:641] New Entry[0]: 6/75 EntryNormal 00000000000000002b976c7e501daf7d: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:267 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2db5" user:"root
I0408 17:25:42.948873     277 multiraft.go:641] New Entry[1]: 6/76 EntryNormal 0000000000000000156b400d48604236: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:267 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user:
I0408 17:25:42.949487     277 multiraft.go:644] Committed Entry[0]: 6/75 EntryNormal 00000000000000002b976c7e501daf7d: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:267 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2db5" user:"root
I0408 17:25:42.950132     277 multiraft.go:644] Committed Entry[1]: 6/76 EntryNormal 0000000000000000156b400d48604236: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:267 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user:
--- FAIL: TestStoreRangeSplitOnConfigs (0.70s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:43.167849     277 multiraft.go:644] Committed Entry[0]: 6/45 EntryNormal 13d319dcdad699cb0d4476bebf1ae7ab: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:39 > cmd_id:<wall_time:1428513943161575883 random:956019582531463083 > key:"\000\000meta2\
I0408 17:25:43.169486     277 multiraft.go:633] node 257: group 1 raft ready
I0408 17:25:43.169541     277 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:46 XXX_unrecognized:[]}
I0408 17:25:43.170065     277 multiraft.go:641] New Entry[0]: 6/46 EntryNormal 13d319dcdad700e655a8448a8aec25fb: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:40 > cmd_id:<wall_time:1428513943161602278 random:6172258651138172411 > key:"\000\000meta1
I0408 17:25:43.170540     277 multiraft.go:644] Committed Entry[0]: 6/46 EntryNormal 13d319dcdad700e655a8448a8aec25fb: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:40 > cmd_id:<wall_time:1428513943161602278 random:6172258651138172411 > key:"\000\000meta1
--- FAIL: TestUpdateRangeAddressing (0.21s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:332 +0x28d
=== RUN TestUpdateRangeAddressingSplitMeta1
I0408 17:25:43.263294     277 multiraft.go:407] node 257 starting
--- FAIL: TestUpdateRangeAddressingSplitMeta1 (0.09s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/util/leaktest/leaktest.go:34 +0x36
        github.com/cockroachdb/cockroach/storage.TestMain(0xc208030b90)
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:332 +0x28d
FAIL
FAIL    github.com/cockroachdb/cockroach/storage    9.527s
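Every failure above shares the same symptom: the per-test leak check in `util/leaktest` (invoked from `storage/main_test.go` via `TestMain`) reports a leaked rpc client goroutine at teardown, so one leaked connection cascades into failures for every subsequent test in the package. The sketch below illustrates the general idea behind such a check, not the actual `leaktest` code: snapshot the goroutine count before the test body, then poll briefly afterwards so goroutines that are merely slow to exit are not flagged. `checkLeak` is a hypothetical helper name.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// checkLeak runs f and reports whether goroutines started by f are
// still alive shortly afterwards. It polls for a grace period so that
// goroutines that are merely slow to shut down are not reported.
func checkLeak(f func()) bool {
	before := runtime.NumGoroutine()
	f()
	for i := 0; i < 50; i++ {
		if runtime.NumGoroutine() <= before {
			return false // everything wound down cleanly
		}
		time.Sleep(10 * time.Millisecond)
	}
	return true // more goroutines than before the test: likely a leak
}

func main() {
	// A well-behaved body leaves no extra goroutines behind.
	fmt.Println(checkLeak(func() {}))

	// A body that leaves a goroutine blocked forever (e.g. an rpc
	// client reading from a connection that is never closed) is flagged.
	block := make(chan struct{})
	fmt.Println(checkLeak(func() {
		go func() { <-block }() // never unblocked: leaks
	}))
}
```

In the real harness the check compares full goroutine stack dumps rather than bare counts, which is why the failure output above includes the leaked client's stack (blocked in `net.(*TCPConn).Read`).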
=== RUN TestBatchBasics
--- PASS: TestBatchBasics (0.00s)
=== RUN TestBatchGet
--- PASS: TestBatchGet (0.00s)
=== RUN TestBatchMerge
--- PASS: TestBatchMerge (0.00s)
=== RUN TestBatchProto
--- PASS: TestBatchProto (0.00s)
=== RUN TestBatchScan
--- PASS: TestBatchScan (0.00s)
I0408 17:25:40.569305     277 multiraft.go:644] Committed Entry[1]: 6/27 EntryNormal 0000000000000000131339e37bcd83b5: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000range-tree-root" user:"ro
--- FAIL: TestStoreRangeSplitConcurrent (0.16s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:40.704305     277 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:25 XXX_unrecognized:[]}
I0408 17:25:40.705296     277 multiraft.go:641] New Entry[0]: 6/25 EntryNormal 13d319dc47c35c103940d667a3aa3ac2: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:12 > cmd_id:<wall_time:1428513940694064144 random:4125532999287192258 > key:"m
I0408 17:25:40.706198     277 multiraft.go:644] Committed Entry[0]: 6/25 EntryNormal 13d319dc47c35c103940d667a3aa3ac2: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:12 > cmd_id:<wall_time:1428513940694064144 random:4125532999287192258 > key:"m
I0408 17:25:40.710409     277 raft.go:390] raft: 101 became follower at term 5
I0408 17:25:40.710651     277 raft.go:207] raft: newRaft 101 [peers: [101], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
--- FAIL: TestStoreRangeSplit (0.16s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:41.232811     277 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:118 XXX_unrecognized:[]}
I0408 17:25:41.233943     277 multiraft.go:641] New Entry[0]: 6/118 EntryNormal 13d319dc672f30c879edf21af6548515: raft_id:2 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:239 > cmd_id:<wall_time:1428513941221224648 random:8785944645685511445 > key:
I0408 17:25:41.234941     277 multiraft.go:644] Committed Entry[0]: 6/118 EntryNormal 13d319dc672f30c879edf21af6548515: raft_id:2 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:239 > cmd_id:<wall_time:1428513941221224648 random:8785944645685511445 > key:
I0408 17:25:41.245152     277 raft.go:390] raft: 101 became follower at term 5
I0408 17:25:41.245371     277 raft.go:207] raft: newRaft 101 [peers: [101], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
--- FAIL: TestStoreRangeSplitStats (0.54s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:42.250743     277 multiraft.go:644] Committed Entry[3]: 6/268 EntryNormal 0000000000000000275799e550b6d0b6: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:495 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2testZHMtQoLrvc
I0408 17:25:42.251426     277 multiraft.go:644] Committed Entry[4]: 6/269 EntryNormal 0000000000000000249c4c120ad92b6f: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:495 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user
I0408 17:25:42.252172     277 multiraft.go:644] Committed Entry[5]: 6/270 EntryNormal 00000000000000003682bdd11f910f6a: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:495 > cmd_id:<wall_time:0 random:0 > key:"\000range-tree-root" user:"
I0408 17:25:42.257246     277 queue.go:207] processing range range=1 (""-"testZHMtQoLrvcssrtoMUpwiwLoiYLgAbUibFATwYLcrfzqyLotKmLcdRBEIZmDgeHbtygdWYtWlXDsXTwEYHjPgGFXiwfrGuOhjMCPH") from split queue...
I0408 17:25:42.257438     277 queue.go:211] processed range range=1 (""-"testZHMtQoLrvcssrtoMUpwiwLoiYLgAbUibFATwYLcrfzqyLotKmLcdRBEIZmDgeHbtygdWYtWlXDsXTwEYHjPgGFXiwfrGuOhjMCPH") from split queue in 200.682µs
--- FAIL: TestStoreZoneUpdateAndRangeSplit (1.00s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:42.947396     277 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:76 XXX_unrecognized:[]}
I0408 17:25:42.948127     277 multiraft.go:641] New Entry[0]: 6/75 EntryNormal 00000000000000002b976c7e501daf7d: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:267 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2db5" user:"root
I0408 17:25:42.948873     277 multiraft.go:641] New Entry[1]: 6/76 EntryNormal 0000000000000000156b400d48604236: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:267 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user:
I0408 17:25:42.949487     277 multiraft.go:644] Committed Entry[0]: 6/75 EntryNormal 00000000000000002b976c7e501daf7d: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:267 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2db5" user:"root
I0408 17:25:42.950132     277 multiraft.go:644] Committed Entry[1]: 6/76 EntryNormal 0000000000000000156b400d48604236: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:267 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user:
--- FAIL: TestStoreRangeSplitOnConfigs (0.70s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0408 17:25:43.167849     277 multiraft.go:644] Committed Entry[0]: 6/45 EntryNormal 13d319dcdad699cb0d4476bebf1ae7ab: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:39 > cmd_id:<wall_time:1428513943161575883 random:956019582531463083 > key:"\000\000meta2\
I0408 17:25:43.169486     277 multiraft.go:633] node 257: group 1 raft ready
I0408 17:25:43.169541     277 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:46 XXX_unrecognized:[]}
I0408 17:25:43.170065     277 multiraft.go:641] New Entry[0]: 6/46 EntryNormal 13d319dcdad700e655a8448a8aec25fb: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:40 > cmd_id:<wall_time:1428513943161602278 random:6172258651138172411 > key:"\000\000meta1
I0408 17:25:43.170540     277 multiraft.go:644] Committed Entry[0]: 6/46 EntryNormal 13d319dcdad700e655a8448a8aec25fb: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:40 > cmd_id:<wall_time:1428513943161602278 random:6172258651138172411 > key:"\000\000meta1
--- FAIL: TestUpdateRangeAddressing (0.21s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:332 +0x28d
=== RUN TestUpdateRangeAddressingSplitMeta1
I0408 17:25:43.263294     277 multiraft.go:407] node 257 starting
--- FAIL: TestUpdateRangeAddressingSplitMeta1 (0.09s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208679100, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208679100, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc2086790a0, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x7f8763ee7e20, 0xc2080f5508)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0xc208171640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc20802e058, 0xc2083ff000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/util/leaktest/leaktest.go:34 +0x36
        github.com/cockroachdb/cockroach/storage.TestMain(0xc208030b90)
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:332 +0x28d
FAIL
FAIL    github.com/cockroachdb/cockroach/storage    9.527s
=== RUN TestBatchBasics
--- PASS: TestBatchBasics (0.00s)
=== RUN TestBatchGet
--- PASS: TestBatchGet (0.00s)
=== RUN TestBatchMerge
--- PASS: TestBatchMerge (0.00s)
=== RUN TestBatchProto
--- PASS: TestBatchProto (0.00s)
=== RUN TestBatchScan
--- PASS: TestBatchScan (0.00s)

Please assign, take a look and update the issue accordingly.

test failure #

The following test appears to have failed:

#:


Please assign, take a look and update the issue accordingly.

Test failure in CI build 428

The following test appears to have failed:

#428:

I0331 16:39:38.036788      79 multiraft.go:448] node 1: group 1 got message 6->1 MsgHeartbeatResp Term:6 Log:0/0
I0331 16:39:38.036815      79 multiraft.go:448] node 1: group 1 got message 8->1 MsgHeartbeatResp Term:6 Log:0/0
I0331 16:39:38.036856      79 multiraft.go:448] node 1: group 1 got message 10->1 MsgHeartbeatResp Term:6 Log:0/0
I0331 16:39:38.036884      79 multiraft.go:448] node 1: group 1 got message 3->1 MsgHeartbeatResp Term:6 Log:0/0
E0331 16:39:39.022641      79 heartbeat_test.go:52] timeout when reading from intercept channel
panic: test timed out after 30s

goroutine 427 [running]:
testing.func·008()
    /usr/src/go/src/testing/testing.go:681 +0x12f
created by time.goFunc
    /usr/src/go/src/time/sleep.go:129 +0x4b

goroutine 1 [chan receive]:
testing.RunTests(0x9aa860, 0xb12340, 0x7, 0x7, 0xc207ffc301)
    /usr/src/go/src/testing/testing.go:556 +0xad6
--
goroutine 392 [select]:
github.com/cockroachdb/cockroach/multiraft.func·009()
    /go/src/github.com/cockroachdb/cockroach/multiraft/events_test.go:51 +0x3fc
created by github.com/cockroachdb/cockroach/multiraft.(*eventDemux).start
    /go/src/github.com/cockroachdb/cockroach/multiraft/events_test.go:72 +0x8c
FAIL    github.com/cockroachdb/cockroach/multiraft  30.012s
=== RUN TestMemoryStorage
--- PASS: TestMemoryStorage (0.00s)
PASS
ok      github.com/cockroachdb/cockroach/multiraft/storagetest  0.004s
=== RUN TestClientCmdIDIsEmpty
--- PASS: TestClientCmdIDIsEmpty (0.00s)
=== RUN TestResponseHeaderSetGoError
--- PASS: TestResponseHeaderSetGoError (0.00s)
=== RUN TestResponseHeaderNilError
--- PASS: TestResponseHeaderNilError (0.00s)


Please assign, take a look and update the issue accordingly.

test failure #405

The following test appears to have failed:

#405:

--- PASS: TestRawBroadcast (0.00s)
=== RUN TestMetricSystemStop
--- PASS: TestMetricSystemStop (0.00s)
=== RUN: ExampleMetricSystem
==================
WARNING: DATA RACE
Read by goroutine 44:
  github.com/cockroachdb/cockroach/util/metrics.func·006()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:591 +0x571
  github.com/cockroachdb/cockroach/util/metrics.func·005()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:530 +0x8f

Previous write by main goroutine:
  sync/atomic.AddInt64()
      /usr/src/go/src/runtime/race_amd64.s:261 +0xc
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).Histogram()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:295 +0xd93
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).StopTimer()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:226 +0x75
  github.com/cockroachdb/cockroach/util/metrics.ExampleMetricSystem()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics_test.go:37 +0x1cd
  testing.runExample()
      /usr/src/go/src/testing/example.go:98 +0x5e6
--
Goroutine 44 (running) created at:
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).reaper()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:532 +0x1ed
==================
==================
WARNING: DATA RACE
Read by goroutine 44:
  github.com/cockroachdb/cockroach/util/metrics.func·006()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:593 +0x6d4
  github.com/cockroachdb/cockroach/util/metrics.func·005()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:530 +0x8f

Previous write by main goroutine:
  sync/atomic.AddInt64()
      /usr/src/go/src/runtime/race_amd64.s:261 +0xc
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).Histogram()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:294 +0xce9
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).StopTimer()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:226 +0x75
  github.com/cockroachdb/cockroach/util/metrics.ExampleMetricSystem()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics_test.go:37 +0x1cd
  testing.runExample()
      /usr/src/go/src/testing/example.go:98 +0x5e6
"test"

Please assign, take a look and update the issue accordingly.

test failure #406

The following test appears to have failed:

#406:

--- PASS: TestRawBroadcast (0.00s)
=== RUN TestMetricSystemStop
--- PASS: TestMetricSystemStop (0.00s)
=== RUN: ExampleMetricSystem
==================
WARNING: DATA RACE
Read by goroutine 44:
  github.com/cockroachdb/cockroach/util/metrics.func·006()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:591 +0x571
  github.com/cockroachdb/cockroach/util/metrics.func·005()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:530 +0x8f

Previous write by main goroutine:
  sync/atomic.AddInt64()
      /usr/src/go/src/runtime/race_amd64.s:261 +0xc
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).Histogram()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:295 +0xd93
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).StopTimer()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:226 +0x75
  github.com/cockroachdb/cockroach/util/metrics.ExampleMetricSystem()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics_test.go:37 +0x1cd
  testing.runExample()
      /usr/src/go/src/testing/example.go:98 +0x5e6
--
Goroutine 44 (running) created at:
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).reaper()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:532 +0x1ed
==================
==================
WARNING: DATA RACE
Read by goroutine 44:
  github.com/cockroachdb/cockroach/util/metrics.func·006()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:593 +0x6d4
  github.com/cockroachdb/cockroach/util/metrics.func·005()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:530 +0x8f

Previous write by main goroutine:
  sync/atomic.AddInt64()
      /usr/src/go/src/runtime/race_amd64.s:261 +0xc
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).Histogram()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:294 +0xce9
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).StopTimer()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:226 +0x75
  github.com/cockroachdb/cockroach/util/metrics.ExampleMetricSystem()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics_test.go:37 +0x1cd
  testing.runExample()
      /usr/src/go/src/testing/example.go:98 +0x5e6

Please assign, take a look and update the issue accordingly.

teamcity: failed tests on release-banana: lint/TestLint, test/TestImportPgDump

The following tests appear to have failed:

#864629:

--- FAIL: test/TestImportPgDump/read_data_only (0.000s)
Test ended in panic.

------- Stdout: -------
I180827 20:41:54.053559 52667 storage/replica_command.go:298  [n1,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53/1/106 [r24]
I180827 20:41:54.062208 52385 storage/replica_range_lease.go:554  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] transferring lease to s2
I180827 20:41:54.063407 52385 storage/replica_range_lease.go:617  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] done transferring lease to s2: <nil>
I180827 20:41:54.063498 51617 storage/replica_proposal.go:210  [n2,s2,r23/3:/Table/5{2-3/1/106}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.062230012,0 epo=1 pro=1535402514.062232488,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.077824 52355 storage/replica_command.go:298  [n1,s1,r24/1:/{Table/53/1/1…-Max}] initiating a split of this range at key /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,
\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd [r25]
I180827 20:41:54.090278 52643 storage/replica_range_lease.go:554  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] transferring lease to s3
I180827 20:41:54.091255 52643 storage/replica_range_lease.go:617  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] done transferring lease to s3: <nil>
I180827 20:41:54.092073 51863 storage/replica_proposal.go:210  [n3,s3,r24/2:/Table/53/{1/106-2/"\x15…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.090298269,0 epo=1 pro=1535402514.090300825,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.094854 52702 storage/replica_command.go:298  [n1,s1,r25/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n
✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd [r26]
I180827 20:41:54.112580 52726 storage/replica_range_lease.go:554  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] transferring lease to s3
I180827 20:41:54.113319 52726 storage/replica_range_lease.go:617  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] done transferring lease to s3: <nil>
I180827 20:41:54.113657 51850 storage/replica_proposal.go:210  [n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.112607829,0 epo=1 pro=1535402514.112610518,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.117098 52757 storage/replica_command.go:298  [n1,s1,r26/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105 [r27]
I180827 20:41:54.137397 52716 storage/store_snapshot.go:615  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] sending Raft snapshot 547ab8d0 at applied index 21
I180827 20:41:54.140430 52716 storage/store_snapshot.go:657  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] streamed snapshot to (n2,s2):3: kv pairs: 14, log entries: 2, rate-limit: 8.0 MiB/sec, 22ms
I180827 20:41:54.140860 52705 storage/replica_raftstorage.go:784  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applying Raft snapshot at index 21 (id=547ab8d0, encoded size=31270, 1 rocksdb batches, 2 log entries)
I180827 20:41:54.162696 52705 storage/replica_raftstorage.go:790  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applied Raft snapshot in 22ms [clear=0ms batch=0ms entries=21ms commit=0ms]
I180827 20:41:54.166103 52791 storage/replica_range_lease.go:554  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] transferring lease to s3
I180827 20:41:54.167118 51903 storage/replica_proposal.go:210  [n3,s3,r26/2:/Table/53/{2/"\xc0…-3/";π,…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.166156675,0 epo=1 pro=1535402514.166159831,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.167216 52791 storage/replica_range_lease.go:617  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] done transferring lease to s3: <nil>
I180827 20:41:54.172711 52589 storage/replica_command.go:298  [n1,s1,r27/1:/{Table/53/3/"…-Max}] initiating a split of this range at key /Table/54 [r28]
I180827 20:41:54.182740 52807 storage/replica_range_lease.go:554  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] transferring lease to s2
I180827 20:41:54.183947 52807 storage/replica_range_lease.go:617  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] done transferring lease to s2: <nil>
I180827 20:41:54.184954 51646 storage/replica_proposal.go:210  [n2,s2,r27/3:/Table/5{3/3/";π…-4}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.182761052,0 epo=1 pro=1535402514.182764040,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
--- FAIL: test/TestImportPgDump (0.000s)
Test ended in panic.

------- Stdout: -------
W180827 20:41:52.746991 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:52.757923 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:52.758132 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.758156 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.761168 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:52.761191 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:52.761204 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180827 20:41:52.767725 50862 server/node.go:373  [n?] **** cluster d5e53e69-a109-4eb6-91bf-29e74ae744ba has been created
I180827 20:41:52.767752 50862 server/server.go:1401  [n?] **** add additional nodes by specifying --join=127.0.0.1:41477
I180827 20:41:52.767936 50862 gossip/gossip.go:382  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41477" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402512767856449 
I180827 20:41:52.770338 50862 storage/store.go:1541  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180827 20:41:52.770546 50862 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180827 20:41:52.770626 50862 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180827 20:41:52.770721 50862 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180827 20:41:52.770760 50862 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:52.770788 50862 server/node.go:546  [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I180827 20:41:52.771023 50862 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.771066 50862 server/server.go:1807  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:52.771159 50862 server/server.go:1538  [n1] starting https server at 127.0.0.1:42563 (use: 127.0.0.1:42563)
I180827 20:41:52.771188 50862 server/server.go:1540  [n1] starting grpc/postgres server at 127.0.0.1:41477
I180827 20:41:52.771209 50862 server/server.go:1541  [n1] advertising CockroachDB node at 127.0.0.1:41477
I180827 20:41:52.775258 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.776337 50925 storage/replica_command.go:298  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180827 20:41:52.788832 51094 storage/replica_command.go:298  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180827 20:41:52.790188 51128 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=ec083bbe key=/Table/SystemConfigSpan/Start rw=true pri=0.01126188 iso=SERIALIZABLE stat=PENDING epo=0 ts=1535402512.772758792,0 orig=1535402512.772758792,0 max=1535402512.772758792,0 wto=false rop=false seq=6
I180827 20:41:52.790695 51118 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180827 20:41:52.795125 51100 storage/replica_command.go:298  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180827 20:41:52.800783 51143 storage/replica_command.go:298  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180827 20:41:52.807906 51165 storage/replica_command.go:298  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180827 20:41:52.811784 51141 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.0-12 User:root}
I180827 20:41:52.818164 50799 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180827 20:41:52.821094 51188 storage/replica_command.go:298  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180827 20:41:52.830709 51176 storage/replica_command.go:298  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180827 20:41:52.839374 51187 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:045a1c98-219f-445b-bd6b-d481f04d6b0d User:root}
I180827 20:41:52.849534 51154 storage/replica_command.go:298  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180827 20:41:52.855898 51218 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180827 20:41:52.861462 51240 storage/replica_command.go:298  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180827 20:41:52.868342 51268 storage/replica_command.go:298  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180827 20:41:52.872706 51256 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180827 20:41:52.874819 51264 storage/replica_command.go:298  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180827 20:41:52.876403 50862 server/server.go:1594  [n1] done ensuring all necessary migrations have run
I180827 20:41:52.876433 50862 server/server.go:1597  [n1] serving sql connections
I180827 20:41:52.879108 51233 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180827 20:41:52.879639 51235 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:41477} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402512767856449 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402512767856449 LastUp:1535402512767856449}
I180827 20:41:52.880318 51302 storage/replica_command.go:298  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180827 20:41:52.927701 50819 storage/replica_command.go:298  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180827 20:41:52.940165 51323 storage/replica_command.go:298  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180827 20:41:52.948539 51355 storage/replica_command.go:298  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180827 20:41:52.953658 51380 storage/replica_command.go:298  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180827 20:41:52.961237 51137 storage/replica_command.go:298  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180827 20:41:52.966548 50832 storage/replica_command.go:298  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180827 20:41:52.977113 51362 storage/replica_command.go:298  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180827 20:41:53.041315 51440 storage/replica_command.go:298  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180827 20:41:53.047478 51414 storage/replica_command.go:298  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180827 20:41:53.081214 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.089127 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.089322 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.089338 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.102793 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.102863 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.102878 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.102953 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.103001 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.115344 51458 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.125579 51530 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:36113}
I180827 20:41:53.127987 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.128034 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.128397 51575 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.134920 51574 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.135628 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.136461 50862 server/node.go:428  [n?] new node allocated ID 2
I180827 20:41:53.136541 50862 gossip/gossip.go:382  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36113" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513136479434 
I180827 20:41:53.136591 50862 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180827 20:41:53.136624 50862 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180827 20:41:53.137485 51552 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180827 20:41:53.139442 50862 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180827 20:41:53.139577 50862 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180827 20:41:53.140140 50862 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.140166 50862 server/server.go:1807  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.140233 50862 server/server.go:1538  [n2] starting https server at 127.0.0.1:39947 (use: 127.0.0.1:39947)
I180827 20:41:53.140246 50862 server/server.go:1540  [n2] starting grpc/postgres server at 127.0.0.1:36113
I180827 20:41:53.140256 50862 server/server.go:1541  [n2] advertising CockroachDB node at 127.0.0.1:36113
I180827 20:41:53.140624 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.153945 50862 server/server.go:1594  [n2] done ensuring all necessary migrations have run
I180827 20:41:53.153974 50862 server/server.go:1597  [n2] serving sql connections
W180827 20:41:53.165268 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.185802 51467 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180827 20:41:53.186848 51469 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:36113} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513136479434 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513136479434 LastUp:1535402513136479434}
I180827 20:41:53.189622 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.189776 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.189808 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.207782 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.207807 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.207815 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.207911 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.207947 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.211471 51475 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.223653 51740 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.223954 51816 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46463}
I180827 20:41:53.224401 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.224432 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.224690 51837 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.225445 51836 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226030 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226699 50862 server/node.go:428  [n?] new node allocated ID 3
I180827 20:41:53.226763 50862 gossip/gossip.go:382  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46463" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513226706701 
I180827 20:41:53.226805 50862 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180827 20:41:53.226851 50862 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180827 20:41:53.227563 51809 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180827 20:41:53.227869 51810 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180827 20:41:53.228504 50862 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180827 20:41:53.229044 50862 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180827 20:41:53.229696 50862 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.229749 50862 server/server.go:1807  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.235251 50862 server/server.go:1538  [n3] starting https server at 127.0.0.1:43307 (use: 127.0.0.1:43307)
I180827 20:41:53.235271 50862 server/server.go:1540  [n3] starting grpc/postgres server at 127.0.0.1:46463
I180827 20:41:53.235283 50862 server/server.go:1541  [n3] advertising CockroachDB node at 127.0.0.1:46463
I180827 20:41:53.240284 50862 server/server.go:1594  [n3] done ensuring all necessary migrations have run
I180827 20:41:53.240307 50862 server/server.go:1597  [n3] serving sql connections
I180827 20:41:53.243124 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.248117 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot 59e1afc9 at applied index 16
I180827 20:41:53.249136 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.251012 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 6, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.251369 51983 storage/replica_raftstorage.go:784  [n2,s2,r20/?:{-}] applying preemptive snapshot at index 16 (id=59e1afc9, encoded size=2241, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.254056 51839 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180827 20:41:53.255122 51841 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:46463} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513226706701 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513226706701 LastUp:1535402513226706701}
I180827 20:41:53.256061 51983 storage/replica_raftstorage.go:790  [n2,s2,r20/?:/Table/{23-50}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.256605 50930 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.259565 50930 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.261627 51625 rpc/nodedialer/nodedialer.go:92  [n2] connection to n1 established
I180827 20:41:53.264544 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.286630 50930 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r21/1:/Table/5{0-1}] connection to n3 established
I180827 20:41:53.287245 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.287799 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r21/1:/Table/5{0-1}] sending preemptive snapshot de08568a at applied index 18
I180827 20:41:53.288157 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.288623 51959 storage/replica_raftstorage.go:784  [n3,s3,r21/?:{-}] applying preemptive snapshot at index 18 (id=de08568a, encoded size=2646, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.289814 51959 storage/replica_raftstorage.go:790  [n3,s3,r21/?:/Table/5{0-1}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.290329 50930 storage/replica_command.go:812  [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.293678 50930 storage/replica.go:3743  [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.294953 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r22/1:/{Table/51-Max}] sending preemptive snapshot a84e7278 at applied index 12
I180827 20:41:53.295229 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r22/1:/{Table/51-Max}] streamed snapshot to (n3,s3):?: kv pairs: 7, log entries: 2, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.295441 51883 rpc/nodedialer/nodedialer.go:92  [n3] connection to n1 established
I180827 20:41:53.295585 51953 storage/replica_raftstorage.go:784  [n3,s3,r22/?:{-}] applying preemptive snapshot at index 12 (id=a84e7278, encoded size=386, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.295717 51953 storage/replica_raftstorage.go:790  [n3,s3,r22/?:/{Table/51-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.295955 50930 storage/replica_command.go:812  [replicate,n1,s1,r22/1:/{Table/51-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, next=2, gen=0]
I180827 20:41:53.298097 50930 storage/replica.go:3743  [n1,s1,r22/1:/{Table/51-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.301122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r8/1:/Table/1{1-2}] sending preemptive snapshot 201bdccc at applied index 18
I180827 20:41:53.301565 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.306578 52088 storage/replica_raftstorage.go:784  [n3,s3,r8/?:{-}] applying preemptive snapshot at index 18 (id=201bdccc, encoded size=4352, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.306868 52088 storage/replica_raftstorage.go:790  [n3,s3,r8/?:/Table/1{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.307601 50930 storage/replica_command.go:812  [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.311873 50930 storage/replica.go:3743  [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.314134 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r17/1:/Table/2{0-1}] sending preemptive snapshot 53116eb2 at applied index 16
I180827 20:41:53.314317 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r17/1:/Table/2{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.314683 52103 storage/replica_raftstorage.go:784  [n3,s3,r17/?:{-}] applying preemptive snapshot at index 16 (id=53116eb2, encoded size=2105, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.314887 52103 storage/replica_raftstorage.go:790  [n3,s3,r17/?:/Table/2{0-1}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.315401 50930 storage/replica_command.go:812  [replicate,n1,s1,r17/1:/Table/2{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r17:/Table/2{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.318398 50930 storage/replica.go:3743  [n1,s1,r17/1:/Table/2{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.319436 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r16/1:/Table/{19-20}] sending preemptive snapshot e0be8540 at applied index 16
I180827 20:41:53.319691 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.320127 52072 storage/replica_raftstorage.go:784  [n2,s2,r16/?:{-}] applying preemptive snapshot at index 16 (id=e0be8540, encoded size=2109, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.320339 52072 storage/replica_raftstorage.go:790  [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.320816 50930 storage/replica_command.go:812  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.323849 50930 storage/replica.go:3743  [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.326208 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r15/1:/Table/1{8-9}] sending preemptive snapshot d259ae5c at applied index 16
I180827 20:41:53.326404 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.326731 52116 storage/replica_raftstorage.go:784  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 16 (id=d259ae5c, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.326923 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.326953 52116 storage/replica_raftstorage.go:790  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.334514 50930 storage/replica_command.go:812  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.337656 50930 storage/replica.go:3743  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.338767 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r14/1:/Table/1{7-8}] sending preemptive snapshot 9d0058d5 at applied index 16
I180827 20:41:53.339034 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.339612 52090 storage/replica_raftstorage.go:784  [n2,s2,r14/?:{-}] applying preemptive snapshot at index 16 (id=9d0058d5, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.339831 52090 storage/replica_raftstorage.go:790  [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.340173 50930 storage/replica_command.go:812  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.343121 50930 storage/replica.go:3743  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.345432 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r9/1:/Table/1{2-3}] sending preemptive snapshot 0eea2d20 at applied index 26
I180827 20:41:53.345859 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 53, log entries: 16, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.347137 52066 storage/replica_raftstorage.go:784  [n2,s2,r9/?:{-}] applying preemptive snapshot at index 26 (id=0eea2d20, encoded size=15139, 1 rocksdb batches, 16 log entries)
I180827 20:41:53.347467 52066 storage/replica_raftstorage.go:790  [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.348208 50930 storage/replica_command.go:812  [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.352166 50930 storage/replica.go:3743  [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.353188 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] sending preemptive snapshot 0cdee511 at applied index 39
I180827 20:41:53.353765 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 36, log entries: 29, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.354286 51723 storage/replica_raftstorage.go:784  [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=0cdee511, encoded size=98384, 1 rocksdb batches, 29 log entries)
I180827 20:41:53.354994 51723 storage/replica_raftstorage.go:790  [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.355529 50930 storage/replica_command.go:812  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.358523 50930 storage/replica.go:3743  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.360250 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] sending preemptive snapshot 965d58b1 at applied index 19
I180827 20:41:53.360436 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.360789 52150 storage/replica_raftstorage.go:784  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 19 (id=965d58b1, encoded size=4003, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.361043 52150 storage/replica_raftstorage.go:790  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.361522 50930 storage/replica_command.go:812  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.364392 50930 storage/replica.go:3743  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.366422 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r12/1:/Table/1{5-6}] sending preemptive snapshot 811af376 at applied index 16
I180827 20:41:53.366638 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.367089 52137 storage/replica_raftstorage.go:784  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 16 (id=811af376, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.367359 52137 storage/replica_raftstorage.go:790  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.368127 50930 storage/replica_command.go:812  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.371691 50930 storage/replica.go:3743  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.374563 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r19/1:/Table/2{2-3}] sending preemptive snapshot 9cd02555 at applied index 16
I180827 20:41:53.374760 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.375252 52080 storage/replica_raftstorage.go:784  [n3,s3,r19/?:{-}] applying preemptive snapshot at index 16 (id=9cd02555, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.375582 52080 storage/replica_raftstorage.go:790  [n3,s3,r19/?:/Table/2{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.375950 50930 storage/replica_command.go:812  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.381819 50930 storage/replica.go:3743  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.386461 52091 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180827 20:41:53.386637 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r10/1:/Table/1{3-4}] sending preemptive snapshot a16f4b15 at applied index 64
I180827 20:41:53.388005 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 204, log entries: 54, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.388536 52181 storage/replica_raftstorage.go:784  [n3,s3,r10/?:{-}] applying preemptive snapshot at index 64 (id=a16f4b15, encoded size=62836, 1 rocksdb batches, 54 log entries)
I180827 20:41:53.389154 52181 storage/replica_raftstorage.go:790  [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.389513 50930 storage/replica_command.go:812  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.392649 50930 storage/replica.go:3743  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.394122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] sending preemptive snapshot 69adabc1 at applied index 23
I180827 20:41:53.394365 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 13, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.394729 52213 storage/replica_raftstorage.go:784  [n2,s2,r2/?:{-}] applying preemptive snapshot at index 23 (id=69adabc1, encoded size=6277, 1 rocksdb batches, 13 log entries)
I180827 20:41:53.394981 52213 storage/replica_raftstorage.go:790  [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.395465 50930 storage/replica_command.go:812  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.398757 50930 storage/replica.go:3743  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.399709 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r18/1:/Table/2{1-2}] sending preemptive snapshot e9df2a4a at applied index 16
I180827 20:41:53.400036 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r18/1:/Table/2{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.400391 52185 storage/replica_raftstorage.go:784  [n3,s3,r18/?:{-}] applying preemptive snapshot at index 16 (id=e9df2a4a, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.400594 52185 storage/replica_raftstorage.go:790  [n3,s3,r18/?:/Table/2{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.400882 50930 storage/replica_command.go:812  [replicate,n1,s1,r18/1:/Table/2{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r18:/Table/2{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.407636 50930 storage/replica.go:3743  [n1,s1,r18/1:/Table/2{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.408861 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r13/1:/Table/1{6-7}] sending preemptive snapshot 6f914d55 at applied index 16
I180827 20:41:53.409071 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.409426 52218 storage/replica_raftstorage.go:784  [n2,s2,r13/?:{-}] applying preemptive snapshot at index 16 (id=6f914d55, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.409616 52218 storage/replica_raftstorage.go:790  [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.409970 50930 storage/replica_command.go:812  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.411262 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.412831 50930 storage/replica.go:3743  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.414081 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot cca961c1 at applied index 16
I180827 20:41:53.414277 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.414576 52199 storage/replica_raftstorage.go:784  [n3,s3,r11/?:{-}] applying preemptive snapshot at index 16 (id=cca961c1, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.414816 52199 storage/replica_raftstorage.go:790  [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.415293 50930 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.418111 50930 storage/replica.go:3743  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.419054 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 3c3a015f at applied index 27
I180827 20:41:53.423022 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1391, log entries: 2, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.423893 52201 storage/replica_raftstorage.go:784  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 27 (id=3c3a015f, encoded size=194658, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.429501 52201 storage/replica_raftstorage.go:790  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=2ms commit=4ms]
I180827 20:41:53.433500 50930 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.437580 50930 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.440575 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot cbd412df at applied index 21
I180827 20:41:53.440794 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 11, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.441181 52260 storage/replica_raftstorage.go:784  [n3,s3,r6/?:{-}] applying preemptive snapshot at index 21 (id=cbd412df, encoded size=4339, 1 rocksdb batches, 11 log entries)
I180827 20:41:53.441400 52260 storage/replica_raftstorage.go:790  [n3,s3,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.441676 50930 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.448564 52224 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.461587 50930 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.463345 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot 114f4385 at applied index 29
I180827 20:41:53.464896 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 59, log entries: 19, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.465343 52280 storage/replica_raftstorage.go:784  [n2,s2,r7/?:{-}] applying preemptive snapshot at index 29 (id=114f4385, encoded size=16646, 1 rocksdb batches, 19 log entries)
I180827 20:41:53.465821 52280 storage/replica_raftstorage.go:790  [n2,s2,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.466988 50930 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.472743 50930 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.474632 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 0a244018 at applied index 114
I180827 20:41:53.475250 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 73, log entries: 90, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.475827 52267 storage/replica_raftstorage.go:784  [n2,s2,r1/?:{-}] applying preemptive snapshot at index 114 (id=0a244018, encoded size=40271, 1 rocksdb batches, 90 log entries)
I180827 20:41:53.476525 52267 storage/replica_raftstorage.go:790  [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.476869 50930 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.482912 50930 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.483281 50930 storage/queue.go:873  [n1,replicate] purgatory is now empty
I180827 20:41:53.485684 52286 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot f1426c69 at applied index 19
I180827 20:41:53.487316 52286 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.487681 52252 storage/replica_raftstorage.go:784  [n3,s3,r20/?:{-}] applying preemptive snapshot at index 19 (id=f1426c69, encoded size=3273, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.487932 52252 storage/replica_raftstorage.go:790  [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.488311 52286 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.503580 52286 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.505707 52235 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 99036b07 at applied index 119
I180827 20:41:53.506514 52235 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 78, log entries: 95, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.507282 52188 storage/replica_raftstorage.go:784  [n3,s3,r1/?:{-}] applying preemptive snapshot at index 119 (id=99036b07, encoded size=42101, 1 rocksdb batches, 95 log entries)
I180827 20:41:53.508109 52188 storage/replica_raftstorage.go:790  [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.508641 52235 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.512524 52235 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.513999 52209 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot bb53109c at applied index 32
I180827 20:41:53.514379 52209 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 60, log entries: 22, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.514821 52292 storage/replica_raftstorage.go:784  [n3,s3,r7/?:{-}] applying preemptive snapshot at index 32 (id=bb53109c, encoded size=17687, 1 rocksdb batches, 22 log entries)
I180827 20:41:53.515905 52292 storage/replica_raftstorage.go:790  [n3,s3,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 1ms [clear=1ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.516367 52209 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.520158 52209 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.521958 52312 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot 2ca43612 at applied index 24
I180827 20:41:53.522776 52312 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 14, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.523128 52239 storage/replica_raftstorage.go:784  [n2,s2,r6/?:{-}] applying preemptive snapshot at index 24 (id=2ca43612, encoded size=5410, 1 rocksdb batches, 14 log entries)
I180827 20:41:53.523377 52239 storage/replica_raftstorage.go:790  [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.523701 52312 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.525176 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 19 underreplicated ranges
I180827 20:41:53.527482 52312 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.528875 52327 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 731be2ae at applied index 30
I180827 20:41:53.532860 52327 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 1392, log entries: 5, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.533361 52316 storage/replica_raftstorage.go:784  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 30 (id=731be2ae, encoded size=195741, 1 rocksdb batches, 5 log entries)
I180827 20:41:53.535834 52316 storage/replica_raftstorage.go:790  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=2ms]
I180827 20:41:53.536253 52327 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.540576 52327 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.545804 52341 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot 7497a95f at applied index 19
I180827 20:41:53.546108 52341 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.546590 52275 storage/replica_raftstorage.go:784  [n2,s2,r11/?:{-}] applying preemptive snapshot at index 19 (id=7497a95f, encoded size=3304, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.546960 52275 storage/replica_raftstorage.go:790  [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.547386 52341 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (

Please assign, take a look and update the issue accordingly.

test failure #412

The following test appears to have failed:

#412:

I0324 08:21:51.317284      67 multiraft.go:626] node 257: group 1 raft ready
I0324 08:21:51.317369      67 multiraft.go:631] HardState updated: {Term:6 Vote:257 Commit:224 XXX_unrecognized:[]}
I0324 08:21:51.317692      67 multiraft.go:634] New Entry[0]: 6/224 EntryNormal 13ce617aa506d48a3e2a1a36895895e6: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:807 > cmd_id:<wall_time:1427185311305618570 random:4479421600908219878 > key:"\000\000\00
I0324 08:21:51.317897      67 multiraft.go:637] Committed Entry[0]: 6/224 EntryNormal 13ce617aa506d48a3e2a1a36895895e6: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:807 > cmd_id:<wall_time:1427185311305618570 random:4479421600908219878 > key:"\000\000\00
I0324 08:21:51.318316      67 queue.go:158] adding range range=1 (""-"IZNbVZhCts") to split queue
panic: test timed out after 30s

goroutine 704 [running]:
testing.func·008()
    /usr/src/go/src/testing/testing.go:681 +0x12f
created by time.goFunc
    /usr/src/go/src/time/sleep.go:129 +0x4b

goroutine 1 [chan receive]:
testing.RunTests(0xfce3c8, 0x1447320, 0x2b, 0x2b, 0xc208029a01)
    /usr/src/go/src/testing/testing.go:556 +0xad6
--
    /go/src/github.com/cockroachdb/cockroach/kv/local_sender.go:163 +0x175
github.com/cockroachdb/cockroach/kv.func·009()
    /go/src/github.com/cockroachdb/cockroach/kv/txn_coord_sender.go:133 +0x1f4
created by github.com/cockroachdb/cockroach/kv.(*txnMetadata).close
    /go/src/github.com/cockroachdb/cockroach/kv/txn_coord_sender.go:137 +0x867
FAIL    github.com/cockroachdb/cockroach/kv 30.019s
=== RUN TestHeartbeatSingleGroup
I0324 08:21:51.311592      76 multiraft.go:408] node 1 starting
I0324 08:21:51.311771      76 multiraft.go:408] node 2 starting
I0324 08:21:51.312003      76 raft.go:315] raft: 1 became follower at term 5
I0324 08:21:51.312095      76 raft.go:134] raft: newRaft 1 [peers: [1,2], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0324 08:21:51.312212      76 raft.go:315] raft: 2 became follower at term 5
I0324 08:21:51.312289      76 raft.go:134] raft: newRaft 2 [peers: [1,2], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0324 08:21:51.312377      76 raft.go:389] raft: 1 is starting a new election at term 5
I0324 08:21:51.312447      76 raft.go:328] raft: 1 became candidate at term 6
I0324 08:21:51.312479      76 raft.go:372] raft: 1 received vote from 1 at term 6

Please assign, take a look and update the issue accordingly.

teamcity: failed tests on release-banana: lint/TestLint, test/TestImportPgDump

The following tests appear to have failed:

#864629:

--- FAIL: test/TestImportPgDump/read_data_only (0.000s)
Test ended in panic.

------- Stdout: -------
I180827 20:41:54.053559 52667 storage/replica_command.go:298  [n1,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53/1/106 [r24]
I180827 20:41:54.062208 52385 storage/replica_range_lease.go:554  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] transferring lease to s2
I180827 20:41:54.063407 52385 storage/replica_range_lease.go:617  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] done transferring lease to s2: <nil>
I180827 20:41:54.063498 51617 storage/replica_proposal.go:210  [n2,s2,r23/3:/Table/5{2-3/1/106}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.062230012,0 epo=1 pro=1535402514.062232488,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.077824 52355 storage/replica_command.go:298  [n1,s1,r24/1:/{Table/53/1/1…-Max}] initiating a split of this range at key /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,
\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd [r25]
I180827 20:41:54.090278 52643 storage/replica_range_lease.go:554  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] transferring lease to s3
I180827 20:41:54.091255 52643 storage/replica_range_lease.go:617  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] done transferring lease to s3: <nil>
I180827 20:41:54.092073 51863 storage/replica_proposal.go:210  [n3,s3,r24/2:/Table/53/{1/106-2/"\x15…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.090298269,0 epo=1 pro=1535402514.090300825,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.094854 52702 storage/replica_command.go:298  [n1,s1,r25/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n
✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd [r26]
I180827 20:41:54.112580 52726 storage/replica_range_lease.go:554  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] transferring lease to s3
I180827 20:41:54.113319 52726 storage/replica_range_lease.go:617  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] done transferring lease to s3: <nil>
I180827 20:41:54.113657 51850 storage/replica_proposal.go:210  [n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.112607829,0 epo=1 pro=1535402514.112610518,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.117098 52757 storage/replica_command.go:298  [n1,s1,r26/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105 [r27]
I180827 20:41:54.137397 52716 storage/store_snapshot.go:615  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] sending Raft snapshot 547ab8d0 at applied index 21
I180827 20:41:54.140430 52716 storage/store_snapshot.go:657  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] streamed snapshot to (n2,s2):3: kv pairs: 14, log entries: 2, rate-limit: 8.0 MiB/sec, 22ms
I180827 20:41:54.140860 52705 storage/replica_raftstorage.go:784  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applying Raft snapshot at index 21 (id=547ab8d0, encoded size=31270, 1 rocksdb batches, 2 log entries)
I180827 20:41:54.162696 52705 storage/replica_raftstorage.go:790  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applied Raft snapshot in 22ms [clear=0ms batch=0ms entries=21ms commit=0ms]
I180827 20:41:54.166103 52791 storage/replica_range_lease.go:554  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] transferring lease to s3
I180827 20:41:54.167118 51903 storage/replica_proposal.go:210  [n3,s3,r26/2:/Table/53/{2/"\xc0…-3/";π,…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.166156675,0 epo=1 pro=1535402514.166159831,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.167216 52791 storage/replica_range_lease.go:617  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] done transferring lease to s3: <nil>
I180827 20:41:54.172711 52589 storage/replica_command.go:298  [n1,s1,r27/1:/{Table/53/3/"…-Max}] initiating a split of this range at key /Table/54 [r28]
I180827 20:41:54.182740 52807 storage/replica_range_lease.go:554  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] transferring lease to s2
I180827 20:41:54.183947 52807 storage/replica_range_lease.go:617  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] done transferring lease to s2: <nil>
I180827 20:41:54.184954 51646 storage/replica_proposal.go:210  [n2,s2,r27/3:/Table/5{3/3/";π…-4}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.182761052,0 epo=1 pro=1535402514.182764040,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
--- FAIL: test/TestImportPgDump (0.000s)
Test ended in panic.

------- Stdout: -------
W180827 20:41:52.746991 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:52.757923 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:52.758132 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.758156 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.761168 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:52.761191 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:52.761204 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180827 20:41:52.767725 50862 server/node.go:373  [n?] **** cluster d5e53e69-a109-4eb6-91bf-29e74ae744ba has been created
I180827 20:41:52.767752 50862 server/server.go:1401  [n?] **** add additional nodes by specifying --join=127.0.0.1:41477
I180827 20:41:52.767936 50862 gossip/gossip.go:382  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41477" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402512767856449 
I180827 20:41:52.770338 50862 storage/store.go:1541  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180827 20:41:52.770546 50862 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180827 20:41:52.770626 50862 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180827 20:41:52.770721 50862 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180827 20:41:52.770760 50862 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:52.770788 50862 server/node.go:546  [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I180827 20:41:52.771023 50862 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.771066 50862 server/server.go:1807  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:52.771159 50862 server/server.go:1538  [n1] starting https server at 127.0.0.1:42563 (use: 127.0.0.1:42563)
I180827 20:41:52.771188 50862 server/server.go:1540  [n1] starting grpc/postgres server at 127.0.0.1:41477
I180827 20:41:52.771209 50862 server/server.go:1541  [n1] advertising CockroachDB node at 127.0.0.1:41477
I180827 20:41:52.775258 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.776337 50925 storage/replica_command.go:298  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180827 20:41:52.788832 51094 storage/replica_command.go:298  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180827 20:41:52.790188 51128 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=ec083bbe key=/Table/SystemConfigSpan/Start rw=true pri=0.01126188 iso=SERIALIZABLE stat=PENDING epo=0 ts=1535402512.772758792,0 orig=1535402512.772758792,0 max=1535402512.772758792,0 wto=false rop=false seq=6
I180827 20:41:52.790695 51118 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180827 20:41:52.795125 51100 storage/replica_command.go:298  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180827 20:41:52.800783 51143 storage/replica_command.go:298  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180827 20:41:52.807906 51165 storage/replica_command.go:298  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180827 20:41:52.811784 51141 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.0-12 User:root}
I180827 20:41:52.818164 50799 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180827 20:41:52.821094 51188 storage/replica_command.go:298  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180827 20:41:52.830709 51176 storage/replica_command.go:298  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180827 20:41:52.839374 51187 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:045a1c98-219f-445b-bd6b-d481f04d6b0d User:root}
I180827 20:41:52.849534 51154 storage/replica_command.go:298  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180827 20:41:52.855898 51218 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180827 20:41:52.861462 51240 storage/replica_command.go:298  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180827 20:41:52.868342 51268 storage/replica_command.go:298  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180827 20:41:52.872706 51256 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180827 20:41:52.874819 51264 storage/replica_command.go:298  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180827 20:41:52.876403 50862 server/server.go:1594  [n1] done ensuring all necessary migrations have run
I180827 20:41:52.876433 50862 server/server.go:1597  [n1] serving sql connections
I180827 20:41:52.879108 51233 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180827 20:41:52.879639 51235 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:41477} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402512767856449 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402512767856449 LastUp:1535402512767856449}
I180827 20:41:52.880318 51302 storage/replica_command.go:298  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180827 20:41:52.927701 50819 storage/replica_command.go:298  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180827 20:41:52.940165 51323 storage/replica_command.go:298  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180827 20:41:52.948539 51355 storage/replica_command.go:298  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180827 20:41:52.953658 51380 storage/replica_command.go:298  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180827 20:41:52.961237 51137 storage/replica_command.go:298  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180827 20:41:52.966548 50832 storage/replica_command.go:298  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180827 20:41:52.977113 51362 storage/replica_command.go:298  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180827 20:41:53.041315 51440 storage/replica_command.go:298  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180827 20:41:53.047478 51414 storage/replica_command.go:298  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180827 20:41:53.081214 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.089127 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.089322 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.089338 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.102793 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.102863 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.102878 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.102953 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.103001 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.115344 51458 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.125579 51530 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:36113}
I180827 20:41:53.127987 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.128034 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.128397 51575 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.134920 51574 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.135628 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.136461 50862 server/node.go:428  [n?] new node allocated ID 2
I180827 20:41:53.136541 50862 gossip/gossip.go:382  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36113" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513136479434 
I180827 20:41:53.136591 50862 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180827 20:41:53.136624 50862 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180827 20:41:53.137485 51552 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180827 20:41:53.139442 50862 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180827 20:41:53.139577 50862 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180827 20:41:53.140140 50862 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.140166 50862 server/server.go:1807  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.140233 50862 server/server.go:1538  [n2] starting https server at 127.0.0.1:39947 (use: 127.0.0.1:39947)
I180827 20:41:53.140246 50862 server/server.go:1540  [n2] starting grpc/postgres server at 127.0.0.1:36113
I180827 20:41:53.140256 50862 server/server.go:1541  [n2] advertising CockroachDB node at 127.0.0.1:36113
I180827 20:41:53.140624 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.153945 50862 server/server.go:1594  [n2] done ensuring all necessary migrations have run
I180827 20:41:53.153974 50862 server/server.go:1597  [n2] serving sql connections
W180827 20:41:53.165268 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.185802 51467 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180827 20:41:53.186848 51469 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:36113} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513136479434 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513136479434 LastUp:1535402513136479434}
I180827 20:41:53.189622 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.189776 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.189808 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.207782 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.207807 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.207815 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.207911 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.207947 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.211471 51475 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.223653 51740 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.223954 51816 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46463}
I180827 20:41:53.224401 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.224432 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.224690 51837 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.225445 51836 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226030 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226699 50862 server/node.go:428  [n?] new node allocated ID 3
I180827 20:41:53.226763 50862 gossip/gossip.go:382  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46463" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513226706701 
I180827 20:41:53.226805 50862 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180827 20:41:53.226851 50862 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180827 20:41:53.227563 51809 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180827 20:41:53.227869 51810 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180827 20:41:53.228504 50862 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180827 20:41:53.229044 50862 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180827 20:41:53.229696 50862 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.229749 50862 server/server.go:1807  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.235251 50862 server/server.go:1538  [n3] starting https server at 127.0.0.1:43307 (use: 127.0.0.1:43307)
I180827 20:41:53.235271 50862 server/server.go:1540  [n3] starting grpc/postgres server at 127.0.0.1:46463
I180827 20:41:53.235283 50862 server/server.go:1541  [n3] advertising CockroachDB node at 127.0.0.1:46463
I180827 20:41:53.240284 50862 server/server.go:1594  [n3] done ensuring all necessary migrations have run
I180827 20:41:53.240307 50862 server/server.go:1597  [n3] serving sql connections
I180827 20:41:53.243124 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.248117 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot 59e1afc9 at applied index 16
I180827 20:41:53.249136 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.251012 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 6, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.251369 51983 storage/replica_raftstorage.go:784  [n2,s2,r20/?:{-}] applying preemptive snapshot at index 16 (id=59e1afc9, encoded size=2241, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.254056 51839 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180827 20:41:53.255122 51841 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:46463} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513226706701 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513226706701 LastUp:1535402513226706701}
I180827 20:41:53.256061 51983 storage/replica_raftstorage.go:790  [n2,s2,r20/?:/Table/{23-50}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.256605 50930 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.259565 50930 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.261627 51625 rpc/nodedialer/nodedialer.go:92  [n2] connection to n1 established
I180827 20:41:53.264544 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.286630 50930 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r21/1:/Table/5{0-1}] connection to n3 established
I180827 20:41:53.287245 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.287799 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r21/1:/Table/5{0-1}] sending preemptive snapshot de08568a at applied index 18
I180827 20:41:53.288157 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.288623 51959 storage/replica_raftstorage.go:784  [n3,s3,r21/?:{-}] applying preemptive snapshot at index 18 (id=de08568a, encoded size=2646, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.289814 51959 storage/replica_raftstorage.go:790  [n3,s3,r21/?:/Table/5{0-1}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.290329 50930 storage/replica_command.go:812  [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.293678 50930 storage/replica.go:3743  [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.294953 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r22/1:/{Table/51-Max}] sending preemptive snapshot a84e7278 at applied index 12
I180827 20:41:53.295229 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r22/1:/{Table/51-Max}] streamed snapshot to (n3,s3):?: kv pairs: 7, log entries: 2, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.295441 51883 rpc/nodedialer/nodedialer.go:92  [n3] connection to n1 established
I180827 20:41:53.295585 51953 storage/replica_raftstorage.go:784  [n3,s3,r22/?:{-}] applying preemptive snapshot at index 12 (id=a84e7278, encoded size=386, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.295717 51953 storage/replica_raftstorage.go:790  [n3,s3,r22/?:/{Table/51-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.295955 50930 storage/replica_command.go:812  [replicate,n1,s1,r22/1:/{Table/51-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, next=2, gen=0]
I180827 20:41:53.298097 50930 storage/replica.go:3743  [n1,s1,r22/1:/{Table/51-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.301122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r8/1:/Table/1{1-2}] sending preemptive snapshot 201bdccc at applied index 18
I180827 20:41:53.301565 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.306578 52088 storage/replica_raftstorage.go:784  [n3,s3,r8/?:{-}] applying preemptive snapshot at index 18 (id=201bdccc, encoded size=4352, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.306868 52088 storage/replica_raftstorage.go:790  [n3,s3,r8/?:/Table/1{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.307601 50930 storage/replica_command.go:812  [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.311873 50930 storage/replica.go:3743  [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.314134 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r17/1:/Table/2{0-1}] sending preemptive snapshot 53116eb2 at applied index 16
I180827 20:41:53.314317 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r17/1:/Table/2{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.314683 52103 storage/replica_raftstorage.go:784  [n3,s3,r17/?:{-}] applying preemptive snapshot at index 16 (id=53116eb2, encoded size=2105, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.314887 52103 storage/replica_raftstorage.go:790  [n3,s3,r17/?:/Table/2{0-1}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.315401 50930 storage/replica_command.go:812  [replicate,n1,s1,r17/1:/Table/2{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r17:/Table/2{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.318398 50930 storage/replica.go:3743  [n1,s1,r17/1:/Table/2{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.319436 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r16/1:/Table/{19-20}] sending preemptive snapshot e0be8540 at applied index 16
I180827 20:41:53.319691 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.320127 52072 storage/replica_raftstorage.go:784  [n2,s2,r16/?:{-}] applying preemptive snapshot at index 16 (id=e0be8540, encoded size=2109, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.320339 52072 storage/replica_raftstorage.go:790  [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.320816 50930 storage/replica_command.go:812  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.323849 50930 storage/replica.go:3743  [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.326208 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r15/1:/Table/1{8-9}] sending preemptive snapshot d259ae5c at applied index 16
I180827 20:41:53.326404 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.326731 52116 storage/replica_raftstorage.go:784  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 16 (id=d259ae5c, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.326923 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.326953 52116 storage/replica_raftstorage.go:790  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.334514 50930 storage/replica_command.go:812  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.337656 50930 storage/replica.go:3743  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.338767 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r14/1:/Table/1{7-8}] sending preemptive snapshot 9d0058d5 at applied index 16
I180827 20:41:53.339034 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.339612 52090 storage/replica_raftstorage.go:784  [n2,s2,r14/?:{-}] applying preemptive snapshot at index 16 (id=9d0058d5, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.339831 52090 storage/replica_raftstorage.go:790  [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.340173 50930 storage/replica_command.go:812  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.343121 50930 storage/replica.go:3743  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.345432 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r9/1:/Table/1{2-3}] sending preemptive snapshot 0eea2d20 at applied index 26
I180827 20:41:53.345859 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 53, log entries: 16, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.347137 52066 storage/replica_raftstorage.go:784  [n2,s2,r9/?:{-}] applying preemptive snapshot at index 26 (id=0eea2d20, encoded size=15139, 1 rocksdb batches, 16 log entries)
I180827 20:41:53.347467 52066 storage/replica_raftstorage.go:790  [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.348208 50930 storage/replica_command.go:812  [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.352166 50930 storage/replica.go:3743  [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.353188 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] sending preemptive snapshot 0cdee511 at applied index 39
I180827 20:41:53.353765 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 36, log entries: 29, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.354286 51723 storage/replica_raftstorage.go:784  [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=0cdee511, encoded size=98384, 1 rocksdb batches, 29 log entries)
I180827 20:41:53.354994 51723 storage/replica_raftstorage.go:790  [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.355529 50930 storage/replica_command.go:812  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.358523 50930 storage/replica.go:3743  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.360250 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] sending preemptive snapshot 965d58b1 at applied index 19
I180827 20:41:53.360436 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.360789 52150 storage/replica_raftstorage.go:784  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 19 (id=965d58b1, encoded size=4003, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.361043 52150 storage/replica_raftstorage.go:790  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.361522 50930 storage/replica_command.go:812  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.364392 50930 storage/replica.go:3743  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.366422 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r12/1:/Table/1{5-6}] sending preemptive snapshot 811af376 at applied index 16
I180827 20:41:53.366638 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.367089 52137 storage/replica_raftstorage.go:784  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 16 (id=811af376, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.367359 52137 storage/replica_raftstorage.go:790  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.368127 50930 storage/replica_command.go:812  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.371691 50930 storage/replica.go:3743  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.374563 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r19/1:/Table/2{2-3}] sending preemptive snapshot 9cd02555 at applied index 16
I180827 20:41:53.374760 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.375252 52080 storage/replica_raftstorage.go:784  [n3,s3,r19/?:{-}] applying preemptive snapshot at index 16 (id=9cd02555, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.375582 52080 storage/replica_raftstorage.go:790  [n3,s3,r19/?:/Table/2{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.375950 50930 storage/replica_command.go:812  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.381819 50930 storage/replica.go:3743  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.386461 52091 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180827 20:41:53.386637 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r10/1:/Table/1{3-4}] sending preemptive snapshot a16f4b15 at applied index 64
I180827 20:41:53.388005 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 204, log entries: 54, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.388536 52181 storage/replica_raftstorage.go:784  [n3,s3,r10/?:{-}] applying preemptive snapshot at index 64 (id=a16f4b15, encoded size=62836, 1 rocksdb batches, 54 log entries)
I180827 20:41:53.389154 52181 storage/replica_raftstorage.go:790  [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.389513 50930 storage/replica_command.go:812  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.392649 50930 storage/replica.go:3743  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.394122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] sending preemptive snapshot 69adabc1 at applied index 23
I180827 20:41:53.394365 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 13, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.394729 52213 storage/replica_raftstorage.go:784  [n2,s2,r2/?:{-}] applying preemptive snapshot at index 23 (id=69adabc1, encoded size=6277, 1 rocksdb batches, 13 log entries)
I180827 20:41:53.394981 52213 storage/replica_raftstorage.go:790  [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.395465 50930 storage/replica_command.go:812  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.398757 50930 storage/replica.go:3743  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.399709 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r18/1:/Table/2{1-2}] sending preemptive snapshot e9df2a4a at applied index 16
I180827 20:41:53.400036 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r18/1:/Table/2{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.400391 52185 storage/replica_raftstorage.go:784  [n3,s3,r18/?:{-}] applying preemptive snapshot at index 16 (id=e9df2a4a, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.400594 52185 storage/replica_raftstorage.go:790  [n3,s3,r18/?:/Table/2{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.400882 50930 storage/replica_command.go:812  [replicate,n1,s1,r18/1:/Table/2{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r18:/Table/2{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.407636 50930 storage/replica.go:3743  [n1,s1,r18/1:/Table/2{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.408861 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r13/1:/Table/1{6-7}] sending preemptive snapshot 6f914d55 at applied index 16
I180827 20:41:53.409071 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.409426 52218 storage/replica_raftstorage.go:784  [n2,s2,r13/?:{-}] applying preemptive snapshot at index 16 (id=6f914d55, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.409616 52218 storage/replica_raftstorage.go:790  [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.409970 50930 storage/replica_command.go:812  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.411262 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.412831 50930 storage/replica.go:3743  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.414081 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot cca961c1 at applied index 16
I180827 20:41:53.414277 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.414576 52199 storage/replica_raftstorage.go:784  [n3,s3,r11/?:{-}] applying preemptive snapshot at index 16 (id=cca961c1, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.414816 52199 storage/replica_raftstorage.go:790  [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.415293 50930 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.418111 50930 storage/replica.go:3743  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.419054 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 3c3a015f at applied index 27
I180827 20:41:53.423022 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1391, log entries: 2, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.423893 52201 storage/replica_raftstorage.go:784  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 27 (id=3c3a015f, encoded size=194658, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.429501 52201 storage/replica_raftstorage.go:790  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=2ms commit=4ms]
I180827 20:41:53.433500 50930 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.437580 50930 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.440575 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot cbd412df at applied index 21
I180827 20:41:53.440794 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 11, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.441181 52260 storage/replica_raftstorage.go:784  [n3,s3,r6/?:{-}] applying preemptive snapshot at index 21 (id=cbd412df, encoded size=4339, 1 rocksdb batches, 11 log entries)
I180827 20:41:53.441400 52260 storage/replica_raftstorage.go:790  [n3,s3,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.441676 50930 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.448564 52224 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.461587 50930 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.463345 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot 114f4385 at applied index 29
I180827 20:41:53.464896 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 59, log entries: 19, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.465343 52280 storage/replica_raftstorage.go:784  [n2,s2,r7/?:{-}] applying preemptive snapshot at index 29 (id=114f4385, encoded size=16646, 1 rocksdb batches, 19 log entries)
I180827 20:41:53.465821 52280 storage/replica_raftstorage.go:790  [n2,s2,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.466988 50930 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.472743 50930 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.474632 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 0a244018 at applied index 114
I180827 20:41:53.475250 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 73, log entries: 90, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.475827 52267 storage/replica_raftstorage.go:784  [n2,s2,r1/?:{-}] applying preemptive snapshot at index 114 (id=0a244018, encoded size=40271, 1 rocksdb batches, 90 log entries)
I180827 20:41:53.476525 52267 storage/replica_raftstorage.go:790  [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.476869 50930 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.482912 50930 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.483281 50930 storage/queue.go:873  [n1,replicate] purgatory is now empty
I180827 20:41:53.485684 52286 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot f1426c69 at applied index 19
I180827 20:41:53.487316 52286 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.487681 52252 storage/replica_raftstorage.go:784  [n3,s3,r20/?:{-}] applying preemptive snapshot at index 19 (id=f1426c69, encoded size=3273, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.487932 52252 storage/replica_raftstorage.go:790  [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.488311 52286 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.503580 52286 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.505707 52235 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 99036b07 at applied index 119
I180827 20:41:53.506514 52235 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 78, log entries: 95, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.507282 52188 storage/replica_raftstorage.go:784  [n3,s3,r1/?:{-}] applying preemptive snapshot at index 119 (id=99036b07, encoded size=42101, 1 rocksdb batches, 95 log entries)
I180827 20:41:53.508109 52188 storage/replica_raftstorage.go:790  [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.508641 52235 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.512524 52235 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.513999 52209 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot bb53109c at applied index 32
I180827 20:41:53.514379 52209 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 60, log entries: 22, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.514821 52292 storage/replica_raftstorage.go:784  [n3,s3,r7/?:{-}] applying preemptive snapshot at index 32 (id=bb53109c, encoded size=17687, 1 rocksdb batches, 22 log entries)
I180827 20:41:53.515905 52292 storage/replica_raftstorage.go:790  [n3,s3,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 1ms [clear=1ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.516367 52209 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.520158 52209 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.521958 52312 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot 2ca43612 at applied index 24
I180827 20:41:53.522776 52312 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 14, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.523128 52239 storage/replica_raftstorage.go:784  [n2,s2,r6/?:{-}] applying preemptive snapshot at index 24 (id=2ca43612, encoded size=5410, 1 rocksdb batches, 14 log entries)
I180827 20:41:53.523377 52239 storage/replica_raftstorage.go:790  [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.523701 52312 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.525176 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 19 underreplicated ranges
I180827 20:41:53.527482 52312 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.528875 52327 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 731be2ae at applied index 30
I180827 20:41:53.532860 52327 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 1392, log entries: 5, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.533361 52316 storage/replica_raftstorage.go:784  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 30 (id=731be2ae, encoded size=195741, 1 rocksdb batches, 5 log entries)
I180827 20:41:53.535834 52316 storage/replica_raftstorage.go:790  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=2ms]
I180827 20:41:53.536253 52327 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.540576 52327 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.545804 52341 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot 7497a95f at applied index 19
I180827 20:41:53.546108 52341 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.546590 52275 storage/replica_raftstorage.go:784  [n2,s2,r11/?:{-}] applying preemptive snapshot at index 19 (id=7497a95f, encoded size=3304, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.546960 52275 storage/replica_raftstorage.go:790  [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.547386 52341 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (

Please assign, take a look, and update the issue accordingly.

teamcity: failed tests on release-banana: test/TestImportPgDump, lint/TestLint

The following tests appear to have failed:

#864629:

--- FAIL: test/TestImportPgDump/read_data_only (0.000s)
Test ended in panic.

------- Stdout: -------
I180827 20:41:54.053559 52667 storage/replica_command.go:298  [n1,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53/1/106 [r24]
I180827 20:41:54.062208 52385 storage/replica_range_lease.go:554  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] transferring lease to s2
I180827 20:41:54.063407 52385 storage/replica_range_lease.go:617  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] done transferring lease to s2: <nil>
I180827 20:41:54.063498 51617 storage/replica_proposal.go:210  [n2,s2,r23/3:/Table/5{2-3/1/106}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.062230012,0 epo=1 pro=1535402514.062232488,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.077824 52355 storage/replica_command.go:298  [n1,s1,r24/1:/{Table/53/1/1…-Max}] initiating a split of this range at key /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,
\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd [r25]
I180827 20:41:54.090278 52643 storage/replica_range_lease.go:554  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] transferring lease to s3
I180827 20:41:54.091255 52643 storage/replica_range_lease.go:617  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] done transferring lease to s3: <nil>
I180827 20:41:54.092073 51863 storage/replica_proposal.go:210  [n3,s3,r24/2:/Table/53/{1/106-2/"\x15…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.090298269,0 epo=1 pro=1535402514.090300825,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.094854 52702 storage/replica_command.go:298  [n1,s1,r25/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n
✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd [r26]
I180827 20:41:54.112580 52726 storage/replica_range_lease.go:554  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] transferring lease to s3
I180827 20:41:54.113319 52726 storage/replica_range_lease.go:617  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] done transferring lease to s3: <nil>
I180827 20:41:54.113657 51850 storage/replica_proposal.go:210  [n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.112607829,0 epo=1 pro=1535402514.112610518,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.117098 52757 storage/replica_command.go:298  [n1,s1,r26/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105 [r27]
I180827 20:41:54.137397 52716 storage/store_snapshot.go:615  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] sending Raft snapshot 547ab8d0 at applied index 21
I180827 20:41:54.140430 52716 storage/store_snapshot.go:657  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] streamed snapshot to (n2,s2):3: kv pairs: 14, log entries: 2, rate-limit: 8.0 MiB/sec, 22ms
I180827 20:41:54.140860 52705 storage/replica_raftstorage.go:784  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applying Raft snapshot at index 21 (id=547ab8d0, encoded size=31270, 1 rocksdb batches, 2 log entries)
I180827 20:41:54.162696 52705 storage/replica_raftstorage.go:790  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applied Raft snapshot in 22ms [clear=0ms batch=0ms entries=21ms commit=0ms]
I180827 20:41:54.166103 52791 storage/replica_range_lease.go:554  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] transferring lease to s3
I180827 20:41:54.167118 51903 storage/replica_proposal.go:210  [n3,s3,r26/2:/Table/53/{2/"\xc0…-3/";π,…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.166156675,0 epo=1 pro=1535402514.166159831,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.167216 52791 storage/replica_range_lease.go:617  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] done transferring lease to s3: <nil>
I180827 20:41:54.172711 52589 storage/replica_command.go:298  [n1,s1,r27/1:/{Table/53/3/"…-Max}] initiating a split of this range at key /Table/54 [r28]
I180827 20:41:54.182740 52807 storage/replica_range_lease.go:554  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] transferring lease to s2
I180827 20:41:54.183947 52807 storage/replica_range_lease.go:617  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] done transferring lease to s2: <nil>
I180827 20:41:54.184954 51646 storage/replica_proposal.go:210  [n2,s2,r27/3:/Table/5{3/3/";π…-4}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.182761052,0 epo=1 pro=1535402514.182764040,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
--- FAIL: test/TestImportPgDump (0.000s)
Test ended in panic.

------- Stdout: -------
W180827 20:41:52.746991 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:52.757923 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:52.758132 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.758156 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.761168 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:52.761191 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:52.761204 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180827 20:41:52.767725 50862 server/node.go:373  [n?] **** cluster d5e53e69-a109-4eb6-91bf-29e74ae744ba has been created
I180827 20:41:52.767752 50862 server/server.go:1401  [n?] **** add additional nodes by specifying --join=127.0.0.1:41477
I180827 20:41:52.767936 50862 gossip/gossip.go:382  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41477" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402512767856449 
I180827 20:41:52.770338 50862 storage/store.go:1541  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180827 20:41:52.770546 50862 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180827 20:41:52.770626 50862 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180827 20:41:52.770721 50862 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180827 20:41:52.770760 50862 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:52.770788 50862 server/node.go:546  [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I180827 20:41:52.771023 50862 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.771066 50862 server/server.go:1807  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:52.771159 50862 server/server.go:1538  [n1] starting https server at 127.0.0.1:42563 (use: 127.0.0.1:42563)
I180827 20:41:52.771188 50862 server/server.go:1540  [n1] starting grpc/postgres server at 127.0.0.1:41477
I180827 20:41:52.771209 50862 server/server.go:1541  [n1] advertising CockroachDB node at 127.0.0.1:41477
I180827 20:41:52.775258 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.776337 50925 storage/replica_command.go:298  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180827 20:41:52.788832 51094 storage/replica_command.go:298  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180827 20:41:52.790188 51128 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=ec083bbe key=/Table/SystemConfigSpan/Start rw=true pri=0.01126188 iso=SERIALIZABLE stat=PENDING epo=0 ts=1535402512.772758792,0 orig=1535402512.772758792,0 max=1535402512.772758792,0 wto=false rop=false seq=6
I180827 20:41:52.790695 51118 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180827 20:41:52.795125 51100 storage/replica_command.go:298  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180827 20:41:52.800783 51143 storage/replica_command.go:298  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180827 20:41:52.807906 51165 storage/replica_command.go:298  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180827 20:41:52.811784 51141 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.0-12 User:root}
I180827 20:41:52.818164 50799 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180827 20:41:52.821094 51188 storage/replica_command.go:298  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180827 20:41:52.830709 51176 storage/replica_command.go:298  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180827 20:41:52.839374 51187 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:045a1c98-219f-445b-bd6b-d481f04d6b0d User:root}
I180827 20:41:52.849534 51154 storage/replica_command.go:298  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180827 20:41:52.855898 51218 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180827 20:41:52.861462 51240 storage/replica_command.go:298  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180827 20:41:52.868342 51268 storage/replica_command.go:298  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180827 20:41:52.872706 51256 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180827 20:41:52.874819 51264 storage/replica_command.go:298  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180827 20:41:52.876403 50862 server/server.go:1594  [n1] done ensuring all necessary migrations have run
I180827 20:41:52.876433 50862 server/server.go:1597  [n1] serving sql connections
I180827 20:41:52.879108 51233 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180827 20:41:52.879639 51235 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:41477} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402512767856449 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402512767856449 LastUp:1535402512767856449}
I180827 20:41:52.880318 51302 storage/replica_command.go:298  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180827 20:41:52.927701 50819 storage/replica_command.go:298  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180827 20:41:52.940165 51323 storage/replica_command.go:298  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180827 20:41:52.948539 51355 storage/replica_command.go:298  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180827 20:41:52.953658 51380 storage/replica_command.go:298  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180827 20:41:52.961237 51137 storage/replica_command.go:298  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180827 20:41:52.966548 50832 storage/replica_command.go:298  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180827 20:41:52.977113 51362 storage/replica_command.go:298  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180827 20:41:53.041315 51440 storage/replica_command.go:298  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180827 20:41:53.047478 51414 storage/replica_command.go:298  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180827 20:41:53.081214 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.089127 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.089322 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.089338 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.102793 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.102863 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.102878 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.102953 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.103001 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.115344 51458 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.125579 51530 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:36113}
I180827 20:41:53.127987 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.128034 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.128397 51575 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.134920 51574 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.135628 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.136461 50862 server/node.go:428  [n?] new node allocated ID 2
I180827 20:41:53.136541 50862 gossip/gossip.go:382  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36113" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513136479434 
I180827 20:41:53.136591 50862 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180827 20:41:53.136624 50862 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180827 20:41:53.137485 51552 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180827 20:41:53.139442 50862 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180827 20:41:53.139577 50862 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180827 20:41:53.140140 50862 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.140166 50862 server/server.go:1807  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.140233 50862 server/server.go:1538  [n2] starting https server at 127.0.0.1:39947 (use: 127.0.0.1:39947)
I180827 20:41:53.140246 50862 server/server.go:1540  [n2] starting grpc/postgres server at 127.0.0.1:36113
I180827 20:41:53.140256 50862 server/server.go:1541  [n2] advertising CockroachDB node at 127.0.0.1:36113
I180827 20:41:53.140624 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.153945 50862 server/server.go:1594  [n2] done ensuring all necessary migrations have run
I180827 20:41:53.153974 50862 server/server.go:1597  [n2] serving sql connections
W180827 20:41:53.165268 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.185802 51467 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180827 20:41:53.186848 51469 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:36113} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513136479434 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513136479434 LastUp:1535402513136479434}
I180827 20:41:53.189622 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.189776 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.189808 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.207782 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.207807 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.207815 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.207911 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.207947 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.211471 51475 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.223653 51740 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.223954 51816 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46463}
I180827 20:41:53.224401 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.224432 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.224690 51837 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.225445 51836 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226030 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226699 50862 server/node.go:428  [n?] new node allocated ID 3
I180827 20:41:53.226763 50862 gossip/gossip.go:382  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46463" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513226706701 
I180827 20:41:53.226805 50862 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180827 20:41:53.226851 50862 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180827 20:41:53.227563 51809 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180827 20:41:53.227869 51810 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180827 20:41:53.228504 50862 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180827 20:41:53.229044 50862 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180827 20:41:53.229696 50862 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.229749 50862 server/server.go:1807  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.235251 50862 server/server.go:1538  [n3] starting https server at 127.0.0.1:43307 (use: 127.0.0.1:43307)
I180827 20:41:53.235271 50862 server/server.go:1540  [n3] starting grpc/postgres server at 127.0.0.1:46463
I180827 20:41:53.235283 50862 server/server.go:1541  [n3] advertising CockroachDB node at 127.0.0.1:46463
I180827 20:41:53.240284 50862 server/server.go:1594  [n3] done ensuring all necessary migrations have run
I180827 20:41:53.240307 50862 server/server.go:1597  [n3] serving sql connections
I180827 20:41:53.243124 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.248117 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot 59e1afc9 at applied index 16
I180827 20:41:53.249136 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.251012 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 6, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.251369 51983 storage/replica_raftstorage.go:784  [n2,s2,r20/?:{-}] applying preemptive snapshot at index 16 (id=59e1afc9, encoded size=2241, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.254056 51839 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180827 20:41:53.255122 51841 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:46463} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513226706701 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513226706701 LastUp:1535402513226706701}
I180827 20:41:53.256061 51983 storage/replica_raftstorage.go:790  [n2,s2,r20/?:/Table/{23-50}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.256605 50930 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.259565 50930 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.261627 51625 rpc/nodedialer/nodedialer.go:92  [n2] connection to n1 established
I180827 20:41:53.264544 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.286630 50930 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r21/1:/Table/5{0-1}] connection to n3 established
I180827 20:41:53.287245 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.287799 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r21/1:/Table/5{0-1}] sending preemptive snapshot de08568a at applied index 18
I180827 20:41:53.288157 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.288623 51959 storage/replica_raftstorage.go:784  [n3,s3,r21/?:{-}] applying preemptive snapshot at index 18 (id=de08568a, encoded size=2646, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.289814 51959 storage/replica_raftstorage.go:790  [n3,s3,r21/?:/Table/5{0-1}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.290329 50930 storage/replica_command.go:812  [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.293678 50930 storage/replica.go:3743  [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.294953 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r22/1:/{Table/51-Max}] sending preemptive snapshot a84e7278 at applied index 12
I180827 20:41:53.295229 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r22/1:/{Table/51-Max}] streamed snapshot to (n3,s3):?: kv pairs: 7, log entries: 2, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.295441 51883 rpc/nodedialer/nodedialer.go:92  [n3] connection to n1 established
I180827 20:41:53.295585 51953 storage/replica_raftstorage.go:784  [n3,s3,r22/?:{-}] applying preemptive snapshot at index 12 (id=a84e7278, encoded size=386, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.295717 51953 storage/replica_raftstorage.go:790  [n3,s3,r22/?:/{Table/51-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.295955 50930 storage/replica_command.go:812  [replicate,n1,s1,r22/1:/{Table/51-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, next=2, gen=0]
I180827 20:41:53.298097 50930 storage/replica.go:3743  [n1,s1,r22/1:/{Table/51-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.301122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r8/1:/Table/1{1-2}] sending preemptive snapshot 201bdccc at applied index 18
I180827 20:41:53.301565 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.306578 52088 storage/replica_raftstorage.go:784  [n3,s3,r8/?:{-}] applying preemptive snapshot at index 18 (id=201bdccc, encoded size=4352, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.306868 52088 storage/replica_raftstorage.go:790  [n3,s3,r8/?:/Table/1{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.307601 50930 storage/replica_command.go:812  [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.311873 50930 storage/replica.go:3743  [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.314134 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r17/1:/Table/2{0-1}] sending preemptive snapshot 53116eb2 at applied index 16
I180827 20:41:53.314317 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r17/1:/Table/2{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.314683 52103 storage/replica_raftstorage.go:784  [n3,s3,r17/?:{-}] applying preemptive snapshot at index 16 (id=53116eb2, encoded size=2105, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.314887 52103 storage/replica_raftstorage.go:790  [n3,s3,r17/?:/Table/2{0-1}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.315401 50930 storage/replica_command.go:812  [replicate,n1,s1,r17/1:/Table/2{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r17:/Table/2{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.318398 50930 storage/replica.go:3743  [n1,s1,r17/1:/Table/2{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.319436 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r16/1:/Table/{19-20}] sending preemptive snapshot e0be8540 at applied index 16
I180827 20:41:53.319691 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.320127 52072 storage/replica_raftstorage.go:784  [n2,s2,r16/?:{-}] applying preemptive snapshot at index 16 (id=e0be8540, encoded size=2109, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.320339 52072 storage/replica_raftstorage.go:790  [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.320816 50930 storage/replica_command.go:812  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.323849 50930 storage/replica.go:3743  [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.326208 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r15/1:/Table/1{8-9}] sending preemptive snapshot d259ae5c at applied index 16
I180827 20:41:53.326404 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.326731 52116 storage/replica_raftstorage.go:784  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 16 (id=d259ae5c, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.326923 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.326953 52116 storage/replica_raftstorage.go:790  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.334514 50930 storage/replica_command.go:812  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.337656 50930 storage/replica.go:3743  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.338767 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r14/1:/Table/1{7-8}] sending preemptive snapshot 9d0058d5 at applied index 16
I180827 20:41:53.339034 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.339612 52090 storage/replica_raftstorage.go:784  [n2,s2,r14/?:{-}] applying preemptive snapshot at index 16 (id=9d0058d5, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.339831 52090 storage/replica_raftstorage.go:790  [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.340173 50930 storage/replica_command.go:812  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.343121 50930 storage/replica.go:3743  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.345432 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r9/1:/Table/1{2-3}] sending preemptive snapshot 0eea2d20 at applied index 26
I180827 20:41:53.345859 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 53, log entries: 16, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.347137 52066 storage/replica_raftstorage.go:784  [n2,s2,r9/?:{-}] applying preemptive snapshot at index 26 (id=0eea2d20, encoded size=15139, 1 rocksdb batches, 16 log entries)
I180827 20:41:53.347467 52066 storage/replica_raftstorage.go:790  [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.348208 50930 storage/replica_command.go:812  [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.352166 50930 storage/replica.go:3743  [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.353188 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] sending preemptive snapshot 0cdee511 at applied index 39
I180827 20:41:53.353765 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 36, log entries: 29, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.354286 51723 storage/replica_raftstorage.go:784  [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=0cdee511, encoded size=98384, 1 rocksdb batches, 29 log entries)
I180827 20:41:53.354994 51723 storage/replica_raftstorage.go:790  [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.355529 50930 storage/replica_command.go:812  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.358523 50930 storage/replica.go:3743  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.360250 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] sending preemptive snapshot 965d58b1 at applied index 19
I180827 20:41:53.360436 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.360789 52150 storage/replica_raftstorage.go:784  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 19 (id=965d58b1, encoded size=4003, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.361043 52150 storage/replica_raftstorage.go:790  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.361522 50930 storage/replica_command.go:812  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.364392 50930 storage/replica.go:3743  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.366422 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r12/1:/Table/1{5-6}] sending preemptive snapshot 811af376 at applied index 16
I180827 20:41:53.366638 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.367089 52137 storage/replica_raftstorage.go:784  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 16 (id=811af376, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.367359 52137 storage/replica_raftstorage.go:790  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.368127 50930 storage/replica_command.go:812  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.371691 50930 storage/replica.go:3743  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.374563 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r19/1:/Table/2{2-3}] sending preemptive snapshot 9cd02555 at applied index 16
I180827 20:41:53.374760 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.375252 52080 storage/replica_raftstorage.go:784  [n3,s3,r19/?:{-}] applying preemptive snapshot at index 16 (id=9cd02555, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.375582 52080 storage/replica_raftstorage.go:790  [n3,s3,r19/?:/Table/2{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.375950 50930 storage/replica_command.go:812  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.381819 50930 storage/replica.go:3743  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.386461 52091 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180827 20:41:53.386637 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r10/1:/Table/1{3-4}] sending preemptive snapshot a16f4b15 at applied index 64
I180827 20:41:53.388005 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 204, log entries: 54, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.388536 52181 storage/replica_raftstorage.go:784  [n3,s3,r10/?:{-}] applying preemptive snapshot at index 64 (id=a16f4b15, encoded size=62836, 1 rocksdb batches, 54 log entries)
I180827 20:41:53.389154 52181 storage/replica_raftstorage.go:790  [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.389513 50930 storage/replica_command.go:812  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.392649 50930 storage/replica.go:3743  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.394122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] sending preemptive snapshot 69adabc1 at applied index 23
I180827 20:41:53.394365 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 13, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.394729 52213 storage/replica_raftstorage.go:784  [n2,s2,r2/?:{-}] applying preemptive snapshot at index 23 (id=69adabc1, encoded size=6277, 1 rocksdb batches, 13 log entries)
I180827 20:41:53.394981 52213 storage/replica_raftstorage.go:790  [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.395465 50930 storage/replica_command.go:812  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.398757 50930 storage/replica.go:3743  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.399709 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r18/1:/Table/2{1-2}] sending preemptive snapshot e9df2a4a at applied index 16
I180827 20:41:53.400036 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r18/1:/Table/2{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.400391 52185 storage/replica_raftstorage.go:784  [n3,s3,r18/?:{-}] applying preemptive snapshot at index 16 (id=e9df2a4a, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.400594 52185 storage/replica_raftstorage.go:790  [n3,s3,r18/?:/Table/2{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.400882 50930 storage/replica_command.go:812  [replicate,n1,s1,r18/1:/Table/2{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r18:/Table/2{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.407636 50930 storage/replica.go:3743  [n1,s1,r18/1:/Table/2{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.408861 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r13/1:/Table/1{6-7}] sending preemptive snapshot 6f914d55 at applied index 16
I180827 20:41:53.409071 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.409426 52218 storage/replica_raftstorage.go:784  [n2,s2,r13/?:{-}] applying preemptive snapshot at index 16 (id=6f914d55, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.409616 52218 storage/replica_raftstorage.go:790  [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.409970 50930 storage/replica_command.go:812  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.411262 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.412831 50930 storage/replica.go:3743  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.414081 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot cca961c1 at applied index 16
I180827 20:41:53.414277 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.414576 52199 storage/replica_raftstorage.go:784  [n3,s3,r11/?:{-}] applying preemptive snapshot at index 16 (id=cca961c1, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.414816 52199 storage/replica_raftstorage.go:790  [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.415293 50930 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.418111 50930 storage/replica.go:3743  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.419054 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 3c3a015f at applied index 27
I180827 20:41:53.423022 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1391, log entries: 2, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.423893 52201 storage/replica_raftstorage.go:784  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 27 (id=3c3a015f, encoded size=194658, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.429501 52201 storage/replica_raftstorage.go:790  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=2ms commit=4ms]
I180827 20:41:53.433500 50930 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.437580 50930 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.440575 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot cbd412df at applied index 21
I180827 20:41:53.440794 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 11, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.441181 52260 storage/replica_raftstorage.go:784  [n3,s3,r6/?:{-}] applying preemptive snapshot at index 21 (id=cbd412df, encoded size=4339, 1 rocksdb batches, 11 log entries)
I180827 20:41:53.441400 52260 storage/replica_raftstorage.go:790  [n3,s3,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.441676 50930 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.448564 52224 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.461587 50930 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.463345 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot 114f4385 at applied index 29
I180827 20:41:53.464896 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 59, log entries: 19, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.465343 52280 storage/replica_raftstorage.go:784  [n2,s2,r7/?:{-}] applying preemptive snapshot at index 29 (id=114f4385, encoded size=16646, 1 rocksdb batches, 19 log entries)
I180827 20:41:53.465821 52280 storage/replica_raftstorage.go:790  [n2,s2,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.466988 50930 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.472743 50930 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.474632 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 0a244018 at applied index 114
I180827 20:41:53.475250 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 73, log entries: 90, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.475827 52267 storage/replica_raftstorage.go:784  [n2,s2,r1/?:{-}] applying preemptive snapshot at index 114 (id=0a244018, encoded size=40271, 1 rocksdb batches, 90 log entries)
I180827 20:41:53.476525 52267 storage/replica_raftstorage.go:790  [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.476869 50930 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.482912 50930 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.483281 50930 storage/queue.go:873  [n1,replicate] purgatory is now empty
I180827 20:41:53.485684 52286 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot f1426c69 at applied index 19
I180827 20:41:53.487316 52286 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.487681 52252 storage/replica_raftstorage.go:784  [n3,s3,r20/?:{-}] applying preemptive snapshot at index 19 (id=f1426c69, encoded size=3273, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.487932 52252 storage/replica_raftstorage.go:790  [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.488311 52286 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.503580 52286 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.505707 52235 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 99036b07 at applied index 119
I180827 20:41:53.506514 52235 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 78, log entries: 95, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.507282 52188 storage/replica_raftstorage.go:784  [n3,s3,r1/?:{-}] applying preemptive snapshot at index 119 (id=99036b07, encoded size=42101, 1 rocksdb batches, 95 log entries)
I180827 20:41:53.508109 52188 storage/replica_raftstorage.go:790  [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.508641 52235 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.512524 52235 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.513999 52209 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot bb53109c at applied index 32
I180827 20:41:53.514379 52209 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 60, log entries: 22, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.514821 52292 storage/replica_raftstorage.go:784  [n3,s3,r7/?:{-}] applying preemptive snapshot at index 32 (id=bb53109c, encoded size=17687, 1 rocksdb batches, 22 log entries)
I180827 20:41:53.515905 52292 storage/replica_raftstorage.go:790  [n3,s3,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 1ms [clear=1ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.516367 52209 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.520158 52209 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.521958 52312 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot 2ca43612 at applied index 24
I180827 20:41:53.522776 52312 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 14, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.523128 52239 storage/replica_raftstorage.go:784  [n2,s2,r6/?:{-}] applying preemptive snapshot at index 24 (id=2ca43612, encoded size=5410, 1 rocksdb batches, 14 log entries)
I180827 20:41:53.523377 52239 storage/replica_raftstorage.go:790  [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.523701 52312 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.525176 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 19 underreplicated ranges
I180827 20:41:53.527482 52312 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.528875 52327 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 731be2ae at applied index 30
I180827 20:41:53.532860 52327 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 1392, log entries: 5, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.533361 52316 storage/replica_raftstorage.go:784  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 30 (id=731be2ae, encoded size=195741, 1 rocksdb batches, 5 log entries)
I180827 20:41:53.535834 52316 storage/replica_raftstorage.go:790  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=2ms]
I180827 20:41:53.536253 52327 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.540576 52327 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.545804 52341 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot 7497a95f at applied index 19
I180827 20:41:53.546108 52341 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.546590 52275 storage/replica_raftstorage.go:784  [n2,s2,r11/?:{-}] applying preemptive snapshot at index 19 (id=7497a95f, encoded size=3304, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.546960 52275 storage/replica_raftstorage.go:790  [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.547386 52341 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (

Please assign, take a look and update the issue accordingly.

teamcity: failed test: test/TestImportPgDump

The following tests appear to have failed on release-banana.
You may want to check for open issues.

#864629:

--- FAIL: test/TestImportPgDump/read_data_only (0.000s)
Test ended in panic.

------- Stdout: -------
I180827 20:41:54.053559 52667 storage/replica_command.go:298  [n1,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53/1/106 [r24]
I180827 20:41:54.062208 52385 storage/replica_range_lease.go:554  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] transferring lease to s2
I180827 20:41:54.063407 52385 storage/replica_range_lease.go:617  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] done transferring lease to s2: <nil>
I180827 20:41:54.063498 51617 storage/replica_proposal.go:210  [n2,s2,r23/3:/Table/5{2-3/1/106}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.062230012,0 epo=1 pro=1535402514.062232488,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.077824 52355 storage/replica_command.go:298  [n1,s1,r24/1:/{Table/53/1/1…-Max}] initiating a split of this range at key /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,
\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd [r25]
I180827 20:41:54.090278 52643 storage/replica_range_lease.go:554  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] transferring lease to s3
I180827 20:41:54.091255 52643 storage/replica_range_lease.go:617  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] done transferring lease to s3: <nil>
I180827 20:41:54.092073 51863 storage/replica_proposal.go:210  [n3,s3,r24/2:/Table/53/{1/106-2/"\x15…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.090298269,0 epo=1 pro=1535402514.090300825,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.094854 52702 storage/replica_command.go:298  [n1,s1,r25/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n
✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd [r26]
I180827 20:41:54.112580 52726 storage/replica_range_lease.go:554  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] transferring lease to s3
I180827 20:41:54.113319 52726 storage/replica_range_lease.go:617  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] done transferring lease to s3: <nil>
I180827 20:41:54.113657 51850 storage/replica_proposal.go:210  [n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.112607829,0 epo=1 pro=1535402514.112610518,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.117098 52757 storage/replica_command.go:298  [n1,s1,r26/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105 [r27]
I180827 20:41:54.137397 52716 storage/store_snapshot.go:615  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] sending Raft snapshot 547ab8d0 at applied index 21
I180827 20:41:54.140430 52716 storage/store_snapshot.go:657  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] streamed snapshot to (n2,s2):3: kv pairs: 14, log entries: 2, rate-limit: 8.0 MiB/sec, 22ms
I180827 20:41:54.140860 52705 storage/replica_raftstorage.go:784  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applying Raft snapshot at index 21 (id=547ab8d0, encoded size=31270, 1 rocksdb batches, 2 log entries)
I180827 20:41:54.162696 52705 storage/replica_raftstorage.go:790  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applied Raft snapshot in 22ms [clear=0ms batch=0ms entries=21ms commit=0ms]
I180827 20:41:54.166103 52791 storage/replica_range_lease.go:554  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] transferring lease to s3
I180827 20:41:54.167118 51903 storage/replica_proposal.go:210  [n3,s3,r26/2:/Table/53/{2/"\xc0…-3/";π,…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.166156675,0 epo=1 pro=1535402514.166159831,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.167216 52791 storage/replica_range_lease.go:617  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] done transferring lease to s3: <nil>
I180827 20:41:54.172711 52589 storage/replica_command.go:298  [n1,s1,r27/1:/{Table/53/3/"…-Max}] initiating a split of this range at key /Table/54 [r28]
I180827 20:41:54.182740 52807 storage/replica_range_lease.go:554  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] transferring lease to s2
I180827 20:41:54.183947 52807 storage/replica_range_lease.go:617  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] done transferring lease to s2: <nil>
I180827 20:41:54.184954 51646 storage/replica_proposal.go:210  [n2,s2,r27/3:/Table/5{3/3/";π…-4}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.182761052,0 epo=1 pro=1535402514.182764040,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0

Please assign, take a look and update the issue accordingly.

Test failure in CI build 422

The following test appears to have failed:

#422:

I0331 16:21:52.388774     257 multiraft.go:644] Committed Entry[0]: 6/13 EntryNormal 13d0a1bc6472fd3a45a6e3cafccae224: raft_id:1 cmd:<increment:<header:<timestamp:<wall_time:0 logical:7 > cmd_id:<wall_time:1427818912387628346 random:5018949295715050020 > user:"" repli
I0331 16:21:52.389341     257 kv.go:125] failed Increment: mvcc.go:450: attempted access to empty key
I0331 16:21:52.389396     257 retry.go:85]  failed an iteration: mvcc.go:450: attempted access to empty key
I0331 16:21:52.389444     257 retry.go:100]  failed; retrying in 50ms
==================
WARNING: DATA RACE
Write by goroutine 54:
  github.com/cockroachdb/cockroach/storage.TestAllocateErrorAndRecovery()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc_test.go:162 +0x874
  testing.tRunner()
      /usr/src/go/src/testing/testing.go:447 +0x133

Previous read by goroutine 74:
  github.com/cockroachdb/cockroach/storage.func·009()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:114 +0x24e
  github.com/cockroachdb/cockroach/util.RetryWithBackoff()
      /go/src/github.com/cockroachdb/cockroach/util/retry.go:80 +0x6d
  github.com/cockroachdb/cockroach/storage.(*IDAllocator).allocateBlock()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:118 +0x158
  github.com/cockroachdb/cockroach/storage.func·008()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:92 +0x7a

Goroutine 54 (running) created at:
--
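The DATA RACE report above is from the real `id_alloc` test, where one goroutine writes state that another still reads. As a generic, hypothetical illustration of the invariant the race detector enforces — not the repository's allocator implementation — a shared counter that is only touched under a mutex passes `go run -race` / `go test -race` cleanly:

```go
package main

import (
	"fmt"
	"sync"
)

// idAllocator guards its counter with a mutex so concurrent callers
// never race on the shared field. (Illustrative only; the actual
// IDAllocator in storage/id_alloc.go works differently.)
type idAllocator struct {
	mu   sync.Mutex
	next int
}

func (a *idAllocator) allocate() int {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.next++
	return a.next
}

// concurrentAllocate performs n allocations from n goroutines and
// returns the final counter value; with the mutex it is always n.
func concurrentAllocate(n int) int {
	a := &idAllocator{}
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			a.allocate()
		}()
	}
	wg.Wait() // establishes happens-before, so reading a.next is safe here
	return a.next
}

func main() {
	fmt.Println(concurrentAllocate(100)) // prints 100
}
```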
I0331 16:21:56.238552     257 multiraft.go:641] New Entry[0]: 6/20 EntryConfChange "\b\x00\x10\x00\x18\x82\x04\"\xce\x01\x00\x13С\xbdH\xb8\x01k\x1f\xcc \xae\xae\xd4\xe9\xa8\x10\x01\x1a\xb8\x01J\xb5\x01\n\x90\x01\n\x04\b\x00\x10\r\x12\x14\b\xeb\x82\xe0\xc5Է\xa8\x
I0331 16:21:56.238774     257 multiraft.go:644] Committed Entry[0]: 6/20 EntryConfChange "\b\x00\x10\x00\x18\x82\x04\"\xce\x01\x00\x13С\xbdH\xb8\x01k\x1f\xcc \xae\xae\xd4\xe9\xa8\x10\x01\x1a\xb8\x01J\xb5\x01\n\x90\x01\n\x04\b\x00\x10\r\x12\x14\b\xeb\x82\xe0\xc5Է\xa8\x
I0331 16:21:56.240845     257 multiraft.go:751] node 257 applying configuration change {0 ConfChangeAddNode 514 [0 19 208 161 189 72 184 1 107 31 204 32 174 174 212 233 168 16 1 26 184 1 74 181 1 10 144 1 10 4 8 0 16 13 18 20 8 235 130 224 197 212 183 168 232 19 16 168 211 211 246 234 149 136 230 31 26 0 42 4 114 111 111 116 50 8 8 1 1 16 1 1 26 0 56 1 74 94 10 20 99 104 97 110 103 101 32 114 101 112 108 105 99 97 115 32 111 102 32 49 18 0 26 36 99 102 102 100 54 56 51 53 45 53 51 97 53 45 52 57 54 56 45 97 97 54 102 45 51 55 56 99 99 102 48 49 97 57 99 49 32 215 192 144 206 4 40 0 48 0 56 0 74 4 8 0 16 13 82 4 8 0 16 13 90 4 8 0 16 13 98 0 16 1 26 30 26 28 8 1 2 16 1 2 24 0 34 8 8 1 1 16 1 1 26 0 34 8 8 1 2 16 1 2 26 0] []}
I0331 16:21:56.243841     257 multiraft.go:633] node 257: group 1 raft ready
I0331 16:21:56.243967     257 multiraft.go:650] Outgoing Message[0]: 257->514 MsgHeartbeat Term:6 Log:0/0
--- FAIL: TestFailedReplicaChange (0.14s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0331 16:21:56.525952     257 multiraft.go:448] node 257: group 1 got message 514->257 MsgAppResp Term:6 Log:0/18
I0331 16:21:56.526382     257 multiraft.go:633] node 514: group 1 raft ready
I0331 16:21:56.526546     257 multiraft.go:650] Outgoing Message[0]: 514->257 MsgHeartbeatResp Term:6 Log:0/0
I0331 16:21:56.526763     257 multiraft.go:448] node 257: group 1 got message 514->257 MsgAppResp Term:6 Log:0/18
W0331 16:21:56.527293     257 multiraft.go:828] node 514 failed to send message to 257
--- FAIL: TestReplicateAfterTruncation (0.28s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:316 +0x28d
=== RUN TestStoreRangeSplitAtIllegalKeys
I0331 16:21:56.620350     257 multiraft.go:407] node 257 starting
--- FAIL: TestStoreRangeSplitAtIllegalKeys (0.09s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0331 16:21:56.745668     257 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:18 XXX_unrecognized:[]}
I0331 16:21:56.749485     257 multiraft.go:641] New Entry[0]: 6/18 EntryNormal 13d0a1bd66c85575258a577a01a6b56b: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:8 > cmd_id:<wall_time:1427818916721743221 random:2705070707714733419 > key:"a"
I0331 16:21:56.750659     257 multiraft.go:644] Committed Entry[0]: 6/18 EntryNormal 13d0a1bd66c85575258a577a01a6b56b: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:8 > cmd_id:<wall_time:1427818916721743221 random:2705070707714733419 > key:"a"
I0331 16:21:56.755494     257 raft.go:390] raft: 101 became follower at term 5
I0331 16:21:56.755807     257 raft.go:207] raft: newRaft 101 [peers: [101], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
--- FAIL: TestStoreRangeSplitAtRangeBounds (0.14s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0331 16:21:56.905591     257 multiraft.go:641] New Entry[1]: 6/20 EntryNormal 000000000000000050571f18021cfcee: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta1\377\377" user:"
I0331 16:21:56.906291     257 multiraft.go:641] New Entry[2]: 6/21 EntryNormal 0000000000000000298df029d6fc440f: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2a" user:"root" r
I0331 16:21:56.907067     257 multiraft.go:644] Committed Entry[0]: 6/19 EntryNormal 00000000000000003ecd1e7b980a389c: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\000\001rdsc" us
I0331 16:21:56.907782     257 multiraft.go:644] Committed Entry[1]: 6/20 EntryNormal 000000000000000050571f18021cfcee: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta1\377\377" user:"
I0331 16:21:56.908485     257 multiraft.go:644] Committed Entry[2]: 6/21 EntryNormal 0000000000000000298df029d6fc440f: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2a" user:"root" r
--- FAIL: TestStoreRangeSplitConcurrent (0.16s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0331 16:21:57.044982     257 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:22 XXX_unrecognized:[]}
I0331 16:21:57.045979     257 multiraft.go:641] New Entry[0]: 6/22 EntryNormal 13d0a1bd78d254e01d41f3b098c1293e: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:12 > cmd_id:<wall_time:1427818917024388320 random:2108234040388692286 > key:"m
I0331 16:21:57.046955     257 multiraft.go:644] Committed Entry[0]: 6/22 EntryNormal 13d0a1bd78d254e01d41f3b098c1293e: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:12 > cmd_id:<wall_time:1427818917024388320 random:2108234040388692286 > key:"m
I0331 16:21:57.051002     257 raft.go:390] raft: 101 became follower at term 5
I0331 16:21:57.051243     257 raft.go:207] raft: newRaft 101 [peers: [101], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
--- FAIL: TestStoreRangeSplit (0.16s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0331 16:21:57.598036     257 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:115 XXX_unrecognized:[]}
I0331 16:21:57.599009     257 multiraft.go:641] New Entry[0]: 6/115 EntryNormal 13d0a1bd99d69e2864270c8e8aebcdde: raft_id:2 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:228 > cmd_id:<wall_time:1427818917578317352 random:7216750734240107998 > key:
I0331 16:21:57.599962     257 multiraft.go:644] Committed Entry[0]: 6/115 EntryNormal 13d0a1bd99d69e2864270c8e8aebcdde: raft_id:2 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:228 > cmd_id:<wall_time:1427818917578317352 random:7216750734240107998 > key:
I0331 16:21:57.606780     257 raft.go:390] raft: 101 became follower at term 5
I0331 16:21:57.607014     257 raft.go:207] raft: newRaft 101 [peers: [101], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
--- FAIL: TestStoreRangeSplitStats (0.54s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0331 16:21:58.765338     257 queue.go:211] processed range range=1 (""-"testZHMtQoLrvcssrtoMUpwiwLoiYLgAbUibFATwYLcrfzqyLotKmLcdRBEIZmDgeHbtygdWYtWlXDsXTwEYHjPgGFXiwfrGuOhjMCPH") from split queue in 212.509µs
I0331 16:21:58.766328     257 multiraft.go:633] node 257: group 1 raft ready
I0331 16:21:58.766390     257 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:265 XXX_unrecognized:[]}
I0331 16:21:58.767235     257 multiraft.go:641] New Entry[0]: 6/265 EntryNormal 000000000000000024b47e7e8557a7f7: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:495 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user
I0331 16:21:58.767980     257 multiraft.go:644] Committed Entry[0]: 6/265 EntryNormal 000000000000000024b47e7e8557a7f7: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:495 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user
--- FAIL: TestStoreZoneUpdateAndRangeSplit (1.17s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0331 16:21:59.302254     257 multiraft.go:644] Committed Entry[0]: 6/11 EntryNormal [empty]
I0331 16:21:59.302920     257 multiraft.go:633] node 257: group 6 raft ready
I0331 16:21:59.302972     257 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:12 XXX_unrecognized:[]}
I0331 16:21:59.303685     257 multiraft.go:641] New Entry[0]: 6/12 EntryNormal 00000000000000000cc74ebb26459b61: raft_id:6 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:152 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000kdb2\000\001rdsc
I0331 16:21:59.304334     257 multiraft.go:644] Committed Entry[0]: 6/12 EntryNormal 00000000000000000cc74ebb26459b61: raft_id:6 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:152 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000kdb2\000\001rdsc
--- FAIL: TestStoreRangeSplitOnConfigs (0.56s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0331 16:21:59.591273     257 multiraft.go:644] Committed Entry[0]: 6/45 EntryNormal 13d0a1be111fe5f05775a5acfcfe9550: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:39 > cmd_id:<wall_time:1427818919579608560 random:6302125415972377936 > key:"\000\000meta2
I0331 16:21:59.593271     257 multiraft.go:633] node 257: group 1 raft ready
I0331 16:21:59.593333     257 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:46 XXX_unrecognized:[]}
I0331 16:21:59.593855     257 multiraft.go:641] New Entry[0]: 6/46 EntryNormal 13d0a1be112053bf4c94c863da93a71c: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:40 > cmd_id:<wall_time:1427818919579636671 random:5518255774630127388 > key:"\000\000meta1
I0331 16:21:59.594365     257 multiraft.go:644] Committed Entry[0]: 6/46 EntryNormal 13d0a1be112053bf4c94c863da93a71c: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:40 > cmd_id:<wall_time:1427818919579636671 random:5518255774630127388 > key:"\000\000meta1
--- FAIL: TestUpdateRangeAddressing (0.26s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:316 +0x28d
=== RUN TestUpdateRangeAddressingSplitMeta1
I0331 16:21:59.689296     257 multiraft.go:407] node 257 starting
--- FAIL: TestUpdateRangeAddressingSplitMeta1 (0.10s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:316 +0x28d
=== RUN TestSetupRangeTree
I0331 16:21:59.791262     257 multiraft.go:407] node 257 starting
--- FAIL: TestSetupRangeTree (0.09s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/util/leaktest/leaktest.go:34 +0x36
        github.com/cockroachdb/cockroach/storage.TestMain(0xc20802eb40)
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:316 +0x28d
FAIL
FAIL    github.com/cockroachdb/cockroach/storage    7.861s
=== RUN TestBatchBasics
--- PASS: TestBatchBasics (0.00s)
=== RUN TestBatchGet
--- PASS: TestBatchGet (0.00s)
=== RUN TestBatchMerge
--- PASS: TestBatchMerge (0.00s)
=== RUN TestBatchProto
--- PASS: TestBatchProto (0.00s)
=== RUN TestBatchScan
--- PASS: TestBatchScan (0.00s)
I0331 16:21:52.388774     257 multiraft.go:644] Committed Entry[0]: 6/13 EntryNormal 13d0a1bc6472fd3a45a6e3cafccae224: raft_id:1 cmd:<increment:<header:<timestamp:<wall_time:0 logical:7 > cmd_id:<wall_time:1427818912387628346 random:5018949295715050020 > user:"" repli
I0331 16:21:52.389341     257 kv.go:125] failed Increment: mvcc.go:450: attempted access to empty key
I0331 16:21:52.389396     257 retry.go:85]  failed an iteration: mvcc.go:450: attempted access to empty key
I0331 16:21:52.389444     257 retry.go:100]  failed; retrying in 50ms
==================
WARNING: DATA RACE
Write by goroutine 54:
  github.com/cockroachdb/cockroach/storage.TestAllocateErrorAndRecovery()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc_test.go:162 +0x874
  testing.tRunner()
      /usr/src/go/src/testing/testing.go:447 +0x133

Previous read by goroutine 74:
  github.com/cockroachdb/cockroach/storage.func·009()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:114 +0x24e
  github.com/cockroachdb/cockroach/util.RetryWithBackoff()
      /go/src/github.com/cockroachdb/cockroach/util/retry.go:80 +0x6d
  github.com/cockroachdb/cockroach/storage.(*IDAllocator).allocateBlock()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:118 +0x158
  github.com/cockroachdb/cockroach/storage.func·008()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:92 +0x7a

Goroutine 54 (running) created at:
--
I0331 16:21:56.238552     257 multiraft.go:641] New Entry[0]: 6/20 EntryConfChange "\b\x00\x10\x00\x18\x82\x04\"\xce\x01\x00\x13С\xbdH\xb8\x01k\x1f\xcc \xae\xae\xd4\xe9\xa8\x10\x01\x1a\xb8\x01J\xb5\x01\n\x90\x01\n\x04\b\x00\x10\r\x12\x14\b\xeb\x82\xe0\xc5Է\xa8\x
I0331 16:21:56.238774     257 multiraft.go:644] Committed Entry[0]: 6/20 EntryConfChange "\b\x00\x10\x00\x18\x82\x04\"\xce\x01\x00\x13С\xbdH\xb8\x01k\x1f\xcc \xae\xae\xd4\xe9\xa8\x10\x01\x1a\xb8\x01J\xb5\x01\n\x90\x01\n\x04\b\x00\x10\r\x12\x14\b\xeb\x82\xe0\xc5Է\xa8\x
I0331 16:21:56.240845     257 multiraft.go:751] node 257 applying configuration change {0 ConfChangeAddNode 514 [0 19 208 161 189 72 184 1 107 31 204 32 174 174 212 233 168 16 1 26 184 1 74 181 1 10 144 1 10 4 8 0 16 13 18 20 8 235 130 224 197 212 183 168 232 19 16 168 211 211 246 234 149 136 230 31 26 0 42 4 114 111 111 116 50 8 8 1 1 16 1 1 26 0 56 1 74 94 10 20 99 104 97 110 103 101 32 114 101 112 108 105 99 97 115 32 111 102 32 49 18 0 26 36 99 102 102 100 54 56 51 53 45 53 51 97 53 45 52 57 54 56 45 97 97 54 102 45 51 55 56 99 99 102 48 49 97 57 99 49 32 215 192 144 206 4 40 0 48 0 56 0 74 4 8 0 16 13 82 4 8 0 16 13 90 4 8 0 16 13 98 0 16 1 26 30 26 28 8 1 2 16 1 2 24 0 34 8 8 1 1 16 1 1 26 0 34 8 8 1 2 16 1 2 26 0] []}
I0331 16:21:56.243841     257 multiraft.go:633] node 257: group 1 raft ready
I0331 16:21:56.243967     257 multiraft.go:650] Outgoing Message[0]: 257->514 MsgHeartbeat Term:6 Log:0/0
--- FAIL: TestFailedReplicaChange (0.14s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0331 16:21:56.525952     257 multiraft.go:448] node 257: group 1 got message 514->257 MsgAppResp Term:6 Log:0/18
I0331 16:21:56.526382     257 multiraft.go:633] node 514: group 1 raft ready
I0331 16:21:56.526546     257 multiraft.go:650] Outgoing Message[0]: 514->257 MsgHeartbeatResp Term:6 Log:0/0
I0331 16:21:56.526763     257 multiraft.go:448] node 257: group 1 got message 514->257 MsgAppResp Term:6 Log:0/18
W0331 16:21:56.527293     257 multiraft.go:828] node 514 failed to send message to 257
--- FAIL: TestReplicateAfterTruncation (0.28s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:316 +0x28d
=== RUN TestStoreRangeSplitAtIllegalKeys
I0331 16:21:56.620350     257 multiraft.go:407] node 257 starting
--- FAIL: TestStoreRangeSplitAtIllegalKeys (0.09s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc208678060, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc208678060, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc208678000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x7fd528c66e20, 0xc208542948)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0xc208631640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc2088b4000, 0xc208544000, 0x1000, 0x1000, 0x0, 0x0, 0x0)

Please assign, take a look, and update the issue accordingly.

teamcity: failed test: test/TestImportPgDump

The following tests appear to have failed on release-banana.
You may want to check for open issues.

#864629:

--- FAIL: test/TestImportPgDump (0.000s)
Test ended in panic.

------- Stdout: -------
W180827 20:41:52.746991 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:52.757923 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:52.758132 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.758156 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.761168 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:52.761191 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:52.761204 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180827 20:41:52.767725 50862 server/node.go:373  [n?] **** cluster d5e53e69-a109-4eb6-91bf-29e74ae744ba has been created
I180827 20:41:52.767752 50862 server/server.go:1401  [n?] **** add additional nodes by specifying --join=127.0.0.1:41477
I180827 20:41:52.767936 50862 gossip/gossip.go:382  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41477" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402512767856449 
I180827 20:41:52.770338 50862 storage/store.go:1541  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180827 20:41:52.770546 50862 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180827 20:41:52.770626 50862 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180827 20:41:52.770721 50862 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180827 20:41:52.770760 50862 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:52.770788 50862 server/node.go:546  [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I180827 20:41:52.771023 50862 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.771066 50862 server/server.go:1807  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:52.771159 50862 server/server.go:1538  [n1] starting https server at 127.0.0.1:42563 (use: 127.0.0.1:42563)
I180827 20:41:52.771188 50862 server/server.go:1540  [n1] starting grpc/postgres server at 127.0.0.1:41477
I180827 20:41:52.771209 50862 server/server.go:1541  [n1] advertising CockroachDB node at 127.0.0.1:41477
I180827 20:41:52.775258 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.776337 50925 storage/replica_command.go:298  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180827 20:41:52.788832 51094 storage/replica_command.go:298  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180827 20:41:52.790188 51128 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=ec083bbe key=/Table/SystemConfigSpan/Start rw=true pri=0.01126188 iso=SERIALIZABLE stat=PENDING epo=0 ts=1535402512.772758792,0 orig=1535402512.772758792,0 max=1535402512.772758792,0 wto=false rop=false seq=6
I180827 20:41:52.790695 51118 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180827 20:41:52.795125 51100 storage/replica_command.go:298  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180827 20:41:52.800783 51143 storage/replica_command.go:298  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180827 20:41:52.807906 51165 storage/replica_command.go:298  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180827 20:41:52.811784 51141 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.0-12 User:root}
I180827 20:41:52.818164 50799 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180827 20:41:52.821094 51188 storage/replica_command.go:298  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180827 20:41:52.830709 51176 storage/replica_command.go:298  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180827 20:41:52.839374 51187 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:045a1c98-219f-445b-bd6b-d481f04d6b0d User:root}
I180827 20:41:52.849534 51154 storage/replica_command.go:298  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180827 20:41:52.855898 51218 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180827 20:41:52.861462 51240 storage/replica_command.go:298  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180827 20:41:52.868342 51268 storage/replica_command.go:298  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180827 20:41:52.872706 51256 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180827 20:41:52.874819 51264 storage/replica_command.go:298  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180827 20:41:52.876403 50862 server/server.go:1594  [n1] done ensuring all necessary migrations have run
I180827 20:41:52.876433 50862 server/server.go:1597  [n1] serving sql connections
I180827 20:41:52.879108 51233 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180827 20:41:52.879639 51235 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:41477} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402512767856449 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402512767856449 LastUp:1535402512767856449}
I180827 20:41:52.880318 51302 storage/replica_command.go:298  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180827 20:41:52.927701 50819 storage/replica_command.go:298  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180827 20:41:52.940165 51323 storage/replica_command.go:298  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180827 20:41:52.948539 51355 storage/replica_command.go:298  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180827 20:41:52.953658 51380 storage/replica_command.go:298  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180827 20:41:52.961237 51137 storage/replica_command.go:298  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180827 20:41:52.966548 50832 storage/replica_command.go:298  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180827 20:41:52.977113 51362 storage/replica_command.go:298  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180827 20:41:53.041315 51440 storage/replica_command.go:298  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180827 20:41:53.047478 51414 storage/replica_command.go:298  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180827 20:41:53.081214 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.089127 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.089322 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.089338 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.102793 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.102863 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.102878 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.102953 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.103001 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.115344 51458 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.125579 51530 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:36113}
I180827 20:41:53.127987 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.128034 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.128397 51575 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.134920 51574 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.135628 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.136461 50862 server/node.go:428  [n?] new node allocated ID 2
I180827 20:41:53.136541 50862 gossip/gossip.go:382  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36113" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513136479434 
I180827 20:41:53.136591 50862 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180827 20:41:53.136624 50862 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180827 20:41:53.137485 51552 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180827 20:41:53.139442 50862 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180827 20:41:53.139577 50862 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180827 20:41:53.140140 50862 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.140166 50862 server/server.go:1807  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.140233 50862 server/server.go:1538  [n2] starting https server at 127.0.0.1:39947 (use: 127.0.0.1:39947)
I180827 20:41:53.140246 50862 server/server.go:1540  [n2] starting grpc/postgres server at 127.0.0.1:36113
I180827 20:41:53.140256 50862 server/server.go:1541  [n2] advertising CockroachDB node at 127.0.0.1:36113
I180827 20:41:53.140624 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.153945 50862 server/server.go:1594  [n2] done ensuring all necessary migrations have run
I180827 20:41:53.153974 50862 server/server.go:1597  [n2] serving sql connections
W180827 20:41:53.165268 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.185802 51467 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180827 20:41:53.186848 51469 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:36113} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513136479434 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513136479434 LastUp:1535402513136479434}
I180827 20:41:53.189622 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.189776 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.189808 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.207782 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.207807 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.207815 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.207911 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.207947 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.211471 51475 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.223653 51740 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.223954 51816 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46463}
I180827 20:41:53.224401 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.224432 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.224690 51837 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.225445 51836 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226030 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226699 50862 server/node.go:428  [n?] new node allocated ID 3
I180827 20:41:53.226763 50862 gossip/gossip.go:382  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46463" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513226706701 
I180827 20:41:53.226805 50862 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180827 20:41:53.226851 50862 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180827 20:41:53.227563 51809 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180827 20:41:53.227869 51810 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180827 20:41:53.228504 50862 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180827 20:41:53.229044 50862 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180827 20:41:53.229696 50862 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.229749 50862 server/server.go:1807  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.235251 50862 server/server.go:1538  [n3] starting https server at 127.0.0.1:43307 (use: 127.0.0.1:43307)
I180827 20:41:53.235271 50862 server/server.go:1540  [n3] starting grpc/postgres server at 127.0.0.1:46463
I180827 20:41:53.235283 50862 server/server.go:1541  [n3] advertising CockroachDB node at 127.0.0.1:46463
I180827 20:41:53.240284 50862 server/server.go:1594  [n3] done ensuring all necessary migrations have run
I180827 20:41:53.240307 50862 server/server.go:1597  [n3] serving sql connections
I180827 20:41:53.243124 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.248117 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot 59e1afc9 at applied index 16
I180827 20:41:53.249136 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.251012 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 6, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.251369 51983 storage/replica_raftstorage.go:784  [n2,s2,r20/?:{-}] applying preemptive snapshot at index 16 (id=59e1afc9, encoded size=2241, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.254056 51839 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180827 20:41:53.255122 51841 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:46463} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513226706701 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513226706701 LastUp:1535402513226706701}
I180827 20:41:53.256061 51983 storage/replica_raftstorage.go:790  [n2,s2,r20/?:/Table/{23-50}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.256605 50930 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.259565 50930 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.261627 51625 rpc/nodedialer/nodedialer.go:92  [n2] connection to n1 established
I180827 20:41:53.264544 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.286630 50930 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r21/1:/Table/5{0-1}] connection to n3 established
I180827 20:41:53.287245 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.287799 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r21/1:/Table/5{0-1}] sending preemptive snapshot de08568a at applied index 18
I180827 20:41:53.288157 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.288623 51959 storage/replica_raftstorage.go:784  [n3,s3,r21/?:{-}] applying preemptive snapshot at index 18 (id=de08568a, encoded size=2646, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.289814 51959 storage/replica_raftstorage.go:790  [n3,s3,r21/?:/Table/5{0-1}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.290329 50930 storage/replica_command.go:812  [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.293678 50930 storage/replica.go:3743  [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.294953 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r22/1:/{Table/51-Max}] sending preemptive snapshot a84e7278 at applied index 12
I180827 20:41:53.295229 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r22/1:/{Table/51-Max}] streamed snapshot to (n3,s3):?: kv pairs: 7, log entries: 2, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.295441 51883 rpc/nodedialer/nodedialer.go:92  [n3] connection to n1 established
I180827 20:41:53.295585 51953 storage/replica_raftstorage.go:784  [n3,s3,r22/?:{-}] applying preemptive snapshot at index 12 (id=a84e7278, encoded size=386, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.295717 51953 storage/replica_raftstorage.go:790  [n3,s3,r22/?:/{Table/51-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.295955 50930 storage/replica_command.go:812  [replicate,n1,s1,r22/1:/{Table/51-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, next=2, gen=0]
I180827 20:41:53.298097 50930 storage/replica.go:3743  [n1,s1,r22/1:/{Table/51-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.301122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r8/1:/Table/1{1-2}] sending preemptive snapshot 201bdccc at applied index 18
I180827 20:41:53.301565 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.306578 52088 storage/replica_raftstorage.go:784  [n3,s3,r8/?:{-}] applying preemptive snapshot at index 18 (id=201bdccc, encoded size=4352, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.306868 52088 storage/replica_raftstorage.go:790  [n3,s3,r8/?:/Table/1{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.307601 50930 storage/replica_command.go:812  [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.311873 50930 storage/replica.go:3743  [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.314134 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r17/1:/Table/2{0-1}] sending preemptive snapshot 53116eb2 at applied index 16
I180827 20:41:53.314317 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r17/1:/Table/2{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.314683 52103 storage/replica_raftstorage.go:784  [n3,s3,r17/?:{-}] applying preemptive snapshot at index 16 (id=53116eb2, encoded size=2105, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.314887 52103 storage/replica_raftstorage.go:790  [n3,s3,r17/?:/Table/2{0-1}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.315401 50930 storage/replica_command.go:812  [replicate,n1,s1,r17/1:/Table/2{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r17:/Table/2{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.318398 50930 storage/replica.go:3743  [n1,s1,r17/1:/Table/2{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.319436 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r16/1:/Table/{19-20}] sending preemptive snapshot e0be8540 at applied index 16
I180827 20:41:53.319691 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.320127 52072 storage/replica_raftstorage.go:784  [n2,s2,r16/?:{-}] applying preemptive snapshot at index 16 (id=e0be8540, encoded size=2109, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.320339 52072 storage/replica_raftstorage.go:790  [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.320816 50930 storage/replica_command.go:812  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.323849 50930 storage/replica.go:3743  [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.326208 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r15/1:/Table/1{8-9}] sending preemptive snapshot d259ae5c at applied index 16
I180827 20:41:53.326404 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.326731 52116 storage/replica_raftstorage.go:784  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 16 (id=d259ae5c, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.326923 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.326953 52116 storage/replica_raftstorage.go:790  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.334514 50930 storage/replica_command.go:812  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.337656 50930 storage/replica.go:3743  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.338767 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r14/1:/Table/1{7-8}] sending preemptive snapshot 9d0058d5 at applied index 16
I180827 20:41:53.339034 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.339612 52090 storage/replica_raftstorage.go:784  [n2,s2,r14/?:{-}] applying preemptive snapshot at index 16 (id=9d0058d5, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.339831 52090 storage/replica_raftstorage.go:790  [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.340173 50930 storage/replica_command.go:812  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.343121 50930 storage/replica.go:3743  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.345432 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r9/1:/Table/1{2-3}] sending preemptive snapshot 0eea2d20 at applied index 26
I180827 20:41:53.345859 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 53, log entries: 16, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.347137 52066 storage/replica_raftstorage.go:784  [n2,s2,r9/?:{-}] applying preemptive snapshot at index 26 (id=0eea2d20, encoded size=15139, 1 rocksdb batches, 16 log entries)
I180827 20:41:53.347467 52066 storage/replica_raftstorage.go:790  [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.348208 50930 storage/replica_command.go:812  [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.352166 50930 storage/replica.go:3743  [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.353188 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] sending preemptive snapshot 0cdee511 at applied index 39
I180827 20:41:53.353765 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 36, log entries: 29, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.354286 51723 storage/replica_raftstorage.go:784  [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=0cdee511, encoded size=98384, 1 rocksdb batches, 29 log entries)
I180827 20:41:53.354994 51723 storage/replica_raftstorage.go:790  [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.355529 50930 storage/replica_command.go:812  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.358523 50930 storage/replica.go:3743  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.360250 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] sending preemptive snapshot 965d58b1 at applied index 19
I180827 20:41:53.360436 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.360789 52150 storage/replica_raftstorage.go:784  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 19 (id=965d58b1, encoded size=4003, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.361043 52150 storage/replica_raftstorage.go:790  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.361522 50930 storage/replica_command.go:812  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.364392 50930 storage/replica.go:3743  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.366422 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r12/1:/Table/1{5-6}] sending preemptive snapshot 811af376 at applied index 16
I180827 20:41:53.366638 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.367089 52137 storage/replica_raftstorage.go:784  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 16 (id=811af376, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.367359 52137 storage/replica_raftstorage.go:790  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.368127 50930 storage/replica_command.go:812  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.371691 50930 storage/replica.go:3743  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.374563 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r19/1:/Table/2{2-3}] sending preemptive snapshot 9cd02555 at applied index 16
I180827 20:41:53.374760 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.375252 52080 storage/replica_raftstorage.go:784  [n3,s3,r19/?:{-}] applying preemptive snapshot at index 16 (id=9cd02555, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.375582 52080 storage/replica_raftstorage.go:790  [n3,s3,r19/?:/Table/2{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.375950 50930 storage/replica_command.go:812  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.381819 50930 storage/replica.go:3743  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.386461 52091 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180827 20:41:53.386637 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r10/1:/Table/1{3-4}] sending preemptive snapshot a16f4b15 at applied index 64
I180827 20:41:53.388005 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 204, log entries: 54, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.388536 52181 storage/replica_raftstorage.go:784  [n3,s3,r10/?:{-}] applying preemptive snapshot at index 64 (id=a16f4b15, encoded size=62836, 1 rocksdb batches, 54 log entries)
I180827 20:41:53.389154 52181 storage/replica_raftstorage.go:790  [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.389513 50930 storage/replica_command.go:812  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.392649 50930 storage/replica.go:3743  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.394122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] sending preemptive snapshot 69adabc1 at applied index 23
I180827 20:41:53.394365 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 13, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.394729 52213 storage/replica_raftstorage.go:784  [n2,s2,r2/?:{-}] applying preemptive snapshot at index 23 (id=69adabc1, encoded size=6277, 1 rocksdb batches, 13 log entries)
I180827 20:41:53.394981 52213 storage/replica_raftstorage.go:790  [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.395465 50930 storage/replica_command.go:812  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.398757 50930 storage/replica.go:3743  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.399709 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r18/1:/Table/2{1-2}] sending preemptive snapshot e9df2a4a at applied index 16
I180827 20:41:53.400036 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r18/1:/Table/2{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.400391 52185 storage/replica_raftstorage.go:784  [n3,s3,r18/?:{-}] applying preemptive snapshot at index 16 (id=e9df2a4a, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.400594 52185 storage/replica_raftstorage.go:790  [n3,s3,r18/?:/Table/2{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.400882 50930 storage/replica_command.go:812  [replicate,n1,s1,r18/1:/Table/2{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r18:/Table/2{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.407636 50930 storage/replica.go:3743  [n1,s1,r18/1:/Table/2{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.408861 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r13/1:/Table/1{6-7}] sending preemptive snapshot 6f914d55 at applied index 16
I180827 20:41:53.409071 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.409426 52218 storage/replica_raftstorage.go:784  [n2,s2,r13/?:{-}] applying preemptive snapshot at index 16 (id=6f914d55, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.409616 52218 storage/replica_raftstorage.go:790  [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.409970 50930 storage/replica_command.go:812  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.411262 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.412831 50930 storage/replica.go:3743  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.414081 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot cca961c1 at applied index 16
I180827 20:41:53.414277 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.414576 52199 storage/replica_raftstorage.go:784  [n3,s3,r11/?:{-}] applying preemptive snapshot at index 16 (id=cca961c1, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.414816 52199 storage/replica_raftstorage.go:790  [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.415293 50930 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.418111 50930 storage/replica.go:3743  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.419054 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 3c3a015f at applied index 27
I180827 20:41:53.423022 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1391, log entries: 2, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.423893 52201 storage/replica_raftstorage.go:784  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 27 (id=3c3a015f, encoded size=194658, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.429501 52201 storage/replica_raftstorage.go:790  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=2ms commit=4ms]
I180827 20:41:53.433500 50930 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.437580 50930 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.440575 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot cbd412df at applied index 21
I180827 20:41:53.440794 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 11, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.441181 52260 storage/replica_raftstorage.go:784  [n3,s3,r6/?:{-}] applying preemptive snapshot at index 21 (id=cbd412df, encoded size=4339, 1 rocksdb batches, 11 log entries)
I180827 20:41:53.441400 52260 storage/replica_raftstorage.go:790  [n3,s3,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.441676 50930 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.448564 52224 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.461587 50930 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.463345 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot 114f4385 at applied index 29
I180827 20:41:53.464896 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 59, log entries: 19, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.465343 52280 storage/replica_raftstorage.go:784  [n2,s2,r7/?:{-}] applying preemptive snapshot at index 29 (id=114f4385, encoded size=16646, 1 rocksdb batches, 19 log entries)
I180827 20:41:53.465821 52280 storage/replica_raftstorage.go:790  [n2,s2,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.466988 50930 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.472743 50930 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.474632 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 0a244018 at applied index 114
I180827 20:41:53.475250 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 73, log entries: 90, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.475827 52267 storage/replica_raftstorage.go:784  [n2,s2,r1/?:{-}] applying preemptive snapshot at index 114 (id=0a244018, encoded size=40271, 1 rocksdb batches, 90 log entries)
I180827 20:41:53.476525 52267 storage/replica_raftstorage.go:790  [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.476869 50930 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.482912 50930 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.483281 50930 storage/queue.go:873  [n1,replicate] purgatory is now empty
I180827 20:41:53.485684 52286 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot f1426c69 at applied index 19
I180827 20:41:53.487316 52286 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.487681 52252 storage/replica_raftstorage.go:784  [n3,s3,r20/?:{-}] applying preemptive snapshot at index 19 (id=f1426c69, encoded size=3273, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.487932 52252 storage/replica_raftstorage.go:790  [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.488311 52286 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.503580 52286 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.505707 52235 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 99036b07 at applied index 119
I180827 20:41:53.506514 52235 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 78, log entries: 95, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.507282 52188 storage/replica_raftstorage.go:784  [n3,s3,r1/?:{-}] applying preemptive snapshot at index 119 (id=99036b07, encoded size=42101, 1 rocksdb batches, 95 log entries)
I180827 20:41:53.508109 52188 storage/replica_raftstorage.go:790  [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.508641 52235 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.512524 52235 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.513999 52209 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot bb53109c at applied index 32
I180827 20:41:53.514379 52209 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 60, log entries: 22, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.514821 52292 storage/replica_raftstorage.go:784  [n3,s3,r7/?:{-}] applying preemptive snapshot at index 32 (id=bb53109c, encoded size=17687, 1 rocksdb batches, 22 log entries)
I180827 20:41:53.515905 52292 storage/replica_raftstorage.go:790  [n3,s3,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 1ms [clear=1ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.516367 52209 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.520158 52209 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.521958 52312 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot 2ca43612 at applied index 24
I180827 20:41:53.522776 52312 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 14, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.523128 52239 storage/replica_raftstorage.go:784  [n2,s2,r6/?:{-}] applying preemptive snapshot at index 24 (id=2ca43612, encoded size=5410, 1 rocksdb batches, 14 log entries)
I180827 20:41:53.523377 52239 storage/replica_raftstorage.go:790  [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.523701 52312 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.525176 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 19 underreplicated ranges
I180827 20:41:53.527482 52312 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.528875 52327 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 731be2ae at applied index 30
I180827 20:41:53.532860 52327 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 1392, log entries: 5, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.533361 52316 storage/replica_raftstorage.go:784  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 30 (id=731be2ae, encoded size=195741, 1 rocksdb batches, 5 log entries)
I180827 20:41:53.535834 52316 storage/replica_raftstorage.go:790  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=2ms]
I180827 20:41:53.536253 52327 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.540576 52327 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.545804 52341 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot 7497a95f at applied index 19
I180827 20:41:53.546108 52341 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.546590 52275 storage/replica_raftstorage.go:784  [n2,s2,r11/?:{-}] applying preemptive snapshot at index 19 (id=7497a95f, encoded size=3304, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.546960 52275 storage/replica_raftstorage.go:790  [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.547386 52341 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.551568 52341 storage/replica.go:3743  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.554959 52323 storage/store_snapshot.go:615  [replicate,n1,s1,r13/1:/Table/1{6-7}] sending preemptive snapshot 5932a5bd at applied index 19
I180827 20:41:53.555353 52323 storage/store_snapshot.go:657  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.555743 52329 storage/replica_raftstorage.go:784  [n3,s3,r13/?:{-}] applying preemptive snapshot at index 19 (id=5932a5bd, encoded size=3308, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.556103 52329 storage/replica_raftstorage.go:790  [n3,s3,r13/?:/Table/1{6-7}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.556489 52323 storage/replica_command.go:812  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.563494 52323 storage/replica.go:3743  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.566866 52362 storage/store_snapshot.go:615  [replicate,n1,s1,r18/1:/Table/2{1-2}] sending preemptive snapshot c74baa54 at applied index 19
I180827 20:41:53.568042 52362 storage/store_snapshot.go:657  [replicate,n1,s1,r18/1:/Table/2{1-2}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.568417 52366 storage/replica_raftstorage.go:784  [n2,s2,r18/?:{-}] applying preemptive snapshot at index 19 (id=c74baa54, encoded size=3304, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.568651 52366 storage/replica_raftstorage.go:790  [n2,s2,r18/?:/Table/2{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.568954 52362 storage/replica_command.go:812  [replicate,n1,s1,r18/1:/Table/2{1-2}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r18:/Table/2{1-2} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.572711 52362 storage/replica.go:3743  [n1,s1,r18/1:/Table/2{1-2}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.574655 52190 storage/store_snapshot.go:615  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] sending preemptive snapshot d4c499ea at applied index 26
I180827 20:41:53.574962 52190 storage/store_snapshot.go:657  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 16, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.575725 52299 storage/replica_raftstorage.go:784  [n3,s3,r2/?:{-}] applying preemptive snapshot at index 26 (id=d4c499ea, encoded size=7349, 1 rocksdb batches, 16 log entries)
I180827 20:41:53.576022 52299 storage/replica_raftstorage.go:790  [n3,s3,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.576405 52190 storage/replica_command.go:812  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.579762 52190 storage/replica.go:3743  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.583103 52372 storage/store_snapshot.go:615  [replicate,n1,s1,r10/1:/Table/1{3-4}] sending preemptive snapshot dbe83d06 at applied index 103
I180827 20:41:53.583765 52372 storage/store_snapshot.go:657  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n2,s2):?: kv pairs: 295, log entries: 10, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.584214 52369 storage/replica_raftstorage.go:784  [n2,s2,r10/?:{-}] applying preemptive snapshot at index 103 (id=dbe83d06, encoded size=38018, 1 rocksdb batches, 10 log entries)
I180827 20:41:53.584577 52369 storage/replica_raftstorage.go:790  [n2,s2,r10/?:/Table/1{3-4}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.584963 52372 storage/replica_command.go:812  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.588661 52372 storage/replica.go:3743  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.590522 52331 storage/store_snapshot.go:615  [replicate,n1,s1,r19/1:/Table/2{2-3}] sending preemptive snapshot ba0de389 at applied index 19
I180827 20:41:53.596120 52331 storage/store_snapshot.go:657  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 6ms
I180827 20:41:53.597215 52259 storage/replica_raftstorage.go:784  [n2,s2,r19/?:{-}] applying preemptive snapshot at index 19 (id=ba0de389, encoded size=3308, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.597484 52259 storage/replica_raftstorage.go:790  [n2,s2,r19/?:/Table/2{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.597898 52331 storage/replica_command.go:812  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.601937 52331 storage/replica.go:3743  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.604379 52303 storage/store_snapshot.go:615  [replicate,n1,s1,r12/1:/Table/1{5-6}] sending preemptive snapshot 60969a90 at applied index 19
I180827 20:41:53.606558 52303 storage/store_snapshot.go:657  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.606975 52307 storage/replica_raftstorage.go:784  [n2,s2,r12/?:{-}] applying preemptive snapshot at index 19 (id=60969a90, encoded size=3308, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.607211 52307 storage/replica_raftstorage.go:790  [n2,s2,r12/?:/Table/1{5-6}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.608276 52303 storage/replica_command.go:812  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.612795 52303 storage/replica.go:3743  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.615927 52392 storage/store_snapshot.go:615  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] sending preemptive snapshot 3d427041 at applied index 22
I180827 20:41:53.617153 52392 storage/store_snapshot.go:657  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n2,s2):?: kv pairs: 11, log entries: 12, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.618565 52405 storage/replica_raftstorage.go:784  [n2,s2,r3/?:{-}] applying preemptive snapshot at index 22 (id=3d427041, encoded size=5215, 1 rocksdb batches, 12 log entries)
I180827 20:41:53.619140 52405 storage/replica_raftstorage.go:790  [n2,s2,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.621854 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 10 underreplicated ranges
I180827 20:41:53.635001 52392 storage/replica_command.go:812  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.638726 52392 storage/replica.go:3743  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.643490 52420 storage/store_snapshot.go:615  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] sending preemptive snapshot a2505d74 at applied index 42
I180827 20:41:53.644245 52420 storage/store_snapshot.go:657  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n3,s3):?: kv pairs: 37, log entries: 32, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.644709 52436 storage/replica_raftstorage.go:784  [n3,s3,r4/?:{-}] applying preemptive snapshot at index 42 (id=a2505d74, encoded size=99568, 1 rocksdb batches, 32 log entries)
I180827 20:41:53.645176 52436 storage/replica_raftstorage.go:790  [n3,s3,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.645567 52420 storage/replica_command.go:812  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.649433 52420 storage/replica.go:3743  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.652117 52398 storage/store_snapshot.go:615  [replicate,n1,s1,r9/1:/Table/1{2-3}] sending preemptive snapshot 6dc7ffcb at applied index 29
I180827 20:41:53.653729 52398 storage/store_snapshot.go:657  [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 54, log entries: 19, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.654216 52402 storage/replica_raftstorage.go:784  [n3,s3,r9/?:{-}] applying preemptive snapshot at index 29 (id=6dc7ffcb, encoded size=16171, 1 ro

Please assign, take a look and update the issue accordingly.

teamcity: failed test: lint/TestLint

The following tests appear to have failed on release-banana.
You may want to check for open issues.

#864629:

--- FAIL: lint/TestLint: TestLint/TestGolint (71.030s)
lint_test.go:1202: 
	pkg/kv/txn_coord_sender.go:717:10: if block ends with a return statement, so drop this else and outdent its block
------- Stdout: -------
=== PAUSE TestLint/TestGolint

Please assign, take a look and update the issue accordingly.

test failure #411

The following test appears to have failed:

#411:

I0322 21:24:06.234509     263 queue.go:158] adding range range=1 (""-"\xff\xff") to split queue
I0322 21:24:06.236047     263 raft.go:315] raft: 101 became follower at term 5
I0322 21:24:06.236258     263 raft.go:134] raft: newRaft 101 [peers: [], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0322 21:24:06.236457     263 queue.go:210] processing range range=1 (""-"\xff\xff") from split queue...
I0322 21:24:06.236644     263 split_queue.go:95] splitting range "\"\""-"\"\\xff\\xff\"" at keys ["/db1" "/db2"]
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x40 pc=0xa6b679]

goroutine 78 [running]:
github.com/cockroachdb/cockroach/client.(*KV).Call(0x0, 0x1088ad0, 0xa, 0x7f3e7454f268, 0xc2080fa000, 0x7f3e7454f2a8, 0xc2080f8070, 0x0, 0x0)
    /go/src/github.com/cockroachdb/cockroach/client/kv.go:106 +0x79
github.com/cockroachdb/cockroach/storage.(*splitQueue).process(0xc2080a58e0, 0x0, 0xa, 0x0, 0x0, 0x0, 0xc20803ef80, 0x0, 0x0)
    /go/src/github.com/cockroachdb/cockroach/storage/split_queue.go:101 +0x732
github.com/cockroachdb/cockroach/storage.*splitQueue.(github.com/cockroachdb/cockroach/storage.process)·fm(0x0, 0xa, 0x0, 0x0, 0x0, 0xc20803ef80, 0x0, 0x0)
    /go/src/github.com/cockroachdb/cockroach/storage/split_queue.go:52 +0x69
github.com/cockroachdb/cockroach/storage.(*baseQueue).processLoop(0xc208112960, 0xc2080f6cc0, 0xc2080a5860)
--
goroutine 85 [select]:
github.com/cockroachdb/cockroach/multiraft.(*writeTask).start(0xc2081060f0)
    /go/src/github.com/cockroachdb/cockroach/multiraft/storage.go:136 +0xb86
created by github.com/cockroachdb/cockroach/multiraft.(*state).start
    /go/src/github.com/cockroachdb/cockroach/multiraft/multiraft.go:409 +0x1cf
FAIL    github.com/cockroachdb/cockroach/storage    0.176s
=== RUN TestBatchBasics
--- PASS: TestBatchBasics (0.00s)
=== RUN TestBatchGet
--- PASS: TestBatchGet (0.00s)
=== RUN TestBatchMerge
--- PASS: TestBatchMerge (0.00s)
=== RUN TestBatchProto
--- PASS: TestBatchProto (0.00s)
=== RUN TestBatchScan
--- PASS: TestBatchScan (0.00s)

Please assign, take a look and update the issue accordingly.

test failure #

The following test appears to have failed:

#:

I0320 20:48:24.699705     263 multiraft.go:637] Committed Entry[0]: 6/268 EntryNormal 000000000000000072bf44f202077487: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:525 > cmd_id:<wall_time:0 random:0 > key:"testOpJknOtWyQreTYYGpJzhUhTJJPUCrglalftmqIEpNHx
I0320 20:48:24.701645     263 multiraft.go:626] node 257: group 1 raft ready
I0320 20:48:24.701816     263 multiraft.go:631] HardState updated: {Term:6 Vote:257 Commit:269 XXX_unrecognized:[]}
I0320 20:48:24.702396     263 multiraft.go:634] New Entry[0]: 6/269 EntryNormal 0000000000000000680a3e42521dca39: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:527 > cmd_id:<wall_time:0 random:0 > key:"testpFDSqLrvqjIJMkVevUWyhgXipCjATUUcsJiVWBVnIMc
I0320 20:48:24.702975     263 multiraft.go:637] Committed Entry[0]: 6/269 EntryNormal 0000000000000000680a3e42521dca39: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:527 > cmd_id:<wall_time:0 random:0 > key:"testpFDSqLrvqjIJMkVevUWyhgXipCjATUUcsJiVWBVnIMc
--- FAIL: TestStoreZoneUpdateAndRangeSplit (1.21s)
    client_split_test.go:348: key range "\"testpFDSqLrvqjIJMkVevUWyhgXipCjATUUcsJiVWBVnIMcoUfdiCmSxdlSYwMEiIiIikoqNcAkUEFOZwOGmJJWtecGuBzpxbnTemDCC\""-"\"\"" outside of bounds of range "\"\""-"\"testZHMtQoLrvcssrtoMUpwiwLoiYLgAbUibFATwYLcrfzqyLotKmLcdRBEIZmDgeHbtygdWYtWlXDsXTwEYHjPgGFXiwfrGuOhjMCPH\""
=== RUN TestStoreRangeSplitOnConfigs
I0320 20:48:24.716406     263 multiraft.go:408] node 257 starting
I0320 20:48:24.717507     263 raft.go:415] raft: 101 became follower at term 5
I0320 20:48:24.717800     263 raft.go:232] raft: newRaft 101 [peers: [101], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0320 20:48:24.718402     263 raft.go:634] raft: 101 no leader at term 5; dropping proposal
I0320 20:48:24.725199     263 raft.go:489] raft: 101 is starting a new election at term 5
I0320 20:48:24.725279     263 raft.go:428] raft: 101 became candidate at term 6
I0320 20:48:24.725330     263 raft.go:472] raft: 101 received vote from 101 at term 6
I0320 20:48:24.725414     263 raft.go:451] raft: 101 became leader at term 6
--
I0320 20:48:25.284244     263 multiraft.go:637] Committed Entry[0]: 6/46 EntryNormal 13cd4fe5c83f719154cad7f23903a93c: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:40 > cmd_id:<wall_time:1426884505272021393 random:6109933279820097852 > key:"\000\000meta1
--- PASS: TestUpdateRangeAddressing (0.13s)
=== RUN TestUpdateRangeAddressingSplitMeta1
I0320 20:48:25.295013     263 multiraft.go:408] node 257 starting
--- PASS: TestUpdateRangeAddressingSplitMeta1 (0.01s)
FAIL
FAIL    github.com/cockroachdb/cockroach/storage    6.273s
=== RUN TestBatchBasics
--- PASS: TestBatchBasics (0.00s)
=== RUN TestBatchGet
--- PASS: TestBatchGet (0.00s)
=== RUN TestBatchMerge
--- PASS: TestBatchMerge (0.00s)
=== RUN TestBatchProto
--- PASS: TestBatchProto (0.00s)
=== RUN TestBatchScan
--- PASS: TestBatchScan (0.00s)
--
--- PASS: TestRawBroadcast (0.00s)
=== RUN TestMetricSystemStop
--- PASS: TestMetricSystemStop (0.00s)
=== RUN: ExampleMetricSystem
==================
WARNING: DATA RACE
Read by goroutine 44:
  github.com/cockroachdb/cockroach/util/metrics.func·006()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:591 +0x571
  github.com/cockroachdb/cockroach/util/metrics.func·005()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:530 +0x8f

Previous write by main goroutine:
  sync/atomic.AddInt64()
      /usr/src/go/src/runtime/race_amd64.s:261 +0xc
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).Histogram()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:295 +0xd93
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).StopTimer()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:226 +0x75
  github.com/cockroachdb/cockroach/util/metrics.ExampleMetricSystem()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics_test.go:37 +0x1cd
  testing.runExample()
      /usr/src/go/src/testing/example.go:98 +0x5e6
--
Goroutine 44 (running) created at:
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).reaper()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:532 +0x1ed
==================
==================
WARNING: DATA RACE
Read by goroutine 44:
  github.com/cockroachdb/cockroach/util/metrics.func·006()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:593 +0x6d4
  github.com/cockroachdb/cockroach/util/metrics.func·005()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:530 +0x8f

Previous write by main goroutine:
  sync/atomic.AddInt64()
      /usr/src/go/src/runtime/race_amd64.s:261 +0xc
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).Histogram()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:294 +0xce9
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).StopTimer()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:226 +0x75
  github.com/cockroachdb/cockroach/util/metrics.ExampleMetricSystem()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics_test.go:37 +0x1cd
  testing.runExample()
      /usr/src/go/src/testing/example.go:98 +0x5e6

Please assign, take a look and update the issue accordingly.

Failed tests (): TestRaftRemoveRace TestRaftRemoveRace

The following test appears to have failed:

#:

W1215 01:18:50.477119 959 multiraft/multiraft.go:1233  aborting configuration change: key range /Local/Range/RangeDescriptor/""-/Min outside of bounds of range /Min-/Min
W1215 01:18:50.478804 959 multiraft/multiraft.go:1139  failed to look up replica ID for range 1 (disabling replica ID check): storage/store.go:1695: store 3 not found as replica of range 1
I1215 01:18:53.557582 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:18:53.557889 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:18:53.558132 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- FAIL: TestRaftRemoveRace (3.53s)
    <autogenerated>:32: storage/client_test.go:521: condition failed to evaluate within 3s: storage/client_test.go:517: range not found on store 2
=== RUN   TestStoreRangeRemoveDead
E1215 01:18:53.562437 959 gossip/gossip.go:181  different node IDs were set for the same gossip instance (2147483647, 1)
I1215 01:18:53.563875 959 multiraft/multiraft.go:579  node 1 starting
I1215 01:18:53.564727 959 storage/replica.go:1308  gossiping cluster id  from store 1, range 1
I1215 01:18:53.565694 959 raft/raft.go:446  [group 1] 1 became follower at term 5
I1215 01:18:53.565898 959 raft/raft.go:234  [group 1] newRaft 1 [peers: [1], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I1215 01:18:53.566074 959 multiraft/multiraft.go:999  node 1 campaigning because initial confstate is [1]
I1215 01:18:53.566196 959 raft/raft.go:526  [group 1] 1 is starting a new election at term 5
I1215 01:18:53.566292 959 raft/raft.go:459  [group 1] 1 became candidate at term 6
--
I1215 01:19:05.063638 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:19:05.063778 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestLeaderAfterSplit (0.44s)
=== RUN   Example_rebalancing
--- PASS: Example_rebalancing (0.62s)
FAIL
FAIL    github.com/cockroachdb/cockroach/storage    32.371s
=== RUN   TestBatchBasics
I1215 01:18:45.307778 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchBasics (0.01s)
=== RUN   TestBatchGet
I1215 01:18:45.312076 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchGet (0.01s)
=== RUN   TestBatchMerge
I1215 01:18:45.320845 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchMerge (0.01s)
=== RUN   TestBatchProto

Please assign, take a look and update the issue accordingly.

teamcity: failed tests on release-banana: test/TestImportPgDump, lint/TestLint

The following tests appear to have failed:

#864629:

--- FAIL: test/TestImportPgDump/read_data_only (0.000s)
Test ended in panic.

------- Stdout: -------
I180827 20:41:54.053559 52667 storage/replica_command.go:298  [n1,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53/1/106 [r24]
I180827 20:41:54.062208 52385 storage/replica_range_lease.go:554  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] transferring lease to s2
I180827 20:41:54.063407 52385 storage/replica_range_lease.go:617  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] done transferring lease to s2: <nil>
I180827 20:41:54.063498 51617 storage/replica_proposal.go:210  [n2,s2,r23/3:/Table/5{2-3/1/106}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.062230012,0 epo=1 pro=1535402514.062232488,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.077824 52355 storage/replica_command.go:298  [n1,s1,r24/1:/{Table/53/1/1…-Max}] initiating a split of this range at key /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,
\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd [r25]
I180827 20:41:54.090278 52643 storage/replica_range_lease.go:554  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] transferring lease to s3
I180827 20:41:54.091255 52643 storage/replica_range_lease.go:617  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] done transferring lease to s3: <nil>
I180827 20:41:54.092073 51863 storage/replica_proposal.go:210  [n3,s3,r24/2:/Table/53/{1/106-2/"\x15…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.090298269,0 epo=1 pro=1535402514.090300825,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.094854 52702 storage/replica_command.go:298  [n1,s1,r25/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n
✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd [r26]
I180827 20:41:54.112580 52726 storage/replica_range_lease.go:554  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] transferring lease to s3
I180827 20:41:54.113319 52726 storage/replica_range_lease.go:617  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] done transferring lease to s3: <nil>
I180827 20:41:54.113657 51850 storage/replica_proposal.go:210  [n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.112607829,0 epo=1 pro=1535402514.112610518,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.117098 52757 storage/replica_command.go:298  [n1,s1,r26/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105 [r27]
I180827 20:41:54.137397 52716 storage/store_snapshot.go:615  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] sending Raft snapshot 547ab8d0 at applied index 21
I180827 20:41:54.140430 52716 storage/store_snapshot.go:657  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] streamed snapshot to (n2,s2):3: kv pairs: 14, log entries: 2, rate-limit: 8.0 MiB/sec, 22ms
I180827 20:41:54.140860 52705 storage/replica_raftstorage.go:784  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applying Raft snapshot at index 21 (id=547ab8d0, encoded size=31270, 1 rocksdb batches, 2 log entries)
I180827 20:41:54.162696 52705 storage/replica_raftstorage.go:790  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applied Raft snapshot in 22ms [clear=0ms batch=0ms entries=21ms commit=0ms]
I180827 20:41:54.166103 52791 storage/replica_range_lease.go:554  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] transferring lease to s3
I180827 20:41:54.167118 51903 storage/replica_proposal.go:210  [n3,s3,r26/2:/Table/53/{2/"\xc0…-3/";π,…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.166156675,0 epo=1 pro=1535402514.166159831,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.167216 52791 storage/replica_range_lease.go:617  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] done transferring lease to s3: <nil>
I180827 20:41:54.172711 52589 storage/replica_command.go:298  [n1,s1,r27/1:/{Table/53/3/"…-Max}] initiating a split of this range at key /Table/54 [r28]
I180827 20:41:54.182740 52807 storage/replica_range_lease.go:554  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] transferring lease to s2
I180827 20:41:54.183947 52807 storage/replica_range_lease.go:617  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] done transferring lease to s2: <nil>
I180827 20:41:54.184954 51646 storage/replica_proposal.go:210  [n2,s2,r27/3:/Table/5{3/3/";π…-4}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.182761052,0 epo=1 pro=1535402514.182764040,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
--- FAIL: test/TestImportPgDump (0.000s)
Test ended in panic.

------- Stdout: -------
W180827 20:41:52.746991 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:52.757923 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:52.758132 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.758156 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.761168 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:52.761191 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:52.761204 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180827 20:41:52.767725 50862 server/node.go:373  [n?] **** cluster d5e53e69-a109-4eb6-91bf-29e74ae744ba has been created
I180827 20:41:52.767752 50862 server/server.go:1401  [n?] **** add additional nodes by specifying --join=127.0.0.1:41477
I180827 20:41:52.767936 50862 gossip/gossip.go:382  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41477" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402512767856449 
I180827 20:41:52.770338 50862 storage/store.go:1541  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180827 20:41:52.770546 50862 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180827 20:41:52.770626 50862 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180827 20:41:52.770721 50862 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180827 20:41:52.770760 50862 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:52.770788 50862 server/node.go:546  [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I180827 20:41:52.771023 50862 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.771066 50862 server/server.go:1807  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:52.771159 50862 server/server.go:1538  [n1] starting https server at 127.0.0.1:42563 (use: 127.0.0.1:42563)
I180827 20:41:52.771188 50862 server/server.go:1540  [n1] starting grpc/postgres server at 127.0.0.1:41477
I180827 20:41:52.771209 50862 server/server.go:1541  [n1] advertising CockroachDB node at 127.0.0.1:41477
I180827 20:41:52.775258 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.776337 50925 storage/replica_command.go:298  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180827 20:41:52.788832 51094 storage/replica_command.go:298  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180827 20:41:52.790188 51128 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=ec083bbe key=/Table/SystemConfigSpan/Start rw=true pri=0.01126188 iso=SERIALIZABLE stat=PENDING epo=0 ts=1535402512.772758792,0 orig=1535402512.772758792,0 max=1535402512.772758792,0 wto=false rop=false seq=6
I180827 20:41:52.790695 51118 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180827 20:41:52.795125 51100 storage/replica_command.go:298  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180827 20:41:52.800783 51143 storage/replica_command.go:298  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180827 20:41:52.807906 51165 storage/replica_command.go:298  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180827 20:41:52.811784 51141 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.0-12 User:root}
I180827 20:41:52.818164 50799 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180827 20:41:52.821094 51188 storage/replica_command.go:298  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180827 20:41:52.830709 51176 storage/replica_command.go:298  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180827 20:41:52.839374 51187 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:045a1c98-219f-445b-bd6b-d481f04d6b0d User:root}
I180827 20:41:52.849534 51154 storage/replica_command.go:298  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180827 20:41:52.855898 51218 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180827 20:41:52.861462 51240 storage/replica_command.go:298  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180827 20:41:52.868342 51268 storage/replica_command.go:298  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180827 20:41:52.872706 51256 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180827 20:41:52.874819 51264 storage/replica_command.go:298  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180827 20:41:52.876403 50862 server/server.go:1594  [n1] done ensuring all necessary migrations have run
I180827 20:41:52.876433 50862 server/server.go:1597  [n1] serving sql connections
I180827 20:41:52.879108 51233 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180827 20:41:52.879639 51235 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:41477} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402512767856449 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402512767856449 LastUp:1535402512767856449}
I180827 20:41:52.880318 51302 storage/replica_command.go:298  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180827 20:41:52.927701 50819 storage/replica_command.go:298  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180827 20:41:52.940165 51323 storage/replica_command.go:298  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180827 20:41:52.948539 51355 storage/replica_command.go:298  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180827 20:41:52.953658 51380 storage/replica_command.go:298  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180827 20:41:52.961237 51137 storage/replica_command.go:298  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180827 20:41:52.966548 50832 storage/replica_command.go:298  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180827 20:41:52.977113 51362 storage/replica_command.go:298  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180827 20:41:53.041315 51440 storage/replica_command.go:298  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180827 20:41:53.047478 51414 storage/replica_command.go:298  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180827 20:41:53.081214 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.089127 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.089322 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.089338 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.102793 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.102863 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.102878 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.102953 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.103001 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.115344 51458 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.125579 51530 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:36113}
I180827 20:41:53.127987 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.128034 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.128397 51575 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.134920 51574 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.135628 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.136461 50862 server/node.go:428  [n?] new node allocated ID 2
I180827 20:41:53.136541 50862 gossip/gossip.go:382  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36113" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513136479434 
I180827 20:41:53.136591 50862 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180827 20:41:53.136624 50862 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180827 20:41:53.137485 51552 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180827 20:41:53.139442 50862 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180827 20:41:53.139577 50862 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180827 20:41:53.140140 50862 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.140166 50862 server/server.go:1807  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.140233 50862 server/server.go:1538  [n2] starting https server at 127.0.0.1:39947 (use: 127.0.0.1:39947)
I180827 20:41:53.140246 50862 server/server.go:1540  [n2] starting grpc/postgres server at 127.0.0.1:36113
I180827 20:41:53.140256 50862 server/server.go:1541  [n2] advertising CockroachDB node at 127.0.0.1:36113
I180827 20:41:53.140624 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.153945 50862 server/server.go:1594  [n2] done ensuring all necessary migrations have run
I180827 20:41:53.153974 50862 server/server.go:1597  [n2] serving sql connections
W180827 20:41:53.165268 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.185802 51467 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180827 20:41:53.186848 51469 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:36113} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513136479434 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513136479434 LastUp:1535402513136479434}
I180827 20:41:53.189622 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.189776 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.189808 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.207782 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.207807 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.207815 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.207911 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.207947 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.211471 51475 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.223653 51740 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.223954 51816 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46463}
I180827 20:41:53.224401 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.224432 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.224690 51837 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.225445 51836 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226030 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226699 50862 server/node.go:428  [n?] new node allocated ID 3
I180827 20:41:53.226763 50862 gossip/gossip.go:382  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46463" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513226706701 
I180827 20:41:53.226805 50862 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180827 20:41:53.226851 50862 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180827 20:41:53.227563 51809 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180827 20:41:53.227869 51810 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180827 20:41:53.228504 50862 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180827 20:41:53.229044 50862 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180827 20:41:53.229696 50862 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.229749 50862 server/server.go:1807  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.235251 50862 server/server.go:1538  [n3] starting https server at 127.0.0.1:43307 (use: 127.0.0.1:43307)
I180827 20:41:53.235271 50862 server/server.go:1540  [n3] starting grpc/postgres server at 127.0.0.1:46463
I180827 20:41:53.235283 50862 server/server.go:1541  [n3] advertising CockroachDB node at 127.0.0.1:46463
I180827 20:41:53.240284 50862 server/server.go:1594  [n3] done ensuring all necessary migrations have run
I180827 20:41:53.240307 50862 server/server.go:1597  [n3] serving sql connections
I180827 20:41:53.243124 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.248117 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot 59e1afc9 at applied index 16
I180827 20:41:53.249136 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.251012 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 6, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.251369 51983 storage/replica_raftstorage.go:784  [n2,s2,r20/?:{-}] applying preemptive snapshot at index 16 (id=59e1afc9, encoded size=2241, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.254056 51839 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180827 20:41:53.255122 51841 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:46463} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513226706701 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513226706701 LastUp:1535402513226706701}
I180827 20:41:53.256061 51983 storage/replica_raftstorage.go:790  [n2,s2,r20/?:/Table/{23-50}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.256605 50930 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.259565 50930 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.261627 51625 rpc/nodedialer/nodedialer.go:92  [n2] connection to n1 established
I180827 20:41:53.264544 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.286630 50930 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r21/1:/Table/5{0-1}] connection to n3 established
I180827 20:41:53.287245 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.287799 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r21/1:/Table/5{0-1}] sending preemptive snapshot de08568a at applied index 18
I180827 20:41:53.288157 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.288623 51959 storage/replica_raftstorage.go:784  [n3,s3,r21/?:{-}] applying preemptive snapshot at index 18 (id=de08568a, encoded size=2646, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.289814 51959 storage/replica_raftstorage.go:790  [n3,s3,r21/?:/Table/5{0-1}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.290329 50930 storage/replica_command.go:812  [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.293678 50930 storage/replica.go:3743  [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.294953 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r22/1:/{Table/51-Max}] sending preemptive snapshot a84e7278 at applied index 12
I180827 20:41:53.295229 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r22/1:/{Table/51-Max}] streamed snapshot to (n3,s3):?: kv pairs: 7, log entries: 2, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.295441 51883 rpc/nodedialer/nodedialer.go:92  [n3] connection to n1 established
I180827 20:41:53.295585 51953 storage/replica_raftstorage.go:784  [n3,s3,r22/?:{-}] applying preemptive snapshot at index 12 (id=a84e7278, encoded size=386, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.295717 51953 storage/replica_raftstorage.go:790  [n3,s3,r22/?:/{Table/51-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.295955 50930 storage/replica_command.go:812  [replicate,n1,s1,r22/1:/{Table/51-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, next=2, gen=0]
I180827 20:41:53.298097 50930 storage/replica.go:3743  [n1,s1,r22/1:/{Table/51-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.301122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r8/1:/Table/1{1-2}] sending preemptive snapshot 201bdccc at applied index 18
I180827 20:41:53.301565 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.306578 52088 storage/replica_raftstorage.go:784  [n3,s3,r8/?:{-}] applying preemptive snapshot at index 18 (id=201bdccc, encoded size=4352, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.306868 52088 storage/replica_raftstorage.go:790  [n3,s3,r8/?:/Table/1{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.307601 50930 storage/replica_command.go:812  [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.311873 50930 storage/replica.go:3743  [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.314134 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r17/1:/Table/2{0-1}] sending preemptive snapshot 53116eb2 at applied index 16
I180827 20:41:53.314317 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r17/1:/Table/2{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.314683 52103 storage/replica_raftstorage.go:784  [n3,s3,r17/?:{-}] applying preemptive snapshot at index 16 (id=53116eb2, encoded size=2105, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.314887 52103 storage/replica_raftstorage.go:790  [n3,s3,r17/?:/Table/2{0-1}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.315401 50930 storage/replica_command.go:812  [replicate,n1,s1,r17/1:/Table/2{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r17:/Table/2{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.318398 50930 storage/replica.go:3743  [n1,s1,r17/1:/Table/2{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.319436 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r16/1:/Table/{19-20}] sending preemptive snapshot e0be8540 at applied index 16
I180827 20:41:53.319691 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.320127 52072 storage/replica_raftstorage.go:784  [n2,s2,r16/?:{-}] applying preemptive snapshot at index 16 (id=e0be8540, encoded size=2109, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.320339 52072 storage/replica_raftstorage.go:790  [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.320816 50930 storage/replica_command.go:812  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.323849 50930 storage/replica.go:3743  [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.326208 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r15/1:/Table/1{8-9}] sending preemptive snapshot d259ae5c at applied index 16
I180827 20:41:53.326404 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.326731 52116 storage/replica_raftstorage.go:784  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 16 (id=d259ae5c, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.326923 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.326953 52116 storage/replica_raftstorage.go:790  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.334514 50930 storage/replica_command.go:812  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.337656 50930 storage/replica.go:3743  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.338767 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r14/1:/Table/1{7-8}] sending preemptive snapshot 9d0058d5 at applied index 16
I180827 20:41:53.339034 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.339612 52090 storage/replica_raftstorage.go:784  [n2,s2,r14/?:{-}] applying preemptive snapshot at index 16 (id=9d0058d5, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.339831 52090 storage/replica_raftstorage.go:790  [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.340173 50930 storage/replica_command.go:812  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.343121 50930 storage/replica.go:3743  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.345432 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r9/1:/Table/1{2-3}] sending preemptive snapshot 0eea2d20 at applied index 26
I180827 20:41:53.345859 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 53, log entries: 16, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.347137 52066 storage/replica_raftstorage.go:784  [n2,s2,r9/?:{-}] applying preemptive snapshot at index 26 (id=0eea2d20, encoded size=15139, 1 rocksdb batches, 16 log entries)
I180827 20:41:53.347467 52066 storage/replica_raftstorage.go:790  [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.348208 50930 storage/replica_command.go:812  [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.352166 50930 storage/replica.go:3743  [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.353188 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] sending preemptive snapshot 0cdee511 at applied index 39
I180827 20:41:53.353765 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 36, log entries: 29, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.354286 51723 storage/replica_raftstorage.go:784  [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=0cdee511, encoded size=98384, 1 rocksdb batches, 29 log entries)
I180827 20:41:53.354994 51723 storage/replica_raftstorage.go:790  [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.355529 50930 storage/replica_command.go:812  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.358523 50930 storage/replica.go:3743  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.360250 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] sending preemptive snapshot 965d58b1 at applied index 19
I180827 20:41:53.360436 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.360789 52150 storage/replica_raftstorage.go:784  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 19 (id=965d58b1, encoded size=4003, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.361043 52150 storage/replica_raftstorage.go:790  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.361522 50930 storage/replica_command.go:812  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.364392 50930 storage/replica.go:3743  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.366422 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r12/1:/Table/1{5-6}] sending preemptive snapshot 811af376 at applied index 16
I180827 20:41:53.366638 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.367089 52137 storage/replica_raftstorage.go:784  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 16 (id=811af376, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.367359 52137 storage/replica_raftstorage.go:790  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.368127 50930 storage/replica_command.go:812  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.371691 50930 storage/replica.go:3743  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.374563 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r19/1:/Table/2{2-3}] sending preemptive snapshot 9cd02555 at applied index 16
I180827 20:41:53.374760 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.375252 52080 storage/replica_raftstorage.go:784  [n3,s3,r19/?:{-}] applying preemptive snapshot at index 16 (id=9cd02555, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.375582 52080 storage/replica_raftstorage.go:790  [n3,s3,r19/?:/Table/2{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.375950 50930 storage/replica_command.go:812  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.381819 50930 storage/replica.go:3743  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.386461 52091 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180827 20:41:53.386637 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r10/1:/Table/1{3-4}] sending preemptive snapshot a16f4b15 at applied index 64
I180827 20:41:53.388005 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 204, log entries: 54, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.388536 52181 storage/replica_raftstorage.go:784  [n3,s3,r10/?:{-}] applying preemptive snapshot at index 64 (id=a16f4b15, encoded size=62836, 1 rocksdb batches, 54 log entries)
I180827 20:41:53.389154 52181 storage/replica_raftstorage.go:790  [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.389513 50930 storage/replica_command.go:812  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.392649 50930 storage/replica.go:3743  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.394122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] sending preemptive snapshot 69adabc1 at applied index 23
I180827 20:41:53.394365 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 13, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.394729 52213 storage/replica_raftstorage.go:784  [n2,s2,r2/?:{-}] applying preemptive snapshot at index 23 (id=69adabc1, encoded size=6277, 1 rocksdb batches, 13 log entries)
I180827 20:41:53.394981 52213 storage/replica_raftstorage.go:790  [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.395465 50930 storage/replica_command.go:812  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.398757 50930 storage/replica.go:3743  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.399709 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r18/1:/Table/2{1-2}] sending preemptive snapshot e9df2a4a at applied index 16
I180827 20:41:53.400036 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r18/1:/Table/2{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.400391 52185 storage/replica_raftstorage.go:784  [n3,s3,r18/?:{-}] applying preemptive snapshot at index 16 (id=e9df2a4a, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.400594 52185 storage/replica_raftstorage.go:790  [n3,s3,r18/?:/Table/2{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.400882 50930 storage/replica_command.go:812  [replicate,n1,s1,r18/1:/Table/2{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r18:/Table/2{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.407636 50930 storage/replica.go:3743  [n1,s1,r18/1:/Table/2{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.408861 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r13/1:/Table/1{6-7}] sending preemptive snapshot 6f914d55 at applied index 16
I180827 20:41:53.409071 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.409426 52218 storage/replica_raftstorage.go:784  [n2,s2,r13/?:{-}] applying preemptive snapshot at index 16 (id=6f914d55, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.409616 52218 storage/replica_raftstorage.go:790  [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.409970 50930 storage/replica_command.go:812  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.411262 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.412831 50930 storage/replica.go:3743  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.414081 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot cca961c1 at applied index 16
I180827 20:41:53.414277 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.414576 52199 storage/replica_raftstorage.go:784  [n3,s3,r11/?:{-}] applying preemptive snapshot at index 16 (id=cca961c1, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.414816 52199 storage/replica_raftstorage.go:790  [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.415293 50930 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.418111 50930 storage/replica.go:3743  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.419054 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 3c3a015f at applied index 27
I180827 20:41:53.423022 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1391, log entries: 2, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.423893 52201 storage/replica_raftstorage.go:784  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 27 (id=3c3a015f, encoded size=194658, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.429501 52201 storage/replica_raftstorage.go:790  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=2ms commit=4ms]
I180827 20:41:53.433500 50930 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.437580 50930 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.440575 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot cbd412df at applied index 21
I180827 20:41:53.440794 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 11, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.441181 52260 storage/replica_raftstorage.go:784  [n3,s3,r6/?:{-}] applying preemptive snapshot at index 21 (id=cbd412df, encoded size=4339, 1 rocksdb batches, 11 log entries)
I180827 20:41:53.441400 52260 storage/replica_raftstorage.go:790  [n3,s3,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.441676 50930 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.448564 52224 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.461587 50930 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.463345 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot 114f4385 at applied index 29
I180827 20:41:53.464896 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 59, log entries: 19, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.465343 52280 storage/replica_raftstorage.go:784  [n2,s2,r7/?:{-}] applying preemptive snapshot at index 29 (id=114f4385, encoded size=16646, 1 rocksdb batches, 19 log entries)
I180827 20:41:53.465821 52280 storage/replica_raftstorage.go:790  [n2,s2,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.466988 50930 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.472743 50930 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.474632 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 0a244018 at applied index 114
I180827 20:41:53.475250 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 73, log entries: 90, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.475827 52267 storage/replica_raftstorage.go:784  [n2,s2,r1/?:{-}] applying preemptive snapshot at index 114 (id=0a244018, encoded size=40271, 1 rocksdb batches, 90 log entries)
I180827 20:41:53.476525 52267 storage/replica_raftstorage.go:790  [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.476869 50930 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.482912 50930 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.483281 50930 storage/queue.go:873  [n1,replicate] purgatory is now empty
I180827 20:41:53.485684 52286 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot f1426c69 at applied index 19
I180827 20:41:53.487316 52286 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.487681 52252 storage/replica_raftstorage.go:784  [n3,s3,r20/?:{-}] applying preemptive snapshot at index 19 (id=f1426c69, encoded size=3273, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.487932 52252 storage/replica_raftstorage.go:790  [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.488311 52286 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.503580 52286 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.505707 52235 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 99036b07 at applied index 119
I180827 20:41:53.506514 52235 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 78, log entries: 95, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.507282 52188 storage/replica_raftstorage.go:784  [n3,s3,r1/?:{-}] applying preemptive snapshot at index 119 (id=99036b07, encoded size=42101, 1 rocksdb batches, 95 log entries)
I180827 20:41:53.508109 52188 storage/replica_raftstorage.go:790  [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.508641 52235 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.512524 52235 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.513999 52209 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot bb53109c at applied index 32
I180827 20:41:53.514379 52209 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 60, log entries: 22, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.514821 52292 storage/replica_raftstorage.go:784  [n3,s3,r7/?:{-}] applying preemptive snapshot at index 32 (id=bb53109c, encoded size=17687, 1 rocksdb batches, 22 log entries)
I180827 20:41:53.515905 52292 storage/replica_raftstorage.go:790  [n3,s3,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 1ms [clear=1ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.516367 52209 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.520158 52209 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.521958 52312 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot 2ca43612 at applied index 24
I180827 20:41:53.522776 52312 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 14, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.523128 52239 storage/replica_raftstorage.go:784  [n2,s2,r6/?:{-}] applying preemptive snapshot at index 24 (id=2ca43612, encoded size=5410, 1 rocksdb batches, 14 log entries)
I180827 20:41:53.523377 52239 storage/replica_raftstorage.go:790  [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.523701 52312 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.525176 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 19 underreplicated ranges
I180827 20:41:53.527482 52312 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.528875 52327 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 731be2ae at applied index 30
I180827 20:41:53.532860 52327 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 1392, log entries: 5, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.533361 52316 storage/replica_raftstorage.go:784  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 30 (id=731be2ae, encoded size=195741, 1 rocksdb batches, 5 log entries)
I180827 20:41:53.535834 52316 storage/replica_raftstorage.go:790  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=2ms]
I180827 20:41:53.536253 52327 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.540576 52327 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.545804 52341 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot 7497a95f at applied index 19
I180827 20:41:53.546108 52341 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.546590 52275 storage/replica_raftstorage.go:784  [n2,s2,r11/?:{-}] applying preemptive snapshot at index 19 (id=7497a95f, encoded size=3304, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.546960 52275 storage/replica_raftstorage.go:790  [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.547386 52341 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (

Please assign, take a look and update the issue accordingly.

Failed tests ():

The following test appears to have failed:

#:

W1215 01:18:50.477119 959 multiraft/multiraft.go:1233  aborting configuration change: key range /Local/Range/RangeDescriptor/""-/Min outside of bounds of range /Min-/Min
W1215 01:18:50.478804 959 multiraft/multiraft.go:1139  failed to look up replica ID for range 1 (disabling replica ID check): storage/store.go:1695: store 3 not found as replica of range 1
I1215 01:18:53.557582 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:18:53.557889 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:18:53.558132 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- FAIL: TestRaftRemoveRace (3.53s)
    <autogenerated>:32: storage/client_test.go:521: condition failed to evaluate within 3s: storage/client_test.go:517: range not found on store 2
=== RUN   TestStoreRangeRemoveDead
E1215 01:18:53.562437 959 gossip/gossip.go:181  different node IDs were set for the same gossip instance (2147483647, 1)
I1215 01:18:53.563875 959 multiraft/multiraft.go:579  node 1 starting
I1215 01:18:53.564727 959 storage/replica.go:1308  gossiping cluster id  from store 1, range 1
I1215 01:18:53.565694 959 raft/raft.go:446  [group 1] 1 became follower at term 5
I1215 01:18:53.565898 959 raft/raft.go:234  [group 1] newRaft 1 [peers: [1], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I1215 01:18:53.566074 959 multiraft/multiraft.go:999  node 1 campaigning because initial confstate is [1]
I1215 01:18:53.566196 959 raft/raft.go:526  [group 1] 1 is starting a new election at term 5
I1215 01:18:53.566292 959 raft/raft.go:459  [group 1] 1 became candidate at term 6
--
I1215 01:19:05.063638 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:19:05.063778 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestLeaderAfterSplit (0.44s)
=== RUN   Example_rebalancing
--- PASS: Example_rebalancing (0.62s)
FAIL
FAIL    github.com/cockroachdb/cockroach/storage    32.371s
=== RUN   TestBatchBasics
I1215 01:18:45.307778 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchBasics (0.01s)
=== RUN   TestBatchGet
I1215 01:18:45.312076 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchGet (0.01s)
=== RUN   TestBatchMerge
I1215 01:18:45.320845 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchMerge (0.01s)
=== RUN   TestBatchProto

Please assign, take a look and update the issue accordingly.

teamcity: failed tests on release-banana: test/TestImportPgDump, lint/TestLint

The following tests appear to have failed:

#864629:

--- FAIL: test/TestImportPgDump/read_data_only (0.000s)
Test ended in panic.

------- Stdout: -------
I180827 20:41:54.053559 52667 storage/replica_command.go:298  [n1,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53/1/106 [r24]
I180827 20:41:54.062208 52385 storage/replica_range_lease.go:554  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] transferring lease to s2
I180827 20:41:54.063407 52385 storage/replica_range_lease.go:617  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] done transferring lease to s2: <nil>
I180827 20:41:54.063498 51617 storage/replica_proposal.go:210  [n2,s2,r23/3:/Table/5{2-3/1/106}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.062230012,0 epo=1 pro=1535402514.062232488,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.077824 52355 storage/replica_command.go:298  [n1,s1,r24/1:/{Table/53/1/1…-Max}] initiating a split of this range at key /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,
\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd [r25]
I180827 20:41:54.090278 52643 storage/replica_range_lease.go:554  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] transferring lease to s3
I180827 20:41:54.091255 52643 storage/replica_range_lease.go:617  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] done transferring lease to s3: <nil>
I180827 20:41:54.092073 51863 storage/replica_proposal.go:210  [n3,s3,r24/2:/Table/53/{1/106-2/"\x15…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.090298269,0 epo=1 pro=1535402514.090300825,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.094854 52702 storage/replica_command.go:298  [n1,s1,r25/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n
✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd [r26]
I180827 20:41:54.112580 52726 storage/replica_range_lease.go:554  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] transferring lease to s3
I180827 20:41:54.113319 52726 storage/replica_range_lease.go:617  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] done transferring lease to s3: <nil>
I180827 20:41:54.113657 51850 storage/replica_proposal.go:210  [n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.112607829,0 epo=1 pro=1535402514.112610518,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.117098 52757 storage/replica_command.go:298  [n1,s1,r26/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105 [r27]
I180827 20:41:54.137397 52716 storage/store_snapshot.go:615  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] sending Raft snapshot 547ab8d0 at applied index 21
I180827 20:41:54.140430 52716 storage/store_snapshot.go:657  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] streamed snapshot to (n2,s2):3: kv pairs: 14, log entries: 2, rate-limit: 8.0 MiB/sec, 22ms
I180827 20:41:54.140860 52705 storage/replica_raftstorage.go:784  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applying Raft snapshot at index 21 (id=547ab8d0, encoded size=31270, 1 rocksdb batches, 2 log entries)
I180827 20:41:54.162696 52705 storage/replica_raftstorage.go:790  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applied Raft snapshot in 22ms [clear=0ms batch=0ms entries=21ms commit=0ms]
I180827 20:41:54.166103 52791 storage/replica_range_lease.go:554  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] transferring lease to s3
I180827 20:41:54.167118 51903 storage/replica_proposal.go:210  [n3,s3,r26/2:/Table/53/{2/"\xc0…-3/";π,…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.166156675,0 epo=1 pro=1535402514.166159831,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.167216 52791 storage/replica_range_lease.go:617  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] done transferring lease to s3: <nil>
I180827 20:41:54.172711 52589 storage/replica_command.go:298  [n1,s1,r27/1:/{Table/53/3/"…-Max}] initiating a split of this range at key /Table/54 [r28]
I180827 20:41:54.182740 52807 storage/replica_range_lease.go:554  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] transferring lease to s2
I180827 20:41:54.183947 52807 storage/replica_range_lease.go:617  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] done transferring lease to s2: <nil>
I180827 20:41:54.184954 51646 storage/replica_proposal.go:210  [n2,s2,r27/3:/Table/5{3/3/";π…-4}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.182761052,0 epo=1 pro=1535402514.182764040,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
--- FAIL: test/TestImportPgDump (0.000s)
Test ended in panic.

------- Stdout: -------
W180827 20:41:52.746991 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:52.757923 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:52.758132 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.758156 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.761168 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:52.761191 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:52.761204 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180827 20:41:52.767725 50862 server/node.go:373  [n?] **** cluster d5e53e69-a109-4eb6-91bf-29e74ae744ba has been created
I180827 20:41:52.767752 50862 server/server.go:1401  [n?] **** add additional nodes by specifying --join=127.0.0.1:41477
I180827 20:41:52.767936 50862 gossip/gossip.go:382  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41477" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402512767856449 
I180827 20:41:52.770338 50862 storage/store.go:1541  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180827 20:41:52.770546 50862 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180827 20:41:52.770626 50862 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180827 20:41:52.770721 50862 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180827 20:41:52.770760 50862 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:52.770788 50862 server/node.go:546  [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I180827 20:41:52.771023 50862 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.771066 50862 server/server.go:1807  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:52.771159 50862 server/server.go:1538  [n1] starting https server at 127.0.0.1:42563 (use: 127.0.0.1:42563)
I180827 20:41:52.771188 50862 server/server.go:1540  [n1] starting grpc/postgres server at 127.0.0.1:41477
I180827 20:41:52.771209 50862 server/server.go:1541  [n1] advertising CockroachDB node at 127.0.0.1:41477
I180827 20:41:52.775258 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.776337 50925 storage/replica_command.go:298  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180827 20:41:52.788832 51094 storage/replica_command.go:298  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180827 20:41:52.790188 51128 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=ec083bbe key=/Table/SystemConfigSpan/Start rw=true pri=0.01126188 iso=SERIALIZABLE stat=PENDING epo=0 ts=1535402512.772758792,0 orig=1535402512.772758792,0 max=1535402512.772758792,0 wto=false rop=false seq=6
I180827 20:41:52.790695 51118 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180827 20:41:52.795125 51100 storage/replica_command.go:298  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180827 20:41:52.800783 51143 storage/replica_command.go:298  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180827 20:41:52.807906 51165 storage/replica_command.go:298  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180827 20:41:52.811784 51141 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.0-12 User:root}
I180827 20:41:52.818164 50799 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180827 20:41:52.821094 51188 storage/replica_command.go:298  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180827 20:41:52.830709 51176 storage/replica_command.go:298  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180827 20:41:52.839374 51187 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:045a1c98-219f-445b-bd6b-d481f04d6b0d User:root}
I180827 20:41:52.849534 51154 storage/replica_command.go:298  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180827 20:41:52.855898 51218 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180827 20:41:52.861462 51240 storage/replica_command.go:298  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180827 20:41:52.868342 51268 storage/replica_command.go:298  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180827 20:41:52.872706 51256 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180827 20:41:52.874819 51264 storage/replica_command.go:298  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180827 20:41:52.876403 50862 server/server.go:1594  [n1] done ensuring all necessary migrations have run
I180827 20:41:52.876433 50862 server/server.go:1597  [n1] serving sql connections
I180827 20:41:52.879108 51233 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180827 20:41:52.879639 51235 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:41477} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402512767856449 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402512767856449 LastUp:1535402512767856449}
I180827 20:41:52.880318 51302 storage/replica_command.go:298  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180827 20:41:52.927701 50819 storage/replica_command.go:298  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180827 20:41:52.940165 51323 storage/replica_command.go:298  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180827 20:41:52.948539 51355 storage/replica_command.go:298  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180827 20:41:52.953658 51380 storage/replica_command.go:298  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180827 20:41:52.961237 51137 storage/replica_command.go:298  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180827 20:41:52.966548 50832 storage/replica_command.go:298  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180827 20:41:52.977113 51362 storage/replica_command.go:298  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180827 20:41:53.041315 51440 storage/replica_command.go:298  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180827 20:41:53.047478 51414 storage/replica_command.go:298  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180827 20:41:53.081214 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.089127 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.089322 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.089338 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.102793 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.102863 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.102878 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.102953 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.103001 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.115344 51458 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.125579 51530 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:36113}
I180827 20:41:53.127987 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.128034 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.128397 51575 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.134920 51574 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.135628 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.136461 50862 server/node.go:428  [n?] new node allocated ID 2
I180827 20:41:53.136541 50862 gossip/gossip.go:382  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36113" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513136479434 
I180827 20:41:53.136591 50862 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180827 20:41:53.136624 50862 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180827 20:41:53.137485 51552 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180827 20:41:53.139442 50862 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180827 20:41:53.139577 50862 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180827 20:41:53.140140 50862 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.140166 50862 server/server.go:1807  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.140233 50862 server/server.go:1538  [n2] starting https server at 127.0.0.1:39947 (use: 127.0.0.1:39947)
I180827 20:41:53.140246 50862 server/server.go:1540  [n2] starting grpc/postgres server at 127.0.0.1:36113
I180827 20:41:53.140256 50862 server/server.go:1541  [n2] advertising CockroachDB node at 127.0.0.1:36113
I180827 20:41:53.140624 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.153945 50862 server/server.go:1594  [n2] done ensuring all necessary migrations have run
I180827 20:41:53.153974 50862 server/server.go:1597  [n2] serving sql connections
W180827 20:41:53.165268 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.185802 51467 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180827 20:41:53.186848 51469 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:36113} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513136479434 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513136479434 LastUp:1535402513136479434}
I180827 20:41:53.189622 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.189776 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.189808 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.207782 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.207807 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.207815 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.207911 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.207947 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.211471 51475 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.223653 51740 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.223954 51816 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46463}
I180827 20:41:53.224401 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.224432 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.224690 51837 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.225445 51836 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226030 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226699 50862 server/node.go:428  [n?] new node allocated ID 3
I180827 20:41:53.226763 50862 gossip/gossip.go:382  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46463" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513226706701 
I180827 20:41:53.226805 50862 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180827 20:41:53.226851 50862 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180827 20:41:53.227563 51809 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180827 20:41:53.227869 51810 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180827 20:41:53.228504 50862 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180827 20:41:53.229044 50862 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180827 20:41:53.229696 50862 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.229749 50862 server/server.go:1807  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.235251 50862 server/server.go:1538  [n3] starting https server at 127.0.0.1:43307 (use: 127.0.0.1:43307)
I180827 20:41:53.235271 50862 server/server.go:1540  [n3] starting grpc/postgres server at 127.0.0.1:46463
I180827 20:41:53.235283 50862 server/server.go:1541  [n3] advertising CockroachDB node at 127.0.0.1:46463
I180827 20:41:53.240284 50862 server/server.go:1594  [n3] done ensuring all necessary migrations have run
I180827 20:41:53.240307 50862 server/server.go:1597  [n3] serving sql connections
I180827 20:41:53.243124 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.248117 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot 59e1afc9 at applied index 16
I180827 20:41:53.249136 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.251012 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 6, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.251369 51983 storage/replica_raftstorage.go:784  [n2,s2,r20/?:{-}] applying preemptive snapshot at index 16 (id=59e1afc9, encoded size=2241, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.254056 51839 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180827 20:41:53.255122 51841 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:46463} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513226706701 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513226706701 LastUp:1535402513226706701}
I180827 20:41:53.256061 51983 storage/replica_raftstorage.go:790  [n2,s2,r20/?:/Table/{23-50}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.256605 50930 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.259565 50930 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.261627 51625 rpc/nodedialer/nodedialer.go:92  [n2] connection to n1 established
I180827 20:41:53.264544 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.286630 50930 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r21/1:/Table/5{0-1}] connection to n3 established
I180827 20:41:53.287245 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.287799 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r21/1:/Table/5{0-1}] sending preemptive snapshot de08568a at applied index 18
I180827 20:41:53.288157 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.288623 51959 storage/replica_raftstorage.go:784  [n3,s3,r21/?:{-}] applying preemptive snapshot at index 18 (id=de08568a, encoded size=2646, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.289814 51959 storage/replica_raftstorage.go:790  [n3,s3,r21/?:/Table/5{0-1}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.290329 50930 storage/replica_command.go:812  [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.293678 50930 storage/replica.go:3743  [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.294953 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r22/1:/{Table/51-Max}] sending preemptive snapshot a84e7278 at applied index 12
I180827 20:41:53.295229 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r22/1:/{Table/51-Max}] streamed snapshot to (n3,s3):?: kv pairs: 7, log entries: 2, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.295441 51883 rpc/nodedialer/nodedialer.go:92  [n3] connection to n1 established
I180827 20:41:53.295585 51953 storage/replica_raftstorage.go:784  [n3,s3,r22/?:{-}] applying preemptive snapshot at index 12 (id=a84e7278, encoded size=386, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.295717 51953 storage/replica_raftstorage.go:790  [n3,s3,r22/?:/{Table/51-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.295955 50930 storage/replica_command.go:812  [replicate,n1,s1,r22/1:/{Table/51-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, next=2, gen=0]
I180827 20:41:53.298097 50930 storage/replica.go:3743  [n1,s1,r22/1:/{Table/51-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.301122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r8/1:/Table/1{1-2}] sending preemptive snapshot 201bdccc at applied index 18
I180827 20:41:53.301565 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.306578 52088 storage/replica_raftstorage.go:784  [n3,s3,r8/?:{-}] applying preemptive snapshot at index 18 (id=201bdccc, encoded size=4352, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.306868 52088 storage/replica_raftstorage.go:790  [n3,s3,r8/?:/Table/1{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.307601 50930 storage/replica_command.go:812  [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.311873 50930 storage/replica.go:3743  [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.314134 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r17/1:/Table/2{0-1}] sending preemptive snapshot 53116eb2 at applied index 16
I180827 20:41:53.314317 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r17/1:/Table/2{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.314683 52103 storage/replica_raftstorage.go:784  [n3,s3,r17/?:{-}] applying preemptive snapshot at index 16 (id=53116eb2, encoded size=2105, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.314887 52103 storage/replica_raftstorage.go:790  [n3,s3,r17/?:/Table/2{0-1}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.315401 50930 storage/replica_command.go:812  [replicate,n1,s1,r17/1:/Table/2{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r17:/Table/2{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.318398 50930 storage/replica.go:3743  [n1,s1,r17/1:/Table/2{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.319436 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r16/1:/Table/{19-20}] sending preemptive snapshot e0be8540 at applied index 16
I180827 20:41:53.319691 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.320127 52072 storage/replica_raftstorage.go:784  [n2,s2,r16/?:{-}] applying preemptive snapshot at index 16 (id=e0be8540, encoded size=2109, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.320339 52072 storage/replica_raftstorage.go:790  [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.320816 50930 storage/replica_command.go:812  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.323849 50930 storage/replica.go:3743  [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.326208 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r15/1:/Table/1{8-9}] sending preemptive snapshot d259ae5c at applied index 16
I180827 20:41:53.326404 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.326731 52116 storage/replica_raftstorage.go:784  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 16 (id=d259ae5c, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.326923 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.326953 52116 storage/replica_raftstorage.go:790  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.334514 50930 storage/replica_command.go:812  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.337656 50930 storage/replica.go:3743  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.338767 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r14/1:/Table/1{7-8}] sending preemptive snapshot 9d0058d5 at applied index 16
I180827 20:41:53.339034 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.339612 52090 storage/replica_raftstorage.go:784  [n2,s2,r14/?:{-}] applying preemptive snapshot at index 16 (id=9d0058d5, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.339831 52090 storage/replica_raftstorage.go:790  [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.340173 50930 storage/replica_command.go:812  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.343121 50930 storage/replica.go:3743  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.345432 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r9/1:/Table/1{2-3}] sending preemptive snapshot 0eea2d20 at applied index 26
I180827 20:41:53.345859 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 53, log entries: 16, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.347137 52066 storage/replica_raftstorage.go:784  [n2,s2,r9/?:{-}] applying preemptive snapshot at index 26 (id=0eea2d20, encoded size=15139, 1 rocksdb batches, 16 log entries)
I180827 20:41:53.347467 52066 storage/replica_raftstorage.go:790  [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.348208 50930 storage/replica_command.go:812  [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.352166 50930 storage/replica.go:3743  [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.353188 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] sending preemptive snapshot 0cdee511 at applied index 39
I180827 20:41:53.353765 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 36, log entries: 29, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.354286 51723 storage/replica_raftstorage.go:784  [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=0cdee511, encoded size=98384, 1 rocksdb batches, 29 log entries)
I180827 20:41:53.354994 51723 storage/replica_raftstorage.go:790  [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.355529 50930 storage/replica_command.go:812  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.358523 50930 storage/replica.go:3743  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.360250 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] sending preemptive snapshot 965d58b1 at applied index 19
I180827 20:41:53.360436 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.360789 52150 storage/replica_raftstorage.go:784  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 19 (id=965d58b1, encoded size=4003, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.361043 52150 storage/replica_raftstorage.go:790  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.361522 50930 storage/replica_command.go:812  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.364392 50930 storage/replica.go:3743  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.366422 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r12/1:/Table/1{5-6}] sending preemptive snapshot 811af376 at applied index 16
I180827 20:41:53.366638 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.367089 52137 storage/replica_raftstorage.go:784  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 16 (id=811af376, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.367359 52137 storage/replica_raftstorage.go:790  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.368127 50930 storage/replica_command.go:812  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.371691 50930 storage/replica.go:3743  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.374563 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r19/1:/Table/2{2-3}] sending preemptive snapshot 9cd02555 at applied index 16
I180827 20:41:53.374760 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.375252 52080 storage/replica_raftstorage.go:784  [n3,s3,r19/?:{-}] applying preemptive snapshot at index 16 (id=9cd02555, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.375582 52080 storage/replica_raftstorage.go:790  [n3,s3,r19/?:/Table/2{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.375950 50930 storage/replica_command.go:812  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.381819 50930 storage/replica.go:3743  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.386461 52091 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180827 20:41:53.386637 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r10/1:/Table/1{3-4}] sending preemptive snapshot a16f4b15 at applied index 64
I180827 20:41:53.388005 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 204, log entries: 54, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.388536 52181 storage/replica_raftstorage.go:784  [n3,s3,r10/?:{-}] applying preemptive snapshot at index 64 (id=a16f4b15, encoded size=62836, 1 rocksdb batches, 54 log entries)
I180827 20:41:53.389154 52181 storage/replica_raftstorage.go:790  [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.389513 50930 storage/replica_command.go:812  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.392649 50930 storage/replica.go:3743  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.394122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] sending preemptive snapshot 69adabc1 at applied index 23
I180827 20:41:53.394365 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 13, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.394729 52213 storage/replica_raftstorage.go:784  [n2,s2,r2/?:{-}] applying preemptive snapshot at index 23 (id=69adabc1, encoded size=6277, 1 rocksdb batches, 13 log entries)
I180827 20:41:53.394981 52213 storage/replica_raftstorage.go:790  [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.395465 50930 storage/replica_command.go:812  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.398757 50930 storage/replica.go:3743  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.399709 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r18/1:/Table/2{1-2}] sending preemptive snapshot e9df2a4a at applied index 16
I180827 20:41:53.400036 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r18/1:/Table/2{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.400391 52185 storage/replica_raftstorage.go:784  [n3,s3,r18/?:{-}] applying preemptive snapshot at index 16 (id=e9df2a4a, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.400594 52185 storage/replica_raftstorage.go:790  [n3,s3,r18/?:/Table/2{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.400882 50930 storage/replica_command.go:812  [replicate,n1,s1,r18/1:/Table/2{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r18:/Table/2{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.407636 50930 storage/replica.go:3743  [n1,s1,r18/1:/Table/2{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.408861 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r13/1:/Table/1{6-7}] sending preemptive snapshot 6f914d55 at applied index 16
I180827 20:41:53.409071 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.409426 52218 storage/replica_raftstorage.go:784  [n2,s2,r13/?:{-}] applying preemptive snapshot at index 16 (id=6f914d55, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.409616 52218 storage/replica_raftstorage.go:790  [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.409970 50930 storage/replica_command.go:812  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.411262 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.412831 50930 storage/replica.go:3743  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.414081 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot cca961c1 at applied index 16
I180827 20:41:53.414277 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.414576 52199 storage/replica_raftstorage.go:784  [n3,s3,r11/?:{-}] applying preemptive snapshot at index 16 (id=cca961c1, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.414816 52199 storage/replica_raftstorage.go:790  [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.415293 50930 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.418111 50930 storage/replica.go:3743  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.419054 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 3c3a015f at applied index 27
I180827 20:41:53.423022 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1391, log entries: 2, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.423893 52201 storage/replica_raftstorage.go:784  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 27 (id=3c3a015f, encoded size=194658, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.429501 52201 storage/replica_raftstorage.go:790  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=2ms commit=4ms]
I180827 20:41:53.433500 50930 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.437580 50930 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.440575 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot cbd412df at applied index 21
I180827 20:41:53.440794 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 11, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.441181 52260 storage/replica_raftstorage.go:784  [n3,s3,r6/?:{-}] applying preemptive snapshot at index 21 (id=cbd412df, encoded size=4339, 1 rocksdb batches, 11 log entries)
I180827 20:41:53.441400 52260 storage/replica_raftstorage.go:790  [n3,s3,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.441676 50930 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.448564 52224 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.461587 50930 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.463345 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot 114f4385 at applied index 29
I180827 20:41:53.464896 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 59, log entries: 19, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.465343 52280 storage/replica_raftstorage.go:784  [n2,s2,r7/?:{-}] applying preemptive snapshot at index 29 (id=114f4385, encoded size=16646, 1 rocksdb batches, 19 log entries)
I180827 20:41:53.465821 52280 storage/replica_raftstorage.go:790  [n2,s2,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.466988 50930 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.472743 50930 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.474632 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 0a244018 at applied index 114
I180827 20:41:53.475250 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 73, log entries: 90, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.475827 52267 storage/replica_raftstorage.go:784  [n2,s2,r1/?:{-}] applying preemptive snapshot at index 114 (id=0a244018, encoded size=40271, 1 rocksdb batches, 90 log entries)
I180827 20:41:53.476525 52267 storage/replica_raftstorage.go:790  [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.476869 50930 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.482912 50930 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.483281 50930 storage/queue.go:873  [n1,replicate] purgatory is now empty
I180827 20:41:53.485684 52286 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot f1426c69 at applied index 19
I180827 20:41:53.487316 52286 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.487681 52252 storage/replica_raftstorage.go:784  [n3,s3,r20/?:{-}] applying preemptive snapshot at index 19 (id=f1426c69, encoded size=3273, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.487932 52252 storage/replica_raftstorage.go:790  [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.488311 52286 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.503580 52286 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.505707 52235 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 99036b07 at applied index 119
I180827 20:41:53.506514 52235 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 78, log entries: 95, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.507282 52188 storage/replica_raftstorage.go:784  [n3,s3,r1/?:{-}] applying preemptive snapshot at index 119 (id=99036b07, encoded size=42101, 1 rocksdb batches, 95 log entries)
I180827 20:41:53.508109 52188 storage/replica_raftstorage.go:790  [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.508641 52235 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.512524 52235 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.513999 52209 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot bb53109c at applied index 32
I180827 20:41:53.514379 52209 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 60, log entries: 22, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.514821 52292 storage/replica_raftstorage.go:784  [n3,s3,r7/?:{-}] applying preemptive snapshot at index 32 (id=bb53109c, encoded size=17687, 1 rocksdb batches, 22 log entries)
I180827 20:41:53.515905 52292 storage/replica_raftstorage.go:790  [n3,s3,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 1ms [clear=1ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.516367 52209 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.520158 52209 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.521958 52312 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot 2ca43612 at applied index 24
I180827 20:41:53.522776 52312 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 14, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.523128 52239 storage/replica_raftstorage.go:784  [n2,s2,r6/?:{-}] applying preemptive snapshot at index 24 (id=2ca43612, encoded size=5410, 1 rocksdb batches, 14 log entries)
I180827 20:41:53.523377 52239 storage/replica_raftstorage.go:790  [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.523701 52312 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.525176 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 19 underreplicated ranges
I180827 20:41:53.527482 52312 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.528875 52327 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 731be2ae at applied index 30
I180827 20:41:53.532860 52327 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 1392, log entries: 5, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.533361 52316 storage/replica_raftstorage.go:784  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 30 (id=731be2ae, encoded size=195741, 1 rocksdb batches, 5 log entries)
I180827 20:41:53.535834 52316 storage/replica_raftstorage.go:790  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=2ms]
I180827 20:41:53.536253 52327 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.540576 52327 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.545804 52341 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot 7497a95f at applied index 19
I180827 20:41:53.546108 52341 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.546590 52275 storage/replica_raftstorage.go:784  [n2,s2,r11/?:{-}] applying preemptive snapshot at index 19 (id=7497a95f, encoded size=3304, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.546960 52275 storage/replica_raftstorage.go:790  [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.547386 52341 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (

Please assign, take a look and update the issue accordingly.

Failed tests ():

The following test appears to have failed:

#:

W1215 01:18:50.477119 959 multiraft/multiraft.go:1233  aborting configuration change: key range /Local/Range/RangeDescriptor/""-/Min outside of bounds of range /Min-/Min
W1215 01:18:50.478804 959 multiraft/multiraft.go:1139  failed to look up replica ID for range 1 (disabling replica ID check): storage/store.go:1695: store 3 not found as replica of range 1
I1215 01:18:53.557582 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:18:53.557889 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:18:53.558132 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- FAIL: TestRaftRemoveRace (3.53s)
    <autogenerated>:32: storage/client_test.go:521: condition failed to evaluate within 3s: storage/client_test.go:517: range not found on store 2
=== RUN   TestStoreRangeRemoveDead
E1215 01:18:53.562437 959 gossip/gossip.go:181  different node IDs were set for the same gossip instance (2147483647, 1)
I1215 01:18:53.563875 959 multiraft/multiraft.go:579  node 1 starting
I1215 01:18:53.564727 959 storage/replica.go:1308  gossiping cluster id  from store 1, range 1
I1215 01:18:53.565694 959 raft/raft.go:446  [group 1] 1 became follower at term 5
I1215 01:18:53.565898 959 raft/raft.go:234  [group 1] newRaft 1 [peers: [1], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I1215 01:18:53.566074 959 multiraft/multiraft.go:999  node 1 campaigning because initial confstate is [1]
I1215 01:18:53.566196 959 raft/raft.go:526  [group 1] 1 is starting a new election at term 5
I1215 01:18:53.566292 959 raft/raft.go:459  [group 1] 1 became candidate at term 6
--
I1215 01:19:05.063638 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:19:05.063778 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestLeaderAfterSplit (0.44s)
=== RUN   Example_rebalancing
--- PASS: Example_rebalancing (0.62s)
FAIL
FAIL    github.com/cockroachdb/cockroach/storage    32.371s
=== RUN   TestBatchBasics
I1215 01:18:45.307778 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchBasics (0.01s)
=== RUN   TestBatchGet
I1215 01:18:45.312076 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchGet (0.01s)
=== RUN   TestBatchMerge
I1215 01:18:45.320845 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchMerge (0.01s)
=== RUN   TestBatchProto

Please assign, take a look and update the issue accordingly.

teamcity: failed tests on release-banana: lint/TestLint, test/TestImportPgDump

The following tests appear to have failed:

#864629:

--- FAIL: test/TestImportPgDump/read_data_only (0.000s)
Test ended in panic.

------- Stdout: -------
I180827 20:41:54.053559 52667 storage/replica_command.go:298  [n1,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53/1/106 [r24]
I180827 20:41:54.062208 52385 storage/replica_range_lease.go:554  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] transferring lease to s2
I180827 20:41:54.063407 52385 storage/replica_range_lease.go:617  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] done transferring lease to s2: <nil>
I180827 20:41:54.063498 51617 storage/replica_proposal.go:210  [n2,s2,r23/3:/Table/5{2-3/1/106}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.062230012,0 epo=1 pro=1535402514.062232488,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.077824 52355 storage/replica_command.go:298  [n1,s1,r24/1:/{Table/53/1/1…-Max}] initiating a split of this range at key /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,
\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd [r25]
I180827 20:41:54.090278 52643 storage/replica_range_lease.go:554  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] transferring lease to s3
I180827 20:41:54.091255 52643 storage/replica_range_lease.go:617  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] done transferring lease to s3: <nil>
I180827 20:41:54.092073 51863 storage/replica_proposal.go:210  [n3,s3,r24/2:/Table/53/{1/106-2/"\x15…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.090298269,0 epo=1 pro=1535402514.090300825,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.094854 52702 storage/replica_command.go:298  [n1,s1,r25/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n
✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd [r26]
I180827 20:41:54.112580 52726 storage/replica_range_lease.go:554  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] transferring lease to s3
I180827 20:41:54.113319 52726 storage/replica_range_lease.go:617  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] done transferring lease to s3: <nil>
I180827 20:41:54.113657 51850 storage/replica_proposal.go:210  [n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.112607829,0 epo=1 pro=1535402514.112610518,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.117098 52757 storage/replica_command.go:298  [n1,s1,r26/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105 [r27]
I180827 20:41:54.137397 52716 storage/store_snapshot.go:615  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] sending Raft snapshot 547ab8d0 at applied index 21
I180827 20:41:54.140430 52716 storage/store_snapshot.go:657  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] streamed snapshot to (n2,s2):3: kv pairs: 14, log entries: 2, rate-limit: 8.0 MiB/sec, 22ms
I180827 20:41:54.140860 52705 storage/replica_raftstorage.go:784  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applying Raft snapshot at index 21 (id=547ab8d0, encoded size=31270, 1 rocksdb batches, 2 log entries)
I180827 20:41:54.162696 52705 storage/replica_raftstorage.go:790  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applied Raft snapshot in 22ms [clear=0ms batch=0ms entries=21ms commit=0ms]
I180827 20:41:54.166103 52791 storage/replica_range_lease.go:554  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] transferring lease to s3
I180827 20:41:54.167118 51903 storage/replica_proposal.go:210  [n3,s3,r26/2:/Table/53/{2/"\xc0…-3/";π,…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.166156675,0 epo=1 pro=1535402514.166159831,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.167216 52791 storage/replica_range_lease.go:617  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] done transferring lease to s3: <nil>
I180827 20:41:54.172711 52589 storage/replica_command.go:298  [n1,s1,r27/1:/{Table/53/3/"…-Max}] initiating a split of this range at key /Table/54 [r28]
I180827 20:41:54.182740 52807 storage/replica_range_lease.go:554  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] transferring lease to s2
I180827 20:41:54.183947 52807 storage/replica_range_lease.go:617  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] done transferring lease to s2: <nil>
I180827 20:41:54.184954 51646 storage/replica_proposal.go:210  [n2,s2,r27/3:/Table/5{3/3/";π…-4}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.182761052,0 epo=1 pro=1535402514.182764040,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
--- FAIL: test/TestImportPgDump (0.000s)
Test ended in panic.

------- Stdout: -------
W180827 20:41:52.746991 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:52.757923 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:52.758132 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.758156 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.761168 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:52.761191 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:52.761204 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180827 20:41:52.767725 50862 server/node.go:373  [n?] **** cluster d5e53e69-a109-4eb6-91bf-29e74ae744ba has been created
I180827 20:41:52.767752 50862 server/server.go:1401  [n?] **** add additional nodes by specifying --join=127.0.0.1:41477
I180827 20:41:52.767936 50862 gossip/gossip.go:382  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41477" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402512767856449 
I180827 20:41:52.770338 50862 storage/store.go:1541  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180827 20:41:52.770546 50862 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180827 20:41:52.770626 50862 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180827 20:41:52.770721 50862 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180827 20:41:52.770760 50862 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:52.770788 50862 server/node.go:546  [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I180827 20:41:52.771023 50862 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.771066 50862 server/server.go:1807  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:52.771159 50862 server/server.go:1538  [n1] starting https server at 127.0.0.1:42563 (use: 127.0.0.1:42563)
I180827 20:41:52.771188 50862 server/server.go:1540  [n1] starting grpc/postgres server at 127.0.0.1:41477
I180827 20:41:52.771209 50862 server/server.go:1541  [n1] advertising CockroachDB node at 127.0.0.1:41477
I180827 20:41:52.775258 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.776337 50925 storage/replica_command.go:298  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180827 20:41:52.788832 51094 storage/replica_command.go:298  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180827 20:41:52.790188 51128 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=ec083bbe key=/Table/SystemConfigSpan/Start rw=true pri=0.01126188 iso=SERIALIZABLE stat=PENDING epo=0 ts=1535402512.772758792,0 orig=1535402512.772758792,0 max=1535402512.772758792,0 wto=false rop=false seq=6
I180827 20:41:52.790695 51118 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180827 20:41:52.795125 51100 storage/replica_command.go:298  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180827 20:41:52.800783 51143 storage/replica_command.go:298  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180827 20:41:52.807906 51165 storage/replica_command.go:298  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180827 20:41:52.811784 51141 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.0-12 User:root}
I180827 20:41:52.818164 50799 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180827 20:41:52.821094 51188 storage/replica_command.go:298  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180827 20:41:52.830709 51176 storage/replica_command.go:298  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180827 20:41:52.839374 51187 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:045a1c98-219f-445b-bd6b-d481f04d6b0d User:root}
I180827 20:41:52.849534 51154 storage/replica_command.go:298  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180827 20:41:52.855898 51218 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180827 20:41:52.861462 51240 storage/replica_command.go:298  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180827 20:41:52.868342 51268 storage/replica_command.go:298  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180827 20:41:52.872706 51256 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180827 20:41:52.874819 51264 storage/replica_command.go:298  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180827 20:41:52.876403 50862 server/server.go:1594  [n1] done ensuring all necessary migrations have run
I180827 20:41:52.876433 50862 server/server.go:1597  [n1] serving sql connections
I180827 20:41:52.879108 51233 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180827 20:41:52.879639 51235 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:41477} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402512767856449 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402512767856449 LastUp:1535402512767856449}
I180827 20:41:52.880318 51302 storage/replica_command.go:298  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180827 20:41:52.927701 50819 storage/replica_command.go:298  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180827 20:41:52.940165 51323 storage/replica_command.go:298  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180827 20:41:52.948539 51355 storage/replica_command.go:298  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180827 20:41:52.953658 51380 storage/replica_command.go:298  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180827 20:41:52.961237 51137 storage/replica_command.go:298  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180827 20:41:52.966548 50832 storage/replica_command.go:298  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180827 20:41:52.977113 51362 storage/replica_command.go:298  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180827 20:41:53.041315 51440 storage/replica_command.go:298  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180827 20:41:53.047478 51414 storage/replica_command.go:298  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180827 20:41:53.081214 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.089127 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.089322 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.089338 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.102793 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.102863 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.102878 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.102953 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.103001 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.115344 51458 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.125579 51530 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:36113}
I180827 20:41:53.127987 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.128034 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.128397 51575 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.134920 51574 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.135628 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.136461 50862 server/node.go:428  [n?] new node allocated ID 2
I180827 20:41:53.136541 50862 gossip/gossip.go:382  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36113" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513136479434 
I180827 20:41:53.136591 50862 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180827 20:41:53.136624 50862 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180827 20:41:53.137485 51552 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180827 20:41:53.139442 50862 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180827 20:41:53.139577 50862 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180827 20:41:53.140140 50862 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.140166 50862 server/server.go:1807  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.140233 50862 server/server.go:1538  [n2] starting https server at 127.0.0.1:39947 (use: 127.0.0.1:39947)
I180827 20:41:53.140246 50862 server/server.go:1540  [n2] starting grpc/postgres server at 127.0.0.1:36113
I180827 20:41:53.140256 50862 server/server.go:1541  [n2] advertising CockroachDB node at 127.0.0.1:36113
I180827 20:41:53.140624 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.153945 50862 server/server.go:1594  [n2] done ensuring all necessary migrations have run
I180827 20:41:53.153974 50862 server/server.go:1597  [n2] serving sql connections
W180827 20:41:53.165268 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.185802 51467 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180827 20:41:53.186848 51469 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:36113} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513136479434 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513136479434 LastUp:1535402513136479434}
I180827 20:41:53.189622 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.189776 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.189808 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.207782 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.207807 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.207815 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.207911 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.207947 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.211471 51475 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.223653 51740 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.223954 51816 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46463}
I180827 20:41:53.224401 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.224432 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.224690 51837 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.225445 51836 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226030 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226699 50862 server/node.go:428  [n?] new node allocated ID 3
I180827 20:41:53.226763 50862 gossip/gossip.go:382  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46463" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513226706701 
I180827 20:41:53.226805 50862 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180827 20:41:53.226851 50862 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180827 20:41:53.227563 51809 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180827 20:41:53.227869 51810 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180827 20:41:53.228504 50862 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180827 20:41:53.229044 50862 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180827 20:41:53.229696 50862 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.229749 50862 server/server.go:1807  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.235251 50862 server/server.go:1538  [n3] starting https server at 127.0.0.1:43307 (use: 127.0.0.1:43307)
I180827 20:41:53.235271 50862 server/server.go:1540  [n3] starting grpc/postgres server at 127.0.0.1:46463
I180827 20:41:53.235283 50862 server/server.go:1541  [n3] advertising CockroachDB node at 127.0.0.1:46463
I180827 20:41:53.240284 50862 server/server.go:1594  [n3] done ensuring all necessary migrations have run
I180827 20:41:53.240307 50862 server/server.go:1597  [n3] serving sql connections
I180827 20:41:53.243124 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.248117 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot 59e1afc9 at applied index 16
I180827 20:41:53.249136 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.251012 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 6, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.251369 51983 storage/replica_raftstorage.go:784  [n2,s2,r20/?:{-}] applying preemptive snapshot at index 16 (id=59e1afc9, encoded size=2241, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.254056 51839 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180827 20:41:53.255122 51841 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:46463} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513226706701 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513226706701 LastUp:1535402513226706701}
I180827 20:41:53.256061 51983 storage/replica_raftstorage.go:790  [n2,s2,r20/?:/Table/{23-50}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.256605 50930 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.259565 50930 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.261627 51625 rpc/nodedialer/nodedialer.go:92  [n2] connection to n1 established
I180827 20:41:53.264544 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.286630 50930 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r21/1:/Table/5{0-1}] connection to n3 established
I180827 20:41:53.287245 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.287799 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r21/1:/Table/5{0-1}] sending preemptive snapshot de08568a at applied index 18
I180827 20:41:53.288157 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.288623 51959 storage/replica_raftstorage.go:784  [n3,s3,r21/?:{-}] applying preemptive snapshot at index 18 (id=de08568a, encoded size=2646, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.289814 51959 storage/replica_raftstorage.go:790  [n3,s3,r21/?:/Table/5{0-1}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.290329 50930 storage/replica_command.go:812  [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.293678 50930 storage/replica.go:3743  [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.294953 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r22/1:/{Table/51-Max}] sending preemptive snapshot a84e7278 at applied index 12
I180827 20:41:53.295229 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r22/1:/{Table/51-Max}] streamed snapshot to (n3,s3):?: kv pairs: 7, log entries: 2, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.295441 51883 rpc/nodedialer/nodedialer.go:92  [n3] connection to n1 established
I180827 20:41:53.295585 51953 storage/replica_raftstorage.go:784  [n3,s3,r22/?:{-}] applying preemptive snapshot at index 12 (id=a84e7278, encoded size=386, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.295717 51953 storage/replica_raftstorage.go:790  [n3,s3,r22/?:/{Table/51-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.295955 50930 storage/replica_command.go:812  [replicate,n1,s1,r22/1:/{Table/51-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, next=2, gen=0]
I180827 20:41:53.298097 50930 storage/replica.go:3743  [n1,s1,r22/1:/{Table/51-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.301122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r8/1:/Table/1{1-2}] sending preemptive snapshot 201bdccc at applied index 18
I180827 20:41:53.301565 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.306578 52088 storage/replica_raftstorage.go:784  [n3,s3,r8/?:{-}] applying preemptive snapshot at index 18 (id=201bdccc, encoded size=4352, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.306868 52088 storage/replica_raftstorage.go:790  [n3,s3,r8/?:/Table/1{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.307601 50930 storage/replica_command.go:812  [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.311873 50930 storage/replica.go:3743  [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.314134 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r17/1:/Table/2{0-1}] sending preemptive snapshot 53116eb2 at applied index 16
I180827 20:41:53.314317 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r17/1:/Table/2{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.314683 52103 storage/replica_raftstorage.go:784  [n3,s3,r17/?:{-}] applying preemptive snapshot at index 16 (id=53116eb2, encoded size=2105, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.314887 52103 storage/replica_raftstorage.go:790  [n3,s3,r17/?:/Table/2{0-1}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.315401 50930 storage/replica_command.go:812  [replicate,n1,s1,r17/1:/Table/2{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r17:/Table/2{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.318398 50930 storage/replica.go:3743  [n1,s1,r17/1:/Table/2{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.319436 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r16/1:/Table/{19-20}] sending preemptive snapshot e0be8540 at applied index 16
I180827 20:41:53.319691 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.320127 52072 storage/replica_raftstorage.go:784  [n2,s2,r16/?:{-}] applying preemptive snapshot at index 16 (id=e0be8540, encoded size=2109, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.320339 52072 storage/replica_raftstorage.go:790  [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.320816 50930 storage/replica_command.go:812  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.323849 50930 storage/replica.go:3743  [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.326208 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r15/1:/Table/1{8-9}] sending preemptive snapshot d259ae5c at applied index 16
I180827 20:41:53.326404 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.326731 52116 storage/replica_raftstorage.go:784  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 16 (id=d259ae5c, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.326923 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.326953 52116 storage/replica_raftstorage.go:790  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.334514 50930 storage/replica_command.go:812  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.337656 50930 storage/replica.go:3743  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.338767 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r14/1:/Table/1{7-8}] sending preemptive snapshot 9d0058d5 at applied index 16
I180827 20:41:53.339034 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.339612 52090 storage/replica_raftstorage.go:784  [n2,s2,r14/?:{-}] applying preemptive snapshot at index 16 (id=9d0058d5, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.339831 52090 storage/replica_raftstorage.go:790  [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.340173 50930 storage/replica_command.go:812  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.343121 50930 storage/replica.go:3743  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.345432 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r9/1:/Table/1{2-3}] sending preemptive snapshot 0eea2d20 at applied index 26
I180827 20:41:53.345859 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 53, log entries: 16, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.347137 52066 storage/replica_raftstorage.go:784  [n2,s2,r9/?:{-}] applying preemptive snapshot at index 26 (id=0eea2d20, encoded size=15139, 1 rocksdb batches, 16 log entries)
I180827 20:41:53.347467 52066 storage/replica_raftstorage.go:790  [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.348208 50930 storage/replica_command.go:812  [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.352166 50930 storage/replica.go:3743  [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.353188 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] sending preemptive snapshot 0cdee511 at applied index 39
I180827 20:41:53.353765 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 36, log entries: 29, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.354286 51723 storage/replica_raftstorage.go:784  [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=0cdee511, encoded size=98384, 1 rocksdb batches, 29 log entries)
I180827 20:41:53.354994 51723 storage/replica_raftstorage.go:790  [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.355529 50930 storage/replica_command.go:812  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.358523 50930 storage/replica.go:3743  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.360250 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] sending preemptive snapshot 965d58b1 at applied index 19
I180827 20:41:53.360436 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.360789 52150 storage/replica_raftstorage.go:784  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 19 (id=965d58b1, encoded size=4003, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.361043 52150 storage/replica_raftstorage.go:790  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.361522 50930 storage/replica_command.go:812  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.364392 50930 storage/replica.go:3743  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.366422 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r12/1:/Table/1{5-6}] sending preemptive snapshot 811af376 at applied index 16
I180827 20:41:53.366638 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.367089 52137 storage/replica_raftstorage.go:784  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 16 (id=811af376, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.367359 52137 storage/replica_raftstorage.go:790  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.368127 50930 storage/replica_command.go:812  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.371691 50930 storage/replica.go:3743  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.374563 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r19/1:/Table/2{2-3}] sending preemptive snapshot 9cd02555 at applied index 16
I180827 20:41:53.374760 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.375252 52080 storage/replica_raftstorage.go:784  [n3,s3,r19/?:{-}] applying preemptive snapshot at index 16 (id=9cd02555, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.375582 52080 storage/replica_raftstorage.go:790  [n3,s3,r19/?:/Table/2{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.375950 50930 storage/replica_command.go:812  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.381819 50930 storage/replica.go:3743  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.386461 52091 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180827 20:41:53.386637 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r10/1:/Table/1{3-4}] sending preemptive snapshot a16f4b15 at applied index 64
I180827 20:41:53.388005 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 204, log entries: 54, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.388536 52181 storage/replica_raftstorage.go:784  [n3,s3,r10/?:{-}] applying preemptive snapshot at index 64 (id=a16f4b15, encoded size=62836, 1 rocksdb batches, 54 log entries)
I180827 20:41:53.389154 52181 storage/replica_raftstorage.go:790  [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.389513 50930 storage/replica_command.go:812  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.392649 50930 storage/replica.go:3743  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.394122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] sending preemptive snapshot 69adabc1 at applied index 23
I180827 20:41:53.394365 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 13, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.394729 52213 storage/replica_raftstorage.go:784  [n2,s2,r2/?:{-}] applying preemptive snapshot at index 23 (id=69adabc1, encoded size=6277, 1 rocksdb batches, 13 log entries)
I180827 20:41:53.394981 52213 storage/replica_raftstorage.go:790  [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.395465 50930 storage/replica_command.go:812  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.398757 50930 storage/replica.go:3743  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.399709 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r18/1:/Table/2{1-2}] sending preemptive snapshot e9df2a4a at applied index 16
I180827 20:41:53.400036 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r18/1:/Table/2{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.400391 52185 storage/replica_raftstorage.go:784  [n3,s3,r18/?:{-}] applying preemptive snapshot at index 16 (id=e9df2a4a, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.400594 52185 storage/replica_raftstorage.go:790  [n3,s3,r18/?:/Table/2{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.400882 50930 storage/replica_command.go:812  [replicate,n1,s1,r18/1:/Table/2{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r18:/Table/2{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.407636 50930 storage/replica.go:3743  [n1,s1,r18/1:/Table/2{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.408861 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r13/1:/Table/1{6-7}] sending preemptive snapshot 6f914d55 at applied index 16
I180827 20:41:53.409071 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.409426 52218 storage/replica_raftstorage.go:784  [n2,s2,r13/?:{-}] applying preemptive snapshot at index 16 (id=6f914d55, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.409616 52218 storage/replica_raftstorage.go:790  [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.409970 50930 storage/replica_command.go:812  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.411262 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.412831 50930 storage/replica.go:3743  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.414081 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot cca961c1 at applied index 16
I180827 20:41:53.414277 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.414576 52199 storage/replica_raftstorage.go:784  [n3,s3,r11/?:{-}] applying preemptive snapshot at index 16 (id=cca961c1, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.414816 52199 storage/replica_raftstorage.go:790  [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.415293 50930 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.418111 50930 storage/replica.go:3743  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.419054 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 3c3a015f at applied index 27
I180827 20:41:53.423022 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1391, log entries: 2, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.423893 52201 storage/replica_raftstorage.go:784  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 27 (id=3c3a015f, encoded size=194658, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.429501 52201 storage/replica_raftstorage.go:790  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=2ms commit=4ms]
I180827 20:41:53.433500 50930 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.437580 50930 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.440575 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot cbd412df at applied index 21
I180827 20:41:53.440794 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 11, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.441181 52260 storage/replica_raftstorage.go:784  [n3,s3,r6/?:{-}] applying preemptive snapshot at index 21 (id=cbd412df, encoded size=4339, 1 rocksdb batches, 11 log entries)
I180827 20:41:53.441400 52260 storage/replica_raftstorage.go:790  [n3,s3,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.441676 50930 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.448564 52224 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.461587 50930 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.463345 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot 114f4385 at applied index 29
I180827 20:41:53.464896 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 59, log entries: 19, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.465343 52280 storage/replica_raftstorage.go:784  [n2,s2,r7/?:{-}] applying preemptive snapshot at index 29 (id=114f4385, encoded size=16646, 1 rocksdb batches, 19 log entries)
I180827 20:41:53.465821 52280 storage/replica_raftstorage.go:790  [n2,s2,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.466988 50930 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.472743 50930 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.474632 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 0a244018 at applied index 114
I180827 20:41:53.475250 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 73, log entries: 90, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.475827 52267 storage/replica_raftstorage.go:784  [n2,s2,r1/?:{-}] applying preemptive snapshot at index 114 (id=0a244018, encoded size=40271, 1 rocksdb batches, 90 log entries)
I180827 20:41:53.476525 52267 storage/replica_raftstorage.go:790  [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.476869 50930 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.482912 50930 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.483281 50930 storage/queue.go:873  [n1,replicate] purgatory is now empty
I180827 20:41:53.485684 52286 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot f1426c69 at applied index 19
I180827 20:41:53.487316 52286 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.487681 52252 storage/replica_raftstorage.go:784  [n3,s3,r20/?:{-}] applying preemptive snapshot at index 19 (id=f1426c69, encoded size=3273, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.487932 52252 storage/replica_raftstorage.go:790  [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.488311 52286 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.503580 52286 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.505707 52235 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 99036b07 at applied index 119
I180827 20:41:53.506514 52235 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 78, log entries: 95, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.507282 52188 storage/replica_raftstorage.go:784  [n3,s3,r1/?:{-}] applying preemptive snapshot at index 119 (id=99036b07, encoded size=42101, 1 rocksdb batches, 95 log entries)
I180827 20:41:53.508109 52188 storage/replica_raftstorage.go:790  [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.508641 52235 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.512524 52235 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.513999 52209 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot bb53109c at applied index 32
I180827 20:41:53.514379 52209 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 60, log entries: 22, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.514821 52292 storage/replica_raftstorage.go:784  [n3,s3,r7/?:{-}] applying preemptive snapshot at index 32 (id=bb53109c, encoded size=17687, 1 rocksdb batches, 22 log entries)
I180827 20:41:53.515905 52292 storage/replica_raftstorage.go:790  [n3,s3,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 1ms [clear=1ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.516367 52209 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.520158 52209 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.521958 52312 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot 2ca43612 at applied index 24
I180827 20:41:53.522776 52312 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 14, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.523128 52239 storage/replica_raftstorage.go:784  [n2,s2,r6/?:{-}] applying preemptive snapshot at index 24 (id=2ca43612, encoded size=5410, 1 rocksdb batches, 14 log entries)
I180827 20:41:53.523377 52239 storage/replica_raftstorage.go:790  [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.523701 52312 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.525176 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 19 underreplicated ranges
I180827 20:41:53.527482 52312 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.528875 52327 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 731be2ae at applied index 30
I180827 20:41:53.532860 52327 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 1392, log entries: 5, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.533361 52316 storage/replica_raftstorage.go:784  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 30 (id=731be2ae, encoded size=195741, 1 rocksdb batches, 5 log entries)
I180827 20:41:53.535834 52316 storage/replica_raftstorage.go:790  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=2ms]
I180827 20:41:53.536253 52327 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.540576 52327 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.545804 52341 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot 7497a95f at applied index 19
I180827 20:41:53.546108 52341 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.546590 52275 storage/replica_raftstorage.go:784  [n2,s2,r11/?:{-}] applying preemptive snapshot at index 19 (id=7497a95f, encoded size=3304, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.546960 52275 storage/replica_raftstorage.go:790  [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.547386 52341 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (

Please assign, take a look and update the issue accordingly.

test failure #416

The following test appears to have failed:

#416:

I0326 06:15:31.530911      72 multiraft.go:631] HardState updated: {Term:6 Vote:257 Commit:117 XXX_unrecognized:[]}
I0326 06:15:31.531041      72 multiraft.go:634] New Entry[0]: 6/117 EntryNormal 0000000000000000552005934e7aec50: raft_id:2 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:763 > cmd_id:<wall_time:0 random:0 > key:"sVJgQbWLmM" user:"root" rep
I0326 06:15:31.531195      72 multiraft.go:637] Committed Entry[0]: 6/117 EntryNormal 0000000000000000552005934e7aec50: raft_id:2 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:763 > cmd_id:<wall_time:0 random:0 > key:"sVJgQbWLmM" user:"root" rep
I0326 06:15:31.532355      72 queue.go:207] processing range range=1 (""-"IZNbVZhCts") from split queue...
I0326 06:15:31.534510      72 range.go:1858] initiating a split of range 1 "\"\""-"\"IZNbVZhCts\"" at key "\"FnxZTCgeqD\""
panic: test timed out after 30s

goroutine 704 [running]:
testing.func·008()
    /usr/src/go/src/testing/testing.go:681 +0x12f
created by time.goFunc
    /usr/src/go/src/time/sleep.go:129 +0x4b

goroutine 1 [chan receive]:
testing.RunTests(0xfdd788, 0x145a340, 0x2b, 0x2b, 0xc208029a01)
    /usr/src/go/src/testing/testing.go:556 +0xad6
--
    /go/src/github.com/cockroachdb/cockroach/storage/split_queue.go:117 +0xaa6
github.com/cockroachdb/cockroach/storage.(*baseQueue).processLoop(0xc208249320, 0xc2081833c0, 0xc208213f40)
    /go/src/github.com/cockroachdb/cockroach/storage/queue.go:208 +0x55a
created by github.com/cockroachdb/cockroach/storage.(*baseQueue).Start
    /go/src/github.com/cockroachdb/cockroach/storage/queue.go:132 +0x5d
FAIL    github.com/cockroachdb/cockroach/kv 30.016s
=== RUN TestHeartbeatSingleGroup
I0326 06:15:31.437012      82 multiraft.go:408] node 1 starting
I0326 06:15:31.437204      82 multiraft.go:408] node 2 starting
I0326 06:15:31.437453      82 raft.go:315] raft: 1 became follower at term 5
I0326 06:15:31.437546      82 raft.go:134] raft: newRaft 1 [peers: [1,2], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0326 06:15:31.437661      82 raft.go:315] raft: 2 became follower at term 5
I0326 06:15:31.437745      82 raft.go:134] raft: newRaft 2 [peers: [1,2], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0326 06:15:31.437810      82 raft.go:389] raft: 1 is starting a new election at term 5
I0326 06:15:31.437840      82 raft.go:328] raft: 1 became candidate at term 6
I0326 06:15:31.437898      82 raft.go:372] raft: 1 received vote from 1 at term 6

Please assign, take a look and update the issue accordingly.

teamcity: failed tests on release-banana: test/TestImportPgDump, lint/TestLint

The following tests appear to have failed:

#864629:

--- FAIL: test/TestImportPgDump/read_data_only (0.000s)
Test ended in panic.

------- Stdout: -------
I180827 20:41:54.053559 52667 storage/replica_command.go:298  [n1,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53/1/106 [r24]
I180827 20:41:54.062208 52385 storage/replica_range_lease.go:554  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] transferring lease to s2
I180827 20:41:54.063407 52385 storage/replica_range_lease.go:617  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] done transferring lease to s2: <nil>
I180827 20:41:54.063498 51617 storage/replica_proposal.go:210  [n2,s2,r23/3:/Table/5{2-3/1/106}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.062230012,0 epo=1 pro=1535402514.062232488,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.077824 52355 storage/replica_command.go:298  [n1,s1,r24/1:/{Table/53/1/1…-Max}] initiating a split of this range at key /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,
\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd [r25]
I180827 20:41:54.090278 52643 storage/replica_range_lease.go:554  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] transferring lease to s3
I180827 20:41:54.091255 52643 storage/replica_range_lease.go:617  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] done transferring lease to s3: <nil>
I180827 20:41:54.092073 51863 storage/replica_proposal.go:210  [n3,s3,r24/2:/Table/53/{1/106-2/"\x15…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.090298269,0 epo=1 pro=1535402514.090300825,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.094854 52702 storage/replica_command.go:298  [n1,s1,r25/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n
✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd [r26]
I180827 20:41:54.112580 52726 storage/replica_range_lease.go:554  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] transferring lease to s3
I180827 20:41:54.113319 52726 storage/replica_range_lease.go:617  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] done transferring lease to s3: <nil>
I180827 20:41:54.113657 51850 storage/replica_proposal.go:210  [n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.112607829,0 epo=1 pro=1535402514.112610518,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.117098 52757 storage/replica_command.go:298  [n1,s1,r26/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105 [r27]
I180827 20:41:54.137397 52716 storage/store_snapshot.go:615  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] sending Raft snapshot 547ab8d0 at applied index 21
I180827 20:41:54.140430 52716 storage/store_snapshot.go:657  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] streamed snapshot to (n2,s2):3: kv pairs: 14, log entries: 2, rate-limit: 8.0 MiB/sec, 22ms
I180827 20:41:54.140860 52705 storage/replica_raftstorage.go:784  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applying Raft snapshot at index 21 (id=547ab8d0, encoded size=31270, 1 rocksdb batches, 2 log entries)
I180827 20:41:54.162696 52705 storage/replica_raftstorage.go:790  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applied Raft snapshot in 22ms [clear=0ms batch=0ms entries=21ms commit=0ms]
I180827 20:41:54.166103 52791 storage/replica_range_lease.go:554  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] transferring lease to s3
I180827 20:41:54.167118 51903 storage/replica_proposal.go:210  [n3,s3,r26/2:/Table/53/{2/"\xc0…-3/";π,…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.166156675,0 epo=1 pro=1535402514.166159831,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.167216 52791 storage/replica_range_lease.go:617  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] done transferring lease to s3: <nil>
I180827 20:41:54.172711 52589 storage/replica_command.go:298  [n1,s1,r27/1:/{Table/53/3/"…-Max}] initiating a split of this range at key /Table/54 [r28]
I180827 20:41:54.182740 52807 storage/replica_range_lease.go:554  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] transferring lease to s2
I180827 20:41:54.183947 52807 storage/replica_range_lease.go:617  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] done transferring lease to s2: <nil>
I180827 20:41:54.184954 51646 storage/replica_proposal.go:210  [n2,s2,r27/3:/Table/5{3/3/";π…-4}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.182761052,0 epo=1 pro=1535402514.182764040,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
--- FAIL: test/TestImportPgDump (0.000s)
Test ended in panic.

------- Stdout: -------
W180827 20:41:52.746991 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:52.757923 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:52.758132 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.758156 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.761168 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:52.761191 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:52.761204 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180827 20:41:52.767725 50862 server/node.go:373  [n?] **** cluster d5e53e69-a109-4eb6-91bf-29e74ae744ba has been created
I180827 20:41:52.767752 50862 server/server.go:1401  [n?] **** add additional nodes by specifying --join=127.0.0.1:41477
I180827 20:41:52.767936 50862 gossip/gossip.go:382  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41477" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402512767856449 
I180827 20:41:52.770338 50862 storage/store.go:1541  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180827 20:41:52.770546 50862 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180827 20:41:52.770626 50862 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180827 20:41:52.770721 50862 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180827 20:41:52.770760 50862 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:52.770788 50862 server/node.go:546  [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I180827 20:41:52.771023 50862 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.771066 50862 server/server.go:1807  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:52.771159 50862 server/server.go:1538  [n1] starting https server at 127.0.0.1:42563 (use: 127.0.0.1:42563)
I180827 20:41:52.771188 50862 server/server.go:1540  [n1] starting grpc/postgres server at 127.0.0.1:41477
I180827 20:41:52.771209 50862 server/server.go:1541  [n1] advertising CockroachDB node at 127.0.0.1:41477
I180827 20:41:52.775258 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.776337 50925 storage/replica_command.go:298  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180827 20:41:52.788832 51094 storage/replica_command.go:298  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180827 20:41:52.790188 51128 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=ec083bbe key=/Table/SystemConfigSpan/Start rw=true pri=0.01126188 iso=SERIALIZABLE stat=PENDING epo=0 ts=1535402512.772758792,0 orig=1535402512.772758792,0 max=1535402512.772758792,0 wto=false rop=false seq=6
I180827 20:41:52.790695 51118 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180827 20:41:52.795125 51100 storage/replica_command.go:298  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180827 20:41:52.800783 51143 storage/replica_command.go:298  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180827 20:41:52.807906 51165 storage/replica_command.go:298  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180827 20:41:52.811784 51141 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.0-12 User:root}
I180827 20:41:52.818164 50799 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180827 20:41:52.821094 51188 storage/replica_command.go:298  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180827 20:41:52.830709 51176 storage/replica_command.go:298  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180827 20:41:52.839374 51187 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:045a1c98-219f-445b-bd6b-d481f04d6b0d User:root}
I180827 20:41:52.849534 51154 storage/replica_command.go:298  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180827 20:41:52.855898 51218 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180827 20:41:52.861462 51240 storage/replica_command.go:298  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180827 20:41:52.868342 51268 storage/replica_command.go:298  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180827 20:41:52.872706 51256 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180827 20:41:52.874819 51264 storage/replica_command.go:298  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180827 20:41:52.876403 50862 server/server.go:1594  [n1] done ensuring all necessary migrations have run
I180827 20:41:52.876433 50862 server/server.go:1597  [n1] serving sql connections
I180827 20:41:52.879108 51233 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180827 20:41:52.879639 51235 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:41477} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402512767856449 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402512767856449 LastUp:1535402512767856449}
I180827 20:41:52.880318 51302 storage/replica_command.go:298  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180827 20:41:52.927701 50819 storage/replica_command.go:298  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180827 20:41:52.940165 51323 storage/replica_command.go:298  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180827 20:41:52.948539 51355 storage/replica_command.go:298  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180827 20:41:52.953658 51380 storage/replica_command.go:298  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180827 20:41:52.961237 51137 storage/replica_command.go:298  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180827 20:41:52.966548 50832 storage/replica_command.go:298  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180827 20:41:52.977113 51362 storage/replica_command.go:298  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180827 20:41:53.041315 51440 storage/replica_command.go:298  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180827 20:41:53.047478 51414 storage/replica_command.go:298  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180827 20:41:53.081214 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.089127 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.089322 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.089338 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.102793 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.102863 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.102878 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.102953 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.103001 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.115344 51458 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.125579 51530 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:36113}
I180827 20:41:53.127987 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.128034 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.128397 51575 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.134920 51574 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.135628 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.136461 50862 server/node.go:428  [n?] new node allocated ID 2
I180827 20:41:53.136541 50862 gossip/gossip.go:382  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36113" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513136479434 
I180827 20:41:53.136591 50862 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180827 20:41:53.136624 50862 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180827 20:41:53.137485 51552 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180827 20:41:53.139442 50862 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180827 20:41:53.139577 50862 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180827 20:41:53.140140 50862 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.140166 50862 server/server.go:1807  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.140233 50862 server/server.go:1538  [n2] starting https server at 127.0.0.1:39947 (use: 127.0.0.1:39947)
I180827 20:41:53.140246 50862 server/server.go:1540  [n2] starting grpc/postgres server at 127.0.0.1:36113
I180827 20:41:53.140256 50862 server/server.go:1541  [n2] advertising CockroachDB node at 127.0.0.1:36113
I180827 20:41:53.140624 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.153945 50862 server/server.go:1594  [n2] done ensuring all necessary migrations have run
I180827 20:41:53.153974 50862 server/server.go:1597  [n2] serving sql connections
W180827 20:41:53.165268 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.185802 51467 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180827 20:41:53.186848 51469 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:36113} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513136479434 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513136479434 LastUp:1535402513136479434}
I180827 20:41:53.189622 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.189776 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.189808 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.207782 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.207807 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.207815 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.207911 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.207947 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.211471 51475 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.223653 51740 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.223954 51816 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46463}
I180827 20:41:53.224401 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.224432 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.224690 51837 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.225445 51836 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226030 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226699 50862 server/node.go:428  [n?] new node allocated ID 3
I180827 20:41:53.226763 50862 gossip/gossip.go:382  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46463" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513226706701 
I180827 20:41:53.226805 50862 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180827 20:41:53.226851 50862 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180827 20:41:53.227563 51809 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180827 20:41:53.227869 51810 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180827 20:41:53.228504 50862 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180827 20:41:53.229044 50862 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180827 20:41:53.229696 50862 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.229749 50862 server/server.go:1807  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.235251 50862 server/server.go:1538  [n3] starting https server at 127.0.0.1:43307 (use: 127.0.0.1:43307)
I180827 20:41:53.235271 50862 server/server.go:1540  [n3] starting grpc/postgres server at 127.0.0.1:46463
I180827 20:41:53.235283 50862 server/server.go:1541  [n3] advertising CockroachDB node at 127.0.0.1:46463
I180827 20:41:53.240284 50862 server/server.go:1594  [n3] done ensuring all necessary migrations have run
I180827 20:41:53.240307 50862 server/server.go:1597  [n3] serving sql connections
I180827 20:41:53.243124 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.248117 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot 59e1afc9 at applied index 16
I180827 20:41:53.249136 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.251012 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 6, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.251369 51983 storage/replica_raftstorage.go:784  [n2,s2,r20/?:{-}] applying preemptive snapshot at index 16 (id=59e1afc9, encoded size=2241, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.254056 51839 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180827 20:41:53.255122 51841 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:46463} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513226706701 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513226706701 LastUp:1535402513226706701}
I180827 20:41:53.256061 51983 storage/replica_raftstorage.go:790  [n2,s2,r20/?:/Table/{23-50}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.256605 50930 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.259565 50930 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.261627 51625 rpc/nodedialer/nodedialer.go:92  [n2] connection to n1 established
I180827 20:41:53.264544 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.286630 50930 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r21/1:/Table/5{0-1}] connection to n3 established
I180827 20:41:53.287245 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.287799 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r21/1:/Table/5{0-1}] sending preemptive snapshot de08568a at applied index 18
I180827 20:41:53.288157 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.288623 51959 storage/replica_raftstorage.go:784  [n3,s3,r21/?:{-}] applying preemptive snapshot at index 18 (id=de08568a, encoded size=2646, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.289814 51959 storage/replica_raftstorage.go:790  [n3,s3,r21/?:/Table/5{0-1}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.290329 50930 storage/replica_command.go:812  [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.293678 50930 storage/replica.go:3743  [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.294953 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r22/1:/{Table/51-Max}] sending preemptive snapshot a84e7278 at applied index 12
I180827 20:41:53.295229 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r22/1:/{Table/51-Max}] streamed snapshot to (n3,s3):?: kv pairs: 7, log entries: 2, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.295441 51883 rpc/nodedialer/nodedialer.go:92  [n3] connection to n1 established
I180827 20:41:53.295585 51953 storage/replica_raftstorage.go:784  [n3,s3,r22/?:{-}] applying preemptive snapshot at index 12 (id=a84e7278, encoded size=386, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.295717 51953 storage/replica_raftstorage.go:790  [n3,s3,r22/?:/{Table/51-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.295955 50930 storage/replica_command.go:812  [replicate,n1,s1,r22/1:/{Table/51-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, next=2, gen=0]
I180827 20:41:53.298097 50930 storage/replica.go:3743  [n1,s1,r22/1:/{Table/51-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.301122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r8/1:/Table/1{1-2}] sending preemptive snapshot 201bdccc at applied index 18
I180827 20:41:53.301565 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.306578 52088 storage/replica_raftstorage.go:784  [n3,s3,r8/?:{-}] applying preemptive snapshot at index 18 (id=201bdccc, encoded size=4352, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.306868 52088 storage/replica_raftstorage.go:790  [n3,s3,r8/?:/Table/1{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.307601 50930 storage/replica_command.go:812  [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.311873 50930 storage/replica.go:3743  [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.314134 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r17/1:/Table/2{0-1}] sending preemptive snapshot 53116eb2 at applied index 16
I180827 20:41:53.314317 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r17/1:/Table/2{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.314683 52103 storage/replica_raftstorage.go:784  [n3,s3,r17/?:{-}] applying preemptive snapshot at index 16 (id=53116eb2, encoded size=2105, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.314887 52103 storage/replica_raftstorage.go:790  [n3,s3,r17/?:/Table/2{0-1}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.315401 50930 storage/replica_command.go:812  [replicate,n1,s1,r17/1:/Table/2{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r17:/Table/2{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.318398 50930 storage/replica.go:3743  [n1,s1,r17/1:/Table/2{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.319436 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r16/1:/Table/{19-20}] sending preemptive snapshot e0be8540 at applied index 16
I180827 20:41:53.319691 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.320127 52072 storage/replica_raftstorage.go:784  [n2,s2,r16/?:{-}] applying preemptive snapshot at index 16 (id=e0be8540, encoded size=2109, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.320339 52072 storage/replica_raftstorage.go:790  [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.320816 50930 storage/replica_command.go:812  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.323849 50930 storage/replica.go:3743  [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.326208 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r15/1:/Table/1{8-9}] sending preemptive snapshot d259ae5c at applied index 16
I180827 20:41:53.326404 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.326731 52116 storage/replica_raftstorage.go:784  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 16 (id=d259ae5c, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.326923 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.326953 52116 storage/replica_raftstorage.go:790  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.334514 50930 storage/replica_command.go:812  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.337656 50930 storage/replica.go:3743  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.338767 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r14/1:/Table/1{7-8}] sending preemptive snapshot 9d0058d5 at applied index 16
I180827 20:41:53.339034 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.339612 52090 storage/replica_raftstorage.go:784  [n2,s2,r14/?:{-}] applying preemptive snapshot at index 16 (id=9d0058d5, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.339831 52090 storage/replica_raftstorage.go:790  [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.340173 50930 storage/replica_command.go:812  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.343121 50930 storage/replica.go:3743  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.345432 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r9/1:/Table/1{2-3}] sending preemptive snapshot 0eea2d20 at applied index 26
I180827 20:41:53.345859 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 53, log entries: 16, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.347137 52066 storage/replica_raftstorage.go:784  [n2,s2,r9/?:{-}] applying preemptive snapshot at index 26 (id=0eea2d20, encoded size=15139, 1 rocksdb batches, 16 log entries)
I180827 20:41:53.347467 52066 storage/replica_raftstorage.go:790  [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.348208 50930 storage/replica_command.go:812  [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.352166 50930 storage/replica.go:3743  [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.353188 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] sending preemptive snapshot 0cdee511 at applied index 39
I180827 20:41:53.353765 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 36, log entries: 29, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.354286 51723 storage/replica_raftstorage.go:784  [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=0cdee511, encoded size=98384, 1 rocksdb batches, 29 log entries)
I180827 20:41:53.354994 51723 storage/replica_raftstorage.go:790  [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.355529 50930 storage/replica_command.go:812  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.358523 50930 storage/replica.go:3743  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.360250 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] sending preemptive snapshot 965d58b1 at applied index 19
I180827 20:41:53.360436 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.360789 52150 storage/replica_raftstorage.go:784  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 19 (id=965d58b1, encoded size=4003, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.361043 52150 storage/replica_raftstorage.go:790  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.361522 50930 storage/replica_command.go:812  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.364392 50930 storage/replica.go:3743  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.366422 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r12/1:/Table/1{5-6}] sending preemptive snapshot 811af376 at applied index 16
I180827 20:41:53.366638 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.367089 52137 storage/replica_raftstorage.go:784  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 16 (id=811af376, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.367359 52137 storage/replica_raftstorage.go:790  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.368127 50930 storage/replica_command.go:812  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.371691 50930 storage/replica.go:3743  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.374563 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r19/1:/Table/2{2-3}] sending preemptive snapshot 9cd02555 at applied index 16
I180827 20:41:53.374760 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.375252 52080 storage/replica_raftstorage.go:784  [n3,s3,r19/?:{-}] applying preemptive snapshot at index 16 (id=9cd02555, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.375582 52080 storage/replica_raftstorage.go:790  [n3,s3,r19/?:/Table/2{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.375950 50930 storage/replica_command.go:812  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.381819 50930 storage/replica.go:3743  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.386461 52091 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180827 20:41:53.386637 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r10/1:/Table/1{3-4}] sending preemptive snapshot a16f4b15 at applied index 64
I180827 20:41:53.388005 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 204, log entries: 54, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.388536 52181 storage/replica_raftstorage.go:784  [n3,s3,r10/?:{-}] applying preemptive snapshot at index 64 (id=a16f4b15, encoded size=62836, 1 rocksdb batches, 54 log entries)
I180827 20:41:53.389154 52181 storage/replica_raftstorage.go:790  [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.389513 50930 storage/replica_command.go:812  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.392649 50930 storage/replica.go:3743  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.394122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] sending preemptive snapshot 69adabc1 at applied index 23
I180827 20:41:53.394365 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 13, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.394729 52213 storage/replica_raftstorage.go:784  [n2,s2,r2/?:{-}] applying preemptive snapshot at index 23 (id=69adabc1, encoded size=6277, 1 rocksdb batches, 13 log entries)
I180827 20:41:53.394981 52213 storage/replica_raftstorage.go:790  [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.395465 50930 storage/replica_command.go:812  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.398757 50930 storage/replica.go:3743  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.399709 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r18/1:/Table/2{1-2}] sending preemptive snapshot e9df2a4a at applied index 16
I180827 20:41:53.400036 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r18/1:/Table/2{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.400391 52185 storage/replica_raftstorage.go:784  [n3,s3,r18/?:{-}] applying preemptive snapshot at index 16 (id=e9df2a4a, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.400594 52185 storage/replica_raftstorage.go:790  [n3,s3,r18/?:/Table/2{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.400882 50930 storage/replica_command.go:812  [replicate,n1,s1,r18/1:/Table/2{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r18:/Table/2{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.407636 50930 storage/replica.go:3743  [n1,s1,r18/1:/Table/2{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.408861 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r13/1:/Table/1{6-7}] sending preemptive snapshot 6f914d55 at applied index 16
I180827 20:41:53.409071 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.409426 52218 storage/replica_raftstorage.go:784  [n2,s2,r13/?:{-}] applying preemptive snapshot at index 16 (id=6f914d55, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.409616 52218 storage/replica_raftstorage.go:790  [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.409970 50930 storage/replica_command.go:812  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.411262 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.412831 50930 storage/replica.go:3743  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.414081 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot cca961c1 at applied index 16
I180827 20:41:53.414277 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.414576 52199 storage/replica_raftstorage.go:784  [n3,s3,r11/?:{-}] applying preemptive snapshot at index 16 (id=cca961c1, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.414816 52199 storage/replica_raftstorage.go:790  [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.415293 50930 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.418111 50930 storage/replica.go:3743  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.419054 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 3c3a015f at applied index 27
I180827 20:41:53.423022 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1391, log entries: 2, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.423893 52201 storage/replica_raftstorage.go:784  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 27 (id=3c3a015f, encoded size=194658, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.429501 52201 storage/replica_raftstorage.go:790  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=2ms commit=4ms]
I180827 20:41:53.433500 50930 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.437580 50930 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.440575 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot cbd412df at applied index 21
I180827 20:41:53.440794 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 11, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.441181 52260 storage/replica_raftstorage.go:784  [n3,s3,r6/?:{-}] applying preemptive snapshot at index 21 (id=cbd412df, encoded size=4339, 1 rocksdb batches, 11 log entries)
I180827 20:41:53.441400 52260 storage/replica_raftstorage.go:790  [n3,s3,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.441676 50930 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.448564 52224 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.461587 50930 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.463345 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot 114f4385 at applied index 29
I180827 20:41:53.464896 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 59, log entries: 19, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.465343 52280 storage/replica_raftstorage.go:784  [n2,s2,r7/?:{-}] applying preemptive snapshot at index 29 (id=114f4385, encoded size=16646, 1 rocksdb batches, 19 log entries)
I180827 20:41:53.465821 52280 storage/replica_raftstorage.go:790  [n2,s2,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.466988 50930 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.472743 50930 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.474632 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 0a244018 at applied index 114
I180827 20:41:53.475250 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 73, log entries: 90, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.475827 52267 storage/replica_raftstorage.go:784  [n2,s2,r1/?:{-}] applying preemptive snapshot at index 114 (id=0a244018, encoded size=40271, 1 rocksdb batches, 90 log entries)
I180827 20:41:53.476525 52267 storage/replica_raftstorage.go:790  [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.476869 50930 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.482912 50930 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.483281 50930 storage/queue.go:873  [n1,replicate] purgatory is now empty
I180827 20:41:53.485684 52286 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot f1426c69 at applied index 19
I180827 20:41:53.487316 52286 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.487681 52252 storage/replica_raftstorage.go:784  [n3,s3,r20/?:{-}] applying preemptive snapshot at index 19 (id=f1426c69, encoded size=3273, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.487932 52252 storage/replica_raftstorage.go:790  [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.488311 52286 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.503580 52286 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.505707 52235 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 99036b07 at applied index 119
I180827 20:41:53.506514 52235 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 78, log entries: 95, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.507282 52188 storage/replica_raftstorage.go:784  [n3,s3,r1/?:{-}] applying preemptive snapshot at index 119 (id=99036b07, encoded size=42101, 1 rocksdb batches, 95 log entries)
I180827 20:41:53.508109 52188 storage/replica_raftstorage.go:790  [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.508641 52235 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.512524 52235 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.513999 52209 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot bb53109c at applied index 32
I180827 20:41:53.514379 52209 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 60, log entries: 22, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.514821 52292 storage/replica_raftstorage.go:784  [n3,s3,r7/?:{-}] applying preemptive snapshot at index 32 (id=bb53109c, encoded size=17687, 1 rocksdb batches, 22 log entries)
I180827 20:41:53.515905 52292 storage/replica_raftstorage.go:790  [n3,s3,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 1ms [clear=1ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.516367 52209 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.520158 52209 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.521958 52312 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot 2ca43612 at applied index 24
I180827 20:41:53.522776 52312 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 14, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.523128 52239 storage/replica_raftstorage.go:784  [n2,s2,r6/?:{-}] applying preemptive snapshot at index 24 (id=2ca43612, encoded size=5410, 1 rocksdb batches, 14 log entries)
I180827 20:41:53.523377 52239 storage/replica_raftstorage.go:790  [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.523701 52312 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.525176 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 19 underreplicated ranges
I180827 20:41:53.527482 52312 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.528875 52327 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 731be2ae at applied index 30
I180827 20:41:53.532860 52327 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 1392, log entries: 5, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.533361 52316 storage/replica_raftstorage.go:784  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 30 (id=731be2ae, encoded size=195741, 1 rocksdb batches, 5 log entries)
I180827 20:41:53.535834 52316 storage/replica_raftstorage.go:790  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=2ms]
I180827 20:41:53.536253 52327 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.540576 52327 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.545804 52341 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot 7497a95f at applied index 19
I180827 20:41:53.546108 52341 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.546590 52275 storage/replica_raftstorage.go:784  [n2,s2,r11/?:{-}] applying preemptive snapshot at index 19 (id=7497a95f, encoded size=3304, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.546960 52275 storage/replica_raftstorage.go:790  [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.547386 52341 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (

Please assign, take a look and update the issue accordingly.

test failure #407

The following test appears to have failed:

#407:

--- PASS: TestRawBroadcast (0.00s)
=== RUN TestMetricSystemStop
--- PASS: TestMetricSystemStop (0.00s)
=== RUN ExampleMetricSystem
==================
WARNING: DATA RACE
Read by goroutine 44:
  github.com/cockroachdb/cockroach/util/metrics.func·006()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:591 +0x571
  github.com/cockroachdb/cockroach/util/metrics.func·005()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:530 +0x8f

Previous write by main goroutine:
  sync/atomic.AddInt64()
      /usr/src/go/src/runtime/race_amd64.s:261 +0xc
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).Histogram()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:295 +0xd93
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).StopTimer()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:226 +0x75
  github.com/cockroachdb/cockroach/util/metrics.ExampleMetricSystem()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics_test.go:37 +0x1cd
  testing.runExample()
      /usr/src/go/src/testing/example.go:98 +0x5e6
--
Goroutine 44 (running) created at:
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).reaper()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:532 +0x1ed
==================
==================
WARNING: DATA RACE
Read by goroutine 44:
  github.com/cockroachdb/cockroach/util/metrics.func·006()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:593 +0x6d4
  github.com/cockroachdb/cockroach/util/metrics.func·005()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:530 +0x8f

Previous write by main goroutine:
  sync/atomic.AddInt64()
      /usr/src/go/src/runtime/race_amd64.s:261 +0xc
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).Histogram()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:294 +0xce9
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).StopTimer()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:226 +0x75
  github.com/cockroachdb/cockroach/util/metrics.ExampleMetricSystem()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics_test.go:37 +0x1cd
  testing.runExample()
      /usr/src/go/src/testing/example.go:98 +0x5e6

Please assign, take a look and update the issue accordingly.

test failure #405

The following test appears to have failed:

#405:

test

Please assign, take a look and update the issue accordingly.

testeroo

Is this a question, feature request, or bug report?

QUESTION

Have you checked our documentation at https://cockroachlabs.com/docs/stable/? If you could not find an answer there, please consider asking your question in our community forum at https://forum.cockroachlabs.com/, as it would benefit other members of our community.

Prefer live chat? Message our engineers on our Gitter channel at https://gitter.im/cockroachdb/cockroach.

FEATURE REQUEST

  1. Does an issue already exist addressing this request? If yes, please add a 👍 reaction to the existing issue. If not, move on to step 2.

  2. Please describe the feature you are requesting, as well as your proposed use case for this feature.

  3. Indicate the importance of this issue to you (blocker, must-have, should-have, nice-to-have). Are you currently using any workarounds to address this issue?

BUG REPORT

  1. Please supply the header (i.e. the first few lines) of your most recent
    log file for each node in your cluster. On most unix-based systems
    running with defaults, this boils down to the output of

    grep -F '[config]' cockroach-data/logs/cockroach.log

    When log files are not available, supply the output of cockroach version
    and all flags/environment variables passed to cockroach start instead.

  2. Please describe the issue you observed:

  • What did you do?

  • What did you expect to see?

  • What did you see instead?
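For reference, the grep invocation from step 1 of the bug-report template can be exercised end to end as follows. This is a minimal sketch assuming the default cockroach-data/logs layout; the log lines written here are fabricated purely so the command has something to match, and the file paths in them are illustrative, not real CockroachDB output.

```shell
# Sketch: extract the [config] header lines the bug-report template asks for.
# Assumption: default data directory layout (cockroach-data/logs/cockroach.log).
# The sample log below is fabricated for illustration only.
mkdir -p cockroach-data/logs
cat > cockroach-data/logs/cockroach.log <<'EOF'
I180827 20:41:50.000000 1 util/log/clog.go:100  [config] binary: CockroachDB
I180827 20:41:50.000001 1 util/log/clog.go:101  [config] arguments: [cockroach start]
I180827 20:41:50.000002 1 server/server.go:100  unrelated log line
EOF

# -F treats the pattern as a fixed string, so the brackets in [config]
# are matched literally rather than as a regex character class.
grep -F '[config]' cockroach-data/logs/cockroach.log
```

In a real report you would run only the final grep against each node's actual log file; when logs are unavailable, fall back to the output of cockroach version plus the flags passed to cockroach start, as the template describes.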

Test failure in CI build 421

The following test appears to have failed:

#421:

I0329 20:27:32.853680     261 multiraft.go:644] Committed Entry[0]: 6/13 EntryNormal 13d011fb49eae1f645a6e3cafccae224: raft_id:1 cmd:<increment:<header:<timestamp:<wall_time:0 logical:7 > cmd_id:<wall_time:1427660852851040758 random:5018949295715050020 > user:"" repli
I0329 20:27:32.855118     261 kv.go:125] failed Increment: mvcc.go:450: attempted access to empty key
I0329 20:27:32.855229     261 retry.go:85]  failed an iteration: mvcc.go:450: attempted access to empty key
I0329 20:27:32.855350     261 retry.go:100]  failed; retrying in 50ms
==================
WARNING: DATA RACE
Write by goroutine 54:
  github.com/cockroachdb/cockroach/storage.TestAllocateErrorAndRecovery()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc_test.go:162 +0x874
  testing.tRunner()
      /usr/src/go/src/testing/testing.go:447 +0x133

Previous read by goroutine 74:
  github.com/cockroachdb/cockroach/storage.func·009()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:114 +0x24e
  github.com/cockroachdb/cockroach/util.RetryWithBackoff()
      /go/src/github.com/cockroachdb/cockroach/util/retry.go:80 +0x6d
  github.com/cockroachdb/cockroach/storage.(*IDAllocator).allocateBlock()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:118 +0x158
  github.com/cockroachdb/cockroach/storage.func·008()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:92 +0x7a

Goroutine 54 (running) created at:
I0329 20:27:32.853680     261 multiraft.go:644] Committed Entry[0]: 6/13 EntryNormal 13d011fb49eae1f645a6e3cafccae224: raft_id:1 cmd:<increment:<header:<timestamp:<wall_time:0 logical:7 > cmd_id:<wall_time:1427660852851040758 random:5018949295715050020 > user:"" repli
I0329 20:27:32.855118     261 kv.go:125] failed Increment: mvcc.go:450: attempted access to empty key
I0329 20:27:32.855229     261 retry.go:85]  failed an iteration: mvcc.go:450: attempted access to empty key
I0329 20:27:32.855350     261 retry.go:100]  failed; retrying in 50ms
==================
WARNING: DATA RACE
Write by goroutine 54:
  github.com/cockroachdb/cockroach/storage.TestAllocateErrorAndRecovery()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc_test.go:162 +0x874
  testing.tRunner()
      /usr/src/go/src/testing/testing.go:447 +0x133

Previous read by goroutine 74:
  github.com/cockroachdb/cockroach/storage.func·009()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:114 +0x24e
  github.com/cockroachdb/cockroach/util.RetryWithBackoff()
      /go/src/github.com/cockroachdb/cockroach/util/retry.go:80 +0x6d
  github.com/cockroachdb/cockroach/storage.(*IDAllocator).allocateBlock()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:118 +0x158
  github.com/cockroachdb/cockroach/storage.func·008()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:92 +0x7a

Goroutine 54 (running) created at:

Please assign, take a look and update the issue accordingly.

Test failure in CI build 420

The following test appears to have failed:

#420:

I0329 10:55:22.930209     265 multiraft.go:644] Committed Entry[0]: 6/13 EntryNormal 13cff2c23b0fc66545a6e3cafccae224: raft_id:1 cmd:<increment:<header:<timestamp:<wall_time:0 logical:7 > cmd_id:<wall_time:1427626522928203365 random:5018949295715050020 > user:"" repli
I0329 10:55:22.931065     265 kv.go:125] failed Increment: mvcc.go:450: attempted access to empty key
I0329 10:55:22.931155     265 retry.go:85]  failed an iteration: mvcc.go:450: attempted access to empty key
I0329 10:55:22.931218     265 retry.go:100]  failed; retrying in 50ms
==================
WARNING: DATA RACE
Write by goroutine 54:
  github.com/cockroachdb/cockroach/storage.TestAllocateErrorAndRecovery()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc_test.go:162 +0x874
  testing.tRunner()
      /usr/src/go/src/testing/testing.go:447 +0x133

Previous read by goroutine 74:
  github.com/cockroachdb/cockroach/storage.func·009()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:114 +0x24e
  github.com/cockroachdb/cockroach/util.RetryWithBackoff()
      /go/src/github.com/cockroachdb/cockroach/util/retry.go:80 +0x6d
  github.com/cockroachdb/cockroach/storage.(*IDAllocator).allocateBlock()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:118 +0x158
  github.com/cockroachdb/cockroach/storage.func·008()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:92 +0x7a

Goroutine 54 (running) created at:
I0329 10:55:22.930209     265 multiraft.go:644] Committed Entry[0]: 6/13 EntryNormal 13cff2c23b0fc66545a6e3cafccae224: raft_id:1 cmd:<increment:<header:<timestamp:<wall_time:0 logical:7 > cmd_id:<wall_time:1427626522928203365 random:5018949295715050020 > user:"" repli
I0329 10:55:22.931065     265 kv.go:125] failed Increment: mvcc.go:450: attempted access to empty key
I0329 10:55:22.931155     265 retry.go:85]  failed an iteration: mvcc.go:450: attempted access to empty key
I0329 10:55:22.931218     265 retry.go:100]  failed; retrying in 50ms
==================
WARNING: DATA RACE
Write by goroutine 54:
  github.com/cockroachdb/cockroach/storage.TestAllocateErrorAndRecovery()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc_test.go:162 +0x874
  testing.tRunner()
      /usr/src/go/src/testing/testing.go:447 +0x133

Previous read by goroutine 74:
  github.com/cockroachdb/cockroach/storage.func·009()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:114 +0x24e
  github.com/cockroachdb/cockroach/util.RetryWithBackoff()
      /go/src/github.com/cockroachdb/cockroach/util/retry.go:80 +0x6d
  github.com/cockroachdb/cockroach/storage.(*IDAllocator).allocateBlock()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:118 +0x158
  github.com/cockroachdb/cockroach/storage.func·008()
      /go/src/github.com/cockroachdb/cockroach/storage/id_alloc.go:92 +0x7a

Goroutine 54 (running) created at:

Please assign, take a look and update the issue accordingly.

teamcity: failed test: test/TestImportPgDump

The following tests appear to have failed on release-banana.
You may want to check for open issues.

#864629:

--- FAIL: test/TestImportPgDump/single_table_dump (0.000s)
Test ended in panic.

------- Stdout: -------
I180827 20:41:54.240444 52553 sql/lease.go:345  [n1,client=127.0.0.1:57092,user=root] publish: descID=53 (simple) version=2 mtime=2018-08-27 20:41:54.239317192 +0000 UTC
I180827 20:41:54.248859 52553 sql/event_log.go:126  [n1,client=127.0.0.1:57092,user=root] Event: "drop_table", target: 53, info: {TableName:foo.public.simple Statement:DROP TABLE IF EXISTS simple, second User:root CascadeDroppedViews:[]}
I180827 20:41:54.253678 52553 sql/lease.go:315  publish (1 leases): desc=[{simple 53 1}]
I180827 20:41:54.262558 52936 storage/replica_command.go:430  [merge,n2,s2,r27/3:/Table/5{3/3/";π…-4}] initiating a merge of r28:/{Table/54-Max} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=0] into this range
I180827 20:41:54.263093 52937 storage/replica_command.go:298  [split,n2,s2,r23/3:/Table/5{2-3/1/106}] initiating a split of this range at key /Table/53 [r41]
I180827 20:41:54.307957 52553 sql/lease.go:345  [n1,client=127.0.0.1:57092,user=root,scExec=] publish: descID=53 (simple) version=3 mtime=2018-08-27 20:41:54.307790444 +0000 UTC
I180827 20:41:54.310094 51646 storage/replica_proposal.go:210  [n2,s2,r23/3:/Table/5{2-3/1/106}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.062230012,0 epo=1 pro=1535402514.062232488,0 following repl=(n2,s2):3 seq=3 start=1535402514.062230012,0 epo=1 pro=1535402514.062232488,0
I180827 20:41:54.337077 52915 rpc/nodedialer/nodedialer.go:92  [merge,n2,s2,r27/3:/Table/5{3/3/";π…-4}] connection to n2 established
I180827 20:41:54.349169 51060 storage/store.go:2656  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] removing replica r28/1
I180827 20:41:54.350346 51903 storage/store.go:2656  [n3,s3,r27/2:/Table/5{3/3/";π…-4}] removing replica r28/2
I180827 20:41:54.352255 51555 storage/store.go:2656  [n2,s2,r27/3:/Table/5{3/3/";π…-4}] removing replica r28/3
I180827 20:41:54.353337 52935 storage/replica_command.go:430  [merge,n3,s3,r26/2:/Table/53/{2/"\xc0…-3/";π,…}] initiating a merge of r27:/{Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105-Max} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=2] into this range
I180827 20:41:54.356275 53004 storage/replica_command.go:430  [merge,n2,s2,r41/3:/Table/53{-/1/106}] initiating a merge of r24:/Table/53/{1/106-2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,\r\
\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range
I180827 20:41:54.370834 53062 storage/replica_command.go:298  [n2,s2,r27/3:/{Table/53/3/"…-Max}] initiating a split of this range at key /Table/54/1/106 [r42]
I180827 20:41:54.416413 51595 storage/store.go:2656  [n2,s2,r41/3:/Table/53{-/1/106}] removing replica r24/3
I180827 20:41:54.417001 51031 storage/store.go:2656  [n1,s1,r41/1:/Table/53{-/1/106}] removing replica r24/1
I180827 20:41:54.418902 51854 storage/store.go:2656  [n3,s3,r41/2:/Table/53{-/1/106}] removing replica r24/2
I180827 20:41:54.428525 53127 storage/replica_command.go:298  [n2,s2,r27/3:/{Table/53/3/"…-Max}] initiating a split of this range at key /Table/55 [r43]
I180827 20:41:54.435369 53211 rpc/nodedialer/nodedialer.go:92  [merge,n3,s3,r26/2:/Table/53/{2/"\xc0…-3/";π,…}] connection to n3 established
I180827 20:41:55.787315 53158 storage/replica_range_lease.go:554  [replicate,n1,s1,r13/1:/Table/1{6-7}] transferring lease to s2
I180827 20:41:55.788475 53158 storage/replica_range_lease.go:617  [replicate,n1,s1,r13/1:/Table/1{6-7}] done transferring lease to s2: <nil>
I180827 20:41:55.789101 51631 storage/replica_proposal.go:210  [n2,s2,r13/2:/Table/1{6-7}] new range lease repl=(n2,s2):2 seq=3 start=1535402515.787346068,0 epo=1 pro=1535402515.787349195,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:57.787917 53257 storage/replica_range_lease.go:554  [replicate,n1,s1,r15/1:/Table/1{8-9}] transferring lease to s2
I180827 20:41:57.788730 51649 storage/replica_proposal.go:210  [n2,s2,r15/2:/Table/1{8-9}] new range lease repl=(n2,s2):2 seq=3 start=1535402517.787940061,0 epo=1 pro=1535402517.787943264,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:57.788985 53257 storage/replica_range_lease.go:617  [replicate,n1,s1,r15/1:/Table/1{8-9}] done transferring lease to s2: <nil>
I180827 20:41:59.792943 53273 storage/replica_range_lease.go:554  [replicate,n1,s1,r5/1:/System/ts{d-e}] transferring lease to s2
I180827 20:41:59.794478 53273 storage/replica_range_lease.go:617  [replicate,n1,s1,r5/1:/System/ts{d-e}] done transferring lease to s2: <nil>
I180827 20:41:59.795445 51589 storage/replica_proposal.go:210  [n2,s2,r5/3:/System/ts{d-e}] new range lease repl=(n2,s2):3 seq=3 start=1535402519.792959805,0 epo=1 pro=1535402519.792963582,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:01.790448 51043 storage/replica_proposal.go:210  [n1,s1,r6/1:/{System/tse-Table/System…}] new range lease repl=(n1,s1):1 seq=3 start=1535402512.768597075,0 epo=1 pro=1535402521.789417536,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:02.773725 51087 server/status/runtime.go:433  [n1] runtime stats: 307 MiB RSS, 685 goroutines, 39 MiB/61 MiB/127 MiB GO alloc/idle/total, 72 MiB/117 MiB CGO alloc/total, 0.00cgo/sec, 0.00/0.00 %(u/s)time, 0.00 %gc (231x)
I180827 20:42:02.780810 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:02.795646 51018 storage/replica_proposal.go:210  [n1,s1,r11/1:/Table/1{4-5}] new range lease repl=(n1,s1):1 seq=3 start=1535402512.768597075,0 epo=1 pro=1535402522.794523618,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
W180827 20:42:02.826581 51089 server/node.go:886  [n1,summaries] health alerts detected: {Alerts:[{StoreID:1 Category:METRICS Description:queue.replicate.process.failure Value:347}]}
I180827 20:42:02.828154 51026 storage/replica_proposal.go:210  [n1,s1,r4/1:/System/{NodeLive…-tsd}] new range lease repl=(n1,s1):1 seq=3 start=1535402512.768597075,0 epo=1 pro=1535402522.827093568,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:03.141384 51667 server/status/runtime.go:433  [n2] runtime stats: 309 MiB RSS, 684 goroutines, 47 MiB/53 MiB/127 MiB GO alloc/idle/total, 76 MiB/120 MiB CGO alloc/total, 0.00cgo/sec, 0.00/0.00 %(u/s)time, 0.00 %gc (231x)
I180827 20:42:03.154263 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:03.233313 51943 server/status/runtime.go:433  [n3] runtime stats: 312 MiB RSS, 684 goroutines, 32 MiB/65 MiB/127 MiB GO alloc/idle/total, 78 MiB/123 MiB CGO alloc/total, 0.00cgo/sec, 0.00/0.00 %(u/s)time, 0.00 %gc (232x)
I180827 20:42:03.249160 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:03.796116 51014 storage/replica_proposal.go:210  [n1,s1,r19/1:/Table/2{2-3}] new range lease repl=(n1,s1):1 seq=3 start=1535402512.768597075,0 epo=1 pro=1535402523.795031389,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:04.796590 51023 storage/replica_proposal.go:210  [n1,s1,r7/1:/Table/{SystemCon…-11}] new range lease repl=(n1,s1):1 seq=3 start=1535402512.768597075,0 epo=1 pro=1535402524.795533218,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:05.153061 51635 storage/replica_proposal.go:210  [n2,s2,r18/3:/Table/2{1-2}] new range lease repl=(n2,s2):3 seq=3 start=1535402521.769064687,1 epo=1 pro=1535402525.151225388,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:06.160935 51644 storage/replica_proposal.go:210  [n2,s2,r22/3:/Table/5{1-2}] new range lease repl=(n2,s2):3 seq=3 start=1535402521.769064687,1 epo=1 pro=1535402526.159140119,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:06.798225 51043 storage/replica_proposal.go:210  [n1,s1,r9/1:/Table/1{2-3}] new range lease repl=(n1,s1):1 seq=3 start=1535402512.768597075,0 epo=1 pro=1535402526.797397955,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:07.240107 51859 storage/replica_proposal.go:210  [n3,s3,r8/2:/Table/1{1-2}] new range lease repl=(n3,s3):2 seq=3 start=1535402521.769064687,1 epo=1 pro=1535402527.238093242,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:08.161529 51597 storage/replica_proposal.go:210  [n2,s2,r17/3:/Table/2{0-1}] new range lease repl=(n2,s2):3 seq=3 start=1535402521.769064687,1 epo=1 pro=1535402528.159949438,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:09.244632 51881 storage/replica_proposal.go:210  [n3,s3,r21/2:/Table/5{0-1}] new range lease repl=(n3,s3):2 seq=3 start=1535402521.769064687,1 epo=1 pro=1535402529.242538000,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:09.417502 51661 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/53/1/106 - /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n
\tᐿ,\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd that contains live data
I180827 20:42:09.417858 51082 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/53/1/106 - /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n
\tᐿ,\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd that contains live data
I180827 20:42:09.419667 51921 storage/compactor/compactor.go:329  [n3,s3,compactor] purging suggested compaction for range /Table/53/1/106 - /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n
\tᐿ,\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd that contains live data
I180827 20:42:11.800653 51065 storage/replica_proposal.go:210  [n1,s1,r12/1:/Table/1{5-6}] new range lease repl=(n1,s1):1 seq=3 start=1535402512.768597075,0 epo=1 pro=1535402531.799762629,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:12.772885 51087 server/status/runtime.go:433  [n1] runtime stats: 318 MiB RSS, 684 goroutines, 49 MiB/51 MiB/127 MiB GO alloc/idle/total, 78 MiB/123 MiB CGO alloc/total, 361.49cgo/sec, 0.04/0.01 %(u/s)time, 0.00 %gc (1x)
I180827 20:42:12.782389 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:13.141376 51667 server/status/runtime.go:433  [n2] runtime stats: 318 MiB RSS, 684 goroutines, 30 MiB/68 MiB/127 MiB GO alloc/idle/total, 78 MiB/123 MiB CGO alloc/total, 327.70cgo/sec, 0.04/0.01 %(u/s)time, 0.00 %gc (2x)
I180827 20:42:13.154199 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:13.233456 51943 server/status/runtime.go:433  [n3] runtime stats: 318 MiB RSS, 684 goroutines, 38 MiB/61 MiB/127 MiB GO alloc/idle/total, 78 MiB/123 MiB CGO alloc/total, 353.90cgo/sec, 0.04/0.00 %(u/s)time, 0.00 %gc (1x)
I180827 20:42:13.248047 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:14.802152 51027 storage/replica_proposal.go:210  [n1,s1,r14/1:/Table/1{7-8}] new range lease repl=(n1,s1):1 seq=3 start=1535402512.768597075,0 epo=1 pro=1535402534.801058522,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:15.766947 51738 gossip/gossip.go:1385  [n3] node has connected to cluster via gossip
I180827 20:42:15.767163 51738 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180827 20:42:16.148750 50916 gossip/gossip.go:1385  [n1] node has connected to cluster via gossip
I180827 20:42:16.148996 50916 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180827 20:42:16.357033 51518 gossip/gossip.go:1385  [n2] node has connected to cluster via gossip
I180827 20:42:16.357243 51518 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180827 20:42:18.191595 53699 storage/replica_consistency.go:127  [consistencyChecker,n2,s2,r5/3:/System/ts{d-e}] triggering stats recomputation to resolve delta of {ContainsEstimates:true LastUpdateNanos:1535402533254751894 IntentAge:0 GCBytesAge:0 LiveBytes:65870 LiveCount:-922 KeyBytes:-42936 KeyCount:-922 ValBytes:108806 ValCount:-922 IntentBytes:0 IntentCount:0 SysBytes:0 SysCount:0}
I180827 20:42:22.772525 51087 server/status/runtime.go:433  [n1] runtime stats: 318 MiB RSS, 684 goroutines, 50 MiB/49 MiB/127 MiB GO alloc/idle/total, 78 MiB/123 MiB CGO alloc/total, 565.32cgo/sec, 0.04/0.00 %(u/s)time, 0.00 %gc (1x)
I180827 20:42:22.780786 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:23.142054 51667 server/status/runtime.go:433  [n2] runtime stats: 318 MiB RSS, 685 goroutines, 32 MiB/67 MiB/127 MiB GO alloc/idle/total, 78 MiB/123 MiB CGO alloc/total, 570.86cgo/sec, 0.04/0.00 %(u/s)time, 0.00 %gc (1x)
I180827 20:42:23.154218 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:23.234780 51943 server/status/runtime.go:433  [n3] runtime stats: 316 MiB RSS, 684 goroutines, 39 MiB/60 MiB/127 MiB GO alloc/idle/total, 78 MiB/121 MiB CGO alloc/total, 546.33cgo/sec, 0.04/0.00 %(u/s)time, 0.00 %gc (1x)
I180827 20:42:23.248030 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
W180827 20:42:24.438374 53144 storage/intent_resolver.go:738  [n3,s3] failed to cleanup transaction intents: failed to resolve intents: the batch experienced mixed success and failure
I180827 20:42:32.772775 51087 server/status/runtime.go:433  [n1] runtime stats: 316 MiB RSS, 683 goroutines, 33 MiB/66 MiB/127 MiB GO alloc/idle/total, 78 MiB/121 MiB CGO alloc/total, 101.60cgo/sec, 0.03/0.00 %(u/s)time, 0.00 %gc (2x)
I180827 20:42:32.780856 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:33.141433 51667 server/status/runtime.go:433  [n2] runtime stats: 316 MiB RSS, 683 goroutines, 40 MiB/60 MiB/127 MiB GO alloc/idle/total, 78 MiB/121 MiB CGO alloc/total, 97.71cgo/sec, 0.03/0.00 %(u/s)time, 0.00 %gc (1x)
I180827 20:42:33.154241 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:33.233327 51943 server/status/runtime.go:433  [n3] runtime stats: 315 MiB RSS, 683 goroutines, 46 MiB/53 MiB/127 MiB GO alloc/idle/total, 81 MiB/123 MiB CGO alloc/total, 96.21cgo/sec, 0.03/0.00 %(u/s)time, 0.00 %gc (1x)
I180827 20:42:33.248005 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:40.183209 54006 storage/replica_command.go:430  [merge,n2,s2,r41/3:/Table/53{-/2/"\x15…}] initiating a merge of r25:/Table/53/2/"\x{15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,\r\\;<
<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd-c0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd} [(n1,s1):1, (n3,s3):2, 
(n2,s2):3, next=4, gen=1] into this range
I180827 20:42:40.192890 51591 storage/replica_proposal.go:210  [n2,s2,r10/3:/Table/1{3-4}] new range lease repl=(n2,s2):3 seq=3 start=1535402521.769064687,1 epo=1 pro=1535402560.191483741,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:42:40.211405 51641 storage/store.go:2656  [n2,s2,r41/3:/Table/53{-/2/"\x15…}] removing replica r25/3
I180827 20:42:40.212099 51037 storage/store.go:2656  [n1,s1,r41/1:/Table/53{-/2/"\x15…}] removing replica r25/1
I180827 20:42:40.213105 51867 storage/store.go:2656  [n3,s3,r41/2:/Table/53{-/2/"\x15…}] removing replica r25/2
I180827 20:42:42.773297 51087 server/status/runtime.go:433  [n1] runtime stats: 318 MiB RSS, 686 goroutines, 40 MiB/60 MiB/127 MiB GO alloc/idle/total, 85 MiB/126 MiB CGO alloc/total, 143.19cgo/sec, 0.03/0.00 %(u/s)time, 0.00 %gc (1x)
I180827 20:42:42.780772 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:43.141400 51667 server/status/runtime.go:433  [n2] runtime stats: 321 MiB RSS, 686 goroutines, 47 MiB/53 MiB/127 MiB GO alloc/idle/total, 90 MiB/130 MiB CGO alloc/total, 143.20cgo/sec, 0.03/0.01 %(u/s)time, 0.00 %gc (1x)
I180827 20:42:43.154227 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:43.233311 51943 server/status/runtime.go:433  [n3] runtime stats: 323 MiB RSS, 686 goroutines, 29 MiB/68 MiB/127 MiB GO alloc/idle/total, 90 MiB/130 MiB CGO alloc/total, 142.50cgo/sec, 0.03/0.01 %(u/s)time, 0.00 %gc (2x)
I180827 20:42:43.248088 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:52.774371 51087 server/status/runtime.go:433  [n1] runtime stats: 325 MiB RSS, 686 goroutines, 41 MiB/58 MiB/127 MiB GO alloc/idle/total, 90 MiB/130 MiB CGO alloc/total, 89.49cgo/sec, 0.03/0.00 %(u/s)time, 0.00 %gc (1x)
I180827 20:42:52.774733 51084 gossip/gossip.go:537  [n1] gossip status (ok, 3 nodes)
gossip client (0/3 cur/max conns)
gossip server (2/3 cur/max conns, infos 769/56 sent/received, bytes 392718B/19314B sent/received)
  2: 127.0.0.1:36113 (1m0s)
  3: 127.0.0.1:46463 (1m0s)
I180827 20:42:52.780779 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:53.140744 51664 gossip/gossip.go:537  [n2] gossip status (ok, 3 nodes)
gossip client (1/3 cur/max conns)
  1: 127.0.0.1:41477 (1m0s: infos 35/362 sent/received, bytes 12059B/173200B sent/received)
gossip server (0/3 cur/max conns, infos 35/362 sent/received, bytes 12059B/173200B sent/received)
I180827 20:42:53.141727 51667 server/status/runtime.go:433  [n2] runtime stats: 325 MiB RSS, 686 goroutines, 48 MiB/51 MiB/127 MiB GO alloc/idle/total, 90 MiB/130 MiB CGO alloc/total, 91.60cgo/sec, 0.03/0.00 %(u/s)time, 0.00 %gc (1x)
I180827 20:42:53.156267 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:53.229281 51940 gossip/gossip.go:537  [n3] gossip status (ok, 3 nodes)
gossip client (1/3 cur/max conns)
  1: 127.0.0.1:41477 (1m0s: infos 23/431 sent/received, bytes 7577B/222954B sent/received)
gossip server (0/3 cur/max conns, infos 23/431 sent/received, bytes 7577B/222954B sent/received)
I180827 20:42:53.234342 51943 server/status/runtime.go:433  [n3] runtime stats: 325 MiB RSS, 686 goroutines, 30 MiB/67 MiB/127 MiB GO alloc/idle/total, 90 MiB/130 MiB CGO alloc/total, 95.30cgo/sec, 0.03/0.00 %(u/s)time, 0.00 %gc (1x)
I180827 20:42:53.254311 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:42:55.212672 51661 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,\r\\;<<a,\"π,\
tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd - /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd that contains live data
I180827 20:42:55.213128 51082 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,\r\\;<<a,\"π,\
tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd - /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd that contains live data
I180827 20:42:55.213983 51921 storage/compactor/compactor.go:329  [n3,s3,compactor] purging suggested compaction for range /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,\r\\;<<a,\"π,\
tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd - /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd that contains live data
I180827 20:43:02.772793 51087 server/status/runtime.go:433  [n1] runtime stats: 325 MiB RSS, 685 goroutines, 42 MiB/57 MiB/127 MiB GO alloc/idle/total, 90 MiB/130 MiB CGO alloc/total, 95.72cgo/sec, 0.03/0.01 %(u/s)time, 0.00 %gc (1x)
I180827 20:43:02.780836 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:43:03.141428 51667 server/status/runtime.go:433  [n2] runtime stats: 325 MiB RSS, 685 goroutines, 49 MiB/51 MiB/127 MiB GO alloc/idle/total, 90 MiB/130 MiB CGO alloc/total, 94.10cgo/sec, 0.03/0.01 %(u/s)time, 0.00 %gc (1x)
I180827 20:43:03.155034 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:43:03.211417 54226 storage/replica_command.go:430  [merge,n2,s2,r41/3:/Table/53{-/2/"\xc0…}] initiating a merge of r26:/Table/53/{2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n✅π\nᐿ✅
\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd-3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range
I180827 20:43:03.233281 51943 server/status/runtime.go:433  [n3] runtime stats: 325 MiB RSS, 694 goroutines, 34 MiB/65 MiB/127 MiB GO alloc/idle/total, 90 MiB/130 MiB CGO alloc/total, 106.10cgo/sec, 0.03/0.01 %(u/s)time, 0.00 %gc (1x)
I180827 20:43:03.241471 51650 storage/store.go:2656  [n2,s2,r41/3:/Table/53{-/2/"\xc0…}] removing replica r26/3
I180827 20:43:03.242467 51901 storage/store.go:2656  [n3,s3,r41/2:/Table/53{-/2/"\xc0…}] removing replica r26/2
I180827 20:43:03.242470 51026 storage/store.go:2656  [n1,s1,r41/1:/Table/53{-/2/"\xc0…}] removing replica r26/1
I180827 20:43:03.248007 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:43:12.772709 51087 server/status/runtime.go:433  [n1] runtime stats: 325 MiB RSS, 688 goroutines, 46 MiB/54 MiB/127 MiB GO alloc/idle/total, 90 MiB/130 MiB CGO alloc/total, 108.30cgo/sec, 0.03/0.00 %(u/s)time, 0.00 %gc (1x)
I180827 20:43:12.786344 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:43:13.141431 51667 server/status/runtime.go:433  [n2] runtime stats: 325 MiB RSS, 688 goroutines, 31 MiB/68 MiB/127 MiB GO alloc/idle/total, 90 MiB/130 MiB CGO alloc/total, 107.80cgo/sec, 0.03/0.00 %(u/s)time, 0.00 %gc (2x)
I180827 20:43:13.154259 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:43:13.234857 51943 server/status/runtime.go:433  [n3] runtime stats: 327 MiB RSS, 688 goroutines, 38 MiB/62 MiB/127 MiB GO alloc/idle/total, 90 MiB/130 MiB CGO alloc/total, 92.19cgo/sec, 0.03/0.00 %(u/s)time, 0.00 %gc (1x)
I180827 20:43:13.251661 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:43:18.242642 51661 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n✅π\nᐿ✅\\<,\nπ\
rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd - /Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105 that contains live data
I180827 20:43:18.243093 51082 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n✅π\nᐿ✅\\<,\nπ\
rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd - /Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105 that contains live data
I180827 20:43:18.246476 51921 storage/compactor/compactor.go:329  [n3,s3,compactor] purging suggested compaction for range /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n✅π\nᐿ✅\\<,\nπ\
rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\

Please assign, take a look and update the issue accordingly.

Failed tests ():

The following test appears to have failed:

#:

W1215 01:18:50.477119 959 multiraft/multiraft.go:1233  aborting configuration change: key range /Local/Range/RangeDescriptor/""-/Min outside of bounds of range /Min-/Min
W1215 01:18:50.478804 959 multiraft/multiraft.go:1139  failed to look up replica ID for range 1 (disabling replica ID check): storage/store.go:1695: store 3 not found as replica of range 1
I1215 01:18:53.557582 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:18:53.557889 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:18:53.558132 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- FAIL: TestRaftRemoveRace (3.53s)
    <autogenerated>:32: storage/client_test.go:521: condition failed to evaluate within 3s: storage/client_test.go:517: range not found on store 2
=== RUN   TestStoreRangeRemoveDead
E1215 01:18:53.562437 959 gossip/gossip.go:181  different node IDs were set for the same gossip instance (2147483647, 1)
I1215 01:18:53.563875 959 multiraft/multiraft.go:579  node 1 starting
I1215 01:18:53.564727 959 storage/replica.go:1308  gossiping cluster id  from store 1, range 1
I1215 01:18:53.565694 959 raft/raft.go:446  [group 1] 1 became follower at term 5
I1215 01:18:53.565898 959 raft/raft.go:234  [group 1] newRaft 1 [peers: [1], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I1215 01:18:53.566074 959 multiraft/multiraft.go:999  node 1 campaigning because initial confstate is [1]
I1215 01:18:53.566196 959 raft/raft.go:526  [group 1] 1 is starting a new election at term 5
I1215 01:18:53.566292 959 raft/raft.go:459  [group 1] 1 became candidate at term 6
--
I1215 01:19:05.063638 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:19:05.063778 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestLeaderAfterSplit (0.44s)
=== RUN   Example_rebalancing
--- PASS: Example_rebalancing (0.62s)
FAIL
FAIL    github.com/cockroachdb/cockroach/storage    32.371s
=== RUN   TestBatchBasics
I1215 01:18:45.307778 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchBasics (0.01s)
=== RUN   TestBatchGet
I1215 01:18:45.312076 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchGet (0.01s)
=== RUN   TestBatchMerge
I1215 01:18:45.320845 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchMerge (0.01s)
=== RUN   TestBatchProto

Please assign, take a look and update the issue accordingly.

Test failure in CI build 432

The following test appears to have failed:

#432:

I0401 07:56:25.605432     265 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:13 XXX_unrecognized:[]}
I0401 07:56:25.605747     265 multiraft.go:641] New Entry[0]: 6/13 EntryConfChange "\b\x00\x10\x00\x18\x82\x04\"\xd0\x01\x00\x13\xd0Ի\xf3\xa7\xa3\x0e6;ߨ\xf2ظ\x92\x10\x01\x1a\xba\x01J\xb7\x01\n\x92\x01\n\x04\b\x00\x10\x05\x12\x14\b\x8eƞ\x9d\xbf\x97\xb5\xe8\x13\x1
I0401 07:56:25.625558     265 multiraft.go:644] Committed Entry[0]: 6/13 EntryConfChange "\b\x00\x10\x00\x18\x82\x04\"\xd0\x01\x00\x13\xd0Ի\xf3\xa7\xa3\x0e6;ߨ\xf2ظ\x92\x10\x01\x1a\xba\x01J\xb7\x01\n\x92\x01\n\x04\b\x00\x10\x05\x12\x14\b\x8eƞ\x9d\xbf\x97\xb5\xe8\x13\x1
W0401 07:56:25.626610     265 multiraft.go:766] aborting configuration change: client_raft_test.go:242: boom
==================
WARNING: DATA RACE
Write by goroutine 47:
  github.com/cockroachdb/cockroach/proto.(*ResponseHeader).SetGoError()
      /go/src/github.com/cockroachdb/cockroach/proto/api.go:613 +0x13b
  github.com/cockroachdb/cockroach/kv.func·007()
      /go/src/github.com/cockroachdb/cockroach/kv/local_sender.go:153 +0x77d
  github.com/cockroachdb/cockroach/util.RetryWithBackoff()
      /go/src/github.com/cockroachdb/cockroach/util/retry.go:80 +0x6d
  github.com/cockroachdb/cockroach/kv.(*LocalSender).Send()
      /go/src/github.com/cockroachdb/cockroach/kv/local_sender.go:163 +0x1dc
  github.com/cockroachdb/cockroach/kv.(*TxnCoordSender).sendOne()
--
  github.com/cockroachdb/cockroach/storage_test.TestFailedReplicaChange()
      /go/src/github.com/cockroachdb/cockroach/storage/client_raft_test.go:253 +0x571
  testing.tRunner()
      /usr/src/go/src/testing/testing.go:447 +0x133

Previous read by goroutine 114:
  github.com/cockroachdb/cockroach/proto.(*Error).Error()
      /go/src/github.com/cockroachdb/cockroach/proto/errors.go:30 +0x48
  fmt.(*pp).handleMethods()
      /usr/src/go/src/fmt/print.go:714 +0x534
  fmt.(*pp).printArg()
      /usr/src/go/src/fmt/print.go:794 +0x492
  fmt.(*pp).doPrintf()
      /usr/src/go/src/fmt/print.go:1183 +0x2ec9
  fmt.Fprintf()
      /usr/src/go/src/fmt/print.go:188 +0x88

Please assign, take a look and update the issue accordingly.

teamcity: failed tests on release-banana: lint/TestLint, test/TestImportPgDump

The following tests appear to have failed:

#864629:

--- FAIL: test/TestImportPgDump/read_data_only (0.000s)
Test ended in panic.

------- Stdout: -------
I180827 20:41:54.053559 52667 storage/replica_command.go:298  [n1,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53/1/106 [r24]
I180827 20:41:54.062208 52385 storage/replica_range_lease.go:554  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] transferring lease to s2
I180827 20:41:54.063407 52385 storage/replica_range_lease.go:617  [replicate,n1,s1,r23/1:/Table/5{2-3/1/106}] done transferring lease to s2: <nil>
I180827 20:41:54.063498 51617 storage/replica_proposal.go:210  [n2,s2,r23/3:/Table/5{2-3/1/106}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.062230012,0 epo=1 pro=1535402514.062232488,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.077824 52355 storage/replica_command.go:298  [n1,s1,r24/1:/{Table/53/1/1…-Max}] initiating a split of this range at key /Table/53/2/"\x15\x8f\xe8\u007f\\\xf3\xdf\xf0nP\xdb\xd3\xe8\x1b\"B1K\xa8l+\x96/l\v\x9e\x0e\x91\xa0D\x96\xc0J\xf1\xa1͠\xd2̃\x05\xe3\xe2?ET蛂\x00\xe5\xb0\x1a\x8e\x13Zu\xfd\xf2\x81w^\xb7\xbdH\xb8\xe4\a\x9c\xfd\x99{\xb4\"\xe5Q\x9c\x17\x85\x97\xf7Ëb\x0f\xff\xb0-vmO\xe1\xfb\xc3\xf3\xab0\xa0\x05u\x1c\xb0{B\xeamp\xbd\x8f\x99?\x87\x0f\xb2e\xe3ؿ2LN\x03\x17\xa7\x9f\xd3\x0f\x15$\x02I\xd2\xd7\x04R\x193\x9d\xddX\u007f\x01A\xcc\xde`Pm:\xdbe\xfd\xa6\a\xf8i\x88\xa7\xee\xacӸ\xbf2\x84y\xcd\n\xe6]L)\xca\xd9`x\xb4\x1b|\xe8\x13\x82\x1a(/* 3`J\xe1ٰ\xe6AdN!-\xd9"/"ॹॹ;,✅\nπ<\t\nπ\tॹπ✅a\n,\nᐿ�\nॹ✅�ॹ�\"✅ॹ\\<\"\n;a\\\n,✅π\n<\n<\nॹπᐿ�ᐿ;,�\tᐿ\nᐿaᐿ,\nπ�\t\\<ॹ\\π;π�π\"<;\"�\\<�,<�\\a�<\nॹaᐿaॹ�\\ᐿπ,✅ᐿ\"<✅✅a\t�ॹ\t<π;�ॹ\\ᐿ;✅\r\\,;\\ᐿॹ\nॹᐿππ\nᐿ\nᐿaπ\\\nπ\r\"✅�π\nπ\rॹπ\"ॹ✅a\ra�✅\nπ;ॹ✅\n;ॹ,�\nπ\rπᐿa\\\\ᐿ,π<ᐿ✅\n�,\r\nᐿ✅\n<�ᐿ\"\"✅,,\"\n<\n✅\rπa�π\n<\\ॹ\nॹ�;\ra��✅ᐿ\n,\t�;,π<\r��\r\\�\n✅\r✅�;\\\n\n,\nॹ✅π\n,\n✅\t,�<\nπ\t;aπ\n<a<\n\tπ\r\"\\✅\n\n\n<ᐿπaπ\\�\"<✅\\a,✅\n✅\n<\"\"\n\n\r\rᐿ�\\\tᐿᐿ;\n\rᐿa\\π<\n\\\n\n\";\r\r\raπ\"\r�ॹa\r�\"\n\"✅ππ✅�\t�ᐿ\tᐿ\\\r�ᐿ<\\\nᐿπ✅\tॹ<π\ta\"✅\t,ॹa✅ᐿ;\\\r✅\\,ॹ\"a\n<ॹ\\\n<\"π\\\\ᐿ\n✅\nᐿ\n,\n\r\t\n\r\n<aᐿ;ᐿ;ᐿ\r;✅a<a,,<\t\n\\ππ\\\"✅\n\\a\n\tπa<\r<π\n✅\\π<ॹ,\t;<aaaπॹᐿaॹaॹ�,\"\t,ॹ;\\<✅a\nᐿ\"\nπ\\aᐿ�ᐿ<ॹ;\\<ᐿ\nᐿ\n\"aᐿᐿπ,\"\r✅ॹ\n,<\r<\n<<,ᐿᐿa,\rᐿ<;π\\�,\"\rπ�\nππ�,✅;�\ra<;\r�ᐿ\tπ;\"πᐿ\\�a\"ᐿ\\;\\ॹ\";ॹ;;✅\tॹ\r\n<\t\n\t<aॹ\tᐿ\n\"ॹᐿ\t✅✅�ॹ;;<�\t,\n\r\n\n\ta\"\\<\rπᐿa\t<\na;\t\"\nπ<πॹ\r\n<\n✅ᐿॹ✅�<,;✅\"\n<�π<✅<<✅\\;\n\"\rᐿ\t�\n\n\r\t,ॹ�\"\rᐿᐿᐿ,\"π\nπ\",a\"\"<�\t\\πॹ\n\taॹᐿ\tπॹ,\n✅\rπa\r<<,\n\nᐿ;\t\\<\tᐿ\n\n�\"ॹ<\n\r\nᐿ\n�\n\nπᐿ;\nॹ\n\"π<\"\r\r\n\r\\ᐿ;;πॹ;�\r✅�,✅\r\r�,a;ᐿ\\ॹ\"\t\r✅;\t<,π,�\t\"πaᐿ��\\ॹ\"\n\tᐿ\t,ॹ✅�ᐿ✅\tᐿ,ॹ✅;;�\r\n✅ᐿ�\nπa;\\,✅ॹ<ᐿ\nπ\n\"\n;a\t\\π\n<\r\r\rπ\"\n\nᐿ<<ᐿ\"\n,\n\"ॹπaᐿπ��\r\n�ᐿॹ,\na\n\rॹ<�ॹ\"\n\t\r\n,π\n�,<ॹ,<\n<�✅ᐿ\r✅a✅<\r;,�a�\\\nॹ<\\<✅ॹ\"\nॹ\r�\ta;\"\\ᐿ\n\n✅\"\r;✅\t,a✅✅<\"ᐿ\t�π\\✅✅�<;\"✅π✅ॹ,\nπ\n��<,\ta\r�✅ᐿ\nॹπ\nᐿॹ✅;\nॹ\t\r\\\nᐿ�ᐿ\n\tᐿ,
\r\\;<<a,\"π,\tᐿ\nπ<a\"ॹ\\aa\r\r\"\";\tᐿ,ॹa\nॹ\nπ"/PrefixEnd [r25]
I180827 20:41:54.090278 52643 storage/replica_range_lease.go:554  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] transferring lease to s3
I180827 20:41:54.091255 52643 storage/replica_range_lease.go:617  [n1,s1,r24/1:/Table/53/{1/106-2/"\x15…}] done transferring lease to s3: <nil>
I180827 20:41:54.092073 51863 storage/replica_proposal.go:210  [n3,s3,r24/2:/Table/53/{1/106-2/"\x15…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.090298269,0 epo=1 pro=1535402514.090300825,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.094854 52702 storage/replica_command.go:298  [n1,s1,r25/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/2/"\xc0\t\x13\xe0*c\xe4\xcfS-\x9b,\xe2\x82\xfa\xd8Z\xf6\x99\x81\\\x18ŕ\xea\x80Db\xa7\x94\xf7Q#\x13\\\xc7(\xc4=\xaaZ\xa2Hա}\xdeI\x06\x840I\xa9\x95\xcbи\r#iH\x97F~\x10\xe4<\xb2\xefFb\xac\xee\xf90H5\xd7D\xe4:\xf0Ae\xe3\xd1<\xd1\xf7\xb9\xad\xea\xd9\xe0r\xbc\xa6\xae\x92\xfb\xb5,\xc2\U0010f26eD\xe0 \xc5\x06\xfa\x04{\xf7\xe8\xbfZQ\xa3\x05M\xbb\xa8\xbe\xf4\xc4\x0f\xe9|s{|\x8fr\xad\xdaWĢ\x9e\xdf\x17\x9f\x02\xf3п\xd3\xea\xfd\x8ew3\xb8@7ꇘN%\n\xe0@jq\xb3\xb0&y\xe3K0ȼ_s\x1e\x15\x98\xe7\xbf6\xeb\xef}$dd/\xaa\xf1\xcb.U\x8f\xd4r"/"<a✅ᐿ<\n\n\nॹ\",\n\"πॹᐿ,\rᐿ\nॹ�ॹ<\naᐿᐿ,\"ॹ\"\\✅\n�✅<\n\r<\\<\"\\π;<✅,;✅ॹa\r✅ᐿ;ᐿ\r\\�\\�,ᐿ\r\n,�✅,\t;\\\"π,;ᐿa\nॹ✅\n;ॹ<\\\n;<ᐿॹ\n\r<\n�\\\",ॹ✅,\n\"✅ᐿ\raa\n\n\t;�π<,\",ᐿ<ᐿ\\ᐿ✅;\t<π\"\"ॹ<\t\n,,π\"\t�✅\r;;ॹ,ᐿ\tॹ✅ॹ,\nᐿ\n\\;\n\nπa<✅aπ\t;✅\r;�✅;a\t\n✅ππ\t\rॹ\n\\<✅✅<\r✅\t\r✅<π\n;\n\"\"��ॹ\r,ᐿ\naॹᐿπ\\π\n\n,�\t<�a\nπ\\✅,πॹ\",πᐿa,ᐿᐿa\r<\ta\t\nॹ\n\r✅\r;πa✅�\t�π\",πa\n<✅\"a\t�\r<ᐿa,\naᐿॹᐿ\rॹᐿa�\"\\a✅\"\"✅✅ᐿ\t\"ॹ<\n\"\nπ✅✅\n\\ॹ\t;ᐿ\"a,ॹ\"aॹᐿᐿ\n,\\\nॹॹ,;ॹ;\\\n\n\t\n,\naॹπ\r\"\n\t✅ॹ\r\tᐿ✅;\r\ta�ᐿ\t\t\nπᐿ<ॹ\tᐿ\na\nᐿ\tππ<<π\t✅;\r\"ॹ\n\na�<ᐿ\r\nᐿᐿ\";a<\rॹ\\πॹ\tᐿ\n\nᐿπ;\t;;✅ᐿ✅✅<�ॹ;\tᐿ,,\t,π✅\nᐿॹ;\r\"ᐿa,ᐿᐿᐿa\nॹ<aॹ\r,;π<<\nπa�\n\\\r,ॹπ�\n\"ᐿππ\n✅ॹπ\ra,πॹ\n\t<;πᐿᐿ✅ॹ\t�a✅\r�\"a,π��,ॹa\n\\\rॹ\nॹ\"π\"π\ta�<π�\r;a,a\r<πᐿ\na<\r\t\nॹ\\\\\n\\<�\\�aπa;\r\\,,\nॹ\"✅;\"\n✅ᐿ✅a\n<ππ\tπ<a\t�\\<\"✅\\\nᐿ;\r\t✅�✅π\r\r\r\n\",ᐿ;ॹ\nᐿ\r\"\naa\"\n\t<,✅<a\\\n\"ॹᐿ\n\\\t\t\r\"ॹ<,,π�\"ॹ<ॹ,\\ᐿ<\\π\"\\<ᐿ\n\rॹ\na\t\nπ;\\π\\✅\r,\r�\",,;;,<\n\t\"\\\r\r\"ॹ✅ॹ�\n�✅�\n�\\\n\n\rᐿॹ\tπa<;;\n\n�a\n\\ॹॹ\t;\n<\t\\<ॹॹ�✅π\t\"<\n\tᐿπᐿ\"\"�;\t;ॹ<π<✅\nππ<\"\rॹ�πᐿ�\rπ,<,<ᐿ<;;�,\t<<ᐿ\t<\tॹ,π,<\\a\t;\n\r�a✅\r\r\t\nᐿᐿ✅\r;\n;�;\r✅\n,;✅ᐿ,\\\n<,<\\\t\n;<\\aᐿ\r,\n;\\\r�\rॹ\\\t\";\t�;\n,\ta✅\r\t\r,\n\\\t\nᐿ✅\\\"\naᐿ\\\\\";\r<�✅;aॹ\t\t\\\t\tᐿ�\r,\"\n\"\taᐿ\na\rππॹ\nπᐿ\rॹ\nॹ\tπ,π\r<\"\n,�\na;�\\✅,<\"\"�\nππ\t\nπaॹaa✅\\;\\a\r\rᐿ,\\�;\\ᐿᐿ<a\"\r;π\"\\ᐿπ\tπॹ;\\a\ra;\t\n\\�\r<\t<ॹ\r✅\na\t\t<\n\n
✅π\nᐿ✅\\<,\nπ\rᐿ✅a\"\r\"\n✅ॹ\\�\r\n\\�\nॹ\\<\\<\n\n\n"/PrefixEnd [r26]
I180827 20:41:54.112580 52726 storage/replica_range_lease.go:554  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] transferring lease to s3
I180827 20:41:54.113319 52726 storage/replica_range_lease.go:617  [n1,s1,r25/1:/Table/53/2/"\x{15\x8…-c0\t\…}] done transferring lease to s3: <nil>
I180827 20:41:54.113657 51850 storage/replica_proposal.go:210  [n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.112607829,0 epo=1 pro=1535402514.112610518,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.117098 52757 storage/replica_command.go:298  [n1,s1,r26/1:/{Table/53/2/"…-Max}] initiating a split of this range at key /Table/53/3/";π,\\✅✅ᐿπ✅,�a\r<\nπᐿॹ;π\\,✅\nᐿॹ✅�,\r�\r\r\r,;\r;ᐿ,\n\nᐿaπ\r,,✅\na,a\\<✅\"✅\\,,a\"π\r\n�✅π\"ॹπ;\nπ;<,<\n;<\n\tॹ\rπ\r,a\\\t�\n\r\\ᐿ<\t,\n\\ᐿa\t\t\n\nπ\t\\\n\\πa;π\r\rᐿ\",a<\"\n�\r\ta\r\t�\r\t✅\t;ᐿᐿ��\nᐿᐿ\\,ᐿᐿ\na\"ᐿ\"\"aa\n;✅π\nॹ\\\"\"�✅ॹॹ\\\nॹ\t\nॹ✅,\n\"πᐿ\n\n;<<;\r\tᐿ�,,\\\n\n\n\n\nπ�;\n;,\"✅\r;a\n\\;aa\n\n\n;\n\n<<ॹ�\nπa�πॹaॹ\r\n;✅✅,;ᐿ\n;π\"\\πᐿ\n\"\n\\a\\aπ✅ᐿ\n<\",\tॹ\r<;\";\nππ\"�\n\n\t,π\\\\<\\\t;ॹπ\\,;�✅ॹ,\r<\n✅�;ᐿ\",;✅\n\nॹॹ\r\n✅\n<<\n\"\",a\t\r\r,ॹ\t�;�,π\\\t,;\\\"✅\t\n\n\nᐿ\n<\\\rᐿ,;�\nπ\r\\<ᐿᐿ\n<✅ᐿaॹ�✅\n\t,π\"\r<<\n\nπ\tπ✅\n<\\πॹaᐿ\t;�ॹॹ,\"\n\\a\n,\"πॹ,\r,ॹॹ\\\";�\n\\π✅\n\"ππ\n✅�πॹ,\r✅\n;π;ᐿ\"\nᐿ✅\rॹ;;\n\"ॹ\"�\"a\n\rπ\n<\n\t\"aπ\t;\\\n\";\"π,\ta\t\n\nᐿ<,;�<ᐿ\"\\ᐿᐿa,\n;;ॹॹ\tॹπa�ᐿ\ra,π<✅\tᐿᐿ\n,✅\ra�\"\r\r\",;π�<;\n<;ᐿ\"�;ᐿ;�;✅\t\\<\\<;πa✅\rॹ\\\\\\ᐿ\n;\r\t\n\\\r\"✅\n\tπ✅\"\"<\r\rπ\r<,\n,\\✅ᐿᐿ�\t�,ππ;ᐿ\t�\"\\ππ\"ॹ,πa<\n\n<��\rॹॹ\t,\r\"ॹ✅✅\n\n;\\ॹ;π<\"�\t�<\"ᐿॹॹ;\n<\n\r\na\t�ॹᐿa\n,\"\t\r\"\n,\r<,\"\tᐿ\\\n<,;<\"\t\n\nᐿ,ᐿ\tπ✅\n,\r,\n\t<,�\\;<\\a\nπ\t,\t�ॹ\t\n�a✅\n✅\nॹ\";\r\t✅<\tᐿ\n\tᐿॹ✅\"\r\rॹ✅π\n\n,\t\\\t\\;\"a\t,ॹॹ\"aॹ\n,\n✅�\t\nॹᐿ\n\r✅<πॹ\n✅\tॹ\"ॹ\"�\r\\;\\✅;ॹπ;\n\nᐿ<\r<\"ॹ\n,\n;π\nॹ\ta✅\n�;ᐿ\"a�✅π\r✅ॹ,\n\n\",✅\nᐿ\n<�\r\nπᐿ\"πॹᐿ\r�\n<,✅a\\ॹ\r✅<;πᐿ✅ॹ<\"<✅\"π,\\\rπ\\<\"<\"π\n✅<;\\�\tॹ\n\n\r<\n\rᐿ\nᐿaॹaॹ\\\r<<\n\r\n�\ta,\nॹॹᐿ\n,π✅<;\\\nπॹπᐿॹ<;\"a\r<;\t\t<,;�π\n<✅ॹॹ\tᐿ\rᐿaaaॹ\t\\,ᐿ✅\n\\ॹ<\"π\t\r\"\tᐿ\n\ta\t,<ππ;\n\\\r�\n,\n\n\\ᐿa\nॹᐿa\n\t\n\t\n✅\"ᐿ\"\r\n\n\"�\r\n\n<<π\ra✅\\<ᐿ�\n\n✅�a✅�"/105 [r27]
I180827 20:41:54.137397 52716 storage/store_snapshot.go:615  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] sending Raft snapshot 547ab8d0 at applied index 21
I180827 20:41:54.140430 52716 storage/store_snapshot.go:657  [raftsnapshot,n3,s3,r25/2:/Table/53/2/"\x{15\x8…-c0\t\…}] streamed snapshot to (n2,s2):3: kv pairs: 14, log entries: 2, rate-limit: 8.0 MiB/sec, 22ms
I180827 20:41:54.140860 52705 storage/replica_raftstorage.go:784  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applying Raft snapshot at index 21 (id=547ab8d0, encoded size=31270, 1 rocksdb batches, 2 log entries)
I180827 20:41:54.162696 52705 storage/replica_raftstorage.go:790  [n2,s2,r25/3:/Table/53/2/"\x{15\x8…-c0\t\…}] applied Raft snapshot in 22ms [clear=0ms batch=0ms entries=21ms commit=0ms]
I180827 20:41:54.166103 52791 storage/replica_range_lease.go:554  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] transferring lease to s3
I180827 20:41:54.167118 51903 storage/replica_proposal.go:210  [n3,s3,r26/2:/Table/53/{2/"\xc0…-3/";π,…}] new range lease repl=(n3,s3):2 seq=3 start=1535402514.166156675,0 epo=1 pro=1535402514.166159831,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
I180827 20:41:54.167216 52791 storage/replica_range_lease.go:617  [n1,s1,r26/1:/Table/53/{2/"\xc0…-3/";π,…}] done transferring lease to s3: <nil>
I180827 20:41:54.172711 52589 storage/replica_command.go:298  [n1,s1,r27/1:/{Table/53/3/"…-Max}] initiating a split of this range at key /Table/54 [r28]
I180827 20:41:54.182740 52807 storage/replica_range_lease.go:554  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] transferring lease to s2
I180827 20:41:54.183947 52807 storage/replica_range_lease.go:617  [n1,s1,r27/1:/Table/5{3/3/";π…-4}] done transferring lease to s2: <nil>
I180827 20:41:54.184954 51646 storage/replica_proposal.go:210  [n2,s2,r27/3:/Table/5{3/3/";π…-4}] new range lease repl=(n2,s2):3 seq=3 start=1535402514.182761052,0 epo=1 pro=1535402514.182764040,0 following repl=(n1,s1):1 seq=2 start=1535402512.768597075,0 exp=1535402521.769064687,0 pro=1535402512.769088099,0
--- FAIL: test/TestImportPgDump (0.000s)
Test ended in panic.

------- Stdout: -------
W180827 20:41:52.746991 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:52.757923 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:52.758132 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.758156 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:52.761168 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:52.761191 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:52.761204 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180827 20:41:52.767725 50862 server/node.go:373  [n?] **** cluster d5e53e69-a109-4eb6-91bf-29e74ae744ba has been created
I180827 20:41:52.767752 50862 server/server.go:1401  [n?] **** add additional nodes by specifying --join=127.0.0.1:41477
I180827 20:41:52.767936 50862 gossip/gossip.go:382  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41477" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402512767856449 
I180827 20:41:52.770338 50862 storage/store.go:1541  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180827 20:41:52.770546 50862 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180827 20:41:52.770626 50862 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180827 20:41:52.770721 50862 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180827 20:41:52.770760 50862 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:52.770788 50862 server/node.go:546  [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I180827 20:41:52.771023 50862 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.771066 50862 server/server.go:1807  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:52.771159 50862 server/server.go:1538  [n1] starting https server at 127.0.0.1:42563 (use: 127.0.0.1:42563)
I180827 20:41:52.771188 50862 server/server.go:1540  [n1] starting grpc/postgres server at 127.0.0.1:41477
I180827 20:41:52.771209 50862 server/server.go:1541  [n1] advertising CockroachDB node at 127.0.0.1:41477
I180827 20:41:52.775258 51089 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:52.776337 50925 storage/replica_command.go:298  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180827 20:41:52.788832 51094 storage/replica_command.go:298  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180827 20:41:52.790188 51128 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=ec083bbe key=/Table/SystemConfigSpan/Start rw=true pri=0.01126188 iso=SERIALIZABLE stat=PENDING epo=0 ts=1535402512.772758792,0 orig=1535402512.772758792,0 max=1535402512.772758792,0 wto=false rop=false seq=6
I180827 20:41:52.790695 51118 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180827 20:41:52.795125 51100 storage/replica_command.go:298  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180827 20:41:52.800783 51143 storage/replica_command.go:298  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180827 20:41:52.807906 51165 storage/replica_command.go:298  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180827 20:41:52.811784 51141 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.0-12 User:root}
I180827 20:41:52.818164 50799 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180827 20:41:52.821094 51188 storage/replica_command.go:298  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180827 20:41:52.830709 51176 storage/replica_command.go:298  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180827 20:41:52.839374 51187 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:045a1c98-219f-445b-bd6b-d481f04d6b0d User:root}
I180827 20:41:52.849534 51154 storage/replica_command.go:298  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180827 20:41:52.855898 51218 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180827 20:41:52.861462 51240 storage/replica_command.go:298  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180827 20:41:52.868342 51268 storage/replica_command.go:298  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180827 20:41:52.872706 51256 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180827 20:41:52.874819 51264 storage/replica_command.go:298  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180827 20:41:52.876403 50862 server/server.go:1594  [n1] done ensuring all necessary migrations have run
I180827 20:41:52.876433 50862 server/server.go:1597  [n1] serving sql connections
I180827 20:41:52.879108 51233 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180827 20:41:52.879639 51235 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:41477} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402512767856449 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402512767856449 LastUp:1535402512767856449}
I180827 20:41:52.880318 51302 storage/replica_command.go:298  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180827 20:41:52.927701 50819 storage/replica_command.go:298  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180827 20:41:52.940165 51323 storage/replica_command.go:298  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180827 20:41:52.948539 51355 storage/replica_command.go:298  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180827 20:41:52.953658 51380 storage/replica_command.go:298  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180827 20:41:52.961237 51137 storage/replica_command.go:298  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180827 20:41:52.966548 50832 storage/replica_command.go:298  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180827 20:41:52.977113 51362 storage/replica_command.go:298  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180827 20:41:53.041315 51440 storage/replica_command.go:298  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180827 20:41:53.047478 51414 storage/replica_command.go:298  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180827 20:41:53.081214 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.089127 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.089322 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.089338 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.102793 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.102863 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.102878 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.102953 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.103001 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.115344 51458 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.125579 51530 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:36113}
I180827 20:41:53.127987 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.128034 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.128397 51575 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.134920 51574 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.135628 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.136461 50862 server/node.go:428  [n?] new node allocated ID 2
I180827 20:41:53.136541 50862 gossip/gossip.go:382  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36113" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513136479434 
I180827 20:41:53.136591 50862 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180827 20:41:53.136624 50862 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180827 20:41:53.137485 51552 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180827 20:41:53.139442 50862 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180827 20:41:53.139577 50862 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180827 20:41:53.140140 50862 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.140166 50862 server/server.go:1807  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.140233 50862 server/server.go:1538  [n2] starting https server at 127.0.0.1:39947 (use: 127.0.0.1:39947)
I180827 20:41:53.140246 50862 server/server.go:1540  [n2] starting grpc/postgres server at 127.0.0.1:36113
I180827 20:41:53.140256 50862 server/server.go:1541  [n2] advertising CockroachDB node at 127.0.0.1:36113
I180827 20:41:53.140624 51685 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.153945 50862 server/server.go:1594  [n2] done ensuring all necessary migrations have run
I180827 20:41:53.153974 50862 server/server.go:1597  [n2] serving sql connections
W180827 20:41:53.165268 50862 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180827 20:41:53.185802 51467 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180827 20:41:53.186848 51469 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:36113} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513136479434 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513136479434 LastUp:1535402513136479434}
I180827 20:41:53.189622 50862 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180827 20:41:53.189776 50862 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.189808 50862 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180827 20:41:53.207782 50862 server/config.go:496  [n?] 1 storage engine initialized
I180827 20:41:53.207807 50862 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180827 20:41:53.207815 50862 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180827 20:41:53.207911 50862 gossip/gossip.go:1371  [n?] no incoming or outgoing connections
I180827 20:41:53.207947 50862 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180827 20:41:53.211471 51475 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.223653 51740 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:41477
I180827 20:41:53.223954 51816 gossip/server.go:217  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46463}
I180827 20:41:53.224401 50862 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180827 20:41:53.224432 50862 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "d5e53e69-a109-4eb6-91bf-29e74ae744ba"
I180827 20:41:53.224690 51837 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.225445 51836 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226030 50862 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180827 20:41:53.226699 50862 server/node.go:428  [n?] new node allocated ID 3
I180827 20:41:53.226763 50862 gossip/gossip.go:382  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46463" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-alpha.20180702-2025-gf1e7bb1" started_at:1535402513226706701 
I180827 20:41:53.226805 50862 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180827 20:41:53.226851 50862 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180827 20:41:53.227563 51809 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180827 20:41:53.227869 51810 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180827 20:41:53.228504 50862 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180827 20:41:53.229044 50862 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180827 20:41:53.229696 50862 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.229749 50862 server/server.go:1807  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180827 20:41:53.235251 50862 server/server.go:1538  [n3] starting https server at 127.0.0.1:43307 (use: 127.0.0.1:43307)
I180827 20:41:53.235271 50862 server/server.go:1540  [n3] starting grpc/postgres server at 127.0.0.1:46463
I180827 20:41:53.235283 50862 server/server.go:1541  [n3] advertising CockroachDB node at 127.0.0.1:46463
I180827 20:41:53.240284 50862 server/server.go:1594  [n3] done ensuring all necessary migrations have run
I180827 20:41:53.240307 50862 server/server.go:1597  [n3] serving sql connections
I180827 20:41:53.243124 51945 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180827 20:41:53.248117 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot 59e1afc9 at applied index 16
I180827 20:41:53.249136 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.251012 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 6, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.251369 51983 storage/replica_raftstorage.go:784  [n2,s2,r20/?:{-}] applying preemptive snapshot at index 16 (id=59e1afc9, encoded size=2241, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.254056 51839 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180827 20:41:53.255122 51841 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:46463} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-alpha.20180702-2025-gf1e7bb1 StartedAt:1535402513226706701 LocalityAddress:[]} ClusterID:d5e53e69-a109-4eb6-91bf-29e74ae744ba StartedAt:1535402513226706701 LastUp:1535402513226706701}
I180827 20:41:53.256061 51983 storage/replica_raftstorage.go:790  [n2,s2,r20/?:/Table/{23-50}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.256605 50930 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.259565 50930 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.261627 51625 rpc/nodedialer/nodedialer.go:92  [n2] connection to n1 established
I180827 20:41:53.264544 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.286630 50930 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r21/1:/Table/5{0-1}] connection to n3 established
I180827 20:41:53.287245 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.287799 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r21/1:/Table/5{0-1}] sending preemptive snapshot de08568a at applied index 18
I180827 20:41:53.288157 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.288623 51959 storage/replica_raftstorage.go:784  [n3,s3,r21/?:{-}] applying preemptive snapshot at index 18 (id=de08568a, encoded size=2646, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.289814 51959 storage/replica_raftstorage.go:790  [n3,s3,r21/?:/Table/5{0-1}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180827 20:41:53.290329 50930 storage/replica_command.go:812  [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.293678 50930 storage/replica.go:3743  [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.294953 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r22/1:/{Table/51-Max}] sending preemptive snapshot a84e7278 at applied index 12
I180827 20:41:53.295229 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r22/1:/{Table/51-Max}] streamed snapshot to (n3,s3):?: kv pairs: 7, log entries: 2, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.295441 51883 rpc/nodedialer/nodedialer.go:92  [n3] connection to n1 established
I180827 20:41:53.295585 51953 storage/replica_raftstorage.go:784  [n3,s3,r22/?:{-}] applying preemptive snapshot at index 12 (id=a84e7278, encoded size=386, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.295717 51953 storage/replica_raftstorage.go:790  [n3,s3,r22/?:/{Table/51-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.295955 50930 storage/replica_command.go:812  [replicate,n1,s1,r22/1:/{Table/51-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, next=2, gen=0]
I180827 20:41:53.298097 50930 storage/replica.go:3743  [n1,s1,r22/1:/{Table/51-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.301122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r8/1:/Table/1{1-2}] sending preemptive snapshot 201bdccc at applied index 18
I180827 20:41:53.301565 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.306578 52088 storage/replica_raftstorage.go:784  [n3,s3,r8/?:{-}] applying preemptive snapshot at index 18 (id=201bdccc, encoded size=4352, 1 rocksdb batches, 8 log entries)
I180827 20:41:53.306868 52088 storage/replica_raftstorage.go:790  [n3,s3,r8/?:/Table/1{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.307601 50930 storage/replica_command.go:812  [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.311873 50930 storage/replica.go:3743  [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.314134 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r17/1:/Table/2{0-1}] sending preemptive snapshot 53116eb2 at applied index 16
I180827 20:41:53.314317 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r17/1:/Table/2{0-1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.314683 52103 storage/replica_raftstorage.go:784  [n3,s3,r17/?:{-}] applying preemptive snapshot at index 16 (id=53116eb2, encoded size=2105, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.314887 52103 storage/replica_raftstorage.go:790  [n3,s3,r17/?:/Table/2{0-1}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.315401 50930 storage/replica_command.go:812  [replicate,n1,s1,r17/1:/Table/2{0-1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r17:/Table/2{0-1} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.318398 50930 storage/replica.go:3743  [n1,s1,r17/1:/Table/2{0-1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.319436 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r16/1:/Table/{19-20}] sending preemptive snapshot e0be8540 at applied index 16
I180827 20:41:53.319691 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.320127 52072 storage/replica_raftstorage.go:784  [n2,s2,r16/?:{-}] applying preemptive snapshot at index 16 (id=e0be8540, encoded size=2109, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.320339 52072 storage/replica_raftstorage.go:790  [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.320816 50930 storage/replica_command.go:812  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.323849 50930 storage/replica.go:3743  [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.326208 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r15/1:/Table/1{8-9}] sending preemptive snapshot d259ae5c at applied index 16
I180827 20:41:53.326404 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.326731 52116 storage/replica_raftstorage.go:784  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 16 (id=d259ae5c, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.326923 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.326953 52116 storage/replica_raftstorage.go:790  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.334514 50930 storage/replica_command.go:812  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.337656 50930 storage/replica.go:3743  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.338767 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r14/1:/Table/1{7-8}] sending preemptive snapshot 9d0058d5 at applied index 16
I180827 20:41:53.339034 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.339612 52090 storage/replica_raftstorage.go:784  [n2,s2,r14/?:{-}] applying preemptive snapshot at index 16 (id=9d0058d5, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.339831 52090 storage/replica_raftstorage.go:790  [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.340173 50930 storage/replica_command.go:812  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.343121 50930 storage/replica.go:3743  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.345432 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r9/1:/Table/1{2-3}] sending preemptive snapshot 0eea2d20 at applied index 26
I180827 20:41:53.345859 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 53, log entries: 16, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.347137 52066 storage/replica_raftstorage.go:784  [n2,s2,r9/?:{-}] applying preemptive snapshot at index 26 (id=0eea2d20, encoded size=15139, 1 rocksdb batches, 16 log entries)
I180827 20:41:53.347467 52066 storage/replica_raftstorage.go:790  [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.348208 50930 storage/replica_command.go:812  [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.352166 50930 storage/replica.go:3743  [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.353188 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] sending preemptive snapshot 0cdee511 at applied index 39
I180827 20:41:53.353765 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 36, log entries: 29, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.354286 51723 storage/replica_raftstorage.go:784  [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=0cdee511, encoded size=98384, 1 rocksdb batches, 29 log entries)
I180827 20:41:53.354994 51723 storage/replica_raftstorage.go:790  [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.355529 50930 storage/replica_command.go:812  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.358523 50930 storage/replica.go:3743  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.360250 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] sending preemptive snapshot 965d58b1 at applied index 19
I180827 20:41:53.360436 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.360789 52150 storage/replica_raftstorage.go:784  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 19 (id=965d58b1, encoded size=4003, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.361043 52150 storage/replica_raftstorage.go:790  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.361522 50930 storage/replica_command.go:812  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.364392 50930 storage/replica.go:3743  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.366422 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r12/1:/Table/1{5-6}] sending preemptive snapshot 811af376 at applied index 16
I180827 20:41:53.366638 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.367089 52137 storage/replica_raftstorage.go:784  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 16 (id=811af376, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.367359 52137 storage/replica_raftstorage.go:790  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.368127 50930 storage/replica_command.go:812  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.371691 50930 storage/replica.go:3743  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.374563 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r19/1:/Table/2{2-3}] sending preemptive snapshot 9cd02555 at applied index 16
I180827 20:41:53.374760 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.375252 52080 storage/replica_raftstorage.go:784  [n3,s3,r19/?:{-}] applying preemptive snapshot at index 16 (id=9cd02555, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.375582 52080 storage/replica_raftstorage.go:790  [n3,s3,r19/?:/Table/2{2-3}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.375950 50930 storage/replica_command.go:812  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.381819 50930 storage/replica.go:3743  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.386461 52091 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180827 20:41:53.386637 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r10/1:/Table/1{3-4}] sending preemptive snapshot a16f4b15 at applied index 64
I180827 20:41:53.388005 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 204, log entries: 54, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.388536 52181 storage/replica_raftstorage.go:784  [n3,s3,r10/?:{-}] applying preemptive snapshot at index 64 (id=a16f4b15, encoded size=62836, 1 rocksdb batches, 54 log entries)
I180827 20:41:53.389154 52181 storage/replica_raftstorage.go:790  [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.389513 50930 storage/replica_command.go:812  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.392649 50930 storage/replica.go:3743  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.394122 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] sending preemptive snapshot 69adabc1 at applied index 23
I180827 20:41:53.394365 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 13, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.394729 52213 storage/replica_raftstorage.go:784  [n2,s2,r2/?:{-}] applying preemptive snapshot at index 23 (id=69adabc1, encoded size=6277, 1 rocksdb batches, 13 log entries)
I180827 20:41:53.394981 52213 storage/replica_raftstorage.go:790  [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.395465 50930 storage/replica_command.go:812  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.398757 50930 storage/replica.go:3743  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.399709 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r18/1:/Table/2{1-2}] sending preemptive snapshot e9df2a4a at applied index 16
I180827 20:41:53.400036 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r18/1:/Table/2{1-2}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.400391 52185 storage/replica_raftstorage.go:784  [n3,s3,r18/?:{-}] applying preemptive snapshot at index 16 (id=e9df2a4a, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.400594 52185 storage/replica_raftstorage.go:790  [n3,s3,r18/?:/Table/2{1-2}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.400882 50930 storage/replica_command.go:812  [replicate,n1,s1,r18/1:/Table/2{1-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r18:/Table/2{1-2} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.407636 50930 storage/replica.go:3743  [n1,s1,r18/1:/Table/2{1-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.408861 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r13/1:/Table/1{6-7}] sending preemptive snapshot 6f914d55 at applied index 16
I180827 20:41:53.409071 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.409426 52218 storage/replica_raftstorage.go:784  [n2,s2,r13/?:{-}] applying preemptive snapshot at index 16 (id=6f914d55, encoded size=2276, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.409616 52218 storage/replica_raftstorage.go:790  [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.409970 50930 storage/replica_command.go:812  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.411262 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 22 underreplicated ranges
I180827 20:41:53.412831 50930 storage/replica.go:3743  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.414081 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot cca961c1 at applied index 16
I180827 20:41:53.414277 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 6, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.414576 52199 storage/replica_raftstorage.go:784  [n3,s3,r11/?:{-}] applying preemptive snapshot at index 16 (id=cca961c1, encoded size=2272, 1 rocksdb batches, 6 log entries)
I180827 20:41:53.414816 52199 storage/replica_raftstorage.go:790  [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.415293 50930 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.418111 50930 storage/replica.go:3743  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.419054 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 3c3a015f at applied index 27
I180827 20:41:53.423022 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1391, log entries: 2, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.423893 52201 storage/replica_raftstorage.go:784  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 27 (id=3c3a015f, encoded size=194658, 1 rocksdb batches, 2 log entries)
I180827 20:41:53.429501 52201 storage/replica_raftstorage.go:790  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=2ms commit=4ms]
I180827 20:41:53.433500 50930 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.437580 50930 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.440575 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot cbd412df at applied index 21
I180827 20:41:53.440794 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 11, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.441181 52260 storage/replica_raftstorage.go:784  [n3,s3,r6/?:{-}] applying preemptive snapshot at index 21 (id=cbd412df, encoded size=4339, 1 rocksdb batches, 11 log entries)
I180827 20:41:53.441400 52260 storage/replica_raftstorage.go:790  [n3,s3,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.441676 50930 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.448564 52224 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180827 20:41:53.461587 50930 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180827 20:41:53.463345 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot 114f4385 at applied index 29
I180827 20:41:53.464896 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n2,s2):?: kv pairs: 59, log entries: 19, rate-limit: 8.0 MiB/sec, 3ms
I180827 20:41:53.465343 52280 storage/replica_raftstorage.go:784  [n2,s2,r7/?:{-}] applying preemptive snapshot at index 29 (id=114f4385, encoded size=16646, 1 rocksdb batches, 19 log entries)
I180827 20:41:53.465821 52280 storage/replica_raftstorage.go:790  [n2,s2,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.466988 50930 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.472743 50930 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.474632 50930 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 0a244018 at applied index 114
I180827 20:41:53.475250 50930 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 73, log entries: 90, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.475827 52267 storage/replica_raftstorage.go:784  [n2,s2,r1/?:{-}] applying preemptive snapshot at index 114 (id=0a244018, encoded size=40271, 1 rocksdb batches, 90 log entries)
I180827 20:41:53.476525 52267 storage/replica_raftstorage.go:790  [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.476869 50930 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2, gen=1]
I180827 20:41:53.482912 50930 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180827 20:41:53.483281 50930 storage/queue.go:873  [n1,replicate] purgatory is now empty
I180827 20:41:53.485684 52286 storage/store_snapshot.go:615  [replicate,n1,s1,r20/1:/Table/{23-50}] sending preemptive snapshot f1426c69 at applied index 19
I180827 20:41:53.487316 52286 storage/store_snapshot.go:657  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.487681 52252 storage/replica_raftstorage.go:784  [n3,s3,r20/?:{-}] applying preemptive snapshot at index 19 (id=f1426c69, encoded size=3273, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.487932 52252 storage/replica_raftstorage.go:790  [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.488311 52286 storage/replica_command.go:812  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.503580 52286 storage/replica.go:3743  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.505707 52235 storage/store_snapshot.go:615  [replicate,n1,s1,r1/1:/{Min-System/}] sending preemptive snapshot 99036b07 at applied index 119
I180827 20:41:53.506514 52235 storage/store_snapshot.go:657  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 78, log entries: 95, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.507282 52188 storage/replica_raftstorage.go:784  [n3,s3,r1/?:{-}] applying preemptive snapshot at index 119 (id=99036b07, encoded size=42101, 1 rocksdb batches, 95 log entries)
I180827 20:41:53.508109 52188 storage/replica_raftstorage.go:790  [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.508641 52235 storage/replica_command.go:812  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.512524 52235 storage/replica.go:3743  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.513999 52209 storage/store_snapshot.go:615  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] sending preemptive snapshot bb53109c at applied index 32
I180827 20:41:53.514379 52209 storage/store_snapshot.go:657  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] streamed snapshot to (n3,s3):?: kv pairs: 60, log entries: 22, rate-limit: 8.0 MiB/sec, 1ms
I180827 20:41:53.514821 52292 storage/replica_raftstorage.go:784  [n3,s3,r7/?:{-}] applying preemptive snapshot at index 32 (id=bb53109c, encoded size=17687, 1 rocksdb batches, 22 log entries)
I180827 20:41:53.515905 52292 storage/replica_raftstorage.go:790  [n3,s3,r7/?:/Table/{SystemCon…-11}] applied preemptive snapshot in 1ms [clear=1ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.516367 52209 storage/replica_command.go:812  [replicate,n1,s1,r7/1:/Table/{SystemCon…-11}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r7:/Table/{SystemConfigSpan/Start-11} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
I180827 20:41:53.520158 52209 storage/replica.go:3743  [n1,s1,r7/1:/Table/{SystemCon…-11}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I180827 20:41:53.521958 52312 storage/store_snapshot.go:615  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] sending preemptive snapshot 2ca43612 at applied index 24
I180827 20:41:53.522776 52312 storage/store_snapshot.go:657  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 14, rate-limit: 8.0 MiB/sec, 2ms
I180827 20:41:53.523128 52239 storage/replica_raftstorage.go:784  [n2,s2,r6/?:{-}] applying preemptive snapshot at index 24 (id=2ca43612, encoded size=5410, 1 rocksdb batches, 14 log entries)
I180827 20:41:53.523377 52239 storage/replica_raftstorage.go:790  [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.523701 52312 storage/replica_command.go:812  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.525176 50862 testutils/testcluster/testcluster.go:536  [n1,s1] has 19 underreplicated ranges
I180827 20:41:53.527482 52312 storage/replica.go:3743  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.528875 52327 storage/store_snapshot.go:615  [replicate,n1,s1,r5/1:/System/ts{d-e}] sending preemptive snapshot 731be2ae at applied index 30
I180827 20:41:53.532860 52327 storage/store_snapshot.go:657  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 1392, log entries: 5, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.533361 52316 storage/replica_raftstorage.go:784  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 30 (id=731be2ae, encoded size=195741, 1 rocksdb batches, 5 log entries)
I180827 20:41:53.535834 52316 storage/replica_raftstorage.go:790  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=2ms]
I180827 20:41:53.536253 52327 storage/replica_command.go:812  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n3,s3):2, next=3, gen=1]
I180827 20:41:53.540576 52327 storage/replica.go:3743  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I180827 20:41:53.545804 52341 storage/store_snapshot.go:615  [replicate,n1,s1,r11/1:/Table/1{4-5}] sending preemptive snapshot 7497a95f at applied index 19
I180827 20:41:53.546108 52341 storage/store_snapshot.go:657  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 8.0 MiB/sec, 4ms
I180827 20:41:53.546590 52275 storage/replica_raftstorage.go:784  [n2,s2,r11/?:{-}] applying preemptive snapshot at index 19 (id=7497a95f, encoded size=3304, 1 rocksdb batches, 9 log entries)
I180827 20:41:53.546960 52275 storage/replica_raftstorage.go:790  [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180827 20:41:53.547386 52341 storage/replica_command.go:812  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (

Please assign, take a look and update the issue accordingly.

Failed tests (): TestRaftRemoveRace

The following test appears to have failed:

#:

W1215 01:18:50.477119 959 multiraft/multiraft.go:1233  aborting configuration change: key range /Local/Range/RangeDescriptor/""-/Min outside of bounds of range /Min-/Min
W1215 01:18:50.478804 959 multiraft/multiraft.go:1139  failed to look up replica ID for range 1 (disabling replica ID check): storage/store.go:1695: store 3 not found as replica of range 1
I1215 01:18:53.557582 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:18:53.557889 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:18:53.558132 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- FAIL: TestRaftRemoveRace (3.53s)
    <autogenerated>:32: storage/client_test.go:521: condition failed to evaluate within 3s: storage/client_test.go:517: range not found on store 2
=== RUN   TestStoreRangeRemoveDead
E1215 01:18:53.562437 959 gossip/gossip.go:181  different node IDs were set for the same gossip instance (2147483647, 1)
I1215 01:18:53.563875 959 multiraft/multiraft.go:579  node 1 starting
I1215 01:18:53.564727 959 storage/replica.go:1308  gossiping cluster id  from store 1, range 1
I1215 01:18:53.565694 959 raft/raft.go:446  [group 1] 1 became follower at term 5
I1215 01:18:53.565898 959 raft/raft.go:234  [group 1] newRaft 1 [peers: [1], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I1215 01:18:53.566074 959 multiraft/multiraft.go:999  node 1 campaigning because initial confstate is [1]
I1215 01:18:53.566196 959 raft/raft.go:526  [group 1] 1 is starting a new election at term 5
I1215 01:18:53.566292 959 raft/raft.go:459  [group 1] 1 became candidate at term 6
--
I1215 01:19:05.063638 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 01:19:05.063778 959 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestLeaderAfterSplit (0.44s)
=== RUN   Example_rebalancing
--- PASS: Example_rebalancing (0.62s)
FAIL
FAIL    github.com/cockroachdb/cockroach/storage    32.371s
=== RUN   TestBatchBasics
I1215 01:18:45.307778 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchBasics (0.01s)
=== RUN   TestBatchGet
I1215 01:18:45.312076 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchGet (0.01s)
=== RUN   TestBatchMerge
I1215 01:18:45.320845 1019 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchMerge (0.01s)
=== RUN   TestBatchProto


Test failure in CI build 1825

The following test appears to have failed:

#1825:

W1215 17:33:16.855575 972 multiraft/multiraft.go:1139  failed to look up replica ID for range 1 (disabling replica ID check): storage/store.go:1695: store 3 not found as replica of range 1
I1215 17:33:16.858553 972 raft/raft.go:772  [group 1] 3 [commit: 14] ignored snapshot [index: 14, term: 6]
I1215 17:33:18.039785 972 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 17:33:18.040005 972 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 17:33:18.040209 972 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- FAIL: TestProgressWithDownNode (1.39s)
    <autogenerated>:32: storage/client_raft_test.go:813: condition failed to evaluate within 1s: storage/client_raft_test.go:810: expected [5 5 5], got [5 5 0]
=== RUN   TestReplicateAddAndRemove
E1215 17:33:18.042697 972 gossip/gossip.go:181  different node IDs were set for the same gossip instance (2147483647, 1)
I1215 17:33:18.043860 972 multiraft/multiraft.go:579  node 1 starting
I1215 17:33:18.044519 972 raft/raft.go:446  [group 1] 1 became follower at term 5
I1215 17:33:18.044694 972 raft/raft.go:234  [group 1] newRaft 1 [peers: [1], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I1215 17:33:18.044820 972 multiraft/multiraft.go:999  node 1 campaigning because initial confstate is [1]
I1215 17:33:18.044892 972 raft/raft.go:526  [group 1] 1 is starting a new election at term 5
I1215 17:33:18.044975 972 raft/raft.go:459  [group 1] 1 became candidate at term 6
I1215 17:33:18.045041 972 raft/raft.go:508  [group 1] 1 received vote from 1 at term 6
--
I1215 17:33:35.285039 972 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
I1215 17:33:35.285185 972 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestLeaderAfterSplit (0.38s)
=== RUN   Example_rebalancing
--- PASS: Example_rebalancing (0.42s)
FAIL
FAIL    github.com/cockroachdb/cockroach/storage    27.443s
=== RUN   TestBatchBasics
I1215 17:33:33.813017 1034 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchBasics (0.00s)
=== RUN   TestBatchGet
I1215 17:33:33.816717 1034 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchGet (0.00s)
=== RUN   TestBatchMerge
I1215 17:33:33.821715 1034 storage/engine/rocksdb.go:138  closing in-memory rocksdb instance
--- PASS: TestBatchMerge (0.00s)
=== RUN   TestBatchProto

Please assign, take a look and update the issue accordingly.

test failure #410

The following test appears to have failed:

#410:

--- PASS: TestRawBroadcast (0.00s)
=== RUN TestMetricSystemStop
--- PASS: TestMetricSystemStop (0.00s)
=== RUN: ExampleMetricSystem
==================
WARNING: DATA RACE
Read by goroutine 44:
  github.com/cockroachdb/cockroach/util/metrics.func·006()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:591 +0x571
  github.com/cockroachdb/cockroach/util/metrics.func·005()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:530 +0x8f

Previous write by main goroutine:
  sync/atomic.AddInt64()
      /usr/src/go/src/runtime/race_amd64.s:261 +0xc
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).Histogram()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:295 +0xd93
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).StopTimer()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:226 +0x75
  github.com/cockroachdb/cockroach/util/metrics.ExampleMetricSystem()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics_test.go:37 +0x1cd
  testing.runExample()
      /usr/src/go/src/testing/example.go:98 +0x5e6
--
Goroutine 44 (running) created at:
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).reaper()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:532 +0x1ed
==================
==================
WARNING: DATA RACE
Read by goroutine 44:
  github.com/cockroachdb/cockroach/util/metrics.func·006()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:593 +0x6d4
  github.com/cockroachdb/cockroach/util/metrics.func·005()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:530 +0x8f

Previous write by main goroutine:
  sync/atomic.AddInt64()
      /usr/src/go/src/runtime/race_amd64.s:261 +0xc
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).Histogram()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:294 +0xce9
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).StopTimer()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:226 +0x75
  github.com/cockroachdb/cockroach/util/metrics.ExampleMetricSystem()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics_test.go:37 +0x1cd
  testing.runExample()
      /usr/src/go/src/testing/example.go:98 +0x5e6

Please assign, take a look and update the issue accordingly.

Test failure in CI build 477

The following test appears to have failed:

#477:

I0403 20:44:28.978781     278 multiraft.go:633] node 257: group 1 raft ready
I0403 20:44:28.978875     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:20 XXX_unrecognized:[]}
I0403 20:44:28.979131     278 multiraft.go:641] New Entry[0]: 6/20 EntryConfChange "\b\x00\x10\x00\x18\x82\x04\"\xd0\x01\x00\x13ћε\x13\xd3\x13d'|\xbaS\x17\x84?\x10\x01\x1a\xba\x01J\xb7\x01\n\x92\x01\n\x04\b\x00\x10\r\x12\x14\b\x93\xa6Ϩ\xeb\xf9\xe6\xe8\x13\x10\xb
I0403 20:44:28.979401     278 multiraft.go:644] Committed Entry[0]: 6/20 EntryConfChange "\b\x00\x10\x00\x18\x82\x04\"\xd0\x01\x00\x13ћε\x13\xd3\x13d'|\xbaS\x17\x84?\x10\x01\x1a\xba\x01J\xb7\x01\n\x92\x01\n\x04\b\x00\x10\r\x12\x14\b\x93\xa6Ϩ\xeb\xf9\xe6\xe8\x13\x10\xb
I0403 20:44:28.984159     278 multiraft.go:751] node 257 applying configuration change {0 ConfChangeAddNode 514 [0 19 209 155 206 181 19 211 19 100 39 124 186 83 23 132 63 16 1 26 186 1 74 183 1 10 146 1 10 4 8 0 16 13 18 20 8 147 166 207 168 235 249 230 232 19 16 191 136 222 152 165 151 223 147 100 26 0 42 4 114 111 111 116 50 8 8 1 1 16 1 1 26 0 56 1 74 94 10 20 99 104 97 110 103 101 32 114 101 112 108 105 99 97 115 32 111 102 32 49 18 0 26 36 57 54 98 53 52 55 102 56 45 97 97 100 49 45 52 52 55 97 45 97 50 54 99 45 57 48 98 48 54 54 51 98 48 49 55 102 32 221 147 174 196 1 40 0 48 0 56 0 74 4 8 0 16 13 82 4 8 0 16 13 90 4 8 0 16 13 98 0 80 0 16 1 26 30 26 28 8 1 2 16 1 2 24 0 34 8 8 1 1 16 1 1 26 0 34 8 8 1 2 16 1 2 26 0] []}
--- FAIL: TestFailedReplicaChange (0.14s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:29.225028     278 multiraft.go:448] node 257: group 1 got message 514->257 MsgAppResp Term:6 Log:0/18
I0403 20:44:29.225421     278 multiraft.go:448] node 257: group 1 got message 514->257 MsgHeartbeatResp Term:6 Log:0/0
I0403 20:44:29.226984     278 client_raft_test.go:350] read value 39
I0403 20:44:29.227528     278 multiraft.go:633] node 257: group 1 raft ready
I0403 20:44:29.227680     278 multiraft.go:650] Outgoing Message[0]: 257->514 MsgHeartbeat Term:6 Log:0/0 Commit:18
--- FAIL: TestReplicateAfterTruncation (0.24s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:328 +0x28d
=== RUN TestSetupRangeTree
I0403 20:44:29.311546     278 multiraft.go:407] node 257 starting
--- FAIL: TestSetupRangeTree (0.08s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:29.928121     278 retry.go:93] Get failed; retrying immediately
I0403 20:44:29.928538     278 multiraft.go:633] node 257: group 4 raft ready
I0403 20:44:29.928644     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:27 XXX_unrecognized:[]}
I0403 20:44:29.929631     278 multiraft.go:641] New Entry[0]: 6/27 EntryNormal 00000000000000006ef7c314b25f2556: raft_id:4 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:207 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000kc\000\001rtn-" 
I0403 20:44:29.933230     278 multiraft.go:644] Committed Entry[0]: 6/27 EntryNormal 00000000000000006ef7c314b25f2556: raft_id:4 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:207 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000kc\000\001rtn-" 
--- FAIL: TestInsert (0.62s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:328 +0x28d
=== RUN TestStoreRangeSplitAtIllegalKeys
I0403 20:44:30.017238     278 multiraft.go:407] node 257 starting
--- FAIL: TestStoreRangeSplitAtIllegalKeys (0.08s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:30.156006     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:23 XXX_unrecognized:[]}
I0403 20:44:30.157038     278 multiraft.go:641] New Entry[0]: 6/23 EntryNormal 13d19bcefc85fb86279d5ab29f22bfbd: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:8 > cmd_id:<wall_time:1428093870155365254 random:2854537462043295677 > key:"a"
I0403 20:44:30.158054     278 multiraft.go:644] Committed Entry[0]: 6/23 EntryNormal 13d19bcefc85fb86279d5ab29f22bfbd: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:8 > cmd_id:<wall_time:1428093870155365254 random:2854537462043295677 > key:"a"
I0403 20:44:30.162461     278 raft.go:390] raft: 101 became follower at term 5
I0403 20:44:30.162735     278 raft.go:207] raft: newRaft 101 [peers: [101], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
--- FAIL: TestStoreRangeSplitAtRangeBounds (0.16s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:30.344507     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:25 XXX_unrecognized:[]}
I0403 20:44:30.345428     278 multiraft.go:641] New Entry[0]: 6/24 EntryNormal 0000000000000000539abb3f6f5712fd: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\000\001rdsc" us
I0403 20:44:30.346388     278 multiraft.go:641] New Entry[1]: 6/25 EntryNormal 00000000000000003fc951cc3c88058e: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\000\001rtn-" us
I0403 20:44:30.347303     278 multiraft.go:644] Committed Entry[0]: 6/24 EntryNormal 0000000000000000539abb3f6f5712fd: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\000\001rdsc" us
I0403 20:44:30.348147     278 multiraft.go:644] Committed Entry[1]: 6/25 EntryNormal 00000000000000003fc951cc3c88058e: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\000\001rtn-" us
--- FAIL: TestStoreRangeSplitConcurrent (0.19s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:30.519882     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:27 XXX_unrecognized:[]}
I0403 20:44:30.520949     278 multiraft.go:641] New Entry[0]: 6/27 EntryNormal 13d19bcf12366f1b3fc638e3d5e57a31: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:12 > cmd_id:<wall_time:1428093870519250715 random:4595423020975487537 > key:"m
I0403 20:44:30.521940     278 multiraft.go:644] Committed Entry[0]: 6/27 EntryNormal 13d19bcf12366f1b3fc638e3d5e57a31: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:12 > cmd_id:<wall_time:1428093870519250715 random:4595423020975487537 > key:"m
I0403 20:44:30.526923     278 raft.go:390] raft: 101 became follower at term 5
I0403 20:44:30.527186     278 raft.go:207] raft: newRaft 101 [peers: [101], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
--- FAIL: TestStoreRangeSplit (0.19s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:31.126562     278 multiraft.go:644] Committed Entry[0]: 6/33 EntryNormal 0000000000000000225bf3e9bd1c5be2: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:247 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\000\001rtn-" u
I0403 20:44:31.126665     278 multiraft.go:633] node 257: group 2 raft ready
I0403 20:44:31.126735     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:122 XXX_unrecognized:[]}
I0403 20:44:31.127522     278 multiraft.go:641] New Entry[0]: 6/122 EntryNormal 00000000000000000549fda49272c4a7: raft_id:2 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:247 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\001\000\001rd
I0403 20:44:31.128254     278 multiraft.go:644] Committed Entry[0]: 6/122 EntryNormal 00000000000000000549fda49272c4a7: raft_id:2 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:247 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\001\000\001rd
--- FAIL: TestStoreRangeSplitStats (0.60s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:32.236998     278 queue.go:211] processed range range=1 (""-"testZHMtQoLrvcssrtoMUpwiwLoiYLgAbUibFATwYLcrfzqyLotKmLcdRBEIZmDgeHbtygdWYtWlXDsXTwEYHjPgGFXiwfrGuOhjMCPH") from split queue in 75.409884ms
I0403 20:44:32.241415     278 queue.go:207] processing range range=1 (""-"testZHMtQoLrvcssrtoMUpwiwLoiYLgAbUibFATwYLcrfzqyLotKmLcdRBEIZmDgeHbtygdWYtWlXDsXTwEYHjPgGFXiwfrGuOhjMCPH") from split queue...
I0403 20:44:32.241603     278 queue.go:211] processed range range=1 (""-"testZHMtQoLrvcssrtoMUpwiwLoiYLgAbUibFATwYLcrfzqyLotKmLcdRBEIZmDgeHbtygdWYtWlXDsXTwEYHjPgGFXiwfrGuOhjMCPH") from split queue in 197.706µs
I0403 20:44:32.242075     278 raft.go:620] raft: 101 no leader at term 5; dropping proposal
I0403 20:44:32.242196     278 raft.go:620] raft: 101 no leader at term 5; dropping proposal
--- FAIL: TestStoreZoneUpdateAndRangeSplit (1.12s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:33.092692     278 multiraft.go:644] Committed Entry[0]: 6/66 EntryNormal 0000000000000000748c1d28f319f574: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:347 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2db5" user:"root
I0403 20:44:33.094476     278 multiraft.go:633] node 257: group 1 raft ready
I0403 20:44:33.094562     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:67 XXX_unrecognized:[]}
I0403 20:44:33.095398     278 multiraft.go:641] New Entry[0]: 6/67 EntryNormal 0000000000000000744a18eb233cb41c: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:347 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user:
I0403 20:44:33.096123     278 multiraft.go:644] Committed Entry[0]: 6/67 EntryNormal 0000000000000000744a18eb233cb41c: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:347 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user:
--- FAIL: TestStoreRangeSplitOnConfigs (0.86s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:33.362763     278 multiraft.go:644] Committed Entry[0]: 6/45 EntryNormal 13d19bcfbb42d66733465fd438c0236a: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:39 > cmd_id:<wall_time:1428093873355413095 random:3694745909393892202 > key:"\000\000meta2
I0403 20:44:33.364465     278 multiraft.go:633] node 257: group 1 raft ready
I0403 20:44:33.364557     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:46 XXX_unrecognized:[]}
I0403 20:44:33.365138     278 multiraft.go:641] New Entry[0]: 6/46 EntryNormal 13d19bcfbb43462922b0f89a51c88e5d: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:40 > cmd_id:<wall_time:1428093873355441705 random:2499771134871375453 > key:"\000\000meta1
I0403 20:44:33.365713     278 multiraft.go:644] Committed Entry[0]: 6/46 EntryNormal 13d19bcfbb43462922b0f89a51c88e5d: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:40 > cmd_id:<wall_time:1428093873355441705 random:2499771134871375453 > key:"\000\000meta1
--- FAIL: TestUpdateRangeAddressing (0.26s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:328 +0x28d
=== RUN TestUpdateRangeAddressingSplitMeta1
I0403 20:44:33.471603     278 multiraft.go:407] node 257 starting
--- FAIL: TestUpdateRangeAddressingSplitMeta1 (0.10s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/util/leaktest/leaktest.go:34 +0x36
        github.com/cockroachdb/cockroach/storage.TestMain(0xc208030b40)
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:328 +0x28d
FAIL
FAIL    github.com/cockroachdb/cockroach/storage    9.231s
=== RUN TestBatchBasics
--- PASS: TestBatchBasics (0.00s)
=== RUN TestBatchGet
--- PASS: TestBatchGet (0.00s)
=== RUN TestBatchMerge
--- PASS: TestBatchMerge (0.00s)
=== RUN TestBatchProto
--- PASS: TestBatchProto (0.00s)
=== RUN TestBatchScan
--- PASS: TestBatchScan (0.00s)
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:30.344507     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:25 XXX_unrecognized:[]}
I0403 20:44:30.345428     278 multiraft.go:641] New Entry[0]: 6/24 EntryNormal 0000000000000000539abb3f6f5712fd: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\000\001rdsc" us
I0403 20:44:30.346388     278 multiraft.go:641] New Entry[1]: 6/25 EntryNormal 00000000000000003fc951cc3c88058e: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\000\001rtn-" us
I0403 20:44:30.347303     278 multiraft.go:644] Committed Entry[0]: 6/24 EntryNormal 0000000000000000539abb3f6f5712fd: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\000\001rdsc" us
I0403 20:44:30.348147     278 multiraft.go:644] Committed Entry[1]: 6/25 EntryNormal 00000000000000003fc951cc3c88058e: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:17 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\000\001rtn-" us
--- FAIL: TestStoreRangeSplitConcurrent (0.19s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:30.519882     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:27 XXX_unrecognized:[]}
I0403 20:44:30.520949     278 multiraft.go:641] New Entry[0]: 6/27 EntryNormal 13d19bcf12366f1b3fc638e3d5e57a31: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:12 > cmd_id:<wall_time:1428093870519250715 random:4595423020975487537 > key:"m
I0403 20:44:30.521940     278 multiraft.go:644] Committed Entry[0]: 6/27 EntryNormal 13d19bcf12366f1b3fc638e3d5e57a31: raft_id:1 cmd:<end_transaction:<header:<timestamp:<wall_time:0 logical:12 > cmd_id:<wall_time:1428093870519250715 random:4595423020975487537 > key:"m
I0403 20:44:30.526923     278 raft.go:390] raft: 101 became follower at term 5
I0403 20:44:30.527186     278 raft.go:207] raft: newRaft 101 [peers: [101], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
--- FAIL: TestStoreRangeSplit (0.19s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:31.126562     278 multiraft.go:644] Committed Entry[0]: 6/33 EntryNormal 0000000000000000225bf3e9bd1c5be2: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:247 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\000\001rtn-" u
I0403 20:44:31.126665     278 multiraft.go:633] node 257: group 2 raft ready
I0403 20:44:31.126735     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:122 XXX_unrecognized:[]}
I0403 20:44:31.127522     278 multiraft.go:641] New Entry[0]: 6/122 EntryNormal 00000000000000000549fda49272c4a7: raft_id:2 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:247 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\001\000\001rd
I0403 20:44:31.128254     278 multiraft.go:644] Committed Entry[0]: 6/122 EntryNormal 00000000000000000549fda49272c4a7: raft_id:2 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:247 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000k\001\000\001rd
--- FAIL: TestStoreRangeSplitStats (0.60s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:32.236998     278 queue.go:211] processed range range=1 (""-"testZHMtQoLrvcssrtoMUpwiwLoiYLgAbUibFATwYLcrfzqyLotKmLcdRBEIZmDgeHbtygdWYtWlXDsXTwEYHjPgGFXiwfrGuOhjMCPH") from split queue in 75.409884ms
I0403 20:44:32.241415     278 queue.go:207] processing range range=1 (""-"testZHMtQoLrvcssrtoMUpwiwLoiYLgAbUibFATwYLcrfzqyLotKmLcdRBEIZmDgeHbtygdWYtWlXDsXTwEYHjPgGFXiwfrGuOhjMCPH") from split queue...
I0403 20:44:32.241603     278 queue.go:211] processed range range=1 (""-"testZHMtQoLrvcssrtoMUpwiwLoiYLgAbUibFATwYLcrfzqyLotKmLcdRBEIZmDgeHbtygdWYtWlXDsXTwEYHjPgGFXiwfrGuOhjMCPH") from split queue in 197.706µs
I0403 20:44:32.242075     278 raft.go:620] raft: 101 no leader at term 5; dropping proposal
I0403 20:44:32.242196     278 raft.go:620] raft: 101 no leader at term 5; dropping proposal
--- FAIL: TestStoreZoneUpdateAndRangeSplit (1.12s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:33.092692     278 multiraft.go:644] Committed Entry[0]: 6/66 EntryNormal 0000000000000000748c1d28f319f574: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:347 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2db5" user:"root
I0403 20:44:33.094476     278 multiraft.go:633] node 257: group 1 raft ready
I0403 20:44:33.094562     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:67 XXX_unrecognized:[]}
I0403 20:44:33.095398     278 multiraft.go:641] New Entry[0]: 6/67 EntryNormal 0000000000000000744a18eb233cb41c: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:347 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user:
I0403 20:44:33.096123     278 multiraft.go:644] Committed Entry[0]: 6/67 EntryNormal 0000000000000000744a18eb233cb41c: raft_id:1 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:347 > cmd_id:<wall_time:0 random:0 > key:"\000\000meta2\377\377" user:
--- FAIL: TestStoreRangeSplitOnConfigs (0.86s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
I0403 20:44:33.362763     278 multiraft.go:644] Committed Entry[0]: 6/45 EntryNormal 13d19bcfbb42d66733465fd438c0236a: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:39 > cmd_id:<wall_time:1428093873355413095 random:3694745909393892202 > key:"\000\000meta2
I0403 20:44:33.364465     278 multiraft.go:633] node 257: group 1 raft ready
I0403 20:44:33.364557     278 multiraft.go:638] HardState updated: {Term:6 Vote:257 Commit:46 XXX_unrecognized:[]}
I0403 20:44:33.365138     278 multiraft.go:641] New Entry[0]: 6/46 EntryNormal 13d19bcfbb43462922b0f89a51c88e5d: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:40 > cmd_id:<wall_time:1428093873355441705 random:2499771134871375453 > key:"\000\000meta1
I0403 20:44:33.365713     278 multiraft.go:644] Committed Entry[0]: 6/46 EntryNormal 13d19bcfbb43462922b0f89a51c88e5d: raft_id:1 cmd:<put:<header:<timestamp:<wall_time:0 logical:40 > cmd_id:<wall_time:1428093873355441705 random:2499771134871375453 > key:"\000\000meta1
--- FAIL: TestUpdateRangeAddressing (0.26s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:328 +0x28d
=== RUN TestUpdateRangeAddressingSplitMeta1
I0403 20:44:33.471603     278 multiraft.go:407] node 257 starting
--- FAIL: TestUpdateRangeAddressingSplitMeta1 (0.10s)
    <autogenerated>:31: Test appears to have leaked an rpc client:
        net.(*pollDesc).Wait(0xc20867e0d0, 0x72, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:84 +0x63
        net.(*pollDesc).WaitRead(0xc20867e0d0, 0x0, 0x0)
            /usr/src/go/src/net/fd_poll_runtime.go:89 +0x51
        net.(*netFD).Read(0xc20867e070, 0xc208550000, 0x1000, 0x1000, 0x0, 0x7f4abc78be20, 0xc208652c78)
            /usr/src/go/src/net/fd_unix.go:242 +0x4b3
        net.(*conn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0xc208567640, 0x0, 0x0)
            /usr/src/go/src/net/net.go:121 +0x125
        net.(*TCPConn).Read(0xc208436008, 0xc208550000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
--
            /go/src/github.com/cockroachdb/cockroach/util/leaktest/leaktest.go:34 +0x36
        github.com/cockroachdb/cockroach/storage.TestMain(0xc208030b40)
            /go/src/github.com/cockroachdb/cockroach/storage/main_test.go:29 +0x36
        main.main()
            github.com/cockroachdb/cockroach/storage/_test/_testmain.go:328 +0x28d
FAIL
FAIL    github.com/cockroachdb/cockroach/storage    9.231s
=== RUN TestBatchBasics
--- PASS: TestBatchBasics (0.00s)
=== RUN TestBatchGet
--- PASS: TestBatchGet (0.00s)
=== RUN TestBatchMerge
--- PASS: TestBatchMerge (0.00s)
=== RUN TestBatchProto
--- PASS: TestBatchProto (0.00s)
=== RUN TestBatchScan
--- PASS: TestBatchScan (0.00s)

Please assign, take a look and update the issue accordingly.
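The repeated "Test appears to have leaked an rpc client" failures above come from a leak check in `util/leaktest` that compares goroutine state before and after each test. A minimal sketch of that idea, comparing goroutine counts with a grace period (simplified: the real leaktest diffs full stack traces, and these names are illustrative, not the actual cockroach API):

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// checkLeaks runs fn and then verifies the goroutine count returns to
// its starting value, retrying briefly so background work can wind
// down. A goroutine that blocks forever -- like an rpc client whose
// Close was never called -- makes the check fail.
func checkLeaks(fn func()) error {
	before := runtime.NumGoroutine()
	fn()
	for i := 0; i < 50; i++ {
		if runtime.NumGoroutine() <= before {
			return nil
		}
		time.Sleep(10 * time.Millisecond)
	}
	return fmt.Errorf("leaked %d goroutine(s)", runtime.NumGoroutine()-before)
}

func main() {
	// A well-behaved function spawns nothing that outlives it.
	fmt.Println(checkLeaks(func() {}) == nil)

	// A leaky function: the goroutine blocks forever, so the
	// post-test snapshot never matches the pre-test one.
	err := checkLeaks(func() {
		go func() { select {} }()
	})
	fmt.Println(err != nil)
}
```

This is why every test in the package fails once a single connection leaks: the blocked reader goroutine (the `net.(*TCPConn).Read` frame in the traces) is still present in every later snapshot.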

Test failure in CI build 512

The following test appears to have failed:

#512:

I0409 09:20:27.018815     193 multiraft.go:675] Committed Entry[0]: 6/18 EntryNormal 00000000000000042253d25cfcad47ee: raft_id:1 cmd:<internal_heartbeat_txn:<header:<timestamp:<wall_time:3 logical:2 > cmd_id:<wall_time:0 random:0 > key:"a" user:"root" replica:<node_id
I0409 09:20:27.020903     193 multiraft.go:664] node 257: group 1 raft ready
I0409 09:20:27.021035     193 multiraft.go:669] HardState updated: {Term:6 Vote:257 Commit:19 XXX_unrecognized:[]}
I0409 09:20:27.022045     193 multiraft.go:672] New Entry[0]: 6/19 EntryNormal 13d34df64a99dac30f7c1969c21d914b: raft_id:1 cmd:<internal_heartbeat_txn:<header:<timestamp:<wall_time:0 logical:4 > cmd_id:<wall_time:1428571227015469763 random:1115794749700018507 > 
I0409 09:20:27.023035     193 multiraft.go:675] Committed Entry[0]: 6/19 EntryNormal 13d34df64a99dac30f7c1969c21d914b: raft_id:1 cmd:<internal_heartbeat_txn:<header:<timestamp:<wall_time:0 logical:4 > cmd_id:<wall_time:1428571227015469763 random:1115794749700018507 > 
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x48 pc=0x807afc]

goroutine 952 [running]:
github.com/cockroachdb/cockroach/kv.(*TxnCoordSender).heartbeat(0xc2081b3280, 0xc2081d4000, 0xc20818ed80)
    /go/src/github.com/cockroachdb/cockroach/kv/txn_coord_sender.go:539 +0x9ec
created by github.com/cockroachdb/cockroach/kv.(*TxnCoordSender).sendOne
    /go/src/github.com/cockroachdb/cockroach/kv/txn_coord_sender.go:309 +0xc02

goroutine 1 [chan receive]:
testing.RunTests(0x12467f8, 0x17069a0, 0x30, 0x30, 0x7f264668d701)
--
goroutine 951 [select]:
github.com/cockroachdb/cockroach/multiraft.(*writeTask).start(0xc2081f8cc0)
    /go/src/github.com/cockroachdb/cockroach/multiraft/storage.go:145 +0xb86
created by github.com/cockroachdb/cockroach/multiraft.(*state).start
    /go/src/github.com/cockroachdb/cockroach/multiraft/multiraft.go:410 +0x1d2
FAIL    github.com/cockroachdb/cockroach/kv 11.327s
=== RUN TestHeartbeatSingleGroup
I0409 09:20:16.278157     199 multiraft.go:409] node 1 starting
I0409 09:20:16.279279     199 multiraft.go:409] node 2 starting
I0409 09:20:16.280849     199 raft.go:390] raft: 1 became follower at term 5
I0409 09:20:16.281149     199 raft.go:207] raft: newRaft 1 [peers: [1,2], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0409 09:20:16.283141     199 raft.go:390] raft: 2 became follower at term 5
I0409 09:20:16.283412     199 raft.go:207] raft: newRaft 2 [peers: [1,2], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0409 09:20:16.283770     199 raft.go:464] raft: 1 is starting a new election at term 5
I0409 09:20:16.283970     199 raft.go:403] raft: 1 became candidate at term 6
I0409 09:20:16.284153     199 raft.go:447] raft: 1 received vote from 1 at term 6
--
I0409 09:20:49.648556     291 multiraft.go:664] node 257: group 2 raft ready
I0409 09:20:49.648614     291 multiraft.go:669] HardState updated: {Term:6 Vote:257 Commit:14 XXX_unrecognized:[]}
I0409 09:20:49.649578     291 multiraft.go:672] New Entry[0]: 6/14 EntryNormal 00000000000000000a7df9b9f1205ae5: raft_id:2 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:31 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000kd\000\001rtn-" u
I0409 09:20:49.650577     291 multiraft.go:675] Committed Entry[0]: 6/14 EntryNormal 00000000000000000a7df9b9f1205ae5: raft_id:2 cmd:<internal_resolve_intent:<header:<timestamp:<wall_time:0 logical:31 > cmd_id:<wall_time:0 random:0 > key:"\000\000\000kd\000\001rtn-" u
==================
WARNING: DATA RACE
Write by goroutine 48:
  runtime.mapassign1()
      /usr/src/go/src/runtime/hashmap.go:383 +0x0
  github.com/cockroachdb/cockroach/storage.(*Store).addRangeInternal()
      /go/src/github.com/cockroachdb/cockroach/storage/store.go:860 +0x23e
  github.com/cockroachdb/cockroach/storage.(*Store).GroupStorage()
      /go/src/github.com/cockroachdb/cockroach/storage/store.go:1200 +0x24a
  github.com/cockroachdb/cockroach/multiraft.(*state).createGroup()
      /go/src/github.com/cockroachdb/cockroach/multiraft/multiraft.go:589 +0x348
  github.com/cockroachdb/cockroach/multiraft.(*state).start()
      /go/src/github.com/cockroachdb/cockroach/multiraft/multiraft.go:472 +0x17ea

Previous read by goroutine 131:
  runtime.mapiternext()
      /usr/src/go/src/runtime/hashmap.go:601 +0x0
  github.com/cockroachdb/cockroach/storage.(*Store).Stop()
      /go/src/github.com/cockroachdb/cockroach/storage/store.go:370 +0x18a
  github.com/cockroachdb/cockroach/storage_test.TestStoreRangeMergeNonConsecutive()
      /go/src/github.com/cockroachdb/cockroach/storage/client_merge_test.go:237 +0x16cc
  testing.tRunner()
      /usr/src/go/src/testing/testing.go:447 +0x133

Goroutine 48 (running) created at:
--
1 instances of:
github.com/cockroachdb/cockroach/storage.(*Range).startGossip(0xc208644080)
    /go/src/github.com/cockroachdb/cockroach/storage/range.go:664 +0x186
created by github.com/cockroachdb/cockroach/storage.(*Range).start
    /go/src/github.com/cockroachdb/cockroach/storage/range.go:245 +0xb6
FAIL    github.com/cockroachdb/cockroach/storage    10.708s
=== RUN TestBatchBasics
--- PASS: TestBatchBasics (0.00s)
=== RUN TestBatchGet
--- PASS: TestBatchGet (0.00s)
=== RUN TestBatchMerge
--- PASS: TestBatchMerge (0.00s)
=== RUN TestBatchProto
--- PASS: TestBatchProto (0.00s)
=== RUN TestBatchScan
--- PASS: TestBatchScan (0.00s)
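The panic above is a nil dereference at `txn_coord_sender.go:539`, inside the background `heartbeat` goroutine. A minimal hedged sketch of the defensive pattern for this failure class (hypothetical names, not the actual `TxnCoordSender` code): a heartbeat loop can observe transaction state that another goroutine has already cleared, so it must tolerate nil.

```go
package main

import "fmt"

// txnMeta is a toy stand-in for per-transaction coordinator state.
type txnMeta struct{ key string }

// heartbeat dereferences its argument, so a nil txnMeta (e.g. the
// entry was removed after EndTransaction raced with this goroutine)
// must be checked first; skipping the check reproduces the
// "invalid memory address or nil pointer dereference" in the log.
func heartbeat(meta *txnMeta) bool {
	if meta == nil {
		return false // txn already cleaned up; nothing to heartbeat
	}
	fmt.Println("heartbeat for", meta.key)
	return true
}

func main() {
	fmt.Println(heartbeat(&txnMeta{key: "a"})) // true
	fmt.Println(heartbeat(nil))                // false, no panic
}
```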

Please assign, take a look and update the issue accordingly.

test failure #405

The following test appears to have failed:

#405:

test

Please assign, take a look and update the issue accordingly.

test failure #408

The following test appears to have failed:

#408:

--- PASS: TestRawBroadcast (0.00s)
=== RUN TestMetricSystemStop
--- PASS: TestMetricSystemStop (0.00s)
=== RUN: ExampleMetricSystem
==================
WARNING: DATA RACE
Read by goroutine 44:
  github.com/cockroachdb/cockroach/util/metrics.func·006()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:591 +0x571
  github.com/cockroachdb/cockroach/util/metrics.func·005()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:530 +0x8f

Previous write by main goroutine:
  sync/atomic.AddInt64()
      /usr/src/go/src/runtime/race_amd64.s:261 +0xc
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).Histogram()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:295 +0xd93
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).StopTimer()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:226 +0x75
  github.com/cockroachdb/cockroach/util/metrics.ExampleMetricSystem()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics_test.go:37 +0x1cd
  testing.runExample()
      /usr/src/go/src/testing/example.go:98 +0x5e6
--
Goroutine 44 (running) created at:
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).reaper()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:532 +0x1ed
==================
==================
WARNING: DATA RACE
Read by goroutine 44:
  github.com/cockroachdb/cockroach/util/metrics.func·006()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:593 +0x6d4
  github.com/cockroachdb/cockroach/util/metrics.func·005()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:530 +0x8f

Previous write by main goroutine:
  sync/atomic.AddInt64()
      /usr/src/go/src/runtime/race_amd64.s:261 +0xc
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).Histogram()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:294 +0xce9
  github.com/cockroachdb/cockroach/util/metrics.(*MetricSystem).StopTimer()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics.go:226 +0x75
  github.com/cockroachdb/cockroach/util/metrics.ExampleMetricSystem()
      /go/src/github.com/cockroachdb/cockroach/util/metrics/metrics_test.go:37 +0x1cd
  testing.runExample()
      /usr/src/go/src/testing/example.go:98 +0x5e6

Please assign, take a look and update the issue accordingly.
