apache / couchdb

Seamless multi-master syncing database with an intuitive HTTP/JSON API, designed for reliability

Home Page: https://couchdb.apache.org/

License: Apache License 2.0

Erlang 75.20% Shell 0.52% Python 3.61% JavaScript 4.08% Ruby 0.30% Makefile 0.27% PowerShell 0.18% C 1.23% C++ 3.67% Elixir 8.37% Batchfile 0.10% Groovy 0.30% Dockerfile 0.01% Java 2.18%
content network-server http cloud erlang javascript couchdb big-data network-client database

couchdb's Introduction

Apache CouchDB README


Installation

For a high-level guide to Unix-like systems, including Mac OS X and Ubuntu, see:

INSTALL.Unix

For a high-level guide to Microsoft Windows, see:

INSTALL.Windows

Follow the proper instructions to get CouchDB installed on your system.

If you're having problems, skip to the next section.

Documentation

We have documentation:

https://docs.couchdb.org/

It includes a changelog:

https://docs.couchdb.org/en/latest/whatsnew/

For troubleshooting or cryptic error messages, see:

https://docs.couchdb.org/en/latest/install/troubleshooting.html

For general help, see:

https://couchdb.apache.org/#mailing-list

We also have an IRC channel:

https://web.libera.chat/#couchdb

The mailing lists provide a wealth of support and knowledge for you to tap into. Feel free to drop by with your questions or discussion. See the official CouchDB website for more information about our community resources.

Verifying your Installation

Run a basic test suite for CouchDB by browsing here:

http://127.0.0.1:5984/_utils/#verifyinstall
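
Alternatively, a quick check from the command line (a minimal sketch, assuming a running 2.x node on the default local port):

# Basic liveness check against a default local install (adjust host/port as needed)
curl http://127.0.0.1:5984/
# expect a JSON welcome document containing the server version
curl http://127.0.0.1:5984/_up
# expect {"status":"ok"} once the node is ready to serve requests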

Getting started with developing

Quickstart:

[Open in VS Code Remote - Containers badge]

If you already have VS Code and Docker installed, you can click the badge above or here to get started. Clicking these links will cause VS Code to automatically install the Remote - Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.

This devcontainer will automatically run ./configure && make the first time it is created. While this may take some extra time to spin up, this tradeoff means you will be able to run things like ./dev/run, ./dev/run --admin=admin:admin, ./dev/run --with-admin-party-please, and make check straight away. Subsequent startups should be quick.

Manual Dev Setup:

For more detail, read the README-DEV.rst file in this directory.

Basically you just have to install the needed dependencies which are documented in the install docs and then run ./configure && make.

You don't need to run make install after compiling; just use ./dev/run to spin up three nodes. You can add haproxy as a caching layer in front of this cluster by running ./dev/run --with-haproxy --haproxy=/path/to/haproxy. You will now have a local cluster listening on port 5984.
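
As a concrete sketch (the admin credentials and the haproxy path below are illustrative, not prescriptive):

# build once, then start a three-node dev cluster with an admin user
./configure && make
./dev/run --admin=admin:admin
# or put haproxy in front of the cluster, as described above
./dev/run --with-haproxy --haproxy=/usr/sbin/haproxy
# with haproxy in front, the cluster answers on port 5984
curl http://admin:admin@127.0.0.1:5984/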

For Fauxton developers, fixing the admin party does not work via the button in Fauxton. To fix the admin party, you have to run ./dev/run with the admin flag, e.g. ./dev/run --admin=username:password. If you want to have an admin party, just omit the flag.

Contributing to CouchDB

You can learn more about our contributing process here:

https://github.com/apache/couchdb/blob/main/CONTRIBUTING.md

Cryptographic Software Notice

This distribution includes cryptographic software. The country in which you currently reside may have restrictions on the import, possession, use, and/or re-export to another country, of encryption software. BEFORE using any encryption software, please check your country's laws, regulations and policies concerning the import, possession, or use, and re-export of encryption software, to see if this is permitted. See <https://www.wassenaar.org/> for more information.

The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS), has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1, which includes information security software using or performing cryptographic functions with asymmetric algorithms. The form and manner of this Apache Software Foundation distribution makes it eligible for export under the License Exception ENC Technology Software Unrestricted (TSU) exception (see the BIS Export Administration Regulations, Section 740.13) for both object code and source code.

The following provides more details on the included cryptographic software:

CouchDB includes an HTTP client (ibrowse) with SSL functionality.

couchdb's People

Contributors

b20n, big-r81, chewbranca, cmlenz, davisp, dch, deathbearbrown, djc, eiri, fdmanana, garrensmith, iilyak, janl, jasondavies, jaydoane, jchris, jiangphcn, jjrodrig, klaustrainer, kocolosk, kxepal, mikewallace1979, nickva, pgj, rnewson, robertkowalski, sagelywizard, tilgovi, willholley, wohali


couchdb's Issues

CloudDB just stops working D:

Currently running CouchDB for my GTA FiveM server; after installation it works for a while, but then it just stops. I've tried re-running it from the batch file in /bin, but it just errors and closes.

Log file: https://pastebin.com/3CnWY8Wf

Your Environment

  • Version used: Version 2.0.0.1
  • Browser Name and version: Chrome (Latest)
  • Operating System and version (desktop or mobile): Windows Server 2012 (RDP)

[Jenkins] 500 error on getting view output

Expected Behavior

In reduce_builtin.js we call a view that has a built in reduce. It should return fine, and usually does.

Current Behavior

Once, we saw a failure where a 500 came back instead. The couch.log shows a gen_server exit: https://paste.apache.org/QfOz

Possible Solution

 <+davisp> Its doing "exit(ExitReasonOfProcess) where ExitReasonOfProcess was {normal, stuff...}
 <+davisp> Looks like something isn't properly monitoring the index and it closes as a race condition
 <+davisp> I *think* this line should go after line 49 instead:
           https://github.com/apache/couchdb/blob/master/src/couch_mrview/src/couch_mrview_util.erl#L54

Your Environment

Jenkins automated run, jenkins-pipeline build 5, CentOS 7, Erlang 18.3. Logfiles uploaded to central logger as jenkins-couchdb-5-2017-05-29T01:55:35.826219.

Create a Helm chart to deploy CouchDB using Kubernetes

Expected Behavior

helm install stable/couchdb should stand up a working CouchDB deployment in my Kubernetes environment.

Current Behavior

Installing CouchDB in Kubernetes is currently a very manual task. Many of our users are starting to adopt Kubernetes in their environments and are working through these details themselves.

Possible Solution

See https://github.com/kubernetes/charts/blob/master/CONTRIBUTING.md for the process to add a new chart to the Helm repository.
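
A rough sketch of the intended workflow (the repository URL, chart name, and release name here are assumptions for illustration; the exact syntax depends on the Helm version):

# add a chart repository and install a release (Helm 3 syntax shown)
helm repo add couchdb https://apache.github.io/couchdb-helm
helm repo update
helm install my-couchdb couchdb/couchdb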

[Jenkins] view monitor failure in couchdb_views_tests/couchdb_1283

Expected & Current Behaviour

Normally, couchdb_views_tests's couchdb_1283 test passes. Today, it failed in Jenkins, master branch, Ubuntu 14.04, default Erlang, logs uploaded as _id jenkins-couchdb-3-2017-05-30T21:10:32.202064.

  View group shutdown
    couchdb_views_tests:315: couchdb_1283...*failed*
in function gen_server:call/2 (gen_server.erl, line 180)
in call from couchdb_views_tests:'-couchdb_1283/0-fun-22-'/0 (test/couchdb_views_tests.erl, line 368)
**exit:{normal,{gen_server,call,[<0.17099.1>,compact]}}

What I see in the couch.log is pid 0.17099.1 closing immediately after creation:

[info] 2017-05-30T21:10:27.825327Z nonode@nohost <0.17031.1> -------- Apache CouchDB has started on http://127.0.0.1:39367/
[notice] 2017-05-30T21:10:27.825734Z nonode@nohost <0.16878.1> -------- config: [couchdb] max_dbs_open set to 3 for reason nil
[info] 2017-05-30T21:10:27.825761Z nonode@nohost <0.7.0> -------- Application couch started on node nonode@nohost
[notice] 2017-05-30T21:10:27.826162Z nonode@nohost <0.16878.1> -------- config: [couchdb] delayed_commits set to false for reason nil
[info] 2017-05-30T21:10:27.931491Z nonode@nohost <0.17099.1> -------- Opening index for db: eunit-test-db-1496178627826424 idx: _design/foo sig: "0963a19eb3ef007218f1e11f0aefa2d9"
[info] 2017-05-30T21:10:27.934223Z nonode@nohost <0.17102.1> -------- Starting index update for db: eunit-test-db-1496178627826424 idx: _design/foo
[info] 2017-05-30T21:10:27.987657Z nonode@nohost <0.17102.1> -------- Index update finished for db: eunit-test-db-1496178627826424 idx: _design/foo
[notice] 2017-05-30T21:10:27.987993Z nonode@nohost <0.17050.1> -------- 127.0.0.1 - - GET /eunit-test-db-1496178627826424/_design/foo/_view/foo 200
[info] 2017-05-30T21:10:28.020920Z nonode@nohost <0.17099.1> -------- Index shutdown by monitor notice for db: eunit-test-db-1496178627826424 idx: _design/foo
[info] 2017-05-30T21:10:28.026515Z nonode@nohost <0.17099.1> -------- Closing index for db: eunit-test-db-1496178627826424 idx: _design/foo sig: "0963a19eb3ef007218f1e11f0aefa2d9" because normal

...and then the rest of the test attempts to proceed.

Possible Solution

I wonder if this is related to #548 ?

jsapi.h: No such file or directory

The needed include files are under /home/user/include/js. How do I ensure that path is being used during build for version 2.0.0?

compiling /home/user/work/apache-couchdb-2.0.0/src/couch/priv/couch_js/http.c
/home/user/work/apache-couchdb-2.0.0/src/couch/priv/couch_js/http.c:18:19: fatal error: jsapi.h: No such file or directory
 #include <jsapi.h>
                   ^
compilation terminated.
ERROR: compile failed while processing /home/user/work/apache-couchdb-2.0.0/src/couch: rebar_abort
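
One way to investigate, sketched below; the --with-js-* flag names are an assumption and must be checked against the output of ./configure --help for your version, since the 2.x build system differs from 1.x:

# check whether the build system exposes SpiderMonkey path options
./configure --help | grep -i js
# if so, point the build at the custom locations (flag names assumed, verify first)
./configure --with-js-include=/home/user/include/js --with-js-lib=/home/user/lib
make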

MRView's seq btree needs custom sorter function.

Expected Behavior

During work on #560 it was found that there is a bug in mrview's seq btree leading to inconsistent behaviour when querying with the since parameter.

It is expected that when querying view_changes_since we should get a changes feed starting from the update sequence after the one provided as since.

Current Behavior

The response of couch_mrview:view_changes_since/7 either includes or excludes the entry for the since update sequence, depending on the view's key data type.

Possible Solution

The issue here is that seq_btree uses tuples {UpdSeq, ViewKey} as the keys, and ViewKey can be any data type mapped from JSON: binary, integer or boolean. We are using Erlang's native collation to find the first key to start from, so we can't construct a consistent lower-limit key.

We need to specify a custom less function on seq_btree creation and normalize keys in it.

Steps to Reproduce (for bugs)

Since the HTTP endpoint /{db}/_design/{ddoc}/_view_changes is not exposed at the moment, it's a bit hard to reproduce without changing the existing changes_since_test_ test suite to create a view with integer keys instead of binaries.

Context

It's a low-priority issue; the function in question is not exposed through any interface.

Custom replication DBs never get /_scheduler/docs entries

Steps to recreate

  1. Create database a and put a few documents in it. Ensure a database b does not exist.
  2. Create a _replicator document of the form:
{ "_id": "foo_error_rep", "source": "http://127.0.0.1:15984/a", "target": "http://127.0.0.1:15984/b" }
  3. Wait a bit and check _replicator/foo_error_rep. No state has been added; I would have expected one of crashing, running or pending. (A curl sketch of these steps follows below.)
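
A curl sketch of these steps, assuming a dev cluster on port 15984 and admin credentials admin:password (both placeholders):

# 1. create database a with a document; make sure b does not exist
curl -X PUT http://admin:password@127.0.0.1:15984/a
curl -X PUT http://admin:password@127.0.0.1:15984/a/doc1 -d '{"value": 1}'
# 2. create a custom replicator database (name ends in /_replicator) and the replication doc
curl -X PUT 'http://admin:password@127.0.0.1:15984/custom%2F_replicator'
curl -X PUT 'http://admin:password@127.0.0.1:15984/custom%2F_replicator/foo_error_rep' \
     -H "Content-Type: application/json" \
     -d '{"source": "http://127.0.0.1:15984/a", "target": "http://127.0.0.1:15984/b"}'
# 3. wait a bit, then check the document state and the scheduler
curl 'http://admin:password@127.0.0.1:15984/custom%2F_replicator/foo_error_rep'
curl http://admin:password@127.0.0.1:15984/_scheduler/docs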

Logfile excerpt

Note the throw of db_not_found.

[notice] 2017-05-03T21:49:42.495501Z [email protected] <0.309.0> 86f37313e7 127.0.0.1:15984 127.0.0.1 undefined GET /test_suite_db_ikklhwd%2F_replicator/foo_error_rep 200 ok 50
[notice] 2017-05-03T21:49:42.497481Z [email protected] <0.354.0> -------- starting new replication `b1137c827da5adb4376166374dcf79eb` at <0.1144.0> (`http://127.0.0.1:15984/test_suite_db_mxbhpygg/` -> `http://127.0.0.1:15984/nonexistent_test_db/`)
[notice] 2017-05-03T21:49:42.497981Z [email protected] <0.355.0> -------- couch_replicator_scheduler: Job {"b1137c827da5adb4376166374dcf79eb",[]} started as <0.1144.0>
[notice] 2017-05-03T21:49:42.547996Z [email protected] <0.309.0> 5ae0234b3c 127.0.0.1:15984 127.0.0.1 undefined GET /test_suite_db_ikklhwd%2F_replicator/foo_error_rep 200 ok 2
[notice] 2017-05-03T21:49:42.656934Z [email protected] <0.961.0> 7dd6682461 127.0.0.1:15984 127.0.0.1 undefined GET /test_suite_db_mxbhpygg/ 200 ok 147
[notice] 2017-05-03T21:49:42.657324Z [email protected] <0.309.0> 94c75a3e9a 127.0.0.1:15984 127.0.0.1 undefined GET /test_suite_db_ikklhwd%2F_replicator/foo_error_rep 200 ok 58
[notice] 2017-05-03T21:49:42.659778Z [email protected] <0.961.0> 286570172d 127.0.0.1:15984 127.0.0.1 undefined GET /nonexistent_test_db/ 404 ok 1
[error] 2017-05-03T21:49:42.660836Z [email protected] <0.1144.0> -------- throw:{db_not_found,<<"could not open http://127.0.0.1:15984/nonexistent_test_db/">>}: Replication failed to start for args {rep,{"b1137c827da5adb4376166374dcf79eb",[]},{httpdb,"http://127.0.0.1:15984/test_suite_db_mxbhpygg/",nil,[{"Accept","application/json"},{"User-Agent","CouchDB-Replicator/2.1.0-5cad2a4"}],30000,[{socket_options,[{keepalive,true},{nodelay,false}]}],10,250,nil,20,nil,undefined},{httpdb,"http://127.0.0.1:15984/nonexistent_test_db/",nil,[{"Accept","application/json"},{"User-Agent","CouchDB-Replicator/2.1.0-5cad2a4"}],30000,[{socket_options,[{keepalive,true},{nodelay,false}]}],10,250,nil,20,nil,undefined},[{checkpoint_interval,30000},{connection_timeout,30000},{http_connections,20},{retries,10},{socket_options,[{keepalive,true},{nodelay,false}]},{use_checkpoints,true},{worker_batch_size,500},{worker_processes,4}],{user_ctx,null,[],undefined},db,nil,<<"foo_error_rep">>,<<"shards/00000000-1fffffff/test_suite_db_ikklhwd/_replicator.1493848173">>,{1493,848182,496186}}: [{couch_replicator_api_wrap,db_open,3,[{file,"src/couch_replicator_api_wrap.erl"},{line,109}]},{couch_replicator_scheduler_job,init_state,1,[{file,"src/couch_replicator_scheduler_job.erl"},{line,568}]},{couch_replicator_scheduler_job,do_init,1,[{file,"src/couch_replicator_scheduler_job.erl"},{line,127}]},{couch_replicator_scheduler_job,handle_info,2,[{file,"src/couch_replicator_scheduler_job.erl"},{line,357}]},{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,599}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,237}]}]
[notice] 2017-05-03T21:49:42.709448Z [email protected] <0.309.0> 5a9498eacf 127.0.0.1:15984 127.0.0.1 undefined GET /test_suite_db_ikklhwd%2F_replicator/foo_error_rep 200 ok 2

/cc @nickva I think this is a new bug related to the scheduling replicator work.

{"error":"unauthorized","reason":"Name or password is incorrect."} even if access is granted to user

I was following the authorization topic mentioned in the link below:
http://docs.couchdb.org/en/2.0.0/intro/security.html
I created a user using the node-local endpoint on port 5986, then executed the APIs as per the documentation, e.g.
curl -X PUT http://localhost:5984/eventdb/_security -u admin:***** -H "Content-Type: application/json" -d '{"admins": { "names": [], "roles": [] }, "members": { "names": ["fan"], "roles": [] } }'
(please note the password is hidden for obvious reasons)

which returns the result below for me

{"ok":true}

Now when I try to access the database using the API below

curl -u fan:apple http://localhost:5984/eventdb

it returns the response below for me
{"error":"unauthorized","reason":"Name or password is incorrect."}

I am pretty much sure about name and password here.

I am a novice to CouchDB; however, I believe that creating the user through port 5986 might have caused this issue.

Please note I am using couchdb 2.0.0
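
For comparison, a minimal sketch that goes through the clustered port (5984) only, avoiding the node-local 5986 interface entirely; the admin credentials are placeholders, and the user name and password match the report above:

# create the user document via the clustered interface
curl -X PUT http://localhost:5984/_users/org.couchdb.user:fan \
     -u admin:***** -H "Content-Type: application/json" \
     -d '{"name": "fan", "password": "apple", "roles": [], "type": "user"}'
# grant the user membership on the database (same call as above)
curl -X PUT http://localhost:5984/eventdb/_security -u admin:***** \
     -H "Content-Type: application/json" \
     -d '{"admins": {"names": [], "roles": []}, "members": {"names": ["fan"], "roles": []}}'
# then authenticate as that user
curl -u fan:apple http://localhost:5984/eventdb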

304 after GET-PUT-GET cycle on a _show

Expected & Current Behaviour

  1. Why does a GET-PUT-GET cycle return a 304 on the second GET (with an If-None-Match ETag header) for a _show? (A curl sketch of the cycle follows below.)

  2. Misleading error message should be cleaned up
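
A curl sketch of the cycle (database, design doc, and document id follow the test above; ETAG-FROM-FIRST-GET and ddoc.json are placeholders):

# first GET: note the ETag response header
curl -i http://127.0.0.1:15984/test_suite_db/_design/template/_show/just-name/78db53e02b06ddd067440cd0dd42f84c
# change the design document (the body in ddoc.json must carry the current _rev)
curl -X PUT http://127.0.0.1:15984/test_suite_db/_design/template -d @ddoc.json
# second GET with the old ETag: a 200 is expected because the ddoc changed, but a 304 is observed
curl -i -H 'If-None-Match: "ETAG-FROM-FIRST-GET"' \
     http://127.0.0.1:15984/test_suite_db/_design/template/_show/just-name/78db53e02b06ddd067440cd0dd42f84c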

makefile output:

test/javascript/tests/show_documents.js                        
    Error: expected '304', got '304'
Trace back (most recent call first):
    
  52: test/javascript/test_setup.js
      T(false,"expected '304', got '304'","changed ddoc")
 326: test/javascript/couch_test_runner.js
      TNotEquals(304,304,"changed ddoc")
 296: test/javascript/tests/show_documents.js
      ()
  37: test/javascript/cli_runner.js
      runTest()
  48: test/javascript/cli_runner.js
      
fail

couch.log:

[notice] 2017-05-31T23:29:00.503101Z [email protected] <0.16311.3> cea9c20ff0 127.0.0.1:15984 127.0.0.1 undefined GET /test_suite_db_xofwxqrc/_design/template/_show/just-name/78db53e02b06ddd067440cd0dd42f84c 304 ok 1
[notice] 2017-05-31T23:29:00.542915Z [email protected] <0.16311.3> 5cf1142bed 127.0.0.1:15984 127.0.0.1 undefined PUT /test_suite_db_xofwxqrc/_design%2Ftemplate 201 ok 40
[notice] 2017-05-31T23:29:00.545495Z [email protected] <0.16311.3> 58075015b3 127.0.0.1:15984 127.0.0.1 undefined GET /test_suite_db_xofwxqrc/_design/template/_show/just-name/78db53e02b06ddd067440cd0dd42f84c 304 ok 2

Your Environment

Jenkins, Erlang 18.3

JS stats test has inconsistent results

Current & Expected Behaviour

Sometimes, the JS test test/javascript/tests/stats.js fails with off-by-one errors such as the following:

test/javascript/tests/stats.js                                 
    Error: expected '8', got '9'
Trace back (most recent call first):
    
  52: test/javascript/test_setup.js
      T(false,"expected '8', got '9'","Request counts are incremented proper
 321: test/javascript/couch_test_runner.js
      TEquals(8,9,"Request counts are incremented properly.")
 132: test/javascript/tests/stats.js
      (6,9)
  46: test/javascript/tests/stats.js
      runTest([object Array],[object Object])
 131: test/javascript/tests/stats.js
      ()
  37: test/javascript/cli_runner.js
      runTest()
  48: test/javascript/cli_runner.js
      
fail

Possible Solution

This test was recently re-enabled under the assumption that it was good to have something covering this area of our API, but significant sections had to be commented out because results didn't match up with requests being made.

It would be good to determine whether these tests are deterministic enough - or whether we actually have a bug in our stats handling - since numbers seem off in quite a few places.

Your Environment

Most recent failure is Jenkins, master branch, logs _id jenkins-couchdb-1-2017-05-29T01:59:56.803570, Ubuntu 12.04, Erlang 18.3.

DELETE /db/doc/attachment returns 200 OK for non existing attachments

Hi

While playing with doc attachments, I found out that CouchDB returns 200 OK and updates the document rev when a user tries to delete a non-existing attachment.

Here's an example output:
root@5d984b5559b3:/# curl -X PUT 'http://127.0.0.1:5984/testdb'
{"ok":true}
root@5d984b5559b3:/# curl -X PUT 'http://127.0.0.1:5984/testdb/doc' -d '{}'
{"ok":true,"id":"doc","rev":"1-967a00dff5e02add41819138abb3284d"}
root@5d984b5559b3:/#
root@5d984b5559b3:/# curl -X DELETE 'http://127.0.0.1:5984/testdb/doc/attachment?rev=1-967a00dff5e02add41819138abb3284d'
{"ok":true,"id":"doc","rev":"2-7051cbe5c8faecd085a3fa619e6e6337"}

Is this OK? I'm confused because the CouchDB API documentation states that I should receive a 404.

Thank You
Tomasz

Using Update Function throws occasional 412 errors when documents has inline attachments

I'm using JavaScript update functions to invoke server-side logic that updates some individual fields in documents without first getting the latest revision, as explained in: http://docs.couchdb.org/en/2.0.0/couchapp/ddocs.html?highlight=update%20handler.

My documents have inline attachments. The individual fields I'm updating are not the document _attachments. On occasion I'm getting a 412 'Precondition Failed' error with the message 'Invalid attachment stub in <doc_id> for <attachment_name>'.

The error seems to be a false positive, because when I inspect the document I see that it has the updated content and the _attachments part looks intact (see example).

"_attachments": {
"content": {
"content_type": "text/plain",
"revpos": 1,
"digest": "md5-H05fzZIvzFkyezESkSEwwQ==",
"length": 32,
"stub": true
}
}

Expected Behavior

The update function should consistently return a success code.

Current Behavior

The update function returns a success code on about 50% of executions.
The other times it throws a 412 'Precondition Failed' error with the message 'Invalid attachment stub in <doc_id> for <attachment_name>', but the document is updated correctly when inspected in the DB.

Possible Solution

?

Steps to Reproduce (for bugs)

Use CouchDB v2.0.0 and the nano JS client to:

  1. Create and store a document with an inline text attachment. Attachment size does not matter.
  2. Create an update function that updates some document fields (not the attachment).
  3. Call the update function to update the doc.
  4. On occasion, the nano client will throw error 412 'Precondition Failed' with message 'Invalid attachment stub in <doc_id> for <attachment_name>'. (A curl sketch of these steps follows below.)
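
A curl sketch of these steps against a local 2.0 instance (the database name, document id, and update function body are illustrative; the nano client issues equivalent HTTP requests):

# 1. a document with an inline (base64-encoded) text attachment
curl -X PUT http://127.0.0.1:5984/testdb/doc1 -H "Content-Type: application/json" \
     -d '{"field": 1, "_attachments": {"content": {"content_type": "text/plain", "data": "aGVsbG8="}}}'
# 2. a design document with an update function that touches only "field"
curl -X PUT http://127.0.0.1:5984/testdb/_design/app -H "Content-Type: application/json" \
     -d '{"updates": {"bump": "function(doc, req) { doc.field += 1; return [doc, \"ok\"]; }"}}'
# 3. invoke the update handler repeatedly; on occasion it returns 412 even though the doc was updated
curl -X PUT http://127.0.0.1:5984/testdb/_design/app/_update/bump/doc1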

Your Environment

  • Version used: couchdb v2.0.0
  • Browser Name and version: not relevant
  • Operating System and version (desktop or mobile): Linux

Extra 500 reply added to list function response when accessed through rewrite

I am getting strange responses from CouchDB 2.0.0 when accessing a list handler through a
rewrite. The result to my single request is two responses: the expected one, plus an extra
"unknown_error" 500 response. Over HTTP/1.1 this issue is masked because the extra response
is not properly chunked, but over HTTP/1.0 the two responses appear as a single body before
the connection is closed. So in local development the problem isn't seen in the browser, but
when hosted behind an HTTP/1.0 proxy like nginx in production all clients will see the extra
content appended to the intended HTML.

Regardless of the HTTP version of the request, the behavior is accompanied in the console
logs by:

[error] 2017-06-20T20:54:43.637726Z couchdb@localhost <0.23057.8> 4d6c7d7e8f req_err(2053811356) unknown_error : undef
    [<<"lacc:get/2">>,<<"chttpd:split_response/1 L343">>,<<"chttpd:handle_request_int/1 L234">>,<<"chttpd:process_request/1 L293">>,<<"chttpd:handle_request_int/1 L229">>,<<"mochiweb_http:headers/6 L122">>,<<"proc_lib:init_p_do_apply/3 L247">>]
[notice] 2017-06-20T20:54:43.637979Z couchdb@localhost <0.23057.8> 4d6c7d7e8f undefined 127.0.0.1 undefined GET /test/_design/glob/_rewrite/2015/04/f1040_spreadsheet 500 ok 22

Expected Behavior

CouchDB should never generate two responses from one request, regardless of whether a list function is accessed via a rewrite or whether or not the list calls getRow() a sufficient number of times.

Current Behavior

The list function can be as simple as:

function() {
  return "[LIST FN OUTPUT]";
}

And the rewrite used looks like

"rewrites": [{
  "from": "/:path1/:path2/:path3",
  "to": "_list/posts/by_path",
  "query": {
      "include_docs": "true",
      "key": [":path1",":path2",":path3"]
  }
}]

When I query this such that "key" ends up matching a row from a simple "by_path" view, I get
an extra garbage response when I access the rewrite:

$ telnet localhost 5984
GET /test/_design/glob/_rewrite/2015/04/f1040_spreadsheet

HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Date: Tue, 20 Jun 2017 20:34:49 GMT
Server: CouchDB/2.0.0 (Erlang OTP/19)

[LIST FN OUTPUT]HTTP/1.1 500 Internal Server Error
Cache-Control: must-revalidate
Connection: close
Content-Length: 60
Content-Type: application/json
Date: Tue, 20 Jun 2017 20:34:49 GMT
Server: CouchDB/2.0.0 (Erlang OTP/19)
X-Couch-Request-ID: 5e0e7f467b
X-Couch-Stack-Hash: 2053811356
X-CouchDB-Body-Time: 0

{"error":"unknown_error","reason":"undef","ref":2053811356}
Connection closed by foreign host.

Possible Solution

This does not happen with a direct request to the list, bypassing the rewrite:

GET /test/_design/glob/_list/posts/by_path?include_docs=true&key=["2015","04","f1040_spreadsheet"]

HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Date: Tue, 20 Jun 2017 20:38:08 GMT
Server: CouchDB/2.0.0 (Erlang OTP/19)

[LIST FN OUTPUT]Connection closed by foreign host.

Nor does it happen if I make a request through the rewrite targeting a "missing key", i.e.
one that was not emitted by the view:

GET /test/_design/glob/_rewrite/2020/13/not_exist

HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Date: Tue, 20 Jun 2017 20:40:29 GMT
Server: CouchDB/2.0.0 (Erlang OTP/19)

[LIST FN OUTPUT]Connection closed by foreign host.

If I change my list function to

function() {
  getRow();
  getRow();
  return "[LIST FN OUTPUT]";
}

then the problem goes away even when accessed via the rewrite. With just a single call to
getRow() the extra response persists.

Also note that the chunked responses of HTTP/1.1 mask this "double response" issue, so you'll
see it in telnet or if CouchDB is behind nginx, but not in a browser or similar:

GET /test/_design/glob/_rewrite/2015/04/f1040_spreadsheet HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json
Date: Tue, 20 Jun 2017 20:50:12 GMT
Server: CouchDB/2.0.0 (Erlang OTP/19)
Transfer-Encoding: chunked

d
[LIST FN OUTPUT]
0

HTTP/1.1 500 Internal Server Error
Cache-Control: must-revalidate
Content-Length: 60
Content-Type: application/json
Date: Tue, 20 Jun 2017 20:50:12 GMT
Server: CouchDB/2.0.0 (Erlang OTP/19)
X-Couch-Request-ID: 524b67d8db
X-Couch-Stack-Hash: 2053811356
X-CouchDB-Body-Time: 0

{"error":"unknown_error","reason":"undef","ref":2053811356}

Steps to Reproduce (for bugs)

The original full code I have is at https://github.com/natevw/glob, but basically:

  1. Create a view that returns a result for a particular key
  2. Create a list that does not call getRow() (or calls it only once)
  3. Create a rewrite to the list function
  4. Request the list via the rewrite, using the key which has a row. (A curl sketch tying these pieces together follows below.)
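
Putting the snippets above together, a curl sketch of a minimal setup (the map function and the sample document are illustrative; the list, rewrite, and request match the examples above):

# design document with view, list function, and rewrite
curl -X PUT http://127.0.0.1:5984/test/_design/glob -H "Content-Type: application/json" -d '{
  "views": {
    "by_path": {"map": "function(doc) { if (doc.path) { emit(doc.path, null); } }"}
  },
  "lists": {
    "posts": "function() { return \"[LIST FN OUTPUT]\"; }"
  },
  "rewrites": [{
    "from": "/:path1/:path2/:path3",
    "to": "_list/posts/by_path",
    "query": {"include_docs": "true", "key": [":path1", ":path2", ":path3"]}
  }]
}'
# a document whose path matches the rewritten key
curl -X PUT http://127.0.0.1:5984/test/doc1 -H "Content-Type: application/json" \
     -d '{"path": ["2015", "04", "f1040_spreadsheet"]}'
# request the list through the rewrite
curl -i 'http://127.0.0.1:5984/test/_design/glob/_rewrite/2015/04/f1040_spreadsheet'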

Context

This resulted in garbage appended to every single post when my blog was hosted behind nginx. I worked around it by adding an extra getRow() call to my list function: natevw/Glob@53293bd

Your Environment

compaction eunit test failure: database not idle after compaction complete

Normally, the couchdb_compaction_daemon_tests should_compact_by_default_rule test passes. Sometimes, though, it fails.

Example:

https://builds.apache.org/blue/organizations/jenkins/CouchDB/detail/master/11/pipeline/46

Node environment: ubuntu1404erlang183

makefile output:

module 'couch_query_servers'
  couch_query_servers: builtin_sum_rows_negative_test...ok
  couch_query_servers: sum_values_test...ok
  couch_query_servers: sum_values_negative_test...ok
  couch_query_servers: stat_values_test...ok
  [done in 0.012 s]
module 'couchdb_compaction_daemon_tests'
  Compaction daemon tests
    couchdb_compaction_daemon_tests:74: should_compact_by_default_rule...*failed*
in function couchdb_compaction_daemon_tests:'-should_compact_by_default_rule/1-fun-6-'/1 (test/couchdb_compaction_daemon_tests.erl, line 104)
in call from couchdb_compaction_daemon_tests:'-should_compact_by_default_rule/1-fun-7-'/1 (test/couchdb_compaction_daemon_tests.erl, line 104)
**error:{assert,[{module,couchdb_compaction_daemon_tests},
         {line,104},
         {expression,"is_idle ( DbName )"},
         {expected,true},
         {value,false}]}
  output:<<"">>

couch.log output:

[info] 2017-06-02T21:52:10.853758Z nonode@nohost <0.17500.0> -------- Starting index update for db: eunit-test-db-1496440307863680 idx: _design/foo
[info] 2017-06-02T21:52:10.857486Z nonode@nohost <0.17500.0> -------- Index update finished for db: eunit-test-db-1496440307863680 idx: _design/foo
[notice] 2017-06-02T21:52:10.858153Z nonode@nohost <0.17441.0> -------- 127.0.0.1 - - GET /eunit-test-db-1496440307863680/_design/foo/_view/foo 200
[info] 2017-06-02T21:52:10.917842Z nonode@nohost <0.17500.0> -------- Starting index update for db: eunit-test-db-1496440307863680 idx: _design/foo
[info] 2017-06-02T21:52:10.921713Z nonode@nohost <0.17500.0> -------- Index update finished for db: eunit-test-db-1496440307863680 idx: _design/foo
[notice] 2017-06-02T21:52:10.922582Z nonode@nohost <0.17441.0> -------- 127.0.0.1 - - GET /eunit-test-db-1496440307863680/_design/foo/_view/foo 200
[notice] 2017-06-02T21:52:10.974547Z nonode@nohost <0.17273.0> -------- config: [compactions] _default set to [{db_fragmentation, "70%"}, {view_fragmentation, "70%"}] for reason nil
[info] 2017-06-02T21:52:10.993547Z nonode@nohost <0.19743.0> -------- Opening index for db: eunit-test-db-1496440225289901 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-06-02T21:52:11.002863Z nonode@nohost <0.19752.0> -------- Opening index for db: eunit-test-db-1496440225785011 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-06-02T21:52:11.007746Z nonode@nohost <0.19761.0> -------- Opening index for db: eunit-test-db-1496440225463102 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-06-02T21:52:11.010744Z nonode@nohost <0.19770.0> -------- Opening index for db: eunit-test-db-1496440226930536 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-06-02T21:52:11.013218Z nonode@nohost <0.19776.0> -------- Opening index for db: _users idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-06-02T21:52:11.022520Z nonode@nohost <0.19788.0> -------- Opening index for db: eunit-test-db-1496440225349889 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-06-02T21:52:11.036829Z nonode@nohost <0.19800.0> -------- Opening index for db: eunit-test-db-1496440225526972 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-06-02T21:52:11.039789Z nonode@nohost <0.19809.0> -------- Opening index for db: eunit-test-db-1496440225839920 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-06-02T21:52:11.042615Z nonode@nohost <0.19818.0> -------- Opening index for db: eunit-test-db-1496440225648009 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-06-02T21:52:11.055939Z nonode@nohost <0.19833.0> -------- Opening index for db: eunit-test-db-1496440225588013 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-06-02T21:52:11.058763Z nonode@nohost <0.19842.0> -------- Opening index for db: eunit-test-db-1496440225753479 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-06-02T21:52:11.132359Z nonode@nohost <0.19884.0> -------- Opening index for db: eunit-test-db-1496440225812335 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-06-02T21:52:11.134426Z nonode@nohost <0.17489.0> -------- Starting compaction for db "eunit-test-db-1496440307863680"
[notice] 2017-06-02T21:52:11.358111Z nonode@nohost <0.17489.0> -------- Compaction swap for db: /tmp/tmp.h5tfKaIecF/apache-couchdb-2.1.0-4a0cd89/tmp/data/eunit-test-db-1496440307863680.couch 1646789 53445
[info] 2017-06-02T21:52:11.361754Z nonode@nohost <0.17489.0> -------- Compaction for db "eunit-test-db-1496440307863680" completed.
[info] 2017-06-02T21:52:11.364188Z nonode@nohost <0.19897.0> -------- Compaction started for db: eunit-test-db-1496440307863680 idx: _design/foo
[notice] 2017-06-02T21:52:11.381542Z nonode@nohost <0.17497.0> -------- Compaction swap for view /tmp/tmp.h5tfKaIecF/apache-couchdb-2.1.0-4a0cd89/tmp/data/.eunit-test-db-1496440307863680_design/mrview/ee2c720a13443ab1d50210242730ca31.view 440970 28800
[notice] 2017-06-02T21:52:11.384063Z nonode@nohost <0.17273.0> -------- config: [compactions] _default deleted for reason nil
[info] 2017-06-02T21:52:11.385263Z nonode@nohost <0.19897.0> -------- Compaction finished for db: eunit-test-db-1496440307863680 idx: _design/foo
[info] 2017-06-02T21:52:11.388504Z nonode@nohost <0.17497.0> -------- Index shutdown by monitor notice for db: eunit-test-db-1496440307863680 idx: _design/foo
[info] 2017-06-02T21:52:11.389493Z nonode@nohost <0.17497.0> -------- Closing index for db: eunit-test-db-1496440307863680 idx: _design/foo sig: "ee2c720a13443ab1d50210242730ca31" because normal
[error] 2017-06-02T21:52:11.406481Z nonode@nohost <0.17457.0> -------- gen_server couch_compaction_daemon terminated with reason: {not_mocked,couch_compaction_daemon} at meck_proc:gen_server/3(line:450) <= meck_code_gen:exec/4(line:147) <= gen_server:try_dispatch/4(line:615) <= gen_server:handle_msg/5(line:681) <= proc_lib:init_p_do_apply/3(line:240)
  last msg: {'EXIT',<0.17459.0>,killed}
     state: {state,<0.17459.0>,[]}
[error] 2017-06-02T21:52:11.407573Z nonode@nohost <0.17457.0> -------- CRASH REPORT Process couch_compaction_daemon (<0.17457.0>) with 0 neighbors exited with reason: {not_mocked,couch_compaction_daemon} at meck_proc:gen_server/3(line:450) <= meck_code_gen:exec/4(line:147) <= gen_server:try_dispatch/4(line:615) <= gen_server:handle_msg/5(line:681) <= proc_lib:init_p_do_apply/3(line:240) at gen_server:terminate/7(line:826) <= proc_lib:init_p_do_apply/3(line:240); initial_call: {couch_compaction_daemon,init,['Argument__1']}, ancestors: [couch_secondary_services,couch_sup,<0.17421.0>], messages: [], links: [<0.17430.0>], dictionary: [], trap_exit: true, status: running, heap_size: 1598, stack_size: 27, reductions: 1264
[error] 2017-06-02T21:52:11.407930Z nonode@nohost <0.17430.0> -------- Supervisor couch_secondary_services had child compaction_daemon started with couch_compaction_daemon:start_link() at <0.17457.0> exit with reason {not_mocked,couch_compaction_daemon} at meck_proc:gen_server/3(line:450) <= meck_code_gen:exec/4(line:147) <= gen_server:try_dispatch/4(line:615) <= gen_server:handle_msg/5(line:681) <= proc_lib:init_p_do_apply/3(line:240) in context child_terminated

Possible Solution

I see that wait_for_compaction only waits on the pid for the compaction of the DB itself. In fact, the config:delete call happens immediately after the compaction of the DB completes, but the compaction of the DB's view _design/foo is still ongoing.

I wonder if the subsequent compaction of the view is also setting up a monitor on the DB, which is failing the is_idle check.

js restartServer() sometimes fails

Current & Expected Behaviour

Failure is on master.

The JS test test/javascript/tests/delayed_commits.js has a bunch of calls to restartServer() in it. So do some other JS tests. Sometimes, the restart fails:

test/javascript/tests/delayed_commits.js                       

FAIL restart

The uploaded couch.log file after restart looks normal - no problems in startup that I can see (_id jenkins-couchdb-3-2017-05-30T21:27:03.085057). Perhaps it just took too long to restart.

Possible Solution

Increase timeout?

Sort out GitHub issues for gitbox repos

We've got 'em, and we want to use 'em instead of JIRA. Everyone on the project is +1; we just need a system. Here are some proposed rules; I'm keeping them simple.

Assignees

Self-explanatory.

  • Assign yourself if you start working on an issue.
  • De-assign yourself if you abandon an issue or won't be able to address it promptly.
  • Don't assign an issue to someone else without their explicit approval.
  • To avoid cookie-licking, our PMC should consider periodically de-assigning idle issues. How idle is idle? 3 months seems a good timeline to me. Writing a script to do this that runs on a cronjob should be straightforward.

Tagging

I particularly like what Robin has done with their colouring-and-categorizing system. So, inspired by that, here's the proposal. Remember that an issue can (and probably should) have more than one tag.

  • Component: Various sub-components of Apache CouchDB. Can be expanded with search, geo, etc. in the future. (light blue)
    • build (toolchain, CI, etc.)
    • documentation (and/or below in Experience?)
    • dbcore (CouchDB database core)
    • fauxton
    • http_api
    • javascript
    • mango
    • nano (couchdb-nano client)
    • packaging (.deb, .rpm, snap, etc.)
    • plugins
    • replication (debatable, but this comes up enough)
    • testsuite
    • viewserver (the API)
  • Problems (red) - Issues that make the product feel broken. High priority, especially if they're present in production.
    • bug
    • security
    • production
  • Mindless (beige) - Reorganizing folder structure, legal stuff, and other necessary (but less impactful) tasks.
    • chore
    • legal
  • Experience (orange) - Affect user’s comprehension, or overall enjoyment of the product. These can be both opportunities and “UX bugs”.
    • UX (could include API debates or installation as well as Fauxton)
    • documentation (and/or above in Components?)
    • design (primarily for Fauxton)
    • website (https://couchdb.apache.org/ and related websites)
  • Environment (pink)
    • windows
    • linux
    • macos
    • freebsd
    • docker
    • etc.
  • Feedback (magenta) - Requires further conversation to figure out the action steps. Most feature ideas start here.
    • discussion
    • rfc
    • question
  • Improvements (blue) - Iterations on existing features or infrastructure. Generally these update speed, or improve the quality of results.
    • enhancement
    • optimization
  • Additions (green) - Brand new functionality.
    • feature
  • Pending (yellow) - Taking action, but need a few things to happen first. A feature that needs dependencies merged, or a bug that needs further data.
    • in progress
    • watchlist
    • waiting on user
  • Inactive (grey) - No action needed or possible. The issue is either fixed, addressed better by other issues, or just out of scope.
    • invalid
    • wontfix
    • duplicate
    • on hold

Projects

Not useful for us right now.

Milestones

We should be using this for version targeting and mass-updating as necessary. I.e., bugs get targeted for the next minor release, new features that break the API for the next major release, etc. Think v2.1.0, v3.0.0, etc. (Rathole: v or no v?)

Issue template

Going to start with this one.

PR template

Update to change reference from JIRA to GH Issues. Remind people to include text like "Fixes #472" to link a PR to an issue.

Contributing

We've had requests for this before; should we add one?

Multi-repo

We still have a number of repos that aren't likely to be merged into apache/couchdb, like fauxton. I am suggesting we keep primary issue reporting here (in apache/couchdb) for all of our repos, and reference issues here from PRs in other repos (via #472). We shouldn't disallow issues in other repos, but my guess is that they'll be limited to people 'in the know.'

[Travis] Mysterious killed eunit test in couch_replicator_use_checkpoints_tests

Current & Expected Behaviour

In one Travis run (Erlang 19.3) the test module couch_replicator_use_checkpoints_tests failed mysteriously:

  Replication use_checkpoints feature tests
    use_checkpoints: false
      local -> local
        couch_replicator_use_checkpoints_tests:111: should_populate_source...[0.033 s] ok
        couch_replicator_use_checkpoints_tests:118: should_replicate...[0.053 s] ok
        couch_replicator_use_checkpoints_tests:125: should_compare_databases...[0.051 s] ok
        [done in 0.146 s]
    *unexpected termination of test process*
::killed

The couch.log (uploader _id travis-couchdb-237668845-2017-05-30T20:31:25.123334) shows nothing out of the ordinary, though there are some initial failures to load some _local documents when starting up replication.

eunit replication test fails with {nocatch, {mp_parser_died,noproc}}

Expected & Current Behaviour

Usually, the eunit couch_replicator_small_max_request_size_target sub-test with should_populate_source_one_large_attachment passes. Occasionally, the Makefile shows the test timing out. This appears to be due to an actual crash in couch_att.

When it passes, the couch.log looks like:

[notice] 2017-06-03T19:18:08.430746Z nonode@nohost <0.24178.1> -------- 127.0.0.1 - - PUT /eunit-test-db-1496517487772729/doc0?new_edits=false 413
[error] 2017-06-03T19:18:08.431336Z nonode@nohost <0.24178.1> -------- httpd 413 error response:
 {"error":"too_large","reason":"the request entity is too large"}
[error] 2017-06-03T19:18:08.432311Z nonode@nohost <0.24300.1> -------- Replicator: error writing document `doc0` to `http://127.0.0.1:38412/eunit-test-db-1496517487772729/`: {error,request_body_too_large}
...

When it fails, the couch.log looks like the following. Notice how the attempt to convert the attachment to multipart is failing, killing the replicator PUT connection. A backoff and retry occur; the code assumes the error is remote. The test fails after 60s of retrying:

[notice] 2017-06-03T18:23:49.802299Z nonode@nohost <0.26345.1> -------- 127.0.0.1 - - PUT /eunit-test-db-1496514229430759/doc0?new_edits=false 413
[error] 2017-06-03T18:23:49.802840Z nonode@nohost <0.26345.1> -------- httpd 413 error response:
 {"error":"too_large","reason":"the request entity is too large"}
[error] 2017-06-03T18:23:49.836048Z nonode@nohost emulator -------- Error in process <0.26470.1> with exit value:
{{nocatch,{mp_parser_died,noproc}},[{couch_att,'-foldl/4-fun-0-',3,[{file,"src/couch_att.erl"},{line,591}]},{couch_att,fold_streamed_data,4,[{file,"src/couch_att.erl"},{line,642}]},{couch_att,foldl,4,[{file,"src/couch_att.erl"},{line,595}]},{couch_httpd_multipart,atts_to_mp,4,[{file,"src/couch_httpd_multipart.erl"},{line,208}]}]}

[info] 2017-06-03T18:23:49.835764Z nonode@nohost <0.26423.1> -------- Replication connection to: "127.0.0.1":51284 died with reason {{nocatch,{mp_parser_died,noproc}},[{couch_att,'-foldl/4-fun-0-',3,[{file,"src/couch_att.erl"},{line,591}]},{couch_att,fold_streamed_data,4,[{file,"src/couch_att.erl"},{line,642}]},{couch_att,foldl,4,[{file,"src/couch_att.erl"},{line,595}]},{couch_httpd_multipart,atts_to_mp,4,[{file,"src/couch_httpd_multipart.erl"},{line,208}]}]}
[notice] 2017-06-03T18:23:49.837660Z nonode@nohost <0.26347.1> -------- 127.0.0.1 - - PUT /eunit-test-db-1496514229430759/doc0?new_edits=false 413
[error] 2017-06-03T18:23:49.838393Z nonode@nohost <0.26347.1> -------- httpd 413 error response:
 {"error":"too_large","reason":"the request entity is too large"}
[error] 2017-06-03T18:23:49.873009Z nonode@nohost <0.26466.1> -------- Replicator, request PUT to "http://127.0.0.1:51284/eunit-test-db-1496514229430759/doc0?new_edits=false" failed due to error {error,
    {'EXIT',
        {{{nocatch,{mp_parser_died,noproc}},
          [{couch_att,'-foldl/4-fun-0-',3,
               [{file,"src/couch_att.erl"},{line,591}]},
           {couch_att,fold_streamed_data,4,
               [{file,"src/couch_att.erl"},{line,642}]},
           {couch_att,foldl,4,[{file,"src/couch_att.erl"},{line,595}]},
           {couch_httpd_multipart,atts_to_mp,4,
               [{file,"src/couch_httpd_multipart.erl"},{line,208}]}]},
         {gen_server,call,
             [<0.26468.1>,
              {send_req,
                  {{url,
                       "http://127.0.0.1:51284/eunit-test-db-1496514229430759/doc0?new_edits=false",
                       "127.0.0.1",51284,undefined,undefined,
                       "/eunit-test-db-1496514229430759/doc0?new_edits=false",
                       http,ipv4_address},
                   [{"Accept","application/json"},
                    {"Content-Length",140515},
                    {"Content-Type",
                     "multipart/related; boundary=\"2ff820d10ec858bdf12fb26f9f285cc7\""},
                    {"User-Agent","CouchDB-Replicator/2.1.0"}],
                   put,
                   {#Fun<couch_replicator_api_wrap.11.73637182>,
                    {<<"{\"_id\":\"doc0\",\"_rev\":\"1-40a6a02761aba1474c4a1ad9081a4c2e\",\"x\":\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxx...
[notice] 2017-06-03T18:23:49.880952Z nonode@nohost <0.26461.1> -------- Retrying GET to http://127.0.0.1:51284/eunit-test-db-1496514229397621/doc0?revs=true&open_revs=%5B%221-40a6a02761aba1474c4a1ad9081a4c2e%22%5D&latest=true in 1.0 seconds due to error {http_request_failed,[80,85,84],[104,116,116,112,58,47,47,49,50,55,46,48,46,48,46,49,58,53,49,50,56,52,47,101,117,110,105,116,45,116,101,115,116,45,100,98,45,49,52,57,54,53,49,52,50,50,57,52,51,48,55,53,57,47,100,111,99,48,63,110,101,119,95,101,100,105,116,115,61,102,97,108,115,101],{error,{error,{'EXIT',{{{nocatch,{mp_parser_died,noproc}},[{couch_att,'-foldl/4-fun-0-',3,[{file,[115,114,99,47,99,111,117,99,104,95,97,116,116,46,101,114,108]},{line,591}]},{couch_att,fold_streamed_data,4,[{file,[115,114,99,47,99,111,117,99,104,95,97,116,116,46,101,114,108]},{line,642}]},{couch_att,foldl,4,[{file,[115,114,99,47,99,111,117,99,104,95,97,116,116,46,101,114,108]},{line,595}]},{couch_httpd_multipart,atts_to_mp,4,[{file,[115,114,99,47,99,111,117,99,104,95,104,116,116,112,100,95,109,117,108,116,105,112,97,114,116,46,101,114,108]},{line,208}]}]},{gen_server,call,[<0.26468.1>,{send_req,{{url,[104,116,116,112,58,47,47,49,50,55,46,48,46,48,46,49,58,53,49,50,56,52,47,101,117,110,105,116,45,116,101,115,116,45,100,98,45,49,52,57,54,53,49,52,50,50,57,52,51,48,55,53,57,47,100,111,99,48,63,110,101,119,95,101,100,105,116,115,61,102,97,108,115,101],[49,50,55,46,48,46,48,46,49],51284,undefined,undefined,[47,101,117,110,105,116,45,116,101,115,116,45,100,98,45,49,52,57,54,53,49,52,50,50,57,52,51,48,55,53,57,47,100,111,99,48,63,110,101,119,95,101,100,105,116,115,61,102,97,108,115,101],http,ipv4_address},[{[65,99,99,101,112,116],[97,112,112,108,105,99,97,116,105,111,110,47,106,115,111,110]},{[67,111,110,116,101,110,116,45,76,101,110,103,116,104],140515},{[67,111,110,116,101,110,116,45,84,121,112,101],[109,117,108,116,105,112,97,114,116,47,114,101,108,97,116,101,100,59,32,98,111,117,110,100,97,114,121,61,34,50,102,102,56,50,48,100,49,48,101,99,56,53,56,98,100,102,49,50,102,98,50,54,102,57,102,50,56,53,99,99,55,34]},{[85,115,101,114,45,65,103,101,110,116],[67,111,117,99,104,68,66,45,82,101,112,108,105,99,97,116,111,114,47,50,46,49,46,48]}],put,{#Fun<couch_replicator_api_wrap.11.73637182>,{<<123,34,95,105,100,34,58,34,100,111,99,48,34,44,34,95,114,101,118,34,58,34,49,45,52,48,97,54,97,48,50,55,54,49,97,98,97,49,52,55,52,99,52,97,49,97,100,57,48,56,49,97,52,99,50,101,34,44,34,120,34,58,34,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,1
20,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,
120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120
,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,12
0,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,...>>,...}},...}},...]}}}}}}
[notice] 2017-06-03T18:23:50.885688Z nonode@nohost <0.26346.1> -------- 127.0.0.1 - - GET /eunit-test-db-1496514229397621/doc0?revs=true&open_revs=%5B%221-40a6a02761aba1474c4a1ad9081a4c2e%22%5D&latest=true 200
[notice] 2017-06-03T18:23:50.905780Z nonode@nohost <0.26348.1> -------- 127.0.0.1 - - PUT /eunit-test-db-1496514229430759/doc0?new_edits=false 413
[error] 2017-06-03T18:23:50.906543Z nonode@nohost <0.26348.1> -------- httpd 413 error response:
 {"error":"too_large","reason":"the request entity is too large"}
[info] 2017-06-03T18:23:50.917537Z nonode@nohost <0.26423.1> -------- Replication connection to: "127.0.0.1":51284 died with reason {{nocatch,{mp_parser_died,noproc}},[{couch_att,'-foldl/4-fun-0-',3,[{file,"src/couch_att.erl"},{line,591}]},{couch_att,fold_streamed_data,4,[{file,"src/couch_att.erl"},{line,642}]},{couch_att,foldl,4,[{file,"src/couch_att.erl"},{line,595}]},{couch_httpd_multipart,atts_to_mp,4,[{file,"src/couch_httpd_multipart.erl"},{line,208}]}]}
[error] 2017-06-03T18:23:50.920248Z nonode@nohost emulator -------- Error in process <0.26481.1> with exit value:
{{nocatch,{mp_parser_died,noproc}},[{couch_att,'-foldl/4-fun-0-',3,[{file,"src/couch_att.erl"},{line,591}]},{couch_att,fold_streamed_data,4,[{file,"src/couch_att.erl"},{line,642}]},{couch_att,foldl,4,[{file,"src/couch_att.erl"},{line,595}]},{couch_httpd_multipart,atts_to_mp,4,[{file,"src/couch_httpd_multipart.erl"},{line,208}]}]}

[notice] 2017-06-03T18:23:50.924545Z nonode@nohost <0.26349.1> -------- 127.0.0.1 - - PUT /eunit-test-db-1496514229430759/doc0?new_edits=false 413
[error] 2017-06-03T18:23:50.925372Z nonode@nohost <0.26349.1> -------- httpd 413 error response:
 {"error":"too_large","reason":"the request entity is too large"}
[error] 2017-06-03T18:23:50.954294Z nonode@nohost <0.26475.1> -------- Replicator, request PUT to "http://127.0.0.1:51284/eunit-test-db-1496514229430759/doc0?new_edits=false" failed due to error {error,
    {'EXIT',
        {{{nocatch,{mp_parser_died,noproc}},
          [{couch_att,'-foldl/4-fun-0-',3,
               [{file,"src/couch_att.erl"},{line,591}]},
           {couch_att,fold_streamed_data,4,
               [{file,"src/couch_att.erl"},{line,642}]},
           {couch_att,foldl,4,[{file,"src/couch_att.erl"},{line,595}]},
           {couch_httpd_multipart,atts_to_mp,4,
               [{file,"src/couch_httpd_multipart.erl"},{line,208}]}]},
         {gen_server,call,
             [<0.26479.1>,
              {send_req,
                  {{url,
                       "http://127.0.0.1:51284/eunit-test-db-1496514229430759/doc0?new_edits=false",
                       "127.0.0.1",51284,undefined,undefined,
                       "/eunit-test-db-1496514229430759/doc0?new_edits=false",
                       http,ipv4_address},
                   [{"Accept","application/json"},
                    {"Content-Length",140515},
                    {"Content-Type",
                     "multipart/related; boundary=\"69b3f5ba1c64c1c3cd65d70f0299a962\""},
                    {"User-Agent","CouchDB-Replicator/2.1.0"}],
                   put,
                   {#Fun<couch_replicator_api_wrap.11.73637182>,
                    {<<"{\"_id\":\"doc0\",\"_rev\":\"1-40a6a02761aba1474c4a1ad9081a4c2e\",\"x\":\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
[notice] 2017-06-03T18:23:50.960325Z nonode@nohost <0.26461.1> -------- Retrying GET to http://127.0.0.1:51284/eunit-test-db-1496514229397621/doc0?revs=true&open_revs=%5B%221-40a6a02761aba1474c4a1ad9081a4c2e%22%5D&latest=true in 4.0 seconds due to error {http_request_failed,[80,85,84],[104,116,116,112,58,47,47,49,50,55,46,48,46,48,46,49,58,53,49,50,56,52,47,101,117,110,105,116,45,116,101,115,116,45,100,98,45,49,52,57,54,53,49,52,50,50,57,52,51,48,55,53,57,47,100,111,99,48,63,110,101,119,95,101,100,105,116,115,61,102,97,108,115,101],{error,{error,{'EXIT',{{{nocatch,{mp_parser_died,noproc}},[{couch_att,'-foldl/4-fun-0-',3,[{file,[115,114,99,47,99,111,117,99,104,95,97,116,116,46,101,114,108]},{line,591}]},{couch_att,fold_streamed_data,4,[{file,[115,114,99,47,99,111,117,99,104,95,97,116,116,46,101,114,108]},{line,642}]},{couch_att,foldl,4,[{file,[115,114,99,47,99,111,117,99,104,95,97,116,116,46,101,114,108]},{line,595}]},{couch_httpd_multipart,atts_to_mp,4,[{file,[115,114,99,47,99,111,117,99,104,95,104,116,116,112,100,95,109,117,108,116,105,112,97,114,116,46,101,114,108]},{line,208}]}]},{gen_server,call,[<0.26479.1>,{send_req,{{url,[104,116,116,112,58,47,47,49,50,55,46,48,46,48,46,49,58,53,49,50,56,52,47,101,117,110,105,116,45,116,101,115,116,45,100,98,45,49,52,57,54,53,49,52,50,50,57,52,51,48,55,53,57,47,100,111,99,48,63,110,101,119,95,101,100,105,116,115,61,102,97,108,115,101],[49,50,55,46,48,46,48,46,49],51284,undefined,undefined,[47,101,117,110,105,116,45,116,101,115,116,45,100,98,45,49,52,57,54,53,49,52,50,50,57,52,51,48,55,53,57,47,100,111,99,48,63,110,101,119,95,101,100,105,116,115,61,102,97,108,115,101],http,ipv4_address},[{[65,99,99,101,112,116],[97,112,112,108,105,99,97,116,105,111,110,47,106,115,111,110]},{[67,111,110,116,101,110,116,45,76,101,110,103,116,104],140515},{[67,111,110,116,101,110,116,45,84,121,112,101],[109,117,108,116,105,112,97,114,116,47,114,101,108,97,116,101,100,59,32,98,111,117,110,100,97,114,121,61,34,54,57,98,51,102,53,98,97,49,99,54,52,99,49,99,51,99,100,54,53,100,55,48,102,48,50,57,57,97,57,54,50,34]},{[85,115,101,114,45,65,103,101,110,116],[67,111,117,99,104,68,66,45,82,101,112,108,105,99,97,116,111,114,47,50,46,49,46,48]}],put,{#Fun<couch_replicator_api_wrap.11.73637182>,{<<123,34,95,105,100,34,58,34,100,111,99,48,34,44,34,95,114,101,118,34,58,34,49,45,52,48,97,54,97,48,50,55,54,49,97,98,97,49,52,55,52,99,52,97,49,97,100,57,48,56,49,97,52,99,50,101,34,44,34,120,34,58,34,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,12
0,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,1
20,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,
120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120
,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,120,...>>,...}},...}},...]}}}}}}
[notice] 2017-06-03T18:23:54.362773Z nonode@nohost <0.26421.1> -------- couch_replicator_clustering : publish cluster `stable` event
[notice] 2017-06-03T18:23:54.363866Z nonode@nohost <0.26435.1> -------- Started replicator db changes listener <0.26512.1>
[notice] 2017-06-03T18:23:54.965698Z nonode@nohost <0.26346.1> -------- 127.0.0.1 - - GET /eunit-test-db-1496514229397621/doc0?revs=true&open_revs=%5B%221-40a6a02761aba1474c4a1ad9081a4c2e%22%5D&latest=true 200

Possible Solution

eunit context cleanup times out while cleaning up os daemons

Expected and Current Behaviour

Sometimes, the eunit test harness fails to complete context cleanup. In this case, it fails while finishing off the couchdb_os_daemons_tests test should_spawn_multiple_daemons.

https://builds.apache.org/blue/organizations/jenkins/CouchDB/detail/master/15/pipeline/49

Makefile output:

module 'couchdb_os_daemons_tests'
  OS Daemons tests
    couchdb_os_daemons_tests:107: should_check_daemon...ok
    couchdb_os_daemons_tests:113: should_check_daemon_table_form...ok
    couchdb_os_daemons_tests:120: should_clean_tables_on_daemon_remove...[0.548 s] ok
    couchdb_os_daemons_tests:127: should_spawn_multiple_daemons...[0.607 s] ok
    undefined
    *** context cleanup failed ***
**in function test_util:stop_sync_throw/4 (src/test_util.erl, line 161)
in call from couchdb_os_daemons_tests:teardown/2 (test/couchdb_os_daemons_tests.erl, line 58)
**throw:{timeout,os_daemon_stop}

Application config was left running!
Application couch_epi was left running!
Application ioq was left running!
  undefined
  Application config was left running!
*** context setup failed ***
**in function meck_proc:start/2 (src/meck_proc.erl, line 96)
  called as start(couch_stats,[passthrough])
in call from test_util:mock/1 (src/test_util.erl, line 245)
in call from test_util:'-mock/1-lc$^0/1-0-'/1 (src/test_util.erl, line 238)
in call from test_util:start/3 (src/test_util.erl, line 226)
in call from couchdb_os_daemons_tests:setup/1 (test/couchdb_os_daemons_tests.erl, line 47)
**error:{already_started,<0.10686.1>}

[done in 7.158 s]

Possible Solution

Increase the timeout for couchdb_os_daemons_tests:teardown/2.
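
For illustration only, here is a minimal sketch of what a more patient shutdown in teardown/2 could look like, assuming the daemon is reachable as a monitored Erlang process; the helper name wait_for_daemon_stop/2 and the 30-second figure are hypothetical, not part of test_util:

    %% Hypothetical helper: ask the daemon process to stop, then wait for
    %% its 'DOWN' message with a longer timeout before giving up and
    %% throwing the same {timeout, os_daemon_stop} error seen above.
    wait_for_daemon_stop(Pid, Timeout) ->
        Ref = erlang:monitor(process, Pid),
        exit(Pid, shutdown),
        receive
            {'DOWN', Ref, process, Pid, _Reason} ->
                ok
        after Timeout ->
            erlang:demonitor(Ref, [flush]),
            throw({timeout, os_daemon_stop})
        end.

    %% e.g. wait_for_daemon_stop(DaemonPid, 30000) called from teardown/2.

Whether a longer wait is enough, or whether the daemon occasionally never exits at all, is exactly what the CI logs above don't tell us yet.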

Futon logout works on Firefox but not Chrome

I am not able to log out from the Futon interface in Chrome. When I click the logout link, nothing happens. I am able to log in and out of Futon successfully in Firefox.

As there was no response after clicking the logout link, I could not catch any error in the Chrome dev console.

However, there was one unusual observation while reloading Futon: some database links responded with a 404 error in the Chrome console. I have attached screenshots of the error from the dev console below.

image

EDIT: When I cleared all site storage data from Chrome dev tools > Application > Clear storage, everything started working normally. I don't know why this issue arose in Chrome.

Test issue

This is a test issue to validate that the mailing list is set up correctly.

It should be easier to find out what JavaScript features are supported by CouchDB

Most CouchDB tutorials/guides are written with ye olde JavaScript, i.e., ES5 and below. I was curious to find out if any ES2015 features were natively supported by CouchDB, so I did a few google searches for couchdb es2015 and couchdb es6. Neither search returned any relevant results. With that, I had to start experimenting with a local installation of CouchDB (I went with the latest version from Docker Hub). I was pleasantly surprised to see that CouchDB supports const as well as object and argument destructuring, and somewhat disappointed to see that it does not support String.prototype.includes, forcing me to use the inelegant snippet we're all familiar with, str.indexOf(...) > -1 (or, if you like hacks, ~str.indexOf(...)). I have yet to test for any other features.

I think there should be some easier way than trial and error to see what features are supported by CouchDB. The google searches, like I said, were not very helpful. Most browsers are starting to ship ES2017 features (such as async functions) and JavaScript developers are getting used to working in such environments.

I've been thinking about this a little bit and I think I have a simple solution that could work. Take, for example, Electron. It's built on Chromium but doesn't use Chromium version numbers. The solution to this is an npm package, electron-to-chromium. It converts Electron version numbers to feature-equivalent Chromium version numbers. This package is used by babel-preset-env to determine which features need to be transpiled to run on certain versions of Electron.

My solution is inspired by the notion of converting version numbers. My understanding is that CouchDB by default uses SpiderMonkey as its JavaScript engine. The simplest thing to do would be to have a table somewhere on the CouchDB website converting CouchDB version numbers to feature-equivalent Firefox version numbers. At a minimum people could see what features are supported by the listed version of Firefox and use only those features. Otherwise, people who already include transpiling in their pipeline could plug that version number into babel-preset-env and get valid JavaScript to use with CouchDB that way. Other avenues to explore include:

  • A package similar to electron-to-chromium, e.g., couchdb-to-firefox, which converts version numbers programmatically and which we could ask the maintainers of babel-preset-env to include.
  • A babel preset, e.g., babel-preset-couchdb, so we don't have to nag the maintainers of babel-preset-env.

To start off the discussion, is there a version of Firefox which the latest version of CouchDB could be considered equivalent to in terms of supported JavaScript features? My guess is some version < 40 given the lack of support for String.prototype.includes. I'm also curious if there are any plans to update the JavaScript engine in CouchDB to support newer features natively.

I could also be overlooking something. Is there already such a table or list of supported JavaScript features available somewhere? I found this page, but that only talks about CouchDB functions available in JavaScript.

[Travis+Jenkins] startup fails, crash in mem3_shards:get_update_seq

Expected Behavior

When the build process hits make javascript, dev/run should succeed in starting up a 1-node instance.

Current Behavior

Occasionally, as in this log, startup fails. We don't know exactly why in this case, as the logfile uploader was not yet present on the 2.1.x branch.

[EDIT] A subsequent failure turned up the real failure here (2.1.x branch, Ubuntu 12.04, Erlang 18.3). Logfile: https://paste.apache.org/IXst

CRASH REPORT Process  (<0.297.0>) with 0 neighbors exited with reason: no match of right hand value file_exists at mem3_shards:get_update_seq/0(line:318) <= mem3_shards:init/1(line:206)

eunit couch_log_config_test failed with get_listener() found

Current & Expected Behaviour

Sometimes, the couch_log_config_listener_test test couch_log_config_test_ fails. It should always pass.

Makefile output:

module 'couch_log_config_listener_test'
  couch_log_config_listener_test: couch_log_config_test_...*failed*
in function couch_log_config_listener_test:'-check_restart_listener/0-fun-2-'/1 (test/couch_log_config_listener_test.erl, line 38)
in call from couch_log_config_listener_test:check_restart_listener/0 (test/couch_log_config_listener_test.erl, line 38)
**error:{assertEqual_failed,[{module,couch_log_config_listener_test},
                     {line,38},
                     {expression,"get_handler ( )"},
                     {expected,not_found},
                     {value,{config_listener,{couch_log_sup,<0.3334.0>}}}]}

  couch_log_config_listener_test: couch_log_config_test_...[1.002 s] ok
  [done in 1.008 s]

couch.log from the uploader is EMPTY.

Possible Solution

Looks like this test is expecting gen_event:delete_handler to occur immediately. Should there be a timer:sleep() call prior to checking get_handler()?

Context

This used to be known as https://issues.apache.org/jira/browse/COUCHDB-3341 but is now on GH Issues.

Better way to follow progress on 3.0

Hey, new to Couch and building an app with it for the first time. Putting together a fancy stack that glues a lot of cool new technology, with Couch/Pouch as the db.

I've been diving into Couch pretty intensely for the last few weeks and am glad to see that, despite the relative lack of hype in the communities I follow, it seems to be making steady progress.

I've just finished reading this thread for the second time after spending a few weeks working on a test app, and I understand a hell of a lot more of what's being talked about now.

I would love to be able to follow development here a bit better. My interests are in a few upcoming things, but primarily the improved document permissions system, without which I wouldn't be able to build most of the apps I think would be valuable.

After browsing JIRA/GitHub I don't see anywhere to follow most of the points brought up. It would be really nice to be able to subscribe to a GitHub issue for each of the main bullet points.

Also, JIRA seems to have some information on 3.0, 4.0, etc., but then there are active comments here on PRs. Any chance of moving the roadmap material to GitHub?

Keep up the great work!

Error 500 when creating a db below quorum

Expected Behavior

On a 4-node cluster with two nodes down, if I try to create a new database, the server should either reject the request or accept it and return a 202 status code. Once the down nodes come back, the new db should be replicated to them.

Current Behavior

Currently an error 500 is returned, but the db is indeed created (I'm not sure whether it replicated once the down nodes came back).

Possible Solution

Return a friendlier status code such as 202 Accepted if the db was actually created, or reject the request completely with a 412 Precondition Failed instead.

Steps to Reproduce (for bugs)

  1. Set up a 4-node cluster
  2. Bring 2 of them down
  3. curl -X PUT "http://xxx.xxx.xxxx.xxx:5984/testdb"
  4. An error 500 is returned.
  5. curl -X GET "http://xxx.xxx.xxx.xxx:5984/_all_dbs" will return testdb

Context

I'm building an automatic administration toolset for CouchDB and was trying out different scenarios to see it working. In this case I was simulating an operation where, on a cluster without quorum (4 nodes, 2 of them down), one tries to create a new database.

Your Environment

It is a Kubernetes managed cluster where the Docker image in use is https://hub.docker.com/r/klaemo/couchdb/

"Reduce output must shrink more rapidly" misrepresents which view is problematic

When multiple views exist in the same design document, the error messages logged by CouchDB 2.0 do not distinguish between the view being queried at the time and the view actually causing the error.

Expected Behavior

CouchDB should specify which view resulted in the error.

Current Behavior

[info] 2017-05-25T19:15:38.649223Z couchdb@localhost <0.17653.11> -------- Starting index update for db: shards/a0000000-bfffffff/m3d_request.1495739240 idx: _design/mms.worker.request
[info] 2017-05-25T19:15:41.226488Z couchdb@localhost <0.16513.11> -------- OS Process #Port<0.18751> Log :: function raised exception (new TypeError("doc.state.fractionGroup_info is undefined", "undefined", 5)) with doc._id started_2013-04-29_12.07.28.801694_a5e86414c252c36ceb68b7aacedc9eda
[error] 2017-05-25T19:15:41.652864Z couchdb@localhost <0.23076.11> -------- OS Process Error <0.16513.11> :: {<<"reduce_overflow_error">>,<<"Reduce output must shrink more rapidly: Current output: '[{\"linacTpsRequest_cid\":\"linac_tps_2015-03-13_14.46.37.287005_185313df929b11bd40cbdee210cff04a\",\"lin'... (first 100 of 441 bytes)">>}
[error] 2017-05-25T19:15:41.653329Z couchdb@localhost emulator -------- Error in process <0.23076.11> on node couchdb@localhost with exit value:
{{nocatch,{<<"reduce_overflow_error">>,<<"Reduce output must shrink more rapidly: Current output: '[{\"linacTpsRequest_cid\":\"linac_tps_2015-03-13_14.46.37.287005_185313df929b11bd40cbdee210cff04a\",\"lin'... (first 100 of 441 bytes)">>}},[{couch_os_process,prompt,2,[{file,"src/couch_os_process.erl"},{line,59}]},{couch_query_servers,proc_prompt,2,[{file,"src/couch_query_servers.erl"},{line,427}]},{couch_query_servers,os_rereduce,3,[{file,"src/couch_query_servers.erl"},{line,135}]},{lists,zipwith,3,[{file,"lists.erl"},{line,450}]},{couch_query_servers,rereduce,3,[{file,"src/couch_query_servers.erl"},{line,93}]},{couch_mrview_util,'-make_reduce_fun/2-fun-1-',4,[{file,"src/couch_mrview_util.erl"},{line,1011}]},{couch_btree,'-write_node/3-lc$^0/1-0-',5,[{file,"src/couch_btree.erl"},{line,435}]},{couch_btree,'-write_node/3-lc$^0/1-0-',5,[{file,"src/couch_btree.erl"},{line,438}]}]}

[error] 2017-05-25T19:15:41.653424Z couchdb@localhost <0.23095.11> 6aa615c490 rexi_server throw:{<<"reduce_overflow_error">>,<<"Reduce output must shrink more rapidly: Current output: '[{\"linacTpsRequest_cid\":\"linac_tps_2015-03-13_14.46.37.287005_185313df929b11bd40cbdee210cff04a\",\"lin'... (first 100 of 441 bytes)">>} [{couch_mrview_util,get_view,4,[{file,"src/couch_mrview_util.erl"},{line,56}]},{couch_mrview,query_view,6,[{file,"src/couch_mrview.erl"},{line,244}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
[notice] 2017-05-25T19:15:41.653758Z couchdb@localhost <0.22178.11> 6aa615c490 localhost:5984 127.0.0.1 undefined GET /m3d_request/_design/mms.worker.request/_view/RequestKnownDbsQuery?reduce=true&group_level=2 500 ok 3008

Steps to Reproduce (for bugs)

I have not verified the following manually, but:

  • Create a design doc with two views: one normal view with a sane reduce, and one with a reduce that just returns a 1 kB string of text (see the sketch after this list).
  • Query the first view, get an error.
  • The error should indicate that the problem is with the second view, but it currently does not.
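
A design document of roughly this shape should do it; the field names are made up, and only the second view's reduce is expected to trip the reduce_overflow_error:

{
  "_id": "_design/repro",
  "views": {
    "sane": {
      "map": "function (doc) { emit(doc.type, 1); }",
      "reduce": "_count"
    },
    "overflowing": {
      "map": "function (doc) { emit(doc.type, doc.payload); }",
      "reduce": "function (keys, values, rereduce) { return values.join(''); }"
    }
  }
}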

Context

I have design docs with ~15 views in some cases, and now I have to hunt through the code to find one that has a reduce that might output something matching:

'[{"linacTpsRequest_cid":"linac_tps_2015-03-13_14.46.37.287005_185313df929b11bd40cbdee210cff04a","lin'... (first 100 of 441 bytes)

Allow bind address of 127.0.0.1 in _cluster_setup for single node

Expected Behavior

A user should be able to configure a single node CouchDB installation to bind to 127.0.0.1 through the Fauxton Setup wizard or via the /_cluster_setup endpoint.

Current Behavior

Right now, if you use Fauxton to configure a single node CouchDB installation, you are forced to set a bind address other than 127.0.0.1. The same is true for the /_cluster_setup endpoint.

While this makes sense for configuring an actual cluster, it does not for a single node setup.

Possible Solutions

I see two ways to solve this problem:

  1. Change Fauxton's wizard for single node setup to directly access the /_node/couchdb@localhost/_config (substituting the correct node name as queried via /_membership) endpoint to alter the admin user, bind address and port as desired.

  2. Improve the /_cluster_setup endpoint to accept a new "action": "enable_single_node" that tolerates binding to 127.0.0.1. This would have to be paired with changing the validation function in Fauxton to accept that bind address when in the single node workflow.

I have a mild preference for 2, but 1 requires less work on the backend, and doesn't abuse an endpoint intended for setting up clusters to set up a single node as well.
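
For illustration, a request for option 2 might look something like the following; the action name comes from the proposal above, and the remaining fields are only a guess at what the endpoint would need:

POST /_cluster_setup
{
  "action": "enable_single_node",
  "bind_address": "127.0.0.1",
  "username": "admin",
  "password": "secret",
  "port": 5984
}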

Input from @garrensmith and @janl requested.

Steps to Reproduce (for bugs)

  1. Install a post-2.0 CouchDB.
  2. Use Fauxton to navigate to the setup wizard.
  3. Try to set up a single node that stays bound to 127.0.0.1. It will fail.
  4. Alternately, use the /_cluster_setup endpoint to do the same thing. It also fails.

Context

Real world users very often expect to have a single node CouchDB installation bound to localhost only.

Mango Query won't run on _users database

I have been trying to run a Mango query on the _users database using Fauxton, but the process seems unfruitful and yields either of the errors below:
i. vacc
ii. case_clause

Currently, I have an index on both the name and _id fields.
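
The kind of request I'm running looks roughly like this (the selector value is illustrative):

POST /_users/_find
{
  "selector": { "name": "some_user_name" }
}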

eunit couchdb_views_tests errors with 500: {'EXIT',noproc}

Current & Expected Behaviour

The couchdb_views_tests eunit test suite should always pass. Occasionally, it fails with a 500 error.

Environment: Jenkins, master, Ubuntu 16.04 with Erlang 18.3

Makefile output:

module 'couchdb_views_tests'
  View indexes cleanup
    couchdb_views_tests:214: should_have_two_indexes_alive_before_deletion...ok
    couchdb_views_tests:219: should_cleanup_index_file_after_ddoc_deletion...ok
    couchdb_views_tests:225: should_cleanup_all_index_files...ok
[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
    [done in 0.242 s]
  View group db leaks
    couchdb_views_tests:228: couchdb_1138...[0.237 s] ok
    couchdb_views_tests:266: couchdb_1309...[0.037 s] ok
[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
    [done in 0.339 s]
  View group shutdown
    couchdb_views_tests:315: couchdb_1283...[0.158 s] ok
[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
    [done in 0.161 s]
  Upgrade and bugs related tests
    couchdb_views_tests:159: should_not_remember_docs_in_index_after_backup_restore...*failed*
in function couchdb_views_tests:'-query_view/4-fun-0-'/2 (test/couchdb_views_tests.erl, line 527)
in call from couchdb_views_tests:query_view/4 (test/couchdb_views_tests.erl, line 527)
in call from couchdb_views_tests:'-should_not_remember_docs_in_index_after_backup_restore/1-fun-8-'/1 (test/couchdb_views_tests.erl, line 173)
**error:{assertEqual,[{module,couchdb_views_tests},
              {line,527},
              {expression,"Code"},
              {expected,200},
              {value,500}]}
  output:<<"">>

[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
    [done in 0.071 s]

couch.log output:

[info] 2017-06-08T18:34:25.819709Z nonode@nohost <0.24082.0> -------- Opening index for db: eunit-test-db-1496946865766965 idx: _design/foo sig: "5398c9ad12cad56d2228cc89a2770ddf"
[info] 2017-06-08T18:34:25.820150Z nonode@nohost <0.24085.0> -------- Starting index update for db: eunit-test-db-1496946865766965 idx: _design/foo
[info] 2017-06-08T18:34:25.825651Z nonode@nohost <0.24085.0> -------- Index update finished for db: eunit-test-db-1496946865766965 idx: _design/foo
[notice] 2017-06-08T18:34:25.826205Z nonode@nohost <0.24032.0> -------- 127.0.0.1 - - GET /eunit-test-db-1496946865766965/_design/foo/_view/bar 200
[info] 2017-06-08T18:34:25.826998Z nonode@nohost <0.24018.0> -------- db eunit-test-db-1496946865766965 died with reason shutdown
[info] 2017-06-08T18:34:25.827066Z nonode@nohost <0.24082.0> -------- Index shutdown by monitor notice for db: eunit-test-db-1496946865766965 idx: _design/foo
[error] 2017-06-08T18:34:25.831129Z nonode@nohost <0.24033.0> -------- Uncaught error in HTTP request: {error,{badmatch,{'EXIT',noproc}}}
[info] 2017-06-08T18:34:25.831593Z nonode@nohost <0.24082.0> -------- Closing index for db: eunit-test-db-1496946865766965 idx: _design/foo sig: "5398c9ad12cad56d2228cc89a2770ddf" because normal
[info] 2017-06-08T18:34:25.831969Z nonode@nohost <0.24033.0> -------- Stacktrace: [{couch_file,pread_binary,2,[{file,"src/couch_file.erl"},{line,169}]},{couch_file,pread_term,2,[{file,"src/couch_file.erl"},{line,157}]},{couch_btree,get_node,2,[{file,"src/couch_btree.erl"},{line,434}]},{couch_btree,lookup,3,[{file,"src/couch_btree.erl"},{line,284}]},{couch_btree,lookup,2,[{file,"src/couch_btree.erl"},{line,274}]},{couch_db,get_full_doc_info,2,[{file,"src/couch_db.erl"},{line,286}]},{couch_db,open_doc_int,3,[{file,"src/couch_db.erl"},{line,1366}]},{couch_db,open_doc,3,[{file,"src/couch_db.erl"},{line,189}]}]
[notice] 2017-06-08T18:34:25.832259Z nonode@nohost <0.24033.0> -------- 127.0.0.1 - - GET /eunit-test-db-1496946865766965/_design/foo/_view/bar 500
[error] 2017-06-08T18:34:25.832512Z nonode@nohost <0.24033.0> -------- httpd 500 error response:
 {"error":"badmatch","reason":"{'EXIT',noproc}"}
[info] 2017-06-08T18:34:25.842468Z nonode@nohost <0.7.0> -------- Application couch exited with reason: stopped

Possible Solution

Is this another case where the monitor shuts down too early?

changes feed has negative pending value

Hi folks,

I'm running a test in which two entries are updated in the following order: test1, test2.

When I query http://127.0.0.1:5984/doctrine_test_database/_changes, sometimes I get this order:

{
  "results": [
    {
      "seq": "1-g1AAAAF1eJzLYWBg4MhgTmEQTM4vTc5ISXLIyU9OzMnILy7JAUoxJTIkyf___z8rgzmRMRcowG5pbmloYWKKTQMeY5IUgGSSPcikRAZ86hxA6uIJq0sAqasnqC6PBUgyNAApoNL5xKhdAFG7nxi1ByBq7xOj9gFELci9WQBea3jf",
      "id": "test2",
      "changes": [
        {
          "rev": "1-c86e975fffb4a635eed6d1dfc92afded"
        }
      ]
    },
    {
      "seq": "2-g1AAAAHleJzLYWBg4MhgTmEQTM4vTc5ISXLIyU9OzMnILy7JAUoxJTIkyf___z8rgzmRMRcowG5pbmloYWKKTQMeY5IUgGSSPdQkBrBJaQYWJkmWFikMnKV5KalpmXmpKfhMcACZEI9igqmxcYqhiRGxJiSATKhHMcHEIC0x2TyNSBPyWIAkQwOQAhoyHxEmpskmFimmliSGCcS0BRDT9mclMhBUewCi9j4xah9A1P4Hqs0CAIaTl3s",
      "id": "test1",
      "changes": [
        {
          "rev": "1-4c6114c65e295552ab1019e2b046b10e"
        }
      ]
    }
  ],
  "last_seq": "2-g1AAAAIzeJyV0EEOgjAQBdAqJurSE-gRCrTSruQmSjttKqmwUNZ6E72J3kRvgkVMgMQQ2cwkM5mXybcIoZnxAC1kXkgDIra5TKzJjyfrVuMEiWVZlqnxktHBDaY84j4j9NdBDyNWrorNV0IfSWNGBGeA5kUGSu8zBX1CXAnbjkDDEHwS_CvsKuHcEQjWiYz0n0I2cRVdXHPItcmESsKA8oGZ1Nqt1u7NT1j5igp_0E-PWnk2SkgjoGyY8qqVVj7rKMCKkbaSvgGlZaup",
  "pending": 0
}

and sometimes this:

{
  "results": [
    {
      "seq": "1-g1AAAAHDeJzLYWBg4MhgTmEQTM4vTc5ISXLIyU9OzMnILy7JAUoxJTIkyf___z8rkQGPoiQFIJlkD1KXwZzIkAvksacZWJgkWVqkMHCW5qWkpmXmpabgM8EBZEI8igmmxsYphiZGxJqQADKhHsUEE4O0xGTzNCJNyGMBkgwNQApoyHyQKYwQdySbWKSYWmLTR9C0BRDT9uMPP4jaAxC194lR-wCiFhQvWQC-343K",
      "id": "test1",
      "changes": [
        {
          "rev": "1-4c6114c65e295552ab1019e2b046b10e"
        }
      ]
    },
    {
      "seq": "2-g1AAAAHleJzLYWBg4MhgTmEQTM4vTc5ISXLIyU9OzMnILy7JAUoxJTIkyf___z8rgzmRMRcowG5pbmloYWKKTQMeY5IUgGSSPdQkBrBJaQYWJkmWFikMnKV5KalpmXmpKfhMcACZEI9igqmxcYqhiRGxJiSATKhHMcHEIC0x2TyNSBPyWIAkQwOQAhoyHxEmpskmFimmliSGCcS0BRDT9mclMhBUewCi9j4xah9A1P4Hqs0CAIaTl3s",
      "id": "test2",
      "changes": [
        {
          "rev": "1-c86e975fffb4a635eed6d1dfc92afded"
        }
      ]
    }
  ],
  "last_seq": "2-g1AAAAIzeJyV0EEOgjAQBdAqJurSE-gRCrTSruQmSjttKqmwUNZ6E72J3kRvgkVMgMQQ2cwkM5mXybcIoZnxAC1kXkgDIra5TKzJjyfrVuMEiWVZlqnxktHBDaY84j4j9NdBDyNWrorNV0IfSWNGBGeA5kUGSu8zBX1CXAnbjkDDEHwS_CvsKuHcEQjWiYz0n0I2cRVdXHPItcmESsKA8oGZ1Nqt1u7NT1j5igp_0E-PWnk2SkgjoGyY8qqVVj7rKMCKkbaSvgGlZaup",
  "pending": 0
}

The documentation says that I must use descending=true to sort by the most recent change first, and there is no ascending parameter, so by default the results should always be sorted in ascending order.

The descending parameter is not working properly either; querying http://127.0.0.1:5984/doctrine_test_database/_changes?descending=true returns:

{
  "results": [
    {
      "seq": "2-g1AAAAGpeJzLYWBg4MhgTmEQTM4vTc5ISXLIyU9OzMnILy7JAUoxJTIkyf___z8rkRGPoiQFIJlkD1KXwZzIkAvksacZWJgkWVqkMHCW5qWkpmXmpabgM8EBZEI82CYGfOoSQOrqUWwyMUhLTDZPI9KmPBYgydAApICGzAeZwgg2xTTZxCLF1BKbPoKmLYCYth-_2yFqD0DU3idG7QOIWlCYZAEAn36HNw",
      "id": "test1",
      "changes": [
        {
          "rev": "1-4c6114c65e295552ab1019e2b046b10e"
        }
      ]
    },
    {
      "seq": "2-g1AAAAHleJzLYWBg4MhgTmEQTM4vTc5ISXLIyU9OzMnILy7JAUoxJTIkyf___z8rgzmRMRcowG5pbmloYWKKTQMeY5IUgGSSPdQkBrBJaQYWJkmWFikMnKV5KalpmXmpKfhMcACZEA8yIZEBn7oEkLp6FJtMDNISk83TiLQpjwVIMjQAKaAh8xE-N002sUgxtSTR5xDTFkBM24_f7RC1ByBq7yPcb2xqnmJqYUiS-x9ATAGFVhYACduXhg",
      "id": "test2",
      "changes": [
        {
          "rev": "1-c86e975fffb4a635eed6d1dfc92afded"
        }
      ]
    }
  ],
  "last_seq": "2-g1AAAAIzeJyV0EEOgjAQBdAqJurSE-gRCrTSruQmSjttKqmwUNZ6E72J3kRvgkVMgMQQ2cwkM5mXybcIoZnxAC1kXkgDIra5TKzJjyfrVuMEiWVZlqnxktHBDaY84j4j9NdBDyNWrorNV0IfSWNGBGeA5kUGSu8zBX1CXAnbjkDDEHwS_CvsKuHcEQjWiYz0n0I2cRVdXHPItcmESsKA8oGZ1Nqt1u7NT1j5igp_0E-PWnk2SkgjoGyY8qqVVj7rKMCKkbaSvgGlZaup",
  "pending": -2
}

Besides the order being wrong (it should be test2, test1), the pending value is -2 even though there are only two changes.

[Jenkins] timeout triggered by all_dbs_active

Current & Expected Behaviour

During an eunit test (couchdb_mrview_cors_tests) a request is made to retrieve the output of a view. It should succeed. Sometimes, it fails on a timeout.

Possible Solution

In one failure, the attempt to create the database in the eunit test fails, running into an all_dbs_active error. This is unusual because the couch application has just started up and only a single database is created for this test.

Perhaps we have a race condition in startup or in couch_lru?

Your Environment

Jenkins automated build, 2.1.x branch, Debian 8, default Erlang (17), logs uploaded as jenkins-couchdb-1-2017-05-29T02:02:45.043081. Relevant paste here: https://paste.apache.org/sdgk

Revert couch_lru to using gb_trees

Recently couch_lru was changed to use ets tables.

During eprof profiling it showed improved performance. However, in a recent larger test with more concurrent updates and 5000 max dbs open, it showed a significant degradation compared to the previous (gb_trees-based) version.

(lru_cache benchmark graph)

all_dbs_active error when max_dbs_open set very low crashes rexi, returns 500

I'm fixing up the stats.js test, which intentionally sets max_dbs_open low (but unintentionally lower than q). When an operation (here, a doc creation) hits all_dbs_active, rexi crashes with a badmatch, which results in a 500 being returned for the PUT operation.

Log:

    [notice] 2017-05-02T21:48:23.896392Z [email protected] <0.310.0> e0d886f91b 127.0.0.1:15984 127.0.0.1 undefined GET / 200 ok 0
    [notice] 2017-05-02T21:48:23.896987Z [email protected] <0.310.0> 71587dc11d 127.0.0.1:15984 127.0.0.1 undefined GET /_membership 200 ok 0
    [notice] 2017-05-02T21:48:23.934277Z [email protected] <0.69.0> -------- config: [couchdb] max_dbs_open set to 5 for reason nil
    [notice] 2017-05-02T21:48:23.934653Z [email protected] <0.310.0> b0c18fcc87 127.0.0.1:15984 127.0.0.1 undefined PUT /_node/[email protected]/_config/couchdb/max_dbs_open 200 ok 37
    [notice] 2017-05-02T21:48:23.950708Z [email protected] <0.310.0> d7719381c7 127.0.0.1:15984 127.0.0.1 undefined GET /_node/[email protected]/_stats/couchdb/open_databases?flush=true 200 ok 16
    [notice] 2017-05-02T21:48:23.967092Z [email protected] <0.310.0> 8257556df3 127.0.0.1:15984 127.0.0.1 undefined GET /_node/[email protected]/_stats/couchdb/open_os_files?flush=true 200 ok 16
    [error] 2017-05-02T21:48:23.968115Z [email protected] <0.310.0> 8541329306 Request to create N=3 DB but only 1 node(s)
    [notice] 2017-05-02T21:48:24.147960Z [email protected] <0.310.0> 8541329306 127.0.0.1:15984 127.0.0.1 undefined PUT /test_suite_db_ougfuqun/ 201 ok 180
    [error] 2017-05-02T21:48:24.215579Z [email protected] <0.485.0> -------- rexi_server error:{badmatch,{error,all_dbs_active}} [{fabric_rpc,all_docs,3,[{file,"src/fabric_rpc.erl"},{line,100}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
    [error] 2017-05-02T21:48:24.215937Z [email protected] <0.486.0> -------- rexi_server error:{badmatch,{error,all_dbs_active}} [{fabric_rpc,all_docs,3,[{file,"src/fabric_rpc.erl"},{line,100}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
    [error] 2017-05-02T21:48:24.216063Z [email protected] <0.487.0> -------- rexi_server error:{badmatch,{error,all_dbs_active}} [{fabric_rpc,all_docs,3,[{file,"src/fabric_rpc.erl"},{line,100}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
    [error] 2017-05-02T21:48:24.216799Z [email protected] emulator -------- Error in process <0.479.0> on node '[email protected]' with exit value: {{badmatch,{error,{badmatch,{error,all_dbs_active},[{fabric_rpc,all_docs,3,[{file,"src/fabric_rpc.erl"},{line,100}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]}}},[{ddoc_cache_opener,recover_validation_funs...
    [error] 2017-05-02T21:48:24.216898Z [email protected] emulator -------- Error in process <0.478.0> on node '[email protected]' with exit value: {{case_clause,{error,{{badmatch,{error,{badmatch,{error,all_dbs_active},[{fabric_rpc,all_docs,3,[{file,"src/fabric_rpc.erl"},{line,100}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]}}},[{ddoc_cache_opener...
    [error] 2017-05-02T21:48:24.217563Z [email protected] <0.476.0> -------- could not load validation funs {{case_clause,{error,{{badmatch,{error,{badmatch,{error,all_dbs_active},[{fabric_rpc,all_docs,3,[{file,"src/fabric_rpc.erl"},{line,100}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]}}},[{ddoc_cache_opener,recover_validation_funs,1,[{file,"src/ddoc_cache_opener.erl"},{line,127}]},{ddoc_cache_opener,fetch_doc_data,1,[{file,"src/ddoc_cache_opener.erl"},{line,240}]}]}}},[{ddoc_cache_opener,handle_open_response,1,[{file,"src/ddoc_cache_opener.erl"},{line,282}]},{couch_db,'-load_validation_funs/1-fun-0-',1,[{file,"src/couch_db.erl"},{line,659}]}]}
    [notice] 2017-05-02T21:48:24.219008Z [email protected] <0.310.0> a4b1d64a3e 127.0.0.1:15984 127.0.0.1 undefined PUT /test_suite_db_ougfuqun/0 500 ok 68
    [error] 2017-05-02T21:48:24.224920Z [email protected] <0.489.0> -------- Could not open file /home/joant/couchdb/dev/lib/node1/data/shards/40000000-5fffffff/test_suite_db_ougfuqun.1493761703.couch: no such file or directory
    [info] 2017-05-02T21:48:24.225349Z [email protected] <0.217.0> -------- open_result error {not_found,no_db_file} for shards/40000000-5fffffff/test_suite_db_ougfuqun.1493761703
    [warning] 2017-05-02T21:48:24.225476Z [email protected] <0.483.0> -------- creating missing database: shards/40000000-5fffffff/test_suite_db_ougfuqun.1493761703

Apache CouchDB Windows Service Paused

I downloaded CouchDB as I was preparing my PouchDB project. My OS is Windows 10.
However, when I tried to open Fauxton it said "Connection refused". I looked in my Services and saw that the Apache CouchDB service is paused. I stop it and start it, but I keep getting this (see the attached screenshot of the service error).

I tried googling for all possible solutions, but did not find out what exactly is wrong. As there is no specific error message and I do not fully understand how CouchDB works or what it relies on, I can't seem to find a solution.

Please let me know if I can provide any more details that would help to identify the issue.

couch_peruser not working

I am using CouchDB 2.0 on my Windows machine. I created the _users database and set the couch_peruser flag to true in the configuration window. After restarting CouchDB, when I added a user to the _users database, no new database was created. What could be the problem?

Security objects cannot be synced if nodes are in maintenance mode

Expected Behavior

On a running cluster, if one puts half of a particular shard's replicas into maintenance_mode, the cluster should keep working properly.

Current Behavior

On a running cluster, if one puts half of a particular shard's replicas into maintenance_mode, errors of the form Error getting security objects for <<"affected_database_here">> : {error,no_majority} begin to appear.

Possible Solution

Allow the internal security-object sync process to push through maintenance mode.

Context

I'm building an automatic administration toolset for CouchDB and was trying out different scenarios to see it working. In this case I was simulating an operation on a database with two replicas per shard, one of them being down: I add a third node, put it into maintenance mode, and have it replicate the particular shard. At that moment a majority cannot be achieved and the errors start arising.

Your Environment

It is a Kubernetes managed cluster where the Docker image in use is https://hub.docker.com/r/klaemo/couchdb/

Database will not open after multiple attempts of fixing it

Expected Behavior

The service is running and you can connect to your database.

Current Behavior

I watched a few videos on how to set it up and followed them step by step. When I go to connect to the database it gives me:
This site can’t be reached

127.0.0.1 refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED

I am using windows 10 creators update

Possible Solution

I tried Shift+Right Click on the bin folder to open a command prompt, then typed couchdb and it ran its thing... That didn't work, so I tried running Services as an admin and starting the "Apache CouchDB" service, and to my luck, it didn't start. I tried replacing the nssm.exe file and I haven't gotten it to work yet.

Steps to Reproduce (for bugs)

  1. When I go to start the program it gives me an error: "Windows could not start the Apache CouchDB on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 3."
  2. When I connect to 127.0.0.1:5984/_utils I get a "refused to connect"

Context

I can't use the database

Your Environment

  • Version used: CouchDB 2.0.0.1
  • Browser Name and version: Chrome 58.0.3029.110 (64-bit)
  • Operating System and version (desktop or mobile): Windows 10 Creators Update
  • Link to your project: NULL

[Jenkins] couchjs segfaults

Expected Behavior

couchjs shouldn't segfault.

Current Behavior

Sometimes, in an automated run, it does. Here's an example:

test/javascript/tests/attachment_views.js                      
    Error: {exit_status,139}
Trace back (most recent call first):
    
 548: test/javascript/couch.js
      CouchError([object Object])
 511: test/javascript/couch.js
      ([object CouchHTTP])
 177: test/javascript/couch.js
      ("(function (doc) {var count = 0;for (var idx in doc._attachments) {co
  79: test/javascript/tests/attachment_views.js
      ()
  37: test/javascript/cli_runner.js
      runTest()
  48: test/javascript/cli_runner.js
      
fail

Sample couch.log content:

[info] 2017-05-30T19:36:00.862817Z [email protected] <0.2237.0> -------- Starting index update for db: shards/a0000000-bfffffff/test_suite_db_rsaxbqjx.1496172960 idx: _design/temp_dhrvvuyu
[error] 2017-05-30T19:36:00.903666Z [email protected] <0.2202.0> -------- OS Process Error <0.2117.0> :: {os_process_error,{exit_status,139}}
[info] 2017-05-30T19:36:00.903800Z [email protected] <0.222.0> -------- couch_proc_manager <0.2117.0> died normal
[error] 2017-05-30T19:36:00.903990Z [email protected] <0.2148.0> a631af55fb rexi_server throw:{os_process_error,{exit_status,139}} [{couch_mrview_util,get_view,4,[{file,"src/couch_mrview_util.erl"},{line,52}]},{couch_mrview,query_view,6,[{file,"src/couch_mrview.erl"},{line,244}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
[error] 2017-05-30T19:36:00.904339Z [email protected] <0.1724.0> a631af55fb req_err(2090670111) os_process_error : {exit_status,139}
    [<<"couch_mrview_util:get_view/4 L52">>,<<"couch_mrview:query_view/6 L244">>,<<"rexi_server:init_p/3 L139">>]

Possible Solution

There was a comment on IRC about this:

14:02 < vatamane> I had built a centos 7 vagrant vm before but couldn't reproduce
14:02 < vatamane> however i encountered segfaults in couchjs when was playing with changing its max heap parameter, -S I think
14:03 < vatamane> the segfault was triggered by garbage collection
14:35 <+jan____> eeenteresting
14:35 <+jan____> we dropped support for this between 1.x and 2.x and only fixed it after 2.0, which means, I think, that Cloudant never ran that code
14:41 <+jan____> …and subsequently didn’t see any potential segfaults
14:42 < vatamane> the idea there at some point was that making the max heap big enough just delayed the garbage collection indefinitely

This is a recurrence of JIRA issue https://issues.apache.org/jira/browse/COUCHDB-3352

eunit couchdb_os_daemons_tests race on startup

Expected & Current Behaviour

The couchdb_os_daemons_tests test should always pass, but sometimes it fails during context cleanup.

Makefile output:

module 'couchdb_os_daemons_tests'
  OS Daemons tests
    couchdb_os_daemons_tests:107: should_check_daemon...ok
    couchdb_os_daemons_tests:113: should_check_daemon_table_form...ok
    couchdb_os_daemons_tests:120: should_clean_tables_on_daemon_remove...[0.544 s] ok
    couchdb_os_daemons_tests:127: should_spawn_multiple_daemons...[0.618 s] ok
    undefined
    *** context cleanup failed ***
**in function test_util:stop_sync_throw/4 (src/test_util.erl, line 160)
in call from couchdb_os_daemons_tests:teardown/2 (test/couchdb_os_daemons_tests.erl, line 58)
**throw:{timeout,os_daemon_stop}

couch.log content:

[notice] 2017-06-06T22:50:22.237455Z nonode@nohost <0.3536.1> -------- config: [os_daemons] os_daemon_looper.escript deleted for reason nil
[info] 2017-06-06T22:50:22.795306Z nonode@nohost <0.7.0> -------- Application couch_epi exited with reason: stopped
[info] 2017-06-06T22:50:22.796319Z nonode@nohost <0.7.0> -------- Application ioq exited with reason: stopped
[info] 2017-06-06T22:50:22.825941Z nonode@nohost <0.7.0> -------- Application couch_log started on node nonode@nohost
[info] 2017-06-06T22:50:22.826318Z nonode@nohost <0.7.0> -------- Application ioq started on node nonode@nohost
[info] 2017-06-06T22:50:22.907640Z nonode@nohost <0.7.0> -------- Application couch_epi started on node nonode@nohost
[notice] 2017-06-06T22:50:23.068667Z nonode@nohost <0.3593.1> -------- config: [os_daemons] os_daemon_looper.escript set to /tmp/tmp.rGuUi1aWRx/apache-couchdb-2.0.0-76d2aee/src/couch/test/fixtures/os_daemon_looper.escript for reason nil
[notice] 2017-06-06T22:50:23.071633Z nonode@nohost <0.3593.1> -------- config: [uuids] algorithm set to sequential for reason nil
[notice] 2017-06-06T22:50:23.677403Z nonode@nohost <0.3593.1> -------- config: [os_daemons] bar set to /tmp/tmp.rGuUi1aWRx/apache-couchdb-2.0.0-76d2aee/src/couch/test/fixtures/os_daemon_looper.escript for reason nil
[notice] 2017-06-06T22:50:23.677893Z nonode@nohost <0.3593.1> -------- config: [os_daemons] baz set to /tmp/tmp.rGuUi1aWRx/apache-couchdb-2.0.0-76d2aee/src/couch/test/fixtures/os_daemon_looper.escript for reason nil
[info] 2017-06-06T22:50:25.320943Z nonode@nohost <0.7.0> -------- Application inets started on node nonode@nohost
[info] 2017-06-06T22:50:25.323433Z nonode@nohost <0.7.0> -------- Application ibrowse started on node nonode@nohost
[info] 2017-06-06T22:50:25.323627Z nonode@nohost <0.7.0> -------- Application asn1 started on node nonode@nohost
[info] 2017-06-06T22:50:25.323845Z nonode@nohost <0.7.0> -------- Application public_key started on node nonode@nohost
[info] 2017-06-06T22:50:25.324464Z nonode@nohost <0.7.0> -------- Application ssl started on node nonode@nohost
[info] 2017-06-06T22:50:25.324723Z nonode@nohost <0.7.0> -------- Application khash started on node nonode@nohost
[info] 2017-06-06T22:50:25.325443Z nonode@nohost <0.7.0> -------- Application couch_event started on node nonode@nohost
[info] 2017-06-06T22:50:25.326306Z nonode@nohost <0.7.0> -------- Application sasl started on node nonode@nohost
[info] 2017-06-06T22:50:25.328126Z nonode@nohost <0.7.0> -------- Application os_mon started on node nonode@nohost
[info] 2017-06-06T22:50:25.328339Z nonode@nohost <0.7.0> -------- Application xmerl started on node nonode@nohost
[info] 2017-06-06T22:50:25.328526Z nonode@nohost <0.7.0> -------- Application compiler started on node nonode@nohost
[info] 2017-06-06T22:50:25.328700Z nonode@nohost <0.7.0> -------- Application syntax_tools started on node nonode@nohost
[info] 2017-06-06T22:50:25.328873Z nonode@nohost <0.7.0> -------- Application mochiweb started on node nonode@nohost
[info] 2017-06-06T22:50:25.329045Z nonode@nohost <0.7.0> -------- Application oauth started on node nonode@nohost
[info] 2017-06-06T22:50:25.329212Z nonode@nohost <0.7.0> -------- Application b64url started on node nonode@nohost
[info] 2017-06-06T22:50:25.329750Z nonode@nohost <0.7.0> -------- Application folsom started on node nonode@nohost
[info] 2017-06-06T22:50:25.366232Z nonode@nohost <0.7.0> -------- Application couch_stats started on node nonode@nohost
[info] 2017-06-06T22:50:25.366227Z nonode@nohost <0.3762.1> -------- Apache CouchDB 2.0.0 is starting.

[info] 2017-06-06T22:50:25.366569Z nonode@nohost <0.3763.1> -------- Starting couch_sup
[error] 2017-06-06T22:50:25.383949Z nonode@nohost <0.3771.1> -------- Supervisor couch_secondary_services had child os_daemons started with couch_os_daemons:start_link() at undefined exit with reason {already_started,<0.3640.1>} in context start_error
[error] 2017-06-06T22:50:25.384692Z nonode@nohost <0.3763.1> -------- Supervisor couch_sup had child couch_secondary_services started with couch_secondary_sup:start_link() at undefined exit with reason {shutdown,{failed_to_start_child,os_daemons,{already_started,<0.3640.1>}}} in context start_error
[error] 2017-06-06T22:50:25.385050Z nonode@nohost <0.3762.1> -------- Error starting Apache CouchDB:

    {error,{shutdown,{failed_to_start_child,couch_secondary_services,{shutdown,{failed_to_start_child,os_daemons,{already_started,<0.3640.1>}}}}}}


[error] 2017-06-06T22:50:25.386914Z nonode@nohost <0.3761.1> -------- CRASH REPORT Process  (<0.3761.1>) with 0 neighbors exited with reason: {{shutdown,{failed_to_start_child,couch_secondary_services,{shutdown,{failed_to_start_child,os_daemons,{already_started,<0.3640.1>}}}}},{couch_app,start,[normal,[]]}} at application_master:init/4(line:133) <= proc_lib:init_p_do_apply/3(line:237); initial_call: {application_master,init,['Argument__1','Argument__2',...]}, ancestors: [<0.3760.1>], messages: [{'EXIT',<0.3762.1>,normal}], links: [<0.3760.1>,<0.7.0>], dictionary: [], trap_exit: true, status: running, heap_size: 610, stack_size: 27, reductions: 132
[info] 2017-06-06T22:50:25.387276Z nonode@nohost <0.7.0> -------- Application couch exited with reason: {{shutdown,{failed_to_start_child,couch_secondary_services,{shutdown,{failed_to_start_child,os_daemons,{already_started,<0.3640.1>}}}}},{couch_app,start,[normal,[]]}}

Analysis

Possibly bump timeout more?

The test spawns the file src/couch/test/fixtures/os_daemon_looper.escript. Perhaps this daemon is out to lunch on an io:read and can't be killed? I don't know.

Support additional INCLUDE/LIB paths during compile

Problem

Trying to build CouchDB 2.0.0 from source on CentOS 6.9 stops with this problem:

Compiling /home/USER/tmp/apache-couchdb-2.0.0/src/couch/priv/couch_js/http.c
/home/USER/tmp/apache-couchdb-2.0.0/src/couch/priv/couch_js/http.c:18:19: Warning: jsapi.h: No such file or directory

Setup

I've built Erlang/OTP 17.5 and SpiderMonkey 1.8.5 in my local user directory.
They are available under $HOME/include and $HOME/lib.

What I tried

Giving configure the paths as in earlier CouchDB versions (./configure --with-js-lib=$HOME/lib --with-js-include=$HOME/include/js) doesn't work; the options were ignored.

I tried to set the paths as compiler variables:

CFLAGS=-I$HOME/include
LDFLAGS=-L$HOME/lib

That doesn't work either; the compile run still breaks.

Another try:

export CPATH=$HOME/include:$LD_LIBRARY_PATH
export LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH

Make always fails at the same point.

Document revision value requirement when updating

It seems like CouchDB 2 no longer requires a revision value when updating an existing document (PUT /{db}/{docid}). It always updates the newest version of the document.

But attaching a file to a document (PUT /{db}/{docid}/{attname}) still requires a revision value.
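
For example, an attachment upload currently has to carry the revision explicitly, either as a ?rev= query parameter or an If-Match header (the values below are illustrative):

PUT /mydb/mydoc/photo.png?rev=1-967a00dff5e02add41819138abb3284d
Content-Type: image/png

<attachment body>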

Can we in the future just default rev to the newest version for attachments?

should_not_remember_docs_in_index_after_backup_restore...*failed*

Expected and Current Behaviour

Usually the couchdb_views_tests test should_not_remember_docs_in_index_after_backup_restore passes. Sometimes it fails.

Environment: Travis, Erlang 18.3

Makefile output:

  Upgrade and bugs related tests
    couchdb_views_tests:159: should_not_remember_docs_in_index_after_backup_restore...*failed*
in function couchdb_views_tests:'-query_view/4-fun-0-'/2 (test/couchdb_views_tests.erl, line 514)
in call from couchdb_views_tests:query_view/4 (test/couchdb_views_tests.erl, line 514)
in call from couchdb_views_tests:'-should_not_remember_docs_in_index_after_backup_restore/1-fun-8-'/1 (test/couchdb_views_tests.erl, line 173)
**error:{assertEqual,[{module,couchdb_views_tests},
              {line,514},
              {expression,"Code"},
              {expected,200},
              {value,500}]}
  output:<<"">>

couch.log output:

[info] 2017-06-02T23:12:29.470701Z nonode@nohost <0.29000.0> -------- alarm_handler: {set,{system_memory_high_watermark,[]}}
[info] 2017-06-02T23:12:29.508603Z nonode@nohost <0.29086.0> -------- Opening index for db: eunit-test-db-1496445149424073 idx: _design/foo sig: "5398c9ad12cad56d2228cc89a2770ddf"
[info] 2017-06-02T23:12:29.508945Z nonode@nohost <0.29089.0> -------- Starting index update for db: eunit-test-db-1496445149424073 idx: _design/foo
[info] 2017-06-02T23:12:29.517038Z nonode@nohost <0.29089.0> -------- Index update finished for db: eunit-test-db-1496445149424073 idx: _design/foo
[notice] 2017-06-02T23:12:29.517765Z nonode@nohost <0.29036.0> -------- 127.0.0.1 - - GET /eunit-test-db-1496445149424073/_design/foo/_view/bar 200
[info] 2017-06-02T23:12:29.518975Z nonode@nohost <0.29086.0> -------- Index shutdown by monitor notice for db: eunit-test-db-1496445149424073 idx: _design/foo
[info] 2017-06-02T23:12:29.519120Z nonode@nohost <0.29022.0> -------- db eunit-test-db-1496445149424073 died with reason shutdown
[info] 2017-06-02T23:12:29.519432Z nonode@nohost <0.29086.0> -------- Closing index for db: eunit-test-db-1496445149424073 idx: _design/foo sig: "5398c9ad12cad56d2228cc89a2770ddf" because normal
[error] 2017-06-02T23:12:29.520706Z nonode@nohost <0.29037.0> -------- Uncaught error in HTTP request: {error,{badmatch,{'EXIT',noproc}}}
[info] 2017-06-02T23:12:29.522215Z nonode@nohost <0.29037.0> -------- Stacktrace: [{couch_file,pread_binary,2,[{file,"src/couch_file.erl"},{line,168}]},{couch_file,pread_term,2,[{file,"src/couch_file.erl"},{line,156}]},{couch_btree,get_node,2,[{file,"src/couch_btree.erl"},{line,423}]},{couch_btree,lookup,3,[{file,"src/couch_btree.erl"},{line,282}]},{couch_btree,lookup,2,[{file,"src/couch_btree.erl"},{line,272}]},{couch_db,get_full_doc_info,2,[{file,"src/couch_db.erl"},{line,286}]},{couch_db,open_doc_int,3,[{file,"src/couch_db.erl"},{line,1366}]},{couch_db,open_doc,3,[{file,"src/couch_db.erl"},{line,189}]}]
[notice] 2017-06-02T23:12:29.522460Z nonode@nohost <0.29037.0> -------- 127.0.0.1 - - GET /eunit-test-db-1496445149424073/_design/foo/_view/bar 500
[error] 2017-06-02T23:12:29.522708Z nonode@nohost <0.29037.0> -------- httpd 500 error response:
 {"error":"badmatch","reason":"{'EXIT',noproc}"}
[info] 2017-06-02T23:12:29.532104Z nonode@nohost <0.7.0> -------- Application couch exited with reason: stopped
[info] 2017-06-02T23:12:29.533589Z nonode@nohost <0.29000.0> -------- alarm_handler: {clear,system_memory_high_watermark}

Possible Solution

Race condition in test? @davisp ?
