
POCDriver

*** Latest Update December 2020 *** Prior to the latest version, POCDriver allowed you to specify a ratio of operation types, for example 50:50 inserts and queries. However, it stuck to this ratio regardless of relative performance. If, for example, the server could do 20,000 queries per second but only 3,000 updates per second, you would get:

100% Queries / 0% Updates - 20,000 queries/s

0% Queries / 100% Updates - 3,000 updates/s

50% Queries / 50% Updates - 2,000 updates/s, 2,000 queries/s

This isn't right, but it happened because operations were launched at a 1:1 ratio. As queries are quicker than updates, there is still time for 2,000 updates rather than 1,500, but you don't get as many queries as the server could deliver because they are throttled by the speed of the updates (having to match them 1:1).
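The arithmetic behind these numbers can be sketched as a toy model (using the illustrative figures above, not POCDriver code or measurements): with strict 1:1 interleaving, each query/update pair costs the sum of the two per-operation times, so the pair rate is bounded well below the pure-query rate.

```python
# Illustrative model of strict 1:1 interleaving (assumed figures from the
# text above, not measurements).
queries_per_sec = 20_000
updates_per_sec = 3_000

# Each interleaved pair costs one query plus one update.
pair_cost_s = 1 / queries_per_sec + 1 / updates_per_sec
pair_rate = 1 / pair_cost_s  # pairs per second, i.e. ops/s of each type

print(round(pair_rate))
```

This idealised model gives roughly 2,600 pairs per second; real per-operation overhead pulls that figure down towards the roughly 2,000 of each type quoted above.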

This has now changed by default: when you specify -i, -u, -k etc. you specify how many milliseconds of each cycle to spend doing those operations. Assuming these cycles are longer than a single operation takes, you get proper differentiation. Be aware, though, that -i 1 -k 1, despite being a 1:1 ratio, is not quite the same thing as -i 100 -k 100. In the first case there is likely only time for one operation in each cycle (a rounding error, if you like). In the latter you might get 10 of one operation and 500 of another done in those 100 milliseconds, showing a far better ratio.
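To see why the budget size matters, here is a toy model of per-cycle time budgeting (the per-operation costs are assumptions for illustration, and this is not the actual POCDriver scheduler):

```python
def ops_in_cycle(budget_us, op_cost_us):
    # Toy model: complete as many whole operations as fit in the budget,
    # but always finish at least the one already in flight (the
    # "rounding error" for budgets shorter than one operation).
    return max(1, budget_us // op_cost_us)

INSERT_COST_US = 10_000  # assumed: one insert takes 10 ms
QUERY_COST_US = 200      # assumed: one key query takes 0.2 ms

# -i 1 -k 1: a 1 ms budget for each; at most one insert fits.
small = (ops_in_cycle(1_000, INSERT_COST_US), ops_in_cycle(1_000, QUERY_COST_US))
# -i 100 -k 100: the true speed difference shows through.
large = (ops_in_cycle(100_000, INSERT_COST_US), ops_in_cycle(100_000, QUERY_COST_US))

print(small, large)  # (1, 5) (10, 500)
```

With the 1 ms budgets the achieved ratio is 1:5; with 100 ms budgets it widens to 10:500, even though both are nominally 1:1 settings.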

Also be wary of batches, and of mixing finds (which cannot be batched) with writes (which can): either use a batch size of one, or understand that you can write far faster than you can read simply because many writes can be sent to the server in one round trip.
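The size of this effect can be seen with a back-of-the-envelope sketch (the 1 ms round-trip cost is an assumption for illustration, not a measurement):

```python
# Why batched writes outpace unbatched reads: a bulk of 512 inserts pays
# the server round-trip cost once, while 512 finds pay it 512 times.
ROUND_TRIP_MS = 1.0  # assumed network + processing cost per request
BULK_SIZE = 512      # POCDriver's default -b/--bulksize

writes_per_sec = BULK_SIZE * 1000 / ROUND_TRIP_MS  # 512 writes per round trip
reads_per_sec = 1000 / ROUND_TRIP_MS               # one find per round trip

print(int(writes_per_sec), int(reads_per_sec))  # 512000 1000
```

With -b 1 the asymmetry disappears, at the cost of much lower write throughput.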

Note that there is an extra flag, --opsratio, which enables the previous behaviour. This new behaviour also does not apply when using --zipfian.

NOTE: Recently upgraded to the MongoDB 4.1.x Java driver.

Introduction

Disclaimer: POCDriver is NOT in any way an official MongoDB product or project.

This is open source, immature, and undoubtedly buggy code. If you find bugs please fix them and send a pull request or report in the GitHub issue queue.

This tool is designed to make it easy to answer many of the questions people have during a MongoDB 'Proof of Concept':

  • How fast will MongoDB be on my hardware?
  • How could MongoDB handle my workload?
  • How does MongoDB scale?
  • How does High Availability work (aka How do I handle a failover)?

POCDriver is a single JAR file which allows you to specify and run a number of different workloads easily from the command line. It is intended to show how MongoDB should be used for various tasks, so you measure MongoDB's capabilities rather than your own client code.

POCDriver is an alternative to using generic tools like YCSB. Unlike these tools, POCDriver:

  • Only works with MongoDB. This shows what MongoDB can do, rather than comparing the lowest common denominator across systems that aren't directly comparable.

  • Includes much more sophisticated workloads using the appropriate MongoDB feature.

Build

Execute:

mvn clean package

and you will find POCDriver.jar in the bin folder. You can run it with:

java -jar ./bin/POCDriver.jar

Then append the flags and arguments you want to this command; they are specified below.

Requirements to Build

A Java JDK and Apache Maven (the build uses mvn, as shown above).

Basic usage

If run with no arguments, POCDriver will try to insert documents into a MongoDB deployment running on localhost as quickly as possible.

There will be only the _id index and documents will have 10 fields.

Use --print to see what the documents look like.

Client options

Flag Description
-h, --help Show Help
-p, --print Print out a sample document according to the other parameters then quit
-t <arg>, --threads <arg> Number of threads (default 4)
-s <arg>, --slowthreshold <arg> Slow operation threshold in ms, use comma to separate multiple thresholds (default 50)
-q <arg>, --opsPerSecond <arg> Try to rate limit the total ops/s to the specified amount
-c <arg>, --host <arg> MongoDB connection details (default mongodb://localhost:27017)

The -c/--host flag is the MongoDB connection string (aka connection URI) from the MongoDB Java driver. Documentation on its format and available options can be found here: http://mongodb.github.io/mongo-java-driver/4.1/apidocs/mongodb-driver-core/com/mongodb/ConnectionString.html

Basic operations

Flag Description
-k <arg>, --keyqueries <arg> Ratio of key query operations (default 0)
-r <arg>, --rangequeries <arg> Ratio of range query operations (default 0)
-u <arg>, --updates <arg> Ratio of update operations (default 0)
-i <arg>, --inserts <arg> Ratio of insert operations (default 100)

Complex operations

Flag Description
-g <arg>, --arrayupdates <arg> Ratio of array increment ops (requires option -a/--arrays) (default 0)
-v <arg>, --workflow <arg> Specify a set of ordered operations per thread from character set IiuKkp.

For the -v/--workflow flag, the valid options are:

  • i (lowercase i): Insert a new record, push its key onto our stack
  • I (UPPERCASE i): Increment single stack record
  • u (lowercase u): Update single stack record
  • p (lowercase p): Pop off a stack record
  • k (lowercase k): Find a new record and put it on the stack
  • K (UPPERCASE k): Get a new _id (without reading the document) and put it on the stack

Examples:

  • -v iuu will insert a document then update that document twice
  • -v kui will find a document, update it, then insert a new document

The last document handled is placed on a stack, and p pops it off, so:

  • -v kiippu finds a document, inserts two new documents, pops both off the stack, and then updates the original document it found.

Note: If you specify a workflow via the -v flag, the basic operations above will be ignored and the operations listed will be performed instead.
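The stack semantics above can be sketched as a small interpreter (a hypothetical reimplementation for illustration, not POCDriver's code; the docN key names are made up):

```python
def trace_workflow(workflow):
    """Trace which document key each workflow character operates on."""
    stack, ops = [], []
    next_key = 0
    for ch in workflow:
        if ch == "i":            # insert a new record, push its key
            stack.append(f"doc{next_key}")
            ops.append(("insert", stack[-1]))
            next_key += 1
        elif ch in ("k", "K"):   # find a record (or just its _id), push it
            stack.append(f"doc{next_key}")
            ops.append(("find", stack[-1]))
            next_key += 1
        elif ch == "u":          # update the record on top of the stack
            ops.append(("update", stack[-1]))
        elif ch == "I":          # increment the record on top of the stack
            ops.append(("increment", stack[-1]))
        elif ch == "p":          # pop the top record off the stack
            ops.append(("pop", stack.pop()))
    return ops

# -v kiippu: find one, insert two, pop both, update the original find.
for op in trace_workflow("kiippu"):
    print(op)
```

Running the trace for "kiippu" shows the two inserted documents being popped, leaving the originally found document as the target of the final update.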

Control options

Flag Description
-m, --findandmodify Use findAndModify instead of update and retrieve document (with -u or -v only)
-j <arg>, --workingset <arg> Percentage of database to be the working set (default 100)
-b <arg>, --bulksize <arg> Bulk op size (default 512)
--rangedocs <arg> Number of documents to fetch for range queries (default 10)
--updatefields <arg> Number of fields to update (default 1)
--projectfields <arg> Number of fields to project in finds (default 0, which is no projection)

Collection options

Flag Description
-x <arg>, --indexes <arg> Number of secondary indexes to create; existing indexes are not removed (default 0)
-w, --nosharding Do not shard the collection
-e, --empty Remove data from collection on startup

Document shape options

Flag Description
-a <arg>, --arrays <arg> Shape of any arrays in new sample documents as x:y, so -a 12:60 adds a field containing 12 arrays of 60 integers each
-f <arg>, --numfields <arg> Number of top-level fields in test documents. After the first 3, every third is an integer, every fifth a date, and the rest are text. (default 10)
-l <arg>, --textfieldsize <arg> Length of text fields in bytes (default 30)
--depth <arg> The depth of the document created (default 0)
--location <arg> Adds a field named location filled from the provided ISO-3166-2 codes (args: comma,separated,list,of,country,codes). You can provide --location random to fill the field with random values. This field is required for zone sharding with Atlas.

Example

$ java -jar POCDriver.jar -p -a 3:4
MongoDB Proof Of Concept - Load Generator
{
  "_id": {
    "w": 1,
    "i": 12345678
  },
  "fld0": 195727,
  "fld1": {
    "$date": "1993-11-20T04:21:16.218Z"
  },
  "fld2": "Stet clita kasd gubergren, no ",
  "fld3": "rebum. Stet clita kasd gubergr",
  "fld4": "takimata sanctus est Lorem ips",
  "fld5": {
    "$date": "2007-12-26T07:28:49.386Z"
  },
  "fld6": 53068,
  "fld7": "et justo duo dolores et ea reb",
  "fld8": "kasd gubergren, no sea takimat",
  "fld9": 531837,
  "arr": [
    [0,0,0,0],
    [0,0,0,0],
    [0,0,0,0]
  ]
}
$ java -jar POCDriver.jar -k 20 -i 10 -u 10 -b 20
MongoDB Proof Of Concept - Load Generator
------------------------
After 10 seconds, 20016 new documents inserted - collection has 89733 in total
1925 inserts per second since last report 99.75 % in under 50 milliseconds
3852 keyqueries per second since last report 99.99 % in under 50 milliseconds
1949 updates per second since last report 99.84 % in under 50 milliseconds
0 rangequeries per second since last report 100.00 % in under 50 milliseconds

------------------------
After 20 seconds, 53785 new documents inserted - collection has 123502 in total
3377 inserts per second since last report 99.91 % in under 50 milliseconds
6681 keyqueries per second since last report 99.99 % in under 50 milliseconds
3322 updates per second since last report 99.94 % in under 50 milliseconds
0 rangequeries per second since last report 100.00 % in under 50 milliseconds

------------------------
After 30 seconds, 69511 new documents inserted - collection has 139228 in total
1571 inserts per second since last report 99.92 % in under 50 milliseconds
3139 keyqueries per second since last report 99.99 % in under 50 milliseconds
1595 updates per second since last report 99.94 % in under 50 milliseconds
0 rangequeries per second since last report 100.00 % in under 50 milliseconds

Troubleshooting

Connecting with auth

If you are running a mongod with --auth enabled, you must pass a user and password with read/write and replSetGetStatus privileges (e.g. readWriteAnyDatabase and clusterMonitor roles).
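For example, such a user could be created by sending a createUser command to the admin database (a sketch: the user name and password below are placeholders; with the Python driver you would submit this document via db.command, or via db.runCommand in the mongo shell):

```python
# Build the MongoDB "createUser" command document granting the privileges
# POCDriver needs. The user name and password are placeholders.
create_user_cmd = {
    "createUser": "pocUser",   # hypothetical user name
    "pwd": "changeit",         # placeholder password
    "roles": ["readWriteAnyDatabase", "clusterMonitor"],
}

# The command name must be the first key in the document.
print(list(create_user_cmd))  # ['createUser', 'pwd', 'roles']
```

You would then connect with something like -c "mongodb://pocUser:changeit@host:27017/?authSource=admin" (host and credentials being placeholders).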

Connecting with TLS/SSL

If you are using TLS/SSL, then make sure to have the certificates and keys added to the Java keystore.

Add the CA certificate to the Java trust store (cacerts):

cd $JAVA_HOME/lib/security
keytool -import -trustcacerts -file /path/to/mongodb/ca.crt -keystore ./cacerts -storepass changeit

You need the client certificate and key in PKCS12 format. If you have them in PEM format, you can convert them with openssl like so:

# The cert & key must be both in the same file. You can combine them like so:
cat /path/to/mongodb/tls.pem /path/to/mongodb/tls-key.pem > /tmp/tls-cert-and-key.pem

openssl pkcs12 -export -out /tmp/mongodb.pkcs12 -in /tmp/tls-cert-and-key.pem
# When prompted by the openssl command, enter "changeit" as the password (without quotes)

When running POCDriver, supply the javax.net.ssl properties, and set the ?ssl=true field on the connection string:

java \
  -Djavax.net.ssl.trustStore="$JAVA_HOME/lib/security/cacerts" \
  -Djavax.net.ssl.trustStorePassword="changeit" \
  -Djavax.net.ssl.keyStore="/tmp/mongodb.pkcs12" \
  -Djavax.net.ssl.keyStorePassword="changeit" \
  -jar ./bin/POCDriver.jar \
  --host "mongodb://localhost:27017/?ssl=true"


Issues

Questions regarding CLI arguments

Thanks for creating this Tool.

I have a few questions regarding its CLI arguments/parameters:

  1. What exactly does the parameter -i 50 do in this case? What is meant by -i, --inserts Ratio of insert operations (default 100)? Can you explain what "ratio" means in more detail?
java -jar POCDriver.jar -d 120 -i 50 -l 20 -f 40 -c mongodb://admin:[email protected]/myLoadTestDatabase?authSource=admin
  2. I would like to insert 100 documents, then update and read them. Additionally, I want to read 50 other documents in parallel with the first operation. Which combination of CLI arguments would be appropriate for that case?

  3. What exactly does the parameter -r 3 do in this case?

java -jar POCDriver.jar -d 120 -v iuk -r 3 -l 20 -f 50 -x 5 -c mongodb://admin:[email protected]/myLoadTestDatabase?authSource=admin

I guess that the above command should do the following:

  • -v iuk => insert, update and fetch a record as many times as possible per second
  • -r 3, do additional range queries on the data which has been inserted

My problem is that POCDriver does not do any range queries at all; see the output:

MongoDB Proof Of Concept - Load Generator
Worker thread 0 Started.
Worker thread 1 Started.
Worker thread 2 Started.
Worker thread 3 Started.
------------------------
After 12 seconds, 171 new records inserted - collection has 14705 in total 
14 inserts per second since last report 0.00 % in under 50 milliseconds
55 keyqueries per second since last report 98.66 % in under 50 milliseconds
56 updates per second since last report 75.29 % in under 50 milliseconds
0 rangequeries per second since last report 100.00 % in under 50 milliseconds

------------------------
After 22 seconds, 1197 new records inserted - collection has 15731 in total 
102 inserts per second since last report 0.00 % in under 50 milliseconds
81 keyqueries per second since last report 97.91 % in under 50 milliseconds
82 updates per second since last report 21.24 % in under 50 milliseconds
0 rangequeries per second since last report 100.00 % in under 50 milliseconds

What did I do wrong, or what did I not understand correctly?

I hope I have formulated my problem in an understandable manner and that you can answer the questions.

Best regards.

Multiple Java POCDriver client on the same MongoDB engine

Hello,
For a scalability test, I would like to launch multiple POCDriver Java clients against the same MongoDB engine.
Unfortunately, I get this error on the 2nd POCDriver client I launch:
"state should be: writes is not an empty list"
Is your app designed to be launched multiple times against the same MongoDB?
Thanks
Julien

Problems using MongoDB Atlas with POCDriver

I'm attempting to run the POCDriver against an Atlas cluster as follows:

java -jar bin/POCDriver.jar -c "mongodb://user:[email protected]:27017,host1.mongodb.net:27017,host2.mongodb.net:27017/?ssl=true&replicaSet=rs0" -e

(valid credentials were replaced with anonymised data above)

Unfortunately I'm getting an error:

Exception in thread "main" com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ... Client view of cluster state is {type=REPLICA_SET, servers=[{address=...mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative names matching IP address ... found}, caused by {java.security.cert.CertificateException: No subject alternative names matching IP address ... found}}, {address=...mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative names matching IP address ... found}, caused by {java.security.cert.CertificateException: No subject alternative names matching IP address ... found}}, {address=...mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative names matching IP address ... found}, caused by {java.security.cert.CertificateException: No subject alternative names matching IP address ... found}}]

This looks like an SSL issue, but I'm unsure what's going on. I'm using the latest version of POCDriver and there is an ssl option supplied in the connection string...

Is this a bug?

Note I've also tried other options, such as the SRV connection string variant and reverting to an older .jar file, but I'm getting nowhere.

If this is just a user error, is it possible to update the README with an example of how to successfully connect to Atlas?

Thanks!

Doing bulk reads within a single call?

Does anyone know how I would do the following with the POC driver?

  • Have around 300 million to 500 million documents in a collection.
  • Around 10,000 to 20,000 servers connecting to Mongo at once.
  • Each of those servers passes in about 100,000 to 200,000 unique keys.
  • Mongo projects from the document and returns values for each of those keys to each of those servers.
  • They want a target of half a second response time per server. Load is very bursty where all servers are requesting resources at once.

I’m thinking of generating the 500 million documents with a JSON generator first… I think I can simulate the 20k servers with the -t option… however, I don’t see any option to do a random find() 200k times within one call.

update run producing "Error: / by zero"

A run using these parameters

java -jar POCDriver.jar -c $MURI -n "poc.poc" -d $(echo 10*60 | bc) -t 20 -i 0 -u 3 | tee update_out2.txt

results in a long delay before any updates are reported, followed by the "Error: / by zero" message.

I see these reports of performance

------------------------
After 348 seconds (22:40:27), 0 new documents inserted - collection has 10,512,000 in total
0 inserts per second since last report
        100.00 % in under 50 milliseconds
0 keyqueries per second since last report
        100.00 % in under 50 milliseconds
102,553 updates per second since last report
        99.52 % in under 50 milliseconds
0 rangequeries per second since last report
        100.00 % in under 50 milliseconds

Yet, at the end the updates per second is reported to be much less

------------------------
After 600 seconds, 0 new documents inserted - collection has 10512000 in total
0 inserts per second on average
0 keyqueries per second on average
45868 updates per second on average
0 rangequeries per second on average

I suspect that, besides the error, the average updates per second is being thrown off by the long preparation time.

keyqueries is zero all the time.

Run command: java -jar bin/POCDriver.jar -k 40 -r 10 -u 20 -i 30

Part of the result shows:

After 1190 seconds, 688890 new records inserted - collection has 131025315 in total
692 inserts per second since last report 99.71 % in under 50 milliseconds
0 keyqueries per second since last report 100.00 % in under 50 milliseconds
464 updates per second since last report 100.00 % in under 50 milliseconds
236 rangequeries per second since last report 90.16 % in under 50 milliseconds

I checked the code. It looks like all key queries return null (not found).

What is wrong?

Setting updatefields to a value higher than 1 will update all fields

Running POCDriver without the --updatefields flag or with --updatefields 1 works as intended:

// running: java -jar POCDriver.jar -c "MYCLUSTERURI" -n "test.updates" -i 0 -u 100
// results in the below oplog entry
{
  op: 'u',
  ns: 'test.json',
  ui: UUID("938a8025-dfbd-4578-804f-fba27c0f7c7f"),
  o: { '$v': 2, diff: { u: { fld0: Long("984948") } } },
  o2: { _id: { w: 2, i: 6132 } },
  ts: Timestamp({ t: 1665674985, i: 1 }),
  t: Long("554"),
  v: Long("2"),
  wall: ISODate("2022-10-13T15:29:45.008Z")
}

However, when setting --updatefields 2 or any value higher than 1, it will update a number of fields equal to the number of fields in the doc (-f flag, default = 10):

// running: java -jar POCDriver.jar -c "MYCLUSTERURI" -n "test.updates" -i 0 -u 100 --updatefields 2
// results in the below oplog entry
{
  op: 'u',
  ns: 'test.updates',
  ui: UUID("938a8025-dfbd-4578-804f-fba27c0f7c7f"),
  o: {
    '$v': 2,
    diff: {
      u: {
        fld0: Long("328002"),
        fld1: ISODate("2019-07-27T07:24:01.661Z"),
        fld2: 'dolor sit amet. Lorem ipsum',
        fld3: 'Lorem ipsum dolor sit amet,',
        fld4: 'justo duo dolores et ea',
        fld5: ISODate("2021-08-22T01:31:34.842Z"),
        fld6: Long("1867063"),
        fld7: 'Stet clita kasd gubergren, no',
        fld8: 'nonumy eirmod tempor invidunt',
        fld9: Long("1255035")
      }
    }
  },
  o2: { _id: { w: 0, i: 16357 } },
  ts: Timestamp({ t: 1665675029, i: 1 }),
  t: Long("554"),
  v: Long("2"),
  wall: ISODate("2022-10-13T15:30:29.000Z")
}

I'm expecting it to update only two fields with the above command. Even worse, when using the -f flag, it will update as many fields as the -f flag is set to. In my case, I was trying to generate large documents (~33 KB), so I used -f 900 to generate/insert the data and still had this flag set when doing the updates, causing it to update all 900 fields whenever --updatefields was set to 2 or any higher value.

What command does POCDriver run at the backend?

I'm doing insert testing using a custom script with insert_many in batches of 1000. The results differ from POCDriver's results. I wanted to know what command POCDriver runs at the backend for inserts.

Add the timestamp for each line to the output file using -o

Hi John,

I modified the code of the POCTestReporter class slightly so that it adds a timestamp to each line in the output file. This is useful for graphing the results from the output file with time-series tools like Kibana or similar.

The result is the following:

2016-02-11T15:25:20,10,44544,inserts,4342,0.00,keyqueries,0,100,updates,0,100,rangequeries,0,100
2016-02-11T15:25:30,20,98816,inserts,5464,0.00,keyqueries,0,100,updates,0,100,rangequeries,0,100
2016-02-11T15:26:22,10,19968,inserts,1945,0.00,keyqueries,0,100,updates,0,100,rangequeries,0,100

I attached the modified jar including this change.

Feel free to ping me if you have any questions.

Thanks!
Marco

POCDriver.jar.zip

Error: Prematurely reached end of stream during Replicaset failover

Hi, I have an Atlas (4.0.9, M10..M30) 3-node replica set, and when I perform a "failover test" or a node upgrade, which essentially restarts the primary and triggers an election, I get the following error:
Error: Prematurely reached end of stream
and POCDriver exits, though sometimes it survives and continues running. Here's what I use to run POCDriver:
java -jar POCDriver.jar -k 20 -i 10 -u 10 -b 20 -c "mongodb+srv://username:[email protected]/test?retryWrites=true"
I've already tried playing with different connection settings (maxIdleTimeMS=100000&autoReconnect=true&connectTimeoutMS=300000) but the result is the same: sometimes POCDriver survives a failover, sometimes not. It looks like the worker threads die one by one until none are left. Is there a way for POCDriver to maintain the number of threads if a thread dies?

Getting Shard not found for server error

I have a three shard cluster with mongos on those nodes as well but config servers on three dedicated servers all running 3.2.1.

Running the POC driver with the below command line, pointing at mongos on port 27017, gives me an error, although the mongo shell connects fine to mongos:

$ mongo
MongoDB shell version: 3.2.1
connecting to: test
mongos>

$ java -jar ./bin/POCDriver.jar -e
MongoDB Proof Of Concept - Load Generator
Exception in thread "main" com.mongodb.MongoCommandException: Command failed with error 70: 'Shard not found for server: configRS/ip-172-31-23-231.us-west-2.compute.internal:27019,ip-172-31-23-232.us-west-2.compute.internal:27019,ip-172-31-23-233.us-west-2.compute.internal:27019' on server localhost:27017. The full response is { "code" : 70, "ok" : 0.0, "errmsg" : "Shard not found for server: configRS/ip-172-31-23-231.us-west-2.compute.internal:27019,ip-172-31-23-232.us-west-2.compute.internal:27019,ip-172-31-23-233.us-west-2.compute.internal:27019" }
at com.mongodb.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:86)
at com.mongodb.connection.CommandProtocol.execute(CommandProtocol.java:120)

shards.count() in LoadRunner.java errors out. This needs further debugging.

shards document looks like the below:
Document{{_id=shard_0, host=shard_0/ip-172-31-23-234.us-west-2.compute.internal:27000}}
Document{{_id=shard_1, host=shard_1/ip-172-31-23-235.us-west-2.compute.internal:27000}}
Document{{_id=shard_2, host=shard_2/ip-172-31-23-236.us-west-2.compute.internal:27000}}

However, I found that the below shard configuration works fine:
Document{{_id=shard0, host=shard0/ip-172-31-23-234.us-west-2.compute.internal:28010}}
Document{{_id=shard1, host=shard1/ip-172-31-23-234.us-west-2.compute.internal:28013}}
Document{{_id=shard2, host=shard2/ip-172-31-23-234.us-west-2.compute.internal:28016}}

Both shards are configured as CSRS.

Running queries?

Hi,
I would like to use this tool to stress test our DB; however, the way we use it is with queries rather than GET operations (get by the document key).
Do you plan to support queries?

Query on POCDriver

This is not an issue; I have a query regarding the tool. If I run an insert-only operation (java -jar POCDriver.jar -i 100 -d 600) for 10 minutes, does POCDriver load records continuously?

In my test run, after a few seconds I see that no records are being loaded, and loading starts again after some time. Is this expected behaviour?

After 140 seconds, 2451968 new records inserted - collection has 23130882 in total
23654 inserts per second since last report 87.05 % in under 50 milliseconds
0 keyqueries per second since last report 100.00 % in under 50 milliseconds
0 updates per second since last report 100.00 % in under 50 milliseconds
0 rangequeries per second since last report 100.00 % in under 50 milliseconds


After 150 seconds, 2451968 new records inserted - collection has 23130882 in total
0 inserts per second since last report 87.05 % in under 50 milliseconds
0 keyqueries per second since last report 100.00 % in under 50 milliseconds
0 updates per second since last report 100.00 % in under 50 milliseconds
0 rangequeries per second since last report 100.00 % in under 50 milliseconds
...
After 190 seconds, 2540032 new records inserted - collection has 23218946 in total
8806 inserts per second since last report 86.92 % in under 50 milliseconds
0 keyqueries per second since last report 100.00 % in under 50 milliseconds
0 updates per second since last report 100.00 % in under 50 milliseconds
0 rangequeries per second since last report 100.00 % in under 50 milliseconds

Create an official, Automated Build image on Docker Hub

Docker Hub allows you to create Automated Builds from source: https://docs.docker.com/docker-hub/builds/
It would add another packaging/distribution/installation method, with builds triggered automatically on each commit. It also lets you create different image tags from git tags and branches.
Also, documentation could easily include a canonical docker run statement to quickly spin up a POCDriver instance with just a single command.

By making the image build via an Automated Build, you give the resulting image verifiability and auditability, and the build is fully automatic. You can have the latest image tag built from HEAD and individual image tags built from git's release tags.
Some people avoid non-verifiable (manually uploaded) images for security and traceability reasons.

The docker search command clearly displays Automated Builds when listing images:

$ docker search pocdriver
NAME                  DESCRIPTION   STARS     OFFICIAL   AUTOMATED
pataquets/pocdriver   POCDriver     0                    [OK]
manfontan/pocdriver                 0                    

Just a free Docker Hub account and a quick setup would do. Ping me if you need help.

Failing with MongoDB 5.0 (Unsupported OP_QUERY)

Hi !
Thank you for this tool @johnlpage, it has been useful. However, it is now failing when used with the latest version of MongoDB on the master branch (d356d2ed92812367bb2f61495f2ec064cad5a021).

Error:

Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "../many-collection-test.py", line 589, in populate_collections_worker
    bulk.execute()
  File "/usr/local/lib/python3.6/dist-packages/pymongo/bulk.py", line 675, in execute
    return self.__bulk.execute(write_concern)
  File "/usr/local/lib/python3.6/dist-packages/pymongo/bulk.py", line 493, in execute
    return self.execute_command(sock_info, generator, write_concern)
  File "/usr/local/lib/python3.6/dist-packages/pymongo/bulk.py", line 319, in execute_command
    run.ops, True, self.collection.codec_options, bwc)
  File "/usr/local/lib/python3.6/dist-packages/pymongo/message.py", line 581, in write_command
    reply = self.sock_info.write_command(request_id, msg)
  File "/usr/local/lib/python3.6/dist-packages/pymongo/pool.py", line 548, in write_command
    helpers._check_command_response(result)
  File "/usr/local/lib/python3.6/dist-packages/pymongo/helpers.py", line 210, in _check_command_response
    raise OperationFailure(msg % errmsg, code, response)
pymongo.errors.OperationFailure: Unsupported OP_QUERY command: insert
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "../many-collection-test.py", line 754, in <module>
    populate_collections(num_collections, docs_per, multiprocessing.cpu_count())
  File "../many-collection-test.py", line 614, in populate_collections
    p.map(populate_collections_worker, pop_workers)
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 266, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 644, in get
    raise self._value
pymongo.errors.OperationFailure: Unsupported OP_QUERY command: insert

According to the docs, seems that OP_QUERY has been deprecated (see https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/#request-opcodes)
