

License: GNU General Public License v3.0



Drill

Drill is an HTTP load testing application written in Rust. The main goal of this project is to build a really lightweight tool, as an alternative to others that require a JVM and other dependencies.

You can write benchmark files, in YAML format, describing everything you want to test.

Its syntax was inspired by Ansible because it is easy to use and extend.

Here is an example for benchmark.yml:

---

concurrency: 4
base: 'http://localhost:9000'
iterations: 5
rampup: 2

plan:
  - name: Include comments
    include: comments.yml

  - name: Fetch users
    request:
      url: /api/users.json

  - name: Fetch organizations
    request:
      url: /api/organizations

  - name: Fetch account
    request:
      url: /api/account
    assign: foo

  - name: Fetch manager user
    request:
      url: /api/users/{{ foo.body.manager_id }}

  - name: Assert request response code
    assert:
      key: foo.status
      value: 200

  - name: Assign values
    assign:
      key: bar
      value: "2"

  - name: Assert values
    assert:
      key: bar
      value: "2"

  - name: Fetch user from assign
    request:
      url: /api/users/{{ bar }}

  - name: Fetch some users
    request:
      url: /api/users/{{ item }}
    with_items:
      - 70
      - 73
      - 75

  - name: Tagged user request
    request:
      url: /api/users/70
    tags:
      - tag_user

  - name: Fetch some users by hash
    request:
      url: /api/users/{{ item.id }}
    with_items:
      - { id: 70 }
      - { id: 73 }
      - { id: 75 }

  - name: Fetch some users by range, index {{ index }}
    request:
      url: /api/users/{{ item }}
    with_items_range:
      start: 70
      step: 5
      stop: 75

  - name: Fetch some users from CSV, index {{ index }}
    request:
      url: /api/users/contacts/{{ item.id }}
    with_items_from_csv: ./fixtures/users.csv
    shuffle: true

  - name: POST some crafted JSONs stored in CSV, index {{ index }}
    request:
      url: /api/transactions
      method: POST
      body: '{{ item.txn }}'
      headers:
        Content-Type: 'application/json'
    with_items_from_csv:
      file_name: ./fixtures/transactions.csv
      quote_char: "\'"

  - name: Fetch no relative url
    request:
      url: http://localhost:9000/api/users.json

  - name: Interpolate environment variables
    request:
      url: http://localhost:9000/api/{{ EDITOR }}

  - name: Support for POST method
    request:
      url: /api/users
      method: POST
      body: foo=bar&arg={{ bar }}

  - name: Login user
    request:
      url: /login?user=example&password=3x4mpl3

  - name: Fetch counter
    request:
      url: /counter
    assign: memory

  - name: Fetch counter
    request:
      url: /counter
    assign: memory

  - name: Fetch endpoint
    request:
      url: /?counter={{ memory.body.counter }}

  - name: Reset counter
    request:
      method: DELETE
      url: /

  - name: Exec external commands
    exec:
      command: "echo '{{ foo.body }}' | jq .phones[0] | tr -d '\"'"
    assign: baz

  - name: Custom headers
    request:
      url: /admin
      headers:
        Authorization: Basic aHR0cHdhdGNoOmY=
        X-Foo: Bar
        X-Bar: Bar {{ memory.headers.token }}

  - name: One request with a random item
    request:
      url: /api/users/{{ item }}
    with_items:
      - 70
      - 73
      - 75
    shuffle: true
    pick: 1

  - name: Three requests with random items from a range
    request:
      url: /api/users/{{ item }}
    with_items_range:
      start: 1
      stop: 1000
    shuffle: true
    pick: 3

As you can see, interpolations can be used in several different ways. This lets you specify a benchmark with different requests and dependencies between them.
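Conceptually, each {{ ... }} placeholder is resolved against a context of previously assigned values. A minimal Python model of that substitution (illustrative only, not drill's actual implementation; names are made up):

```python
import re

def interpolate(template, context):
    """Replace {{ path.to.value }} with values looked up in a nested context."""
    def lookup(match):
        path = match.group(1).strip()
        value = context
        for part in path.split("."):
            value = value[part]  # raises KeyError if the variable is unknown
        return str(value)
    return re.sub(r"\{\{([^}]+)\}\}", lookup, template)

context = {"foo": {"body": {"manager_id": 42}}, "bar": "2"}
print(interpolate("/api/users/{{ foo.body.manager_id }}", context))  # /api/users/42
print(interpolate("/api/users/{{ bar }}", context))                  # /api/users/2
```

This also models why an unknown variable is fatal: the lookup has no fallback, which mirrors the "Unknown ... variable!" panics discussed in the issues below.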

If you want to know more about the benchmark file syntax, read this

Install

Right now, the easiest way to get drill is to go to the latest release page and download the binary file for your platform.

Another way to install drill, if you have Rust available in your system, is with cargo:

cargo install drill
drill --benchmark benchmark.yml --stats

or download the source code and compile it:

git clone git@github.com:fcsonline/drill.git && cd drill
cargo build --release
./target/release/drill --benchmark benchmark.yml --stats

Dependencies

OpenSSL is needed in order to compile Drill, whether it is through cargo install or when compiling from source with cargo build.

Depending on your platform, the name of the dependencies may differ.

Linux

Install libssl-dev and pkg-config packages with your favorite package manager (if libssl-dev is not found, try other names like openssl or openssl-devel).

macOS

First, install the Homebrew package manager.

And then install openssl with Homebrew.

Windows

First, install vcpkg.

And then run vcpkg install openssl:x64-windows-static-md.

Demo

(demo animation)

Features

This is the list of all features supported by the current version of drill:

  • Concurrency: run your benchmarks choosing the number of concurrent iterations.
  • Multi iterations: specify the number of iterations you want the benchmark to run.
  • Ramp-up: specify the amount of time, in seconds, that it will take drill to start all iterations.
  • Delay: introduce controlled delay between requests. Example: delay.yml
  • Dynamic urls: execute requests with dynamic interpolations in the url, like /api/users/{{ item }}
  • Dynamic headers: execute requests with dynamic headers. Example: headers.yml
  • Interpolate environment variables: set environment variables, like /api/users/{{ EDITOR }}
  • Executions: execute remote commands with test plan data.
  • Assertions: assert values during the test plan. Example: iterations.yml
  • Request dependencies: create dependencies between requests with assign and url interpolations.
  • Split files: organize your benchmarks in multiple files and include them.
  • CSV support: read CSV files and build N requests, filling dynamic interpolations with CSV data.
  • HTTP methods: build requests with different HTTP methods like GET, POST, PUT, PATCH, HEAD or DELETE.
  • Cookie support: create benchmarks with sessions, because cookies are propagated between requests.
  • Stats: get nice statistics about all the requests. Example: cookies.yml
  • Thresholds: compare the current benchmark performance against a stored session and fail if a threshold is exceeded.
  • Tags: filter test plan items by tags.
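The ramp-up option can be read as spreading iteration start times evenly over the configured window. A rough sketch of that idea in Python (drill's actual scheduling may differ):

```python
def start_delays(rampup_seconds, iterations):
    """Spread iteration start times evenly across the ramp-up window."""
    if iterations <= 1:
        return [0.0]
    step = rampup_seconds / iterations
    return [round(i * step, 3) for i in range(iterations)]

# rampup: 2 with iterations: 5, as in the example benchmark above
print(start_delays(2, 5))  # [0.0, 0.4, 0.8, 1.2, 1.6]
```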

Test it

Go to the example directory and you'll find a README explaining how to test it in a safe environment.

Disclaimer: we strongly recommend not running intensive benchmarks against production environments.

Command line interface

Full list of CLI options, available via drill --help:

drill 0.8.3
HTTP load testing application written in Rust inspired by Ansible syntax

USAGE:
    drill [FLAGS] [OPTIONS] --benchmark <benchmark>

FLAGS:
    -h, --help                      Prints help information
        --list-tags                 List all benchmark tags
        --list-tasks                List benchmark tasks (executes --tags/--skip-tags filter)
    -n, --nanosec                   Shows statistics in nanoseconds
        --no-check-certificate      Disables SSL certification check. (Not recommended)
    -q, --quiet                     Disables output
        --relaxed-interpolations    Do not panic if an interpolation is not present. (Not recommended)
    -s, --stats                     Shows request statistics
    -V, --version                   Prints version information
    -v, --verbose                   Toggle verbose output

OPTIONS:
    -b, --benchmark <benchmark>    Sets the benchmark file
    -c, --compare <compare>        Sets a compare file
    -r, --report <report>          Sets a report file
        --skip-tags <skip-tags>    Tags to exclude
        --tags <tags>              Tags to include
    -t, --threshold <threshold>    Sets a threshold value in ms amongst the compared file
    -o, --timeout <timeout>        Set timeout in seconds for all requests

Roadmap

  • Complete and improve the interpolation engine
  • Add support for writing output to a file

Donations

If you appreciate all the work done in this project, a small donation is always welcome:

"Buy Me A Coffee"

Contribute

This project started as a side project to learn Rust, so I'm sure it is full of mistakes and areas to be improved. If you think you can tweak the code to make it better, I'd really appreciate a pull request. ;)



drill's Issues

Allow for environment variables to be interpolated with test plans

It would be really neat if we could add environment variables to the yaml plans.

For example you could export some vars:

$ export BASE_URL='http://localhost:8000'
$ export CONCURRENCY=50
$ export ITERATIONS=100

and in the yaml plan make it look at the ENV vars:

---

concurrency: {{ $CONCURRENCY }} 
base: {{ $BASE_URL }}
iterations: {{ $ITERATIONS }}

plan:
  - name: homepage
    request:
      url: /
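The proposed {{ $VAR }} expansion could be modeled as a pre-processing pass over the plan text before YAML parsing. A hypothetical Python sketch (the syntax and error message here are illustrative, taken from the proposal above, not from drill):

```python
import os
import re

def expand_env(text, env=os.environ):
    """Replace {{ $NAME }} with the value of the NAME environment variable."""
    def sub(match):
        name = match.group(1)
        if name not in env:
            raise KeyError(f"Unknown '{name}' environment variable!")
        return env[name]
    return re.sub(r"\{\{\s*\$(\w+)\s*\}\}", sub, text)

plan = "concurrency: {{ $CONCURRENCY }}\nbase: {{ $BASE_URL }}"
print(expand_env(plan, {"CONCURRENCY": "50", "BASE_URL": "http://localhost:8000"}))
```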

Expected HTTP status codes / redirects

While giving drill a try, I noticed that it follows redirects and reports the resulting final page. I had been planning to have a sequence where it would do something like /, /login, etc. and was hoping to have a way to either say “Don't follow redirects on this URL” or “Expect this page to return a 403” so I can exercise the unauthenticated endpoints as well as their targets.

Question: how to pass a large payload to request body

Hi again!

I am trying to pass a large payload to a PUT request.

I tried converting it to YAML and putting it into the body, but when my target API is called, it does not receive any body.

Here is my scenario:

---
threads: 1
base: 'http://localhost:8080'
iterations: 1

plan:
  - name: Update Configuration
    request:
      url: /api/channel/v1/custom/integration/configurations/xyz
      method: PUT
      headers:
        Cookie: "{{ authCookie }}"
        Content-Type: "application/json"
      body:
        domainValidateEndpoint: http://localhost:3000
        crestBaseConfiguration:
          host: http://localhost:3000
          azureHost: http://localhost:3000
          azureManagementHost: http://localhost:3000
          azureResource: http://localhost:3000
          partnerBillingDay: 1
          partnerCenterApiHost: https://api.partnercenter.microsoft.com
          partnerCenterAuthHost: https://login.windows.net
          partnerCenterResource: https://api.partnercenter.microsoft.com
          azureProvisioningDelayInMinutes: 180
          credentialsSetupStep: COMPLETED
          apiSetupStep: COMPLETED
          apiAuthorizationStep: COMPLETED
          configuredByAppDirect: false
          crestConfigurations:
            USA:
              azureConfig:
                clientId: someId
                clientSecret: someSecret
                managementUsername: [email protected]
                managementPassword: someSecret
                grantType: PASSWORD
                encrypted: false
              partnerCenterConfiguration:
                clientId: 6c9f572a-d69a-4e08-b4e5-194696492374
                applicationDomain: dummy.onmicrosoft.com
              resellerCaid: 6c9f572a-d69a-4e08-b4e5-194696492374
              resellerDomain: dummy.onmicrosoft.com
              resellerPurchaseEnabled: true
          cspAzureConfigValid: true
        mosiEnabled: false
        crestEnabled: true
        azureEnabled: true

assign String from JSON response surrounded by double quotes

When assigning a JSON response, string values are surrounded by double quotes, making them unusable in a request url.

e.g. assigning { "id": "abc" } to test, then using test.id in the next request as url: /test/{{ test.id }}, will generate the request http://example.com/test/"abc" instead of http://example.com/test/abc

reproducible example, benchmark file

---

threads: 1
base: 'https://jsonplaceholder.typicode.com'
iterations: 1
rampup: 0

plan:
  - name: todos1
    request:
      url: /todos/1
    assign: todo
  - name: example
    request:
      url: http://example.com/{{ todo.title }}

result

Threads 1
Iterations 1
Rampup 0
Base URL https://jsonplaceholder.typicode.com

todos1                    https://jsonplaceholder.typicode.com/todos/1 200 OK 246ms
example                   http://example.com/"delectus aut autem" 404 Not Found 345ms

(this example doesn't really make sense as title has spaces, but it's to illustrate with publicly available api)

versions:
rustc 1.35.0 (3c235d560 2019-05-20)
drill 0.5.0
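This behaviour typically arises when the assigned JSON value is serialized back to text instead of taking the raw string inside it. A small Python illustration of the difference (json stands in for serde_json here; this is not drill's code):

```python
import json

body = json.loads('{"id": "abc"}')

# Serializing the JSON value keeps the quotes: this is what yields /test/"abc"
quoted = json.dumps(body["id"])
print("/test/" + quoted)      # /test/"abc"

# Using the underlying string gives the expected URL
print("/test/" + body["id"])  # /test/abc
```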

Body will be empty when executing PATCH methods.

First, thank you for this great load testing tool!
I like the assign feature, which makes drill and its template really flexible.

I noticed that the current version of drill does not send the body if I use the PATCH method.
Tested with this template.

test.yaml

---
concurrency: max
iterations: 1

plan:
    - name: Assign access token
      assign:
        key: access_token
        value: MY-ACCESS-TOKEN-HERE

    - name: Do something with access token
      request:
        url: http://127.0.0.1:3000/items/{{ item }}
        method: PATCH
        body: '{"params": {"name": "NAME"} }'
        headers:
          Authorization: Bearer {{ access_token }}
          Content-Type: application/json
      with_items:
        - "ITEM-ID-1"
        - "ITEM-ID-2"

Used server for debugging.

main.py

from http.server import BaseHTTPRequestHandler, HTTPServer
import logging


class Handler(BaseHTTPRequestHandler):
    def _set_response(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()

    def do_PATCH(self):
        content_length = int(self.headers['Content-Length']) # <--- Gets the size of data
        post_data = self.rfile.read(content_length) # <--- Gets the data itself
        logging.info("PATCH request,\nPath: %s\nHeaders:\n%s\n\nBody:\n%s\n",
                     str(self.path), str(self.headers), post_data.decode('utf-8'))

        self._set_response()
        self.wfile.write("PATCH request for {}".format(self.path).encode('utf-8'))


def run(server_class=HTTPServer, handler_class=Handler, port=3000):
    logging.basicConfig(level=logging.INFO)
    server_address = ('', port)
    httpd = server_class(server_address, handler_class)
    logging.info('Starting httpd...\n')
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()
    logging.info('Stopping httpd...\n')


if __name__ == '__main__':
        run()

After running this debugging server, I executed drill with the above template.
But the server didn't receive the request body or its Content-Length header, so it raised an exception.

# Run server
$ python3 main.py
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 52838)
Traceback (most recent call last):
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socketserver.py", line 316, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socketserver.py", line 347, in process_request
    self.finish_request(request, client_address)
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socketserver.py", line 360, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socketserver.py", line 720, in __init__
    self.handle()
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/server.py", line 427, in handle
    self.handle_one_request()
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/server.py", line 415, in handle_one_request
    method()
  File "main.py", line 17, in do_PATCH
    content_length = int(self.headers['Content-Length']) # <--- Gets the size of data
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

If I add the Content-Length header manually, the server does not raise this exception, but drill shows a timeout error like this:

Error connecting 'http://127.0.0.1:3000/items/ITEM-ID-1': reqwest::Error { kind: Request, url: "http://127.0.0.1:3000/items/ITEM-ID-1", source: TimedOut }

I think commit 4ad7822 introduced this change: the body is allowed for the POST method only.
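Following the reporter's diagnosis (body attached only for POST), a fix would attach the body for every method that may carry one. A hedged sketch of the idea, not drill's actual code:

```python
BODY_METHODS = {"POST", "PUT", "PATCH", "DELETE"}

def build_request(method, url, body=None):
    """Attach the body for any method that may carry one, not just POST."""
    request = {"method": method, "url": url}
    if body is not None and method.upper() in BODY_METHODS:
        request["body"] = body
    return request

print(build_request("PATCH", "/items/1", '{"params": {"name": "NAME"}}'))
```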

Slicing data and assign to specific thread/task

Given the following benchmark file:

base: 'myUrl'
threads: 8

plan:
  - name: CRIANDO UM CLIENTE {{ thread }} {{ item.txn }}
    request:
      url: /api/v1/customer/create
      method: POST
      body: '{{ item.txn }}'
      headers:
        Content-Type: 'application/json'
    with_items_from_csv:
      file_name: ../fixtures/cadastro-data.csv
      quote_char: "\'"

drill is calling the endpoint with the same row for all threads. Is there any way to avoid this?
Is it possible to slice the data and assign a range of rows to each thread?
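Slicing the CSV rows across threads, as asked for here, amounts to chunking the row list so each thread gets its own contiguous range. A sketch of the idea (illustrative only; drill does not currently do this):

```python
def slice_rows(rows, num_threads):
    """Assign each thread a contiguous slice of the CSV rows."""
    chunk = -(-len(rows) // num_threads)  # ceiling division
    return [rows[i * chunk:(i + 1) * chunk] for i in range(num_threads)]

rows = [f"row{i}" for i in range(8)]
print(slice_rows(rows, 4))  # [['row0', 'row1'], ['row2', 'row3'], ['row4', 'row5'], ['row6', 'row7']]
```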

Env variables interpolation doesn't work inside 'with_items_from_csv'

It looks like interpolation inside with_items_from_csv does not work.

The config contains:
with_items_from_csv: /fixtures/{{ RECIPE_IDS_FILE }}
and this error appeared:
thread 'main' panicked at 'couldn't open /fixtures/{{ RECIPE_IDS_FILE }}: No such file or directory (os error 2)'

Expected:
Unknown 'RECIPE_IDS_FILE' variable! if RECIPE_IDS_FILE is not defined.
Or reading the csv file successfully if RECIPE_IDS_FILE is defined.

Benchmark panics if endpoint which assigns a value times out

I've encountered an issue where an endpoint whose response I assign to a variable (which a later request depends on) times out. This causes the whole benchmark to panic with the message:

Error connecting 'http://127.0.0.1:8000/endpoint/': reqwest::Error { kind: Request, url: "http://127.0.0.1:8000/endpoint/", source: TimedOut }
Error connecting 'http://127.0.0.1:8000/endpoint/': reqwest::Error { kind: Request, url: "http://127.0.0.1:8000/endpoint/", source: TimedOut }
thread 'tokio-runtime-worker' panicked at 'Unknown 'resp.body.id' variable!', src\interpolator.rs:39:11
(the panic above is repeated and interleaved across several tokio worker threads, so the raw output is garbled)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: JoinError::Panic(...)', src\benchmark.rs:89:106

Minimal reproducible example

Set up an actix server like this:

use actix_web::{HttpServer, App, HttpResponse, Responder, get, Result, web, post};
use serde_json::json;
use std::time::Duration;

#[get("/endpoint/")]
async fn endpoint() -> Result<impl Responder> {
    actix_rt::time::delay_for(Duration::from_secs(12)).await; // <--- NB!
    Ok(HttpResponse::Ok().json(json!({"id": 1 })))
}

#[get("/endpoint_2/{id}")]
async fn endpoint_post(id: web::Path<u32>) -> Result<impl Responder> {
    println!("Got: {}", *id);
    Ok(HttpResponse::Ok().json(json!({"status": "ok" })))
}

async fn app() -> std::io::Result<()> {
    HttpServer::new(move || {
        App::new()
        .service(endpoint)
        .service(endpoint_post)
    })
    .bind("127.0.0.1:8000")?
    .client_timeout(0)
    .run()
    .await
}

fn main() {
    let mut system = actix_rt::System::new("server1");
    match system.block_on(app()) {
        Ok(_) => println!("System exited!"),
        Err(e) => println!("{}", e),
    };
}

Cargo.toml

[dependencies]
serde_json = "1.0.53"
actix-rt = "1.1.1"
actix-web = "2.0.0"

Then run a benchmark like this:

concurrency: 10
base: 'http://127.0.0.1:8000'
iterations: 1


plan:
  - name: Get Id
    request:
      url: /endpoint/
      method: GET
      headers:
        Content-Type: 'application/json'
    assign: resp

  - name: Get Id
    request:
      url: /endpoint_2/{{resp.body.id}}
      method: GET

Suggestions

I'm really in doubt about how best to handle this, but I don't think that aborting the whole run with a panic is the right approach.

An error occurring in one endpoint might affect another call later in the plan. One option is to err both calls; another is to skip a call that depends on a returned value which turns out to be Null, so you don't report a false error on the second endpoint. It's just one "missing" call. A message could be logged during the run, or reported in the final report.

Right now the value returned from an endpoint timing out is stored as serde_json::Value::Null, see:

let body: Value = serde_json::from_str(&data).unwrap_or(serde_json::Value::Null);
let assigned = AssignedRequest {
  body,
  headers,
};
let value = serde_json::to_value(assigned).unwrap();
context.insert(key.to_owned(), value);

This value seems to be retrieved in the Interpolator which panics if the value is not found. If Interpolator::resolve returned an Option<String> we could check for that and err instead of panic if it's not present.
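A non-panicking resolver along the lines of this suggestion, sketched in Python for brevity (returning None is the analogue of Option::None when a path segment is missing):

```python
def resolve(context, path):
    """Return the interpolated value, or None if any segment is missing."""
    value = context
    for part in path.split("."):
        if not isinstance(value, dict) or part not in value:
            return None  # caller can record an error instead of panicking
        value = value[part]
    return value

context = {"resp": {"body": None}}  # timed-out response stored as Null
print(resolve(context, "resp.body.id"))  # None -> skip or err the dependent request
```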

Another solution is to let the timeout of a connection be user defined so it can be increased in these cases, but it's not really a solution to the problem since you might want an endpoint which takes more than 10 seconds to respond to err in all other cases.

Other information

Running drill built from the current master on Windows 10 using Rust stable-x86_64-pc-windows-msvc unchanged - rustc 1.43.1

RFC - JSON Flag

Sometimes I miss a flag to generate structured data to feed reports (like JSON).

The --stats flag is awesome, but --json would allow us to show that info in a more structured way.

Adding more info, like throughput consumption data for example, would make it even better.

I can help implement this feature if you find it interesting.

Programmatic API

YAML is a configuration language, not a programming language. While writing tests, sometimes I want the power of a programming language.

For example, I'd like to be able to POST n transactions to a URL, followed by a different specific POST, in order to exercise a batching interface (with a factor of n). This kind of thing is trivial to do in Locust but is harder without a programmatic API.

I doubt that replacing the YAML front end with a different embedded language is something that appeals to the Drill community. Since I already work in Rust, I'd like it if some of the internals of Drill were opened up and enhanced so that I could write some of my tests in Rust, using Drill.

RFC - Verbose Flag

I really miss some kind of flag to show more information about the request/response.

I've been learning Rust over the last year, and it's a feature I could implement myself if you find it interesting 👍

POST-ing a sequence of JSON objects in Drill

The task I'd like to perform with Drill is to have it read a CSV file filled with JSON objects representing transactions (txn) and have it POST each one to an address with the JSON as the request body. So far I've been unsuccessful.

Here is what a simplified yml file looks like:

---

threads: 1
base: 'http://localhost:8669'
iterations: 1
rampup: 2

plan:
  - name: POST transactions to standalone ledger from CSV
    request:
      url: /submit_transaction
      method: POST
      body: {{ item.txn }}
      headers:
        Content-Type: 'application/json'
    with_items_from_csv:
      file_name: ./log.csv
      quote_char: "\'"

and the log.csv file looks like:

txn
'{"operations": [{"DefineAsset": {"body": {"asset": {"code": {"val":
[114, 255, 1, 164, 16, 58, 220, 131, 26, 36, 233, 7, 83, 132, 2,
250]}, "issuer": {"key":
"RkcRIKYXs_t2CKgxFwLc2AMCKOQP2N0Q3kTuIVIbCII="}, "memo": "",
"confidential_memo": null, "updatable": false, "traceable": false}},
"pubkey": {"key": "RkcRIKYXs_t2CKgxFwLc2AMCKOQP2N0Q3kTuIVIbCII="},
"signature": "b6gc3THsoVFlBlMSUP78IxwPY37c2k-yllJJMtLhc1FW0iko0jSlSOBfjzMbPLdfsE7Dt2ky8ZsxKR9JTMvFAw=="}}],
"credentials": [], "memos": []}'

What appears to be happening is that the POST occurs with an empty body, suggesting that the interpolation of the JSON into the body has gone wrong, or I've used the wrong syntax. Any ideas?
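For what it's worth, the CSV side of this setup does parse as intended; here is the same quoting exercised with Python's csv module on a simplified txn (used only for illustration). Note also that the working README example quotes the body value as '{{ item.txn }}', while the plan above leaves it unquoted:

```python
import csv
import io
import json

# A one-column CSV whose single row is a JSON object quoted with '
data = "txn\n'{\"operations\": [], \"credentials\": [], \"memos\": []}'\n"
reader = csv.DictReader(io.StringIO(data), quotechar="'")
row = next(reader)

print(row["txn"])              # the raw JSON string, commas preserved by the quotechar
print(json.loads(row["txn"]))  # parses cleanly as JSON
```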

Cannot pass simple variable to custom header

Hi

I am trying the following very basic config:

---
threads: 1
base: 'http://localhost:8080'
iterations: 1

plan:
  - name: Setup Auth Token
    assign:
      key: "authCookie"
      value: "JSESSIONID=MYVALUE"
  - name: Get Configuration providers
    request:
      url: /api/some_api
      headers:
        Cookie: {{ authCookie }}

But somehow the value of authCookie isn't passed to the custom header; I'm getting the following error:

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', libcore/option.rs:345:21
note: Run with `RUST_BACKTRACE=1` for a backtrace.

I have a series of APIs I want to test, but I don't want to set the auth cookie on each single request. I want to assign it to a variable and have every test use that variable.

This way I can set the variable once before running my entire test plan.

RFC - Rename url keyword from .yml

There is a minor change that I think would be quite useful. Inside the plan object we have a keyword called url:

plan:
  - name: plan name
    request:
      url: /users

But its technical name is actually "uri", so renaming it would make it semantically clearer what data to put in this field.

Thanks!!

drill error regarding DNS configuration

No matter what my YAML file looks like, I always get the following error when running the latest drill:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: reqwest::Error { kind: Builder, source: Custom { kind: Other, error: "error reading DNS system conf: Error parsing resolv.conf: InvalidOption(17)" } }', src/actions/request.rs:152:148
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

The corresponding line in my /etc/resolv.conf file is:

options edns0 trust-ad

Feature Request: Option to disable validation for ssl certificates

It would be nice to make tests to (local) urls with invalid ssl certificates. Maybe an option like this:

...
plan:
  - name: Get site with invalid ssl certificate
    request:
      validate_ssl: false
      url: /

Or for general preventing:

threads: 4
base: 'https://example.com'
iterations: 5
validate_ssl: false

I don't know if this is currently possible with hyper.

"trust-ad" in /etc/resolv.conf causes error on Ubuntu 20.04.1

This looks like an issue in the resolv-conf crate, but I thought it'd be best to say something constructive for others who may see the same error.

On an up-to-date Ubuntu 20.04.1 LTS I see:

drill --no-check-certificate --benchmark load/dev.yml --stats -q
Concurrency 256
Iterations 1024
Rampup 0
Base URL https://foo.internal.net:28080

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: reqwest::Error { kind: Builder, source: Custom { kind: Other, error: "error reading DNS system conf: Error parsing resolv.conf: InvalidOption(17)" } }', /g/rich/.cargo/registry/src/github.com-1ecc6299db9ec823/drill-0.7.0/src/actions/request.rs:151:58

where /etc/resolv.conf contains:

nameserver 127.0.0.53
options edns0 trust-ad
search lan

And removing the word "trust-ad" from /etc/resolv.conf fixes the issue -- no error occurs when "trust-ad" is missing.

I've submitted the issue upstream:
tailhook/resolv-conf#25
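The failure mode here is a parser that rejects the whole file on one unknown option; a tolerant parser skips unknown tokens instead. An illustrative sketch (not the resolv-conf crate's actual code; the option list is abbreviated):

```python
KNOWN_OPTIONS = {"edns0", "ndots", "timeout", "attempts", "rotate"}

def parse_options(line):
    """Collect known resolv.conf options, skipping unknown ones like trust-ad."""
    tokens = line.split()[1:]  # drop the leading "options" keyword
    return [t for t in tokens if t.split(":")[0] in KNOWN_OPTIONS]

print(parse_options("options edns0 trust-ad"))  # ['edns0'] -- trust-ad ignored
```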

Strip characters like double quotes in 'assign' JSON response values

Hi guys, not sure if there's a workaround for this current issue.

I have a thread test suite that first posts to an endpoint /token, and assigns the response value. I want to get one attribute and pass it in as the token value in subsequent request headers, like this:

GET /users/3472742
Headers:
  Bearer as897c98webjk324jkfds9

Instead, when I pass in this assign response attribute using {{ response.token }}, I get hard coded double quotes that break this request:

GET /users/3472742
Headers:
  Bearer "as897c98webjk324jkfds9"

Is there any way in Drill to strip characters like these from JSON response values?
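Until the quoting behaviour changes, one workaround, in the spirit of the README's exec example that pipes through tr -d '"', is to strip the surrounding quotes from the assigned value. Conceptually:

```python
def strip_json_quotes(value):
    """Remove surrounding double quotes left over from JSON serialization."""
    if len(value) >= 2 and value[0] == '"' and value[-1] == '"':
        return value[1:-1]
    return value

print(strip_json_quotes('"as897c98webjk324jkfds9"'))  # as897c98webjk324jkfds9
```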

Add option to force drill to read response body

I'm trying to test an API that serves files, and drill is reporting that the requests complete much faster than they should, since it makes no attempt to download the response body before ending its timer.

Assigns stores data in array when run with_items_from_csv

Feature request:

I want to be able to load test a mass number of users running through multiple actions (purchasing on a site, configuring their account, etc.). Right now I am able to login those users using with_items_from_csv and their credentials in each row of a CSV.

However, in subsequent steps the assigned data only holds the latest row processed in the CSV user-login step. So, for example, when I assign data to users in the Get token for user step below, only the latest row of the CSV input will be present in the assigned variable:

Threads 1
Iterations 1
Rampup 0
Base URL https://mysite.com

Get token for user [email protected] https://mysite.com/token 200 OK 660ms
Get token for user [email protected] https://mysite.com/token 200 OK 222ms
Get user info for Jocelyn Frank https://mysite.com/users/8331593/ 200 OK 125ms

It would be awesome if assign, when used in a step with a CSV fixture, supported arrays and stored the data that way, so subsequent steps could access the array via an option like with_items_from_assign_data. Then I could do something like:

  - name: Get token for user {{ item.email }}
    request:
      url: /token
      method: POST
      headers:
        Accept: "{{ header_accept }}"
        Content-Type: "{{ header_content_type }}"
        Authorization: "{{ header_auth }}"
      body: grant_type=password&username={{ item.email }}&password={{ item.plaintext_password }}
    with_items_from_csv: ./fixtures/users.csv
    assign: users

  - name: Get user info for {{ user.body._embedded.user.first_name }} {{ user.body._embedded.user.last_name }}
    request:
      url: "{{ user.body._links.user }}"
      method: GET
      headers:
        Accept: "{{ header_accept }}"
        Content-Type: "{{ header_content_type }}"
        Authorization: Bearer {{ user.body.access_token }}
    with_items_from_assign_data: users

Then for this example run it would iterate over that array of data responses in the second step, yielding:

Threads 1
Iterations 1
Rampup 0
Base URL https://mysite.com

Get token for user [email protected] https://mysite.com/token 200 OK 660ms
Get token for user [email protected] https://mysite.com/token 200 OK 222ms
Get user info for Jessica Rice https://mysite.com/users/8331592/ 200 OK 168ms
Get user info for Jocelyn Frank https://mysite.com/users/8331593/ 200 OK 125ms

Poor performance

[screenshot omitted]

config:

threads: 16
base: 'http://localhost:8080'
iterations: 1000
rampup: 2

plan:
  - name: json
    request:
      url: /json

result:

./target/release/drill --benchmark benchmark.yml --stats -q
Threads 16
Iterations 1000
Rampup 2
Base URL http://localhost:8080


json                      Total requests            16000
json                      Successful requests       16000
json                      Failed requests           0
json                      Median time per request   3ms
json                      Average time per request  5ms
json                      Sample standard deviation 5ms

Concurrency Level         16
Time taken for tests      21.4 seconds
Total requests            16000
Successful requests       16000
Failed requests           0
Requests per second       746.78 [#/sec]
Median time per request   3ms
Average time per request  5ms
Sample standard deviation 5ms

run same benchmark with ab:

ab -n 16000 -k -c 16 http://localhost:8080/json
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 1600 requests
Completed 3200 requests
Completed 4800 requests
Completed 6400 requests
Completed 8000 requests
Completed 9600 requests
Completed 11200 requests
Completed 12800 requests
Completed 14400 requests
Completed 16000 requests
Finished 16000 requests


Server Software:        wizzardo
Server Hostname:        localhost
Server Port:            8080

Document Path:          /json
Document Length:        27 bytes

Concurrency Level:      16
Time taken for tests:   0.497 seconds
Complete requests:      16000
Failed requests:        0
Keep-Alive requests:    16000
Total transferred:      2832000 bytes
HTML transferred:       432000 bytes
Requests per second:    32195.81 [#/sec] (mean)
Time per request:       0.497 [ms] (mean)
Time per request:       0.031 [ms] (mean, across all concurrent requests)
Transfer rate:          5565.10 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       2
Processing:     0    0   1.6      0      41
Waiting:        0    0   1.5      0      41
Total:          0    0   1.6      0      41

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      1
  90%      1
  95%      2
  98%      4
  99%      6
 100%     41 (longest request)

Slow performance

On a 2018 MacBook Pro I'm unable to get drill past the ~140 reqs/sec mark. apib, for comparison, easily reaches ~24k reqs/sec on the same setup.

Update crates.io package

Hi. I'm writing an article about do's and don'ts in async Rust, using actix as an example. I wanted a Rust load testing application for benchmarking different changes, and that led me to Drill, which is a really nice tool.

However, it took some time before I found out that benchmark.yml expects "threads" instead of "concurrency", since all the examples use the latter. The only hint is to pause the gif and look at the right configuration. Also, there seems to have been some progress on performance that would be great to get when using cargo install.

Refreshing headers based on a condition

I think Drill is outstanding.

A key limitation: if your application is secured via some form of JWT, it's a little difficult to run longer load tests without disabling auth in your application.

It would be useful to have some form of Ansible-like jinja2 expression to refresh a header (Authorization) based on some form of logic.

Suitability for my use case?

Yo!

I had a use case for which I thought I might write a load tester in Rust. While checking whether something already existed, I stumbled on Drill.

For my use case I'd also need testing with WebSockets (and in the future, possibly TCP or UDP), and I'd want to maximize throughput: I'd like to simulate thousands of requests per second against a server located in another network, so that the test is as close to the real use case as possible.

If you think Drill is suitable for covering these two use cases, I'd be thrilled (ha, get it?) to fork, branch, and later PR. The goal would be to use it at the company I work for to test our new web backends. If you think Drill is better suited to staying HTTP-only and relying on threading and sync over async requests, I fully understand that, too.

Thanks & sorry for the non-issue issue!

Use header from response for the next request

Hi,

I like this tool a lot.

I have a use case where I have to authenticate with a POST request. The successful response contains a token in a header that I have to use in the following requests to access more endpoints. The token is dynamic and therefore cannot be stored locally.

Is there a way to use the token from the header in the response for the header in the next request? Does 'drill' support this?
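As a sketch of what this could look like in a benchmark file (hypothetical syntax: it assumes `assign` captures response headers and that the header is named `x-auth-token`, neither of which I've confirmed Drill supports today):

```yaml
- name: Authenticate
  request:
    url: /login
    method: POST
    body: username=foo&password=bar
  assign: login

- name: Fetch protected resource
  request:
    url: /api/account
    headers:
      # hypothetical: interpolate a header captured from the previous response
      Authorization: "Bearer {{ login.headers.x-auth-token }}"
```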

Create a release for 0.7.x

The latest fixes and improvements since 0.6.0 are only available when building manually. Could you cut a release so package builds etc. can upgrade as well?

Thanks!

Portable .yml syntax between drill and blazemeter

Hi, thanks for creating drill, very interesting tool

I'd like to know if there is a way to convert a drill .yml into a BlazeMeter .yml; the formats look very similar, but I believe they're not compatible.

Can you help me create some kind of converter?
Is there a parser available for the Ansible-like syntax?

Thanks!!!

Improve the `Time per request` stat.

Why not provide more clarity and information about the reported numbers? Show the p99 time, average time, median time, etc.

At a minimum, clarify that 'time per request' is (presumably) an average.
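For context, percentiles are cheap to derive once per-request times are recorded. A sketch of the nearest-rank method over a hypothetical file of per-request latencies in milliseconds, one value per line:

```shell
# Sample data (hypothetical): 100 latencies, 1..100 ms
seq 1 100 > times.txt

sort -n times.txt > sorted.txt
n=$(wc -l < sorted.txt)
for p in 50 95 99; do
  rank=$(( (n * p + 99) / 100 ))   # ceiling of n*p/100 (nearest-rank percentile)
  printf 'p%s: %sms\n' "$p" "$(sed -n "${rank}p" sorted.txt)"
done
# → p50: 50ms, p95: 95ms, p99: 99ms
```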

Concurrency and randomness for with_items

I was testing drill with a large CSV file of different item IDs. Is there any supported way to randomize the execution order? Right now my concurrent workers request the same items in the same order, in lock-step, which maximizes the benefit from caching. It'd be really nice to have the order randomized other than by, say, running multiple instances and scripting shuf over a CSV file for each one.
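Until something is built in, the shuf route can at least be scripted per run. A sketch (file names are hypothetical) that shuffles the data rows while keeping the header line first:

```shell
# Sample CSV (hypothetical)
printf 'id\n1\n2\n3\n4\n' > items.csv

head -n 1 items.csv > shuffled.csv           # keep the header row first
tail -n +2 items.csv | shuf >> shuffled.csv  # randomize the data rows
# then point with_items_from_csv at ./shuffled.csv
```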

Assign with Regex

Assign currently seems to lack a regexp feature. I need to extract a variable from JSON embedded in an HTML document.
Is there something existing for this?

test failed

Threads 4
Iterations 5
Base URL http://baidu.com

thread 'main' panicked at 'called Option::unwrap() on a None value', libcore/option.rs:335:21
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
1: std::sys_common::backtrace::_print
2: std::panicking::default_hook::{{closure}}
3: std::panicking::default_hook
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_fmt
6: rust_begin_unwind
7: core::panicking::panic_fmt
8: core::panicking::panic
9: drill::expandable::include::expand_from_filepath
10: drill::benchmark::execute
11: drill::main
12: std::rt::lang_start::{{closure}}
13: std::panicking::try::do_call
14: __rust_maybe_catch_panic
15: std::rt::lang_start_internal
16: main

csv quoting

I have some .csv files that use single quotes and have commas embedded in strings. Those wouldn't parse with the current version of drill, so I added:
`.quote(b'\'')`
between lines 43 and 44 of reader.rs.

That breaks other csv files I have, and clearly isn't a general or long-term solution.

Here's what I'm thinking of doing:
For backwards compatibility continue to allow the current syntax:
with_items_from_csv: ./fixtures/users.csv
which would work as now, with " as the default quote character. But also support something like:

  with_items_from_csv:
    file_name: ./fixtures/users.csv
    quote_char: "'"

I'm not sure that's the correct way to escape a single quote in YAML, but the idea is to allow with_items_from_csv to be either a string or a hash. I haven't implemented that yet, so I'm not sure it is feasible, but it appears to be.

Alternatively, another yaml element like, say, with_items_from_single_quote_csv that sets the quote character to ' instead of the default "

Do you prefer one of those, or something else?

Decouple Concurrency and Iterations

From #83:

Concurrency and iterations are linked, so there's no way to, say, run a large CSV file a single time with concurrency > 1, other than by using something like GNU Parallel to run multiple copies (since drill is so simple to run, that's not actually a bad option).
It feels like it would be useful to have a way to expand the with_items directives so those two settings could be decoupled. For example, with a large input file it would be legal to say concurrency: 100, iterations: 1, and you'd still get things like the stats, which otherwise wouldn't display if you terminated a large job before it finished many millions of requests.

Use Case: Have a list of 10k parameters to run against a url to validate they return 200. Want each item in the list to be run once, but run with concurrency of 20.

concurrency: 20
iterations: 1            # only want this to run once
base: 'http://localhost'
plan:
  - name: Fetch by id
    request:
      url: /{{ item.email }}
    with_items_from_csv: ./items.csv

where ./items.csv contains:

id
1
2
3
...
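As a workaround under the current coupling, the CSV can be split into chunks and one drill process run per chunk. A sketch (file names and chunk size are hypothetical; the drill invocation itself is commented out):

```shell
# Sample CSV with 100 ids (hypothetical)
{ echo 'id'; seq 1 100; } > items.csv

header=$(head -n 1 items.csv)
tail -n +2 items.csv | split -l 5 - chunk_   # 20 chunks of 5 rows each
for f in chunk_??; do
  { echo "$header"; cat "$f"; } > "$f.csv"   # re-attach the header to each chunk
  # drill --benchmark benchmark.yml --stats &   # one process per chunk
done
ls chunk_*.csv | wc -l    # → 20
```

Each chunked file keeps the original header, so a plan using with_items_from_csv can point at any of them unchanged.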

terminal colours not working on windows outside `cargo run`

I tried building and using drill on Windows, with cargo install, for a particular client where I needed to use their SoE desktop in a quick project.

The colour-printing code didn't work right; I got ANSI escape codes printed literally on the terminal. This was in VS Code, with either PowerShell or git bash, and the same in the PowerShell GUI window. I set CLICOLOR=0 and got on with the task, but I wanted to come back and fix the problem so I could leave them with a working setup.

I then cloned the repo, but to my surprise cargo run showed the colours correctly. Invoking the same executable from target/release/drill.exe printed ANSI codes again!

The Windows console doesn't interpret ANSI codes by default, and needs a console API call to put it into the appropriate mode. It turns out cargo enables this mode and leaves it active when invoking the program via cargo run.

The patch below fixes, or at least works around, the problem.

diff --git a/src/main.rs b/src/main.rs
index 2164af2..3db592b 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -35,6 +35,8 @@ fn main() {
   let no_check_certificate = matches.is_present("no-check-certificate");
   let quiet = matches.is_present("quiet");
   let nanosec = matches.is_present("nanosec");
+  #[cfg(windows)]
+  let _ = control::set_virtual_terminal(true);
 
   let begin = time::precise_time_s();
   let list_reports_result = benchmark::execute(benchmark_file, report_path_option, no_check_certificate, quiet, nanosec);

Note: I'm a very infrequent Windows user, and this was my first time trying Rust on Windows. Unfortunately I didn't have a chance to fully test this change in all combinations while on site, so it might still misbehave in the case of redirected output or some other situation. See the comments at the top of https://github.com/mackwic/colored/blob/master/src/control.rs for more detail; there may be more that needs to be done to properly set up the windows console. That's one reason I've given a code example as a diff rather than a PR.

Big hat-tip to @retep998 for pointing out the console-mode api, I was well down the wrong rabbit-hole looking at differences in other environment variables between the two invocation methods.

How to use with http2 / https

Hello !

First thank you for your library. It seems to be an awesome tool 😄

Now I have a web server with http2 and rustls for my auto generated certificate.

When I try to run:

drill --benchmark benchmark.yml --no-check-certificate --stats

I get this error message:

Error connecting 'https://127.0.0.1:8000/': 
reqwest::Error { kind: Request, url: Url { scheme: "https", host: Some(Ipv4(127.0.0.1)), port: Some(8000), path: "/", query: None, fragment: None }, source: hyper::Error(Connect, Ssl(Error { code: ErrorCode(1), cause: Some(Ssl(ErrorStack([Error { code: 336151578, library: "SSL routines", function: "ssl3_read_bytes", reason: "tlsv1 alert decode error", file: "../ssl/record/rec_layer_s3.c", line: 1528, data: "SSL alert number 50" }]))) }, X509VerifyResult { code: 0, error: "ok" })) }

I am not sure what the problem is, because I have all the necessary libraries:

libssl-dev & pkg-config

What could I be missing?

Best regards,
Arn

Report only runs one iteration

What is the use of the --report option? It only runs one iteration. Wouldn't it be more useful if the report contained the details of all the iterations, so that the client could perform other analysis with this data?

Also, if both --stats and --report are used, a panic is thrown because the reports vector is empty when the report option is used.

Bug in example benchmark.yml

Using version 0.6.0

Following the example:

When running the node express server

DELAY_MS=100 node server.js

and running

drill --benchmark benchmark.yml --stats

fails:

Threads 1
Iterations 3
Rampup 2
Base URL http://localhost:9000

Fetch comments            http://localhost:9000/api/comments.json 404 Not Found 103ms
Fetch sub comments        http://localhost:9000/api/subcomments.json 404 Not Found 106ms
Fetch users               http://localhost:9000/api/users.json 200 OK 104ms
Fetch organizations       http://localhost:9000/api/organizations 200 OK 104ms
Fetch account             http://localhost:9000/api/account 200 OK 105ms
thread '<unnamed>' panicked at 'Unknown 'foo.body.manager_id' variable!', /Users/erkki.keranen/.cargo/registry/src/github.com-1ecc6299db9ec823/drill-0.6.0/src/interpolator.rs:38:11
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'main' panicked at 'arrrgh', /Users/erkki.keranen/.cargo/registry/src/github.com-1ecc6299db9ec823/drill-0.6.0/src/benchmark.rs:88:14

Solution:

remove .body from interpolation variables in benchmark.yml

i.e. foo.body.something -> foo.something

With that change, the benchmark runs.
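For reference, the corrected step from the example would look like this (a sketch against 0.6.0's interpolator, based on the fix above):

```yaml
- name: Fetch manager user
  request:
    url: /api/users/{{ foo.manager_id }}   # 0.6.0: no .body segment
```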

Is a pull request welcome?
:)
