hasura / graphql-bench


A super simple tool to benchmark GraphQL queries

License: Apache License 2.0



GraphQL Bench

Introduction

GraphQL Bench is a versatile tool for benchmarking and load-testing GraphQL services. It can be run as a CLI application (locally or in Docker) and also provides a programmatic API. Both HTTP (queries/mutations) and WebSocket (subscriptions) tests are supported.

HTTP tests can be configured to run with your choice of:

  • Autocannon
  • K6
  • wrk2

Each benchmark tool produces live output that you can monitor while your tests run:

(Screenshots: live output from Autocannon and K6)

The output is standardized internally across tools using HDRHistograms and can be viewed in a web app:
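As a rough illustration of what this standardization produces, the sketch below reduces raw latency samples from any tool to a common percentile summary. It is a hypothetical helper, not graphql-bench's API, and uses plain sorted-array percentiles as a simplified stand-in for HDRHistogram's bucketed encoding:

```typescript
// Simplified stand-in for the HDR-histogram summarisation step:
// reduce raw latency samples (ms) from any tool into a common report.
// (Illustrative only -- the real tool uses HDRHistogram internally.)

interface LatencySummary {
  min: number
  max: number
  p50: number
  p95: number
  p99: number
}

// Nearest-rank percentile over an already-sorted array.
function percentile(sorted: number[], p: number): number {
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1)
  return sorted[Math.max(0, idx)]
}

function summarise(samplesMs: number[]): LatencySummary {
  const sorted = [...samplesMs].sort((a, b) => a - b)
  return {
    min: sorted[0],
    max: sorted[sorted.length - 1],
    p50: percentile(sorted, 50),
    p95: percentile(sorted, 95),
    p99: percentile(sorted, 99),
  }
}
```

Because every tool's results are collapsed into the same shape, the web viewer can chart them side by side regardless of which tool produced them.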

Usage

CLI

Commands Overview

❯ graphql-bench --help

USAGE
  $ graphql-bench [COMMAND]

COMMANDS
  help          display help for graphql-bench
  query         benchmark queries or mutations
  subscription  benchmark subscriptions
❯ graphql-bench query --help
benchmark queries or mutations

USAGE
  $ graphql-bench query

OPTIONS
  -c, --config=config    (required) Filepath to YAML config file for query benchmarks
  -h, --help             show CLI help
  -o, --outfile=outfile  Filepath to output JSON file containing benchmark stats
  --url=url              URL to direct graphql queries; may override 'url' from the YAML config, which is optional if this flag is passed

EXAMPLE
  $ graphql-bench query --config ./config.query.yaml --outfile results.json
❯ graphql-bench subscription --help
benchmark subscriptions

USAGE
  $ graphql-bench subscription

OPTIONS
  -c, --config=config  (required) Filepath to YAML config file for subscription benchmarks
  -h, --help           show CLI help

EXAMPLE
  $ graphql-bench subscription --config ./config.subscription.yaml

Queries/Mutations

When running locally, add executable permissions to the following binaries:

  • the run binary at app/cli/bin, by running chmod +x run
  • the k6 binary at app/queries/bin/k6/, by running chmod +x k6
  • the wrk binary at app/queries/bin/wrk/, by running chmod +x wrk
Config

The Query/Mutation CLI bench expects a YAML config of the following format:

url: 'http://localhost:8085/v1/graphql'
headers:
  X-Hasura-Admin-Secret: my-secret
# "Debug" mode enables request and response logging for Autocannon and K6
# This lets you see what is happening and confirm proper behavior.
# This should be disabled for genuine benchmarks, and only used for debugging/visibility.
debug: false
queries:
    # Name: Unique name for the query
  - name: SearchAlbumsWithArtist
    # Tools: List of benchmarking tools to run: ['autocannon', 'k6', 'wrk2']
    tools: [autocannon, k6]
    # Execution Strategy: the type of the benchmark to run. Options are: 
    # REQUESTS_PER_SECOND: Fixed duration, fixed rps. Example parameters:
    #   duration: 10s
    #   rps: 500
    # FIXED_REQUEST_NUMBER: Complete requests as fast as possible, no duration. Example parameters:
    #   requests: 10000
    # MAX_REQUESTS_IN_DURATION: Make as many requests as possible in duration. Example parameters:
    #   duration: 10s
    # MULTI_STAGE: (K6 only currently) Several stages of REQUESTS_PER_SECOND benchmark. Example parameters:
    #   initial_rps: 0
    #   stages:
    #     - duration: 5s
    #       target: 100
    #     - duration: 10s
    #       target: 1000
    # CUSTOM: Pass completely custom options to each tool (see full API spec for all supported options, very large)
    execution_strategy: REQUESTS_PER_SECOND
    rps: 2000
    duration: 10s
    connections: 50
    query: |
      query SearchAlbumsWithArtist {
        albums(where: {title: {_like: "%Rock%"}}) {
          id
          title
          artist {
            name
            id
          }
        }
      }
  - name: AlbumByPK
    tools: [autocannon, k6]
    execution_strategy: FIXED_REQUEST_NUMBER
    requests: 10000
    query: |
      query AlbumByPK {
        albums_by_pk(id: 1) {
          id
          title
        }
      }
  - name: AlbumByPKMultiStage
    tools: [k6]
    execution_strategy: MULTI_STAGE
    initial_rps: 0
    stages:
      - duration: 5s
        target: 100
      - duration: 5s
        target: 1000
    query: |
      query AlbumByPK {
        albums_by_pk(id: 1) {
          id
          title
        }
      }
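The execution strategies above differ mainly in how they determine total request volume. The sketch below is a hypothetical helper (not part of graphql-bench) that estimates the expected number of requests for each fixed-volume strategy; the linear-ramp average used for MULTI_STAGE is an assumption based on k6-style ramping between stage targets:

```typescript
// Hypothetical helper illustrating how the execution strategies
// translate into expected request volume. Not part of graphql-bench.

type Stage = { duration: string; target: number }

// Parse durations like '10s' into seconds.
function seconds(duration: string): number {
  const match = /^(\d+)s$/.exec(duration)
  if (!match) throw new Error(`unsupported duration: ${duration}`)
  return Number(match[1])
}

function expectedRequests(bench: {
  execution_strategy: string
  rps?: number
  duration?: string
  requests?: number
  initial_rps?: number
  stages?: Stage[]
}): number {
  switch (bench.execution_strategy) {
    case 'REQUESTS_PER_SECOND': // fixed rate for a fixed duration
      return bench.rps! * seconds(bench.duration!)
    case 'FIXED_REQUEST_NUMBER': // exact request count, no duration
      return bench.requests!
    case 'MULTI_STAGE': {
      // Assume the rate ramps linearly toward each stage's target, so a
      // stage averages (previousRate + target) / 2 requests per second.
      let rps = bench.initial_rps ?? 0
      let total = 0
      for (const stage of bench.stages ?? []) {
        total += ((rps + stage.target) / 2) * seconds(stage.duration)
        rps = stage.target
      }
      return total
    }
    default:
      throw new Error(`no fixed request count for ${bench.execution_strategy}`)
  }
}
```

For example, the SearchAlbumsWithArtist config above (2000 rps for 10s) would issue roughly 20,000 requests per tool.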
Run with Docker

Note: to be updated when the image is published to Docker Hub

The Makefile contains steps to automate building/tagging/running the image. You can run make build_local_docker_image and then make run_docker_query_bench.

The configuration used by these make commands lives in /docker-run-test/config.(query|subscription).yaml, so edit those files to change the run parameters.

To manually execute, run the following (where the current directory contains the file config.query.yaml):

docker run --net=host -v "$PWD":/app/tmp -it \
  graphql-bench-local query \
  --config="./tmp/config.query.yaml" \
  --outfile="./tmp/report.json"
Run Locally
cd cli
yarn install
./bin/run query --config="<query YAML config file path here>" --outfile="report.json"
Usage Guide
  • Watch the tool-specific output during the benchmark to view live metrics
  • Save the output to a file, e.g. report.json
  • Inspect report.json to view detailed statistics and histograms
  • Open the report.json in the web viewer app (hosted at https://hasura.github.io/graphql-bench/app/web-app/) for visual metrics

Subscriptions

Config

The Subscription CLI bench expects a YAML config of the following format:

url: 'http://localhost:8085/v1/graphql'
db_connection_string: postgres://postgres:postgrespassword@localhost:5430/postgres
headers:
  X-Hasura-Admin-Secret: my-secret
config:
  # Label must be unique per run; it identifies the run in the DB
  label: SearchAlbumsWithArtistUpdated
  # Total number of websocket connections to open
  max_connections: 20
  # New connections to make per second until target reached
  connections_per_second: 10
  # Whether or not to insert the subscription payload data into the DB at the end
  insert_payload_data: true
  # The subscription to run
  query: |
    subscription AlbumByIDSubscription($artistIds: [Int!]!) {
      albums(where: {artist_id: { _in: $artistIds}}) {
        id
        title
        updated_at
      }
    }
  # Optional variables (if subscription uses variables)
  variables:
    some_value: a_string
    # Ranges will loop repeatedly from "start" to "end" and increment by one for each new subscription
    some_range: { start: 1, end: 10 }
    another_range: { start: 50, end: 100 }
    some_number: 10
    artistIds: [1, 2, 3, 4]
    some_object:
      a_key: a_value
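The documented range behaviour (loop from "start" to "end", incrementing by one for each new subscription) can be sketched as follows. This is a hypothetical helper for illustration, not the tool's API:

```typescript
// Sketch of the documented range-variable semantics: the n'th subscription
// (0-based) gets start + n, wrapping back to start after end.
// Hypothetical helper, not part of graphql-bench.

interface VariableRange { start: number; end: number }

function rangeValue(range: VariableRange, subscriptionIndex: number): number {
  const span = range.end - range.start + 1
  return range.start + (subscriptionIndex % span)
}
```

So with `some_range: { start: 1, end: 10 }`, the first subscription receives 1, the tenth receives 10, and the eleventh wraps back to 1.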
Note: Required Table

For the Subscriptions test to record data in the DB, an events table must be present with the following schema:

CREATE TABLE public.events (
  -- unique label to identify the benchmark
  label text NOT NULL,
  -- connection_id represents the n'th connection
  connection_id int NOT NULL,
  operation_id int NOT NULL,
  -- event_number represents the n'th event received by the client
  event_number int NOT NULL,
  -- event_data stores the payload that was received
  event_data jsonb NOT NULL,
  -- event_time stores the time at which the event was received by the client
  event_time timestamptz NOT NULL,
  -- is_error represents whether the event was an error or not
  is_error boolean NOT NULL,
  -- latency is not populated by the benchmark tool, but can be populated by calculating event_time - <event_triggered_time>
  latency int
);
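As the comment on the latency column notes, the tool leaves it NULL; it can be back-filled as the delta between the event's receipt time and the time the triggering change was made. A minimal illustration (assuming both timestamps are available as ISO-8601 strings; hypothetical helper, not part of the tool):

```typescript
// Back-fill illustration for the `latency` column: milliseconds between
// when the client received the event and when the change was triggered.
// Hypothetical helper; timestamp format is an assumption.

function latencyMs(eventTime: string, triggerTime: string): number {
  return Date.parse(eventTime) - Date.parse(triggerTime)
}
```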
Run with Docker

Note: to be updated when the image is published to Docker Hub

The Makefile contains steps to automate building/tagging/running the image. You can run make build_local_docker_image and then make run_docker_subscription_bench.

The configuration used by these make commands lives in /docker-run-test/config.(query|subscription).yaml, so edit those files to change the run parameters.

To manually execute, run the following (where the current directory contains the file config.subscription.yaml):

docker run --net=host -v "$PWD":/app/tmp -it \
  graphql-bench-local subscription \
  --config="./tmp/config.subscription.yaml"
Run Locally
cd cli
yarn install
./bin/run subscription --config="<subscription YAML config file path here>"
Usage Guide
  • Create events by making changes in the subscribed table

  • As you create changes, you should notice the number of data events increasing in stdout output:

  • Stop the benchmark with ctrl + c

  • The script should say it has inserted the event data:

    ❯ Executing Teardown Process
    ❯ Starting to close socket connections
    ❯ Sockets closed, attempting to insert event data
    ❯ Inserted total of 10 events for label SearchAlbumsWithArtistUpdated
    ❯ Trying to close DB connection pool
    ❯ Database connection destroyed
    ❯ Now exiting the process
    

Programmatic API

Note: A good reference/usage demo exists at /queries/src/tests.ts and /subscriptions/src/tests.ts.

Queries/Mutations

The easiest way of interacting with this library is to use the exported BenchmarkRunner class.

It exposes a single method, .runBenchmarks() which takes a GlobalConfig object defining the benchmarks to be run. Here's an example of running a REQUESTS_PER_SECOND benchmark for a query using all of autocannon, k6, and wrk2:

import { BenchmarkRunner } from './main'
import {
  GlobalConfig,
  BenchmarkTool,
  MaxRequestsInDurationBenchmark,
  FixedRequestNumberBenchmark,
  RequestsPerSecondBenchmark,
  MultiStageBenchmark,
  CustomBenchmark,
} from './executors/base/types'

const queries = {
  searchAlbumsWithArtist: `
    query SearchAlbumsWithArtist {
      albums(where: {title: {_like: "%Rock%"}}) {
        id
        title
        artist {
          name
          id
        }
      }
    }`,
}

const rpsBench: RequestsPerSecondBenchmark = {
  tools: [BenchmarkTool.AUTOCANNON, BenchmarkTool.K6, BenchmarkTool.WRK2],
  name: 'SearchAlbumsWithArtist',
  execution_strategy: 'REQUESTS_PER_SECOND',
  duration: '3s',
  rps: 500,
  query: queries.searchAlbumsWithArtist,
}

const tests: GlobalConfig = {
  url: 'http://localhost:8085/v1/graphql',
  headers: { 'X-Hasura-Admin-Secret': 'my-secret' },
  queries: [rpsBench],
}

async function main() {
  const runner = new BenchmarkRunner(tests)
  const results = await runner.runBenchmarks()
  console.log('Test results:', results)
}

main()

Subscriptions

The Subscriptions package exports a main function that runs the benchmarks. Here's an example of running a subscription test programmatically:

import { SubscriptionBenchConfig } from './utils'
import { main as runSubscriptionBenchmark } from './main'

const testConfig: SubscriptionBenchConfig = {
  url: 'http://localhost:8085/v1/graphql',
  db_connection_string:
    'postgres://postgres:postgrespassword@localhost:5430/postgres',
  headers: {
    'X-Hasura-Admin-Secret': 'my-secret',
  },
  config: {
    label: 'SearchAlbumsWithArtist',
    max_connections: 20,
    connections_per_second: 10,
    insert_payload_data: true,
    query: `
      subscription AlbumByIDSubscription($artistIds: [Int!]!) {
        albums(where: {artist_id: { _in: $artistIds}}) {
          id
          title
          updated_at
        }
      }
    `,
    variables: {
      artistIds: [1, 2, 3, 4],
    },
  },
}

async function main() {
  await runSubscriptionBenchmark(testConfig)
}

main()

graphql-bench's People

Contributors

0x777, 0xflotus, abooij, coco98, gavinray97, jberryman, mozgiii, sandeepsamba, sassela, sidharthbihary


graphql-bench's Issues

Connection refused when running Docker image in Windows 10

Hi,

I'm completely new to Docker and a bit lost as to what my problem is when trying to use this tool.

When I start the server using cat bench.yaml | docker run -i --rm -p 8050:8050 -v C:/github/graphql-bench/examples/starwars/queries.graphql hasura/graphql-bench:v0.3 I get:

====================
benchmark: query-comparison

candidate: HeroNameQuery on hero_name at http://172.17.0.1:5000/graphql
Warmup:
++++++++++++++++++++
100Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
200Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
300Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
400Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
500Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
Benchmark:
++++++++++++++++++++
100Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
200Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
300Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
400Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
500Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused

candidate: HeroNameFriendsQuery on hero_name_friends at http://172.17.0.1:5000/graphql
Warmup:
++++++++++++++++++++
100Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
200Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
300Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
400Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
500Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
Benchmark:
++++++++++++++++++++
100Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
200Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
300Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
400Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
500Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused

benchmark: webserver-comparison

candidate: HeroNameQuery on uwsgi at http://172.17.0.1:5001/graphql
Warmup:
++++++++++++++++++++
100Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5001 Connection refused
++++++++++++++++++++
200Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5001 Connection refused
++++++++++++++++++++
300Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5001 Connection refused
Benchmark:
++++++++++++++++++++
100Req/s Duration:100s open connections:20
unable to connect to 172.17.0.1:5001 Connection refused
++++++++++++++++++++
200Req/s Duration:100s open connections:20
unable to connect to 172.17.0.1:5001 Connection refused
++++++++++++++++++++
300Req/s Duration:100s open connections:20
unable to connect to 172.17.0.1:5001 Connection refused

candidate: HeroNameQuery on dev-server at http://172.17.0.1:5000/graphql
Warmup:
++++++++++++++++++++
100Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
200Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
300Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
Benchmark:
++++++++++++++++++++
100Req/s Duration:100s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
200Req/s Duration:100s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
300Req/s Duration:100s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused

  • Serving Flask app "bench" (lazy loading)
  • Environment: production
    WARNING: Do not use the development server in a production environment.
    Use a production WSGI server instead.
  • Debug mode: off
  • Running on http://0.0.0.0:8050/ (Press CTRL+C to quit)

As you can see from the logs, it can't connect to 172.17.0.1:5000, and therefore when I open http://127.0.0.1:8050 in my browser I can see the front end but not the graphs, like so:

empty graph

I would very much appreciate any advice.
Thank you

Roadmap Queries Mutations and Subscriptions

Just took a look at this project, and it's a great framework to get started with queries, but are you guys planning on expanding graphql-bench to support mutations and subscriptions?

It's clear that you guys have an emphasis on queries in your current readme.

Having some sort of "driver" interface would be really nice, especially if we could use a syntax similar to faker for mutations. It's no easy undertaking.

Can't generate histograms; bucket_index assertion error when running example

@0x777
Following the example directory:

  1. I ran the graphql server and tested it, all seemed ok
  2. I ran the graphql benchmark suite and got this output:
====================
benchmark: query-comparison
  --------------------
  candidate: HeroNameQuery on hero_name at http://172.17.0.1:5000/graphql
    Warmup:
      ++++++++++++++++++++
      100Req/s Duration:60s open connections:20
      Running 1m test @ http://172.17.0.1:5000/graphql
        4 threads and 20 connections
        Thread calibration: mean lat.: 24.057ms, rate sampling interval: 102ms
        Thread calibration: mean lat.: 37.867ms, rate sampling interval: 101ms
        Thread calibration: mean lat.: 40.488ms, rate sampling interval: 106ms
        Thread calibration: mean lat.: 36.296ms, rate sampling interval: 96ms
        Thread Stats   Avg      Stdev     Max   +/- Stdev
          Latency    35.98ms   16.65ms 133.76ms   71.89%
          Req/Sec    24.77     21.41    52.00     41.66%
        5980 requests in 1.00m, 1.03MB read
        Socket errors: connect 0, read 0, write 0, timeout 1
      Requests/sec:     99.66
      Transfer/sec:     17.52KB
      ++++++++++++++++++++
      200Req/s Duration:60s open connections:20
      Running 1m test @ http://172.17.0.1:5000/graphql
        4 threads and 20 connections
        Thread calibration: mean lat.: 38.743ms, rate sampling interval: 121ms
        Thread calibration: mean lat.: 45.739ms, rate sampling interval: 123ms
        Thread calibration: mean lat.: 47.466ms, rate sampling interval: 124ms
        Thread calibration: mean lat.: 44.317ms, rate sampling interval: 110ms
        Thread Stats   Avg      Stdev     Max   +/- Stdev
          Latency    42.98ms   23.42ms 962.05ms   87.69%
          Req/Sec    49.72     12.15   120.00     77.33%
        12004 requests in 1.00m, 2.06MB read
      Requests/sec:    200.05
      Transfer/sec:     35.17KB
    Benchmark:
      ++++++++++++++++++++
      100Req/s Duration:300s open connections:20
      wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.
      ++++++++++++++++++++
      200Req/s Duration:300s open connections:20
      wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.
  --------------------
  candidate: HeroNameFriendsQuery on hero_name_friends at http://172.17.0.1:5000/graphql
    Warmup:
      ++++++++++++++++++++
      100Req/s Duration:60s open connections:20
      Running 1m test @ http://172.17.0.1:5000/graphql
        4 threads and 20 connections
        Thread calibration: mean lat.: 44.057ms, rate sampling interval: 125ms
        Thread calibration: mean lat.: 48.205ms, rate sampling interval: 123ms
        Thread calibration: mean lat.: 49.266ms, rate sampling interval: 121ms
        Thread calibration: mean lat.: 46.410ms, rate sampling interval: 112ms
        Thread Stats   Avg      Stdev     Max   +/- Stdev
          Latency    46.20ms   14.88ms 105.92ms   77.60%
          Req/Sec    24.81     17.46    45.00     63.81%
        6001 requests in 1.00m, 1.50MB read
        Non-2xx or 3xx responses: 1
      Requests/sec:     99.96
      Transfer/sec:     25.58KB
      ++++++++++++++++++++
      200Req/s Duration:60s open connections:20
      wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.
    Benchmark:
      ++++++++++++++++++++
      100Req/s Duration:300s open connections:20
      wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.
      ++++++++++++++++++++
      200Req/s Duration:300s open connections:20
      wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.
 * Serving Flask app "bench" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:8050/ (Press CTRL+C to quit)
  3. When I head to localhost:8050 I don't see any histograms:

screen shot 2018-05-27 at 12 44 26 am

This seems to be the culprit from the logs:

      wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.

Extra requests being sent during benchmark

I was running a benchmark for Hasura Pro:

  • set to 10 reqs/second for 1 second: resulted in 20 requests (should have been 10)
  • set to 10 reqs/second for 5 seconds: resulted in 81 requests (should have been 50)

The request count is taken from Pro metrics.

There was a warmup time set initially, which I removed and then got these results.

Error while accessing graph from browser

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1815, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1718, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise
    raise value
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint]
  File "/usr/local/lib/python3.7/site-packages/dash/dash.py", line 1151, in dispatch
    response.set_data(self.callback_map[output]['callback'])
  File "/usr/local/lib/python3.7/site-packages/dash/dash.py", line 1037, in add_context
    output_value = func(*args, **kwargs)
  File "/graphql-bench/plot.py", line 93, in updateGraph
    benchMarkIndex=int(benchMarkIndex)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

Unable to benchmark our own GraphQL server for Doublets

We are trying to benchmark our GraphQL server implementation for Doublets (database engine based on associative model of data).

This the YAML file we tried:

url: 'http://linksplatform.ddns.net:29018/graphql'
queries:
  - name: GetSingleLink
    tools: [k6, wrk2, autocannon]
    execution_strategy: REQUESTS_PER_SECOND
    rps: 1
    duration: 1s
    query: '{ links(where: {from_id: {_eq: 2}, to_id: {_eq: 1}}) { id from_id to_id } }'
  - name: UseFromIndex
    tools: [k6, wrk2, autocannon]
    execution_strategy: REQUESTS_PER_SECOND
    rps: 1
    duration: 1s
    query: '{ links(where: {from_id: {_eq: 1}}) { id } }'
  - name: UseToIndex
    tools: [k6, wrk2, autocannon]
    execution_strategy: REQUESTS_PER_SECOND
    rps: 1
    duration: 1s
    query: '{ links(where: {to_id: {_eq: 1}}) { id } }'
  - name: FullScan
    tools: [k6, wrk2, autocannon]
    execution_strategy: REQUESTS_PER_SECOND
    rps: 1
    duration: 1s
    query: '{ links { id } }'

But all requests ended up with 400 or 500 codes, and I'm not able to see the exact error in the benchmark tool. Is there a way to see how the request is sent and what response is received via the benchmark tool?

If I use any other client (the UI, Insomnia, or a plain JavaScript ApolloClient from Node.js), I do not get any errors with these requests. Only graphql-bench is unable to make the request for some reason.

Enhancements to make this more valuable to GraphQL backend developers

  • Work your way up to a 95th-percentile latency limit to automatically find the rps at which the system breaks
  • Sample CPU and RAM (even if inaccurate) to give the developer a sense of resource consumption
  • Spin up and spin down a backend for every suite so that resources are freed up
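The first suggestion above can be sketched as a simple ramp: step the request rate upward, run a fixed-duration benchmark at each step, and stop once the measured p95 latency exceeds the limit. This is an illustrative sketch only; findBreakingRps and measureP95 are hypothetical names, and measureP95 stands in for one REQUESTS_PER_SECOND run:

```typescript
// Sketch of the proposed enhancement: step rps upward until the measured
// p95 latency exceeds a limit, reporting the last rps that stayed within it.
// `measureP95` is a hypothetical callback (e.g. one fixed-rate benchmark run).

async function findBreakingRps(
  measureP95: (rps: number) => Promise<number>,
  limitMs: number,
  startRps = 100,
  stepRps = 100,
  maxRps = 10_000
): Promise<number> {
  let lastGood = 0
  for (let rps = startRps; rps <= maxRps; rps += stepRps) {
    const p95 = await measureP95(rps)
    if (p95 > limitMs) break // system saturated at this rate
    lastGood = rps
  }
  return lastGood
}
```

A binary search over rps would converge faster, at the cost of less intuitive intermediate output.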
