xk6-output-elasticsearch's People

Contributors

danielmitterdorfer, elastic-backstage-prod[bot], immavalls, selamanse, servetozkan

xk6-output-elasticsearch's Issues

[indices:admin/create] is unauthorized for user

Hi,

I am using this extension to integrate k6 tests with our organization's Elasticsearch instance. I have already integrated with a locally hosted Elasticsearch as described in the README.md. I am getting the error below when the index is created in Elasticsearch after test execution.

(screenshot: Error_ES_Index)

I have provided K6_ELASTICSEARCH_URL, K6_ELASTICSEARCH_USER and K6_ELASTICSEARCH_PASSWORD inside the k6 service in my docker-compose.yml file. You can see it below:

k6:
  build: .
  ports:
    - "6565:6565"
  environment:
    - K6_OUT=output-elasticsearch
    - K6_ELASTICSEARCH_URL=my_org_es_url
    - K6_ELASTICSEARCH_USER=performance_test
    - K6_ELASTICSEARCH_PASSWORD=xyz
    - K6_ELASTICSEARCH_INDEX_NAME=filebeat-k6-metrics
    - K6_ELASTICSEARCH_INSECURE_SKIP_VERIFY=true

Note: User already has permission to create index.
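A quick way to verify the permission claim is to ask Elasticsearch directly via its has-privileges API (POST /_security/user/_has_privileges, executed as the k6 user). A minimal Python sketch that only builds the request body; the index name matches the compose file above, and sending it requires a reachable cluster:

```python
import json

def has_privileges_body(index_name):
    """Request body for POST /_security/user/_has_privileges,
    asking whether the current user may create and write the index."""
    return {
        "index": [
            {
                "names": [index_name],
                # create_index is the privilege the reported error complains about
                "privileges": ["create_index", "write"],
            }
        ]
    }

print(json.dumps(has_privileges_body("filebeat-k6-metrics"), indent=2))
```

Sent as the k6 user (for example with curl -u performance_test:xyz against <es-url>/_security/user/_has_privileges), the response's has_all_requested field shows whether create_index is actually granted to that user, rather than to the admin account used to inspect roles.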

Needed to maintain listing in k6 Extensions Registry

We've recently updated the requirements for maintaining an extension within the listing on our site. As such, please address the following items to maintain your listing within the registry:

  • add the xk6 topic to your repository metadata
  • add an examples directory containing k6 test scripts making use of your extension
  • publish at least one versioned release

For more information on these and other listing requirements, please take a look at the registry requirements.

Add UUID to all the documents of a specific run

I'm not sure if there is any such identifier in the output documents that we can take and reuse everywhere, but having one specific to each run makes it a lot easier to compare two different runs in Kibana.
We could have the script that runs k6 set it as part of the tags, but that would be optional rather than default behavior.
From what I can find in this blog post (https://k6.io/blog/comparison-of-k6-test-result-visualizations/), differentiating between runs is even listed as a downside of the Grafana/InfluxDB connector.

I'm thinking they do this pretty much the same way (with some kind of metadata added to each run), but I can't confirm that since there is no access to the output in their cloud.
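The wrapper-script idea can be sketched like this: generate one UUID when the run starts and merge it into every document's tags, so all documents of a run share the identifier (the tag name run_id is illustrative, not an existing field):

```python
import uuid

# One identifier for the whole run, shared by every document it produces.
RUN_ID = str(uuid.uuid4())

def tag_document(doc):
    """Attach the run-wide identifier to a single metric document."""
    tags = dict(doc.get("tags", {}))
    tags["run_id"] = RUN_ID
    return {**doc, "tags": tags}

docs = [tag_document({"metric": "http_req_duration", "Value": 42.0}),
        tag_document({"metric": "vus", "Value": 10})]
```

In practice the same effect is available today from the k6 CLI, e.g. k6 run --tag run_id=$(uuidgen) ..., which makes filtering one run in Kibana a simple term query; the issue asks for this to become default behavior.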

No indication that uploading from k6 to Elastic has failed because of wrong credentials

I'm trying to configure the correct access credentials for Elastic, but it's a bit problematic, because if I put obviously incorrect credentials like this:

export K6_ELASTICSEARCH_CLOUD_ID="XXXXXX3f4254280ba2b209985ad4bf7:dXMtY2VudHJhbDEuZ2NwLmNsb3VkLmVzLmlvOjQ0MyQ1MTQ5MjgwODJmYjA0ODYxOTFjYzhjNzIzOTU4NTU2ZCRiN2FlMmU0YzgxMmE0YWQ2YTAwZGEzZjBjMTMyXXXXXX=="
export K6_ELASTICSEARCH_USER=foo
export K6_ELASTICSEARCH_PASSWORD=bar
./k6 run script.js -o output-elasticsearch

I see no errors in the output, it just finishes correctly.

Could you please implement error reporting for when the upload fails? Thanks!
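The requested behavior could look like this sketch: treat 401/403 from the initial connection check as a fatal, clearly worded error instead of silently discarding metrics (messages and structure are illustrative, not the extension's actual code):

```python
class OutputError(Exception):
    """Raised when the output cannot talk to Elasticsearch."""

def check_connection(status_code):
    """Fail fast on the first response instead of dropping metrics silently."""
    if status_code in (401, 403):
        raise OutputError(
            f"cannot connect to Elasticsearch: authentication/authorization "
            f"failed (status code {status_code}) - check K6_ELASTICSEARCH_USER "
            f"and K6_ELASTICSEARCH_PASSWORD")
    if not 200 <= status_code < 300:
        raise OutputError(
            f"cannot connect to Elasticsearch (status code {status_code})")
```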

Extra processing for thresholds in order to make them usable in Kibana

The way k6 outputs threshold values makes them not really usable in Kibana (without resorting to scripted fields or similar).
If the data were flattened, we could plug it into the threshold functionality in Lens.

For example, I would like this output example:

      "thresholds":[
         "rate<0.01"
      ],

flattened as:
thresholds.rate: 0.01
thresholds.operator: lt (this one isn't of much use in Kibana for now, but it's good to have it saved for data integrity purposes)

There are a few more threshold options that will have to be parsed, like group-specific or tag-specific ones. We can see some samples here:
https://k6.io/docs/using-k6/thresholds/
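The flattening described above can be sketched with a small parser; the operator names (lt, gt, ...) and the flattened field names are the ones proposed in this issue, and only simple metric<value-style expressions are handled (group- and tag-scoped thresholds would need more work):

```python
import re

# Map k6 comparison operators to short names usable as keyword values.
OPERATORS = {"<": "lt", "<=": "lte", ">": "gt", ">=": "gte", "==": "eq", "!=": "ne"}

def flatten_threshold(expr):
    """Turn e.g. 'rate<0.01' into flat fields for Kibana/Lens."""
    m = re.match(r"\s*([\w().]+)\s*(<=|>=|==|!=|<|>)\s*([\d.]+)\s*$", expr)
    if m is None:
        raise ValueError(f"unsupported threshold expression: {expr!r}")
    agg, op, value = m.groups()
    return {f"thresholds.{agg}": float(value),
            "thresholds.operator": OPERATORS[op]}
```

With this, ["rate<0.01"] would be stored as thresholds.rate: 0.01 plus thresholds.operator: lt, which Lens can consume directly as a numeric reference line.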

Dockerfile

If I wanted to include this extension within my own Dockerfile, would I just copy the contents of the provided Dockerfile? When I do so, I get this error during the build stage:
module github.com/elastic/xk6-output-elasticsearch@latest found (v0.2.0, replaced by /go/src/go.k6.io/k6), but does not contain package github.com/elastic/xk6-output-elasticsearch

Access to Elastic by Authorization headers

Currently, to configure access to Elastic, it seems we can only specify a user and password.

But many projects use HTTP headers to authenticate, like Authorization=Bearer G29l5OjyPSYfxXXXX.

This is much more secure than the current user-and-password approach.

Could you please add this way of authorization? Thanks!
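A header-based scheme could look like this sketch: build the Authorization header from either basic credentials or a bearer token, preferring the token when both are present (the function and its precedence rule are an assumption for illustration, not existing extension behavior):

```python
import base64

def auth_headers(user=None, password=None, bearer_token=None):
    """Prefer token auth over basic auth when both are configured."""
    if bearer_token:
        return {"Authorization": f"Bearer {bearer_token}"}
    if user is not None and password is not None:
        # Basic auth is just base64("user:password") - readable to anyone
        # who can see the header, hence the request for token support.
        creds = base64.b64encode(f"{user}:{password}".encode()).decode()
        return {"Authorization": f"Basic {creds}"}
    return {}
```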

Use datastreams to store metrics

So far we have used a single index to store k6 metrics. However, datastreams are preferable for a couple of reasons, one of them being the ability to define a retention period via an ILM policy. We should therefore move away from the single index and instead create a datastream. Storing data in an index will no longer be supported.

Datastream Details

  • Name: metrics-k6-default. The namespace - here: default - is overridable via a new setting K6_ELASTICSEARCH_DATASTREAM_NAMESPACE. The existing setting K6_ELASTICSEARCH_INDEX_NAME will be removed. It will not be possible to override the entire datastream name. This would increase complexity significantly as we would need to make sure that the index template matches the chosen datastream name. Note: We might allow overriding the entire datastream name in the future if and only if the datastream is managed by the user.
  • ILM policy: We specify a default policy without a retention period but allow users to override it.

These calls will be issued internally:

PUT /_ilm/policy/metrics-k6
{
  "phases": {
    "hot": {
      "actions": {
        "rollover": {
          "max_primary_shard_size": "50gb",
          "min_docs": 1
        },
        "set_priority": {
          "priority": 100
        },
        "readonly": {}
      }
    }
  },
  "_meta": {
    "description": "default policy for k6 metrics",
    "managed": true,
    "version": 1
  }
}
PUT /_component_template/metrics-k6
{
  "template": {
    "settings": {
      "index": {
        "number_of_shards": 1,
        "number_of_replicas": 0,
        "auto_expand_replicas": "0-1"
      },
      "codec": "best_compression"
    },
    "mappings": {
      "_meta": {
        "index-template-version": 1,
        "managed": true
      },
      "date_detection": false,
      "dynamic_templates": [
        {
          "strings": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        }
      ],
      "_source": {
        "enabled": true
      },
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "Value": {
          "type": "double"
        }
      },
      "version": 1
    }
  }
}

Note: previously the timestamp field was called Time. We can rename this in a reindex script.

PUT /_component_template/metrics-k6-ilm
{
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "metrics-k6"
        }
      }
    }
  },
  "_meta": {
    "index-template-version": 1,
    "managed": true
  },
  "version": 1
}
PUT /_index_template/metrics-k6
{
  "index_patterns": [
    "metrics-k6-*"
  ],
  "data_stream": {},
  "composed_of": [
    "metrics-k6",
    "metrics-k6-ilm",
    "metrics-k6-ilm@custom"
  ],
  "ignore_missing_component_templates": [
    "metrics-k6-ilm@custom"
  ],
  "priority": 100,
  "_meta": {
    "description": "index template for k6 metrics",
    "managed": true
  },
  "version": 1
}

Behavior for existing installations

When the k6-metrics index exists, we can issue a warning that the index pattern has changed. This is best effort only, though, and won't catch cases where users have overridden the index name.

Migration

We won't automatically migrate data but can provide a reindex and cleanup script that users can execute if required.

Permissions

We might need to adapt the initial permission check, as the output extension needs to create a datastream and an associated ILM policy. Finally, we should make this process optional, as advanced users might want to create the datastream themselves and tighten the cluster permissions of the k6 user to allow only write access. This behavior will be controlled by the flag K6_ELASTICSEARCH_AUTOCREATE_DATASTREAM, which is true by default. If it is set to false, the output extension assumes that the datastream is already set up properly (without any further checks).
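The naming and autocreate rules above can be sketched as follows (a language-agnostic illustration in Python that mirrors the proposal, not existing code):

```python
def datastream_name(namespace="default"):
    """metrics-k6-<namespace>; only the namespace part is overridable
    (via the proposed K6_ELASTICSEARCH_DATASTREAM_NAMESPACE setting)."""
    return f"metrics-k6-{namespace}"

def needs_setup(autocreate=True, datastream_exists=False):
    """With autocreate disabled the extension assumes the datastream
    (and its ILM policy and templates) already exist and skips setup."""
    return autocreate and not datastream_exists
```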

Unable to build a docker image using Dockerfile

I am trying to build a docker image using the Dockerfile you provided.

  • docker build -t xk6-es -f xk6-es-Dockerfile .
  • target image name: xk6-es, Dockerfile name: xk6-es-Dockerfile

But I got an error like the one below:

[+] Building 3.3s (14/16)                                                                                                                                                                                                docker:desktop-linux
 => [internal] load build definition from xk6-es-Dockerfile                                                                                                                                                                              0.0s
 => => transferring dockerfile: 488B                                                                                                                                                                                                     0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                        0.0s
 => => transferring context: 2B                                                                                                                                                                                                          0.0s
 => [internal] load metadata for docker.io/library/golang:1.20-alpine                                                                                                                                                                    0.0s
 => [internal] load metadata for docker.io/library/alpine:3.17                                                                                                                                                                           2.5s
 => [auth] library/alpine:pull token for registry-1.docker.io                                                                                                                                                                            0.0s
 => [stage-1 1/4] FROM docker.io/library/alpine:3.17@sha256:53cf9478b76f4c8fae126acbdfb79bed6e69e628faff572ebe4a029d3d247d98                                                                                                             0.0s
 => [builder 1/6] FROM docker.io/library/golang:1.20-alpine                                                                                                                                                                              0.0s
 => [internal] load build context                                                                                                                                                                                                        0.0s
 => => transferring context: 679B                                                                                                                                                                                                        0.0s
 => CACHED [stage-1 2/4] RUN apk add --no-cache ca-certificates &&     adduser -D -u 12345 -g 12345 k6                                                                                                                                   0.0s
 => CACHED [builder 2/6] WORKDIR /go/src/go.k6.io/k6                                                                                                                                                                                     0.0s
 => CACHED [builder 3/6] ADD . .                                                                                                                                                                                                         0.0s
 => CACHED [builder 4/6] RUN apk --no-cache add git                                                                                                                                                                                      0.0s
 => CACHED [builder 5/6] RUN CGO_ENABLED=0 go install go.k6.io/xk6/cmd/xk6@latest                                                                                                                                                        0.0s
 => ERROR [builder 6/6] RUN CGO_ENABLED=0 xk6 build --with github.com/elastic/xk6-output-elasticsearch=. --output /tmp/k6                                                                                                                0.7s
------
 > [builder 6/6] RUN CGO_ENABLED=0 xk6 build --with github.com/elastic/xk6-output-elasticsearch=. --output /tmp/k6:
0.236 2024/03/20 09:47:56 [INFO] Temporary folder: /tmp/buildenv_2024-03-20-0947.3224696936
0.236 2024/03/20 09:47:56 [INFO] Initializing Go module
0.236 2024/03/20 09:47:56 [INFO] exec (timeout=10s): /usr/local/go/bin/go mod init k6
0.240 go: creating new go.mod: module k6
0.242 2024/03/20 09:47:56 [INFO] Replace github.com/elastic/xk6-output-elasticsearch => /go/src/go.k6.io/k6
0.242 2024/03/20 09:47:56 [INFO] exec (timeout=0s): /usr/local/go/bin/go mod edit -replace github.com/elastic/xk6-output-elasticsearch=/go/src/go.k6.io/k6
0.247 2024/03/20 09:47:56 [INFO] exec (timeout=0s): /usr/local/go/bin/go mod tidy -compat=1.17
0.253 go: warning: "all" matched no packages
0.254 2024/03/20 09:47:56 [INFO] Pinning versions
0.255 2024/03/20 09:47:56 [INFO] exec (timeout=0s): /usr/local/go/bin/go mod tidy -compat=1.17
0.264 go: finding module for package github.com/elastic/xk6-output-elasticsearch
0.684 k6 imports
0.684 	github.com/elastic/xk6-output-elasticsearch: module github.com/elastic/xk6-output-elasticsearch@latest found (v0.3.0, replaced by /go/src/go.k6.io/k6), but does not contain package github.com/elastic/xk6-output-elasticsearch
0.686 2024/03/20 09:47:57 [INFO] Cleaning up temporary folder: /tmp/buildenv_2024-03-20-0947.3224696936
0.687 2024/03/20 09:47:57 [FATAL] exit status 1
------
xk6-es-Dockerfile:6
--------------------
   4 |     RUN apk --no-cache add git
   5 |     RUN CGO_ENABLED=0 go install go.k6.io/xk6/cmd/xk6@latest
   6 | >>> RUN CGO_ENABLED=0 xk6 build --with github.com/elastic/xk6-output-elasticsearch=. --output /tmp/k6
   7 |
   8 |     FROM alpine:3.17
--------------------
ERROR: failed to solve: process "/bin/sh -c CGO_ENABLED=0 xk6 build --with github.com/elastic/xk6-output-elasticsearch=. --output /tmp/k6" did not complete successfully: exit code: 1

I am using Docker Desktop v4.24.2.
Let me know if I have to give you any further information about my environments.

Make compatible with newer k6 version v0.42.0

I've found this plugin on the k6 outputs page, but it was not compatible with the newest k6 version.

I've made a proposal on how to make it work again: #3

This could also introduce the possibility to map/transform values from sample entries, as desired in other enhancements.

403 when connecting to AWS Elastic

With the latest changes, it is not possible for us to connect to our AWS Elastic cluster, either with the URL or Cloud ID. A 403 error is reported:

time="2023-11-24T17:00:49Z" level=error msg="could not create the 'output-elasticsearch' output: cannot connect to Elasticsearch (status code 403)"

It still works with a local instance, and it works if I use the 0.1.0 tag when building with xk6.

Error when trying to compile

When trying to compile this extension following the instructions at https://k6.io/docs/results-output/real-time/elasticsearch/, I get the following error:

go: downloading go.buf.build/grpc/go/prometheus/prometheus v1.4.4
go: k6 imports
go.k6.io/k6/cmd imports
github.com/grafana/xk6-output-prometheus-remote/pkg/remotewrite imports
go.buf.build/grpc/go/prometheus/prometheus: unrecognized import path "go.buf.build/grpc/go/prometheus/prometheus": https fetch: Get "https://go.buf.build/grpc/go/prometheus/prometheus?go-get=1": dial tcp: lookup go.buf.build: no such host
2023/09/06 13:08:24 [INFO] Cleaning up temporary folder: C:\Users\agfierro\AppData\Local\Temp\buildenv_2023-09-06-1308.10199663
2023/09/06 13:08:24 [FATAL] exit status 1

Elasticsearch 7.x versions support

I'm trying to connect xk6-output-elasticsearch to Elasticsearch 7.x, but I don't think this plugin supports Elasticsearch 7 versions.

I ran Elasticsearch-oss 7.10.2 and OpenSearch 2.8.0 (compatible with Elasticsearch 7.x) in docker containers.

  • docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
  • opensearchproject/opensearch:2.8.0

When connected to Elasticsearch-oss 7.10.2 and OpenSearch 2.8.0, both exited with an error like the one below:

  • the network docker-data_k6 is auto-created by docker-compose because I'm in a directory named docker-data
 docker run --rm --network docker-data_k6 -e K6_ELASTICSEARCH_URL=http://opensearch:9200 -i xk6-output-elasticsearch run -o output-elasticsearch --tag testid=$(date "+%Y%m%d-%H%M%S")  - <k6-scripts/script.js

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

time="2024-03-22T01:17:07Z" level=info msg="Elasticsearch: configuring output"

Init      [   0% ]
default   [   0% ]
time="2024-03-22T01:17:07Z" level=error msg="could not create the 'output-elasticsearch' output: the client noticed that the server is not Elasticsearch and we do not support this unknown product"

To reproduce my case, I attach a docker-compose.yml file for Opensearch 2.8.0.

version: '3'
services:
  opensearch:
    image: opensearchproject/opensearch:2.8.0
    container_name: opensearch
    environment:
      - bootstrap.memory_lock=true # Disable JVM heap memory swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
      - "discovery.type=single-node"
      - "DISABLE_INSTALL_DEMO_CONFIG=true" # Prevents execution of bundled demo script which installs demo certificates and security configurations to OpenSearch
      - "DISABLE_SECURITY_PLUGIN=true" # Disables Security plugin
    ulimits:
      memlock:
        soft: -1 # Set memlock to unlimited (no soft or hard limit)
        hard: -1
      nofile:
        soft: 65536 # Maximum number of open files for the opensearch user - set to at least 65536
        hard: 65536
    volumes:
      - opensearch-data:/usr/share/opensearch/data
    ports:
      - 19200:9200 # REST API
      - 19600:9600 # Performance Analyzer
    networks:
      - k6
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.8.0
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      - 'OPENSEARCH_HOSTS=["http://opensearch:9200"]'
      - "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true" # disables security dashboards plugin in OpenSearch Dashboards
    networks:
      - k6

volumes:
  opensearch-data:

networks:
  k6:

Allow to add additional custom fields to the uploaded metrics

Currently there is a built-in list of fields that is uploaded to Elastic. But when we run several different tests, it would be useful to mark each test (or group of tests) with a specific key via an additional field.

Could you extend the implementation to allow adding custom fields to the uploaded data, like:

TestGroup=OrdersListPerformance
TestType=RandomRequestsOrder

and

TestGroup=HomepagePerformance
TestType=NoCaching
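A sketch of the requested behavior, assuming the extra fields would arrive as a simple key=value list (for instance via a hypothetical K6_ELASTICSEARCH_EXTRA_FIELDS setting; neither the setting nor the format exists today):

```python
def parse_extra_fields(spec):
    """Parse 'TestGroup=OrdersListPerformance,TestType=RandomRequestsOrder'
    into a dict of custom field names and values."""
    fields = {}
    for pair in filter(None, spec.split(",")):
        key, _, value = pair.partition("=")
        fields[key.strip()] = value.strip()
    return fields

def enrich(doc, extra_fields):
    """Merge the custom fields into every uploaded document."""
    return {**doc, **extra_fields}
```

Until something like this exists, k6's generic --tag flag (e.g. --tag TestGroup=OrdersListPerformance) already attaches such pairs to every sample's tags, which covers the grouping use case described above.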
