prometheus_bigquery_remote_storage_adapter's People

Contributors

avxhi93, danschdatsci, dependabot[bot], gitter-badger, jeremysprofile, jimmyfigiel, jrollinson, karl-nilsson, kt3a, seanmalloy, smiley73, ssttevee, trzejos, vinny-sabatini, wdoogz

prometheus_bigquery_remote_storage_adapter's Issues

Prepare step in deploy job has deprecations

What happens?

When the deploy pipeline runs, there are deprecation warnings

What were you expecting to happen?

No deprecation warnings in the job output

Steps to reproduce:

  • Run the deploy pipeline

Any errors, stacktrace, logs?

Run DOCKER_IMAGE=quay.io/kohlstechnology/prometheus_bigquery_remote_storage_adapter
  DOCKER_IMAGE=quay.io/kohlstechnology/prometheus_bigquery_remote_storage_adapter
  VERSION=${GITHUB_REF#refs/tags/}
  TAGS="${DOCKER_IMAGE}:${VERSION},${DOCKER_IMAGE}:latest"
  echo ::set-output name=tags::${TAGS}
  shell: /usr/bin/bash -e {0}
Warning: The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/

Ref: https://github.com/KohlsTechnology/prometheus_bigquery_remote_storage_adapter/actions/runs/3988232667/jobs/6839622287#step:4:8
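The warning can be removed by writing the output to the GITHUB_OUTPUT environment file instead of using the set-output workflow command. A minimal sketch of the replacement step, reusing the variables from the log above:

DOCKER_IMAGE=quay.io/kohlstechnology/prometheus_bigquery_remote_storage_adapter
VERSION=${GITHUB_REF#refs/tags/}
TAGS="${DOCKER_IMAGE}:${VERSION},${DOCKER_IMAGE}:latest"
# Environment-file replacement for the deprecated ::set-output command
echo "tags=${TAGS}" >> "$GITHUB_OUTPUT"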

Read test case missing

What happens?

We are currently testing writes, but not reads.

What were you expecting to happen?

We test both the write and read cases. The read test must also verify that the time series come back in the proper order. Both test cases can build on each other.

Steps to reproduce:

make test

Automated e2e Test In CI

What happens?

The e2e tests do not run automatically during GitHub Actions CI

What were you expecting to happen?

I would like the e2e tests to run automatically during GitHub Actions CI.

Steps to reproduce:

Open a PR and notice that the CI pipeline does not run the e2e tests.

Any errors, stacktrace, logs?

n/a

Environment:

N/A

Additional comments

The e2e tests can be run during local development on a laptop, but they do not yet run in GitHub Actions CI. See #119.

Make Unit Tests Work in CI

What happens?

When submitting a PR to this repo the unit tests are not automatically run as part of GitHub Actions CI.

What were you expecting to happen?

The unit tests are run for every PR.

Steps to reproduce:

Submit a PR to this repo and view the logs in the GH Actions pipeline. Notice that the unit tests are not run.

Any errors, stacktrace, logs?

N/A

Environment:

  • Runtime version(Java, Go, Python, etc): N/A
  • Desktop OS/version: N/A

Additional comments:

See the below comment in the GH Actions pipeline.

# TODO: unit tests require cloud credentials
#- name: Test
#  run: make test-unit
#- name: Upload coverage to Codecov
#  uses: codecov/codecov-action@v1
#  with:
#    fail_ci_if_error: true
#    files: ./coverage.txt
#    verbose: true

I'm hoping there is a way to split the tests that require GCP BigQuery access from those that do not, but I have not reviewed the actual test code yet. The tests that actually require GCP BigQuery access could then be treated as e2e or acceptance tests (not unit tests).
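One well-known way to make such a split in Go is a build tag on the tests that need real credentials, so that a plain go test run only covers the credential-free unit tests. A minimal sketch, with a hypothetical file and tag name (not taken from this repo):

//go:build e2e

// bigquery_e2e_test.go: tests that need real GCP BigQuery credentials.
// They only run when the tag is supplied: go test -tags=e2e ./...
package main

import "testing"

func TestWriteAndReadBack(t *testing.T) {
    // ...talk to a real BigQuery dataset here...
}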

Recommend use of clustering columns on metricname to speed up queries

Currently the documentation specifies DAY timestamp partitioning for the table schema, but it is missing another optimisation: using BigQuery clustering columns on the metricname field.

Given that queries for Prometheus time series almost always specify the metric name, this is a cheap win for query planning.
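For illustration, a hedged sketch of table DDL with clustering added; everything except the metricname and tags columns mentioned in this issue is an assumption and should be aligned with the schema in the README:

-- Assumed dataset/table and column layout; adjust to the documented schema.
CREATE TABLE IF NOT EXISTS prometheus.metrics (
  metricname STRING NOT NULL,
  tags       STRING,
  value      FLOAT64,
  timestamp  TIMESTAMP NOT NULL
)
PARTITION BY DATE(timestamp)  -- the DAY partitioning already documented
CLUSTER BY metricname;        -- the proposed clustering column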

Add example code for creating dataset and table

What happens?

The documentation currently does not have a code example on how to create and properly configure the dataset and table.

What were you expecting to happen?

The documentation has code examples for both, presented as separate sections to give the user more flexibility.
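As a possible starting point, a hedged sketch using the bq CLI, split into a dataset section and a table section; the project, dataset, table, and column names are placeholders to be aligned with the adapter's documented schema:

# 1. Create the dataset (placeholder project and dataset names)
bq mk --dataset --location=US my-gcp-project:prometheus

# 2. Create the table with DAY partitioning on the timestamp column
#    (column names are assumptions; match them to the adapter's schema)
bq mk --table \
  --time_partitioning_type=DAY \
  --time_partitioning_field=timestamp \
  my-gcp-project:prometheus.metrics \
  metricname:STRING,tags:STRING,value:FLOAT,timestamp:TIMESTAMP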

ARM Images

Thanks again for this project!

What happens?

Image pull back-off when attempting to deploy to an ARM64 Kubernetes cluster

What were you expecting to happen?

The adapter should spin up appropriately.

Steps to reproduce:

  • Create a kind/k8s cluster on an ARM64 machine
  • Install Prometheus Operator
  • Apply manifest in the README

Environment:

Kubernetes v1.27 on Linux ARM64
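For reference, a multi-architecture image can be produced with docker buildx; a minimal sketch, assuming the repository's existing Dockerfile also builds cleanly for arm64 (in this project the images are published by goreleaser, so the real fix would live in that configuration):

# Build and push an amd64 + arm64 manifest from the existing Dockerfile
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t quay.io/kohlstechnology/prometheus_bigquery_remote_storage_adapter:latest \
  --push .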

goreleaser should not tag latest directly

What happens?

goreleaser currently tags latest directly, which makes it show up under every release.

What were you expecting to happen?

Add a separate docker tag command to the release code to move the latest tag, without having it show up in the release notes.

Steps to reproduce:

make release
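One option is to let goreleaser publish only the versioned tag and move latest afterwards with plain docker commands; a sketch of the kind of step that could be added to the release code (the image name comes from the CI logs above, everything else is an assumption):

# Move the latest tag after the versioned image has been published,
# without it appearing in the goreleaser-generated release notes
DOCKER_IMAGE=quay.io/kohlstechnology/prometheus_bigquery_remote_storage_adapter
docker pull "${DOCKER_IMAGE}:${VERSION}"
docker tag  "${DOCKER_IMAGE}:${VERSION}" "${DOCKER_IMAGE}:latest"
docker push "${DOCKER_IMAGE}:latest"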

Add metrics

What happens?

We currently do not expose any metrics through /metrics.

What were you expecting to happen?

A collection of usable metrics is exposed, so that Prometheus can scrape them.
Possible metrics (see the sketch after this list):

  • Number of records fetched
  • Duration of SQL query
  • Duration of processing
  • Number of SQL queries
  • Write speed
  • Error counts
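For illustration, a minimal sketch of exposing a couple of these with the Prometheus Go client; the metric names and listen port are hypothetical, not what the adapter ended up using:

package main

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// Hypothetical metric names covering two of the items listed above.
var (
    recordsFetched = promauto.NewCounter(prometheus.CounterOpts{
        Name: "storage_bigquery_records_fetched_total",
        Help: "Number of records fetched from BigQuery.",
    })
    queryDuration = promauto.NewHistogram(prometheus.HistogramOpts{
        Name: "storage_bigquery_query_duration_seconds",
        Help: "Duration of BigQuery SQL queries.",
    })
)

func main() {
    // Expose all registered metrics for Prometheus to scrape.
    http.Handle("/metrics", promhttp.Handler())
    http.ListenAndServe(":9201", nil) // placeholder port
}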

Add Tracing For Additional Observability

What happens?

Currently the prometheus_bigquery_remote_storage_adapter application only supports logs and prometheus metrics for observability.

What were you expecting to happen?

I would like to be able to use tracing to monitor/observe prometheus_bigquery_remote_storage_adapter when running in a production environment.

Steps to reproduce:

Run prometheus_bigquery_remote_storage_adapter and notice that only logs and prometheus metrics are provided.

Any errors, stacktrace, logs?

N/A

Environment:

  • Runtime version(Java, Go, Python, etc): N/A
  • Desktop OS/version: N/A

Additional comments:

From my perspective it seems like OpenTelemetry would provide a nice vendor neutral option for enabling tracing.
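As a rough illustration of what OpenTelemetry instrumentation could look like around the write path (the tracer and span names are hypothetical, and without a configured TracerProvider these calls are no-ops):

package main

import (
    "context"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/attribute"
)

// writeBatch wraps a placeholder BigQuery write in an OpenTelemetry span.
func writeBatch(ctx context.Context, rows int) error {
    ctx, span := otel.Tracer("bigquery-adapter").Start(ctx, "bigquery.write")
    defer span.End()
    span.SetAttributes(attribute.Int("rows", rows))

    // ...the existing BigQuery insert logic would run here with ctx...
    return nil
}

func main() {
    _ = writeBatch(context.Background(), 100)
}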

Graceful Shutdown

What happens?

The webserver does not terminate gracefully. There is no signal handler to do a graceful shutdown.

What were you expecting to happen?

When the application receives a SIGTERM signal it does a graceful shutdown.
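A minimal sketch of what the signal handling could look like with the standard library (not the adapter's actual server wiring; the listen address is a placeholder):

package main

import (
    "context"
    "net/http"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    srv := &http.Server{Addr: ":9201"} // placeholder listen address

    // ctx is cancelled when SIGTERM or SIGINT is received.
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
    defer stop()

    go srv.ListenAndServe()

    <-ctx.Done() // wait for the termination signal

    // Give in-flight requests a grace period to finish before exiting.
    shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    srv.Shutdown(shutdownCtx)
}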

Steps to reproduce:

Start the application then send it a SIGTERM signal using the kill(1) command.

Any errors, stacktrace, logs?

N/A

Environment:

  • Runtime version(Java, Go, Python, etc): N/A
  • Desktop OS/version: N/A

Additional comments:

None.

Update documentation with performance tuning settings

What happens?

High utilization can cause write errors with the default settings.

What were you expecting to happen?

Provide some guidance on CPU/Memory settings, in addition to sensible queue_config settings.
Provide guidance for the BQ timeout setting.
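For example, the documentation could include a hedged starting point for the Prometheus remote_write queue; the numbers below are purely illustrative and the endpoint is a placeholder:

remote_write:
  - url: http://localhost:9201/write    # adapter endpoint (placeholder)
    remote_timeout: 2m                  # keep aligned with the adapter's BQ timeout setting
    queue_config:
      capacity: 10000                   # illustrative values only; tune per workload
      max_shards: 10
      max_samples_per_send: 1000
      batch_send_deadline: 10s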

Support for bigquery json type

The tags column is stored as a JSON object, so it is natural for it to also be stored as a JSON type in BigQuery.

Currently the remote read functionality uses the JSON_EXTRACT function, which operates on JSON-formatted strings and not JSON columns, leading to query errors on remote read.

It would be good to support the JSON value type as this gives more concise query expressions compared to using JSON_EXTRACT on strings.
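For illustration only (not the adapter's actual query, and with hypothetical dataset/table/label names), the difference in query shape between the two storage options:

-- tags stored as a STRING column holding JSON text (current behaviour):
SELECT value
FROM prometheus.metrics
WHERE JSON_VALUE(tags, '$.instance') = 'node1:9100';

-- tags stored as a native JSON column (proposed): field access becomes possible
SELECT value
FROM prometheus.metrics
WHERE JSON_VALUE(tags.instance) = 'node1:9100';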

Releases only show most recent commit

What happens?

When a release is created, only the most recent commit is shown in the release notes

What were you expecting to happen?

Any commits that are included in that release should be noted in the release notes

Steps to reproduce:

  • Create a release
  • Look at the releases for the project
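One common cause of this with goreleaser is a shallow clone in the release workflow, since the changelog is built from git history. A hedged sketch of the checkout step with full history, assuming actions/checkout is what the workflow uses:

- name: Checkout
  uses: actions/checkout@v3
  with:
    fetch-depth: 0   # full history so release notes can list every commit since the previous tag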

Document Prometheus Metrics For End Users

What happens?

I read the README.md file and do not see any documentation on the Prometheus metrics that are provided by the /metrics HTTP endpoint.

What were you expecting to happen?

The README.md file documents each Prometheus metric name, metric type, and a short description.

Steps to reproduce:

Read the README.md file.

Any errors, stacktrace, logs?

N/A

Environment:

  • Runtime version(Java, Go, Python, etc): N/A
  • Desktop OS/version: N/A

Additional comments:

None.

Automatically Set GOMAXPROCS

What happens?

When running in a Linux container GOMAXPROCS is not automatically set.

What were you expecting to happen?

This project would automatically set GOMAXPROCS when running in a Linux container (i.e. Kubernetes).

Steps to reproduce:

N/A

Any errors, stacktrace, logs?

N/A

Environment:

Kubernetes

Additional comments:

Consider using this library to add this feature: https://github.com/uber-go/automaxprocs
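With that library the change is essentially a blank import; a minimal sketch:

package main

import (
    // Adjusts GOMAXPROCS to the container's CPU quota (cgroups) at startup.
    _ "go.uber.org/automaxprocs"
)

func main() {
    // ...existing startup code...
}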

Add sample k8s configuration

What happens?

A user currently has to figure out how to add this to their own k8s configuration.

What were you expecting to happen?

We provide an example that can simply be copied. This should be a sidecar to the prometheus container itself, to keep cross-node network communication at a minimum.
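A rough sketch of what such an example could look like using the Prometheus Operator's support for extra containers; the port, path, and all adapter configuration are placeholders that would need to follow the README:

# Fragment of a Prometheus (prometheus-operator) resource: the adapter runs as a
# sidecar so remote_write traffic stays on localhost.
spec:
  containers:
    - name: bigquery-remote-storage-adapter
      image: quay.io/kohlstechnology/prometheus_bigquery_remote_storage_adapter:latest
      # BigQuery project/dataset/table and credentials would be configured here
      # (flags/env vars per the README).
  remoteWrite:
    - url: http://localhost:9201/write   # placeholder port and path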

Add remote read support

What happens?

Currently, data can get sent to a remote BigQuery table, but once there, the data cannot be read back out of the table via a Prometheus query.

What were you expecting to happen?

Remote data should be available to Prometheus and any "downstream" analysis tools (Grafana, Alertmanager, etc.) to maximize utility.

Additional comments:

Can reference influxdb example in the official Prometheus repo for an example of "remote read": https://github.com/prometheus/prometheus/blob/master/documentation/examples/remote_storage/remote_storage_adapter/influxdb/client.go#L115

NEQ label matcher does not work

What happens?

LabelMatcher_NEQ for labels (not metric name) matches entries that are equal to the value.

What were you expecting to happen?

LabelMatcher_NEQ for labels (not metric name) should match entries that are NOT equal to the value.
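For reference, a sketch of the kind of matcher-to-SQL mapping involved (not the adapter's actual code); the bug amounts to the NEQ case producing an equality predicate instead of an inequality:

package sketch

import (
    "fmt"

    "github.com/prometheus/prometheus/prompb"
)

// matcherToPredicate translates a Prometheus label matcher into a SQL predicate
// on the JSON tags column. The NEQ case must produce "!=", not "=".
func matcherToPredicate(m *prompb.LabelMatcher) string {
    field := fmt.Sprintf("JSON_EXTRACT_SCALAR(tags, '$.%s')", m.Name)
    switch m.Type {
    case prompb.LabelMatcher_EQ:
        return fmt.Sprintf("%s = '%s'", field, m.Value)
    case prompb.LabelMatcher_NEQ:
        return fmt.Sprintf("%s != '%s'", field, m.Value) // the case this issue is about
    default:
        return "" // regex matchers omitted from this sketch
    }
}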

Set flags for Codecov report

What happens?

When codecov runs, the report shows both unit and e2e tests in the same report

What were you expecting to happen?

I would expect separate coverage reports for the unit tests and for the e2e tests.

Steps to reproduce:

  • Run the codecov action and look at the report in the comments

Additional comments:

Here are the docs for Flags - https://docs.codecov.com/docs/flags
Here are the docs for the codecov-action usage - https://github.com/codecov/codecov-action#usage
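For reference, the codecov-action supports a flags input, so the two uploads could be tagged separately; a sketch with placeholder coverage file names:

- name: Upload unit test coverage
  uses: codecov/codecov-action@v3
  with:
    files: ./coverage.txt        # placeholder path
    flags: unit
- name: Upload e2e test coverage
  uses: codecov/codecov-action@v3
  with:
    files: ./coverage-e2e.txt    # placeholder path
    flags: e2e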

Test action always fails when PR is created from a fork

What happens?

When a pull request is created from a fork, the test action always fails

What were you expecting to happen?

I would expect that the jobs would be able to run successfully

Steps to reproduce:

  • Create a PR from a fork
  • Look at the logs from the "Authenticate to Google Cloud" step

Any errors, stacktrace, logs?

Here are the logs from the step:

Run google-github-actions/auth@v1
  with:
    workload_identity_provider: projects/821427311413/locations/global/workloadIdentityPools/prombq-adaptor/providers/github
    service_account: [email protected]
    create_credentials_file: true
    export_environment_variables: true
    cleanup_credentials: true
    access_token_lifetime: 3600s
    access_token_scopes: https://www.googleapis.com/auth/cloud-platform
    retries: 0
    id_token_include_email: false
  env:
    BQ_DATASET_NAME: github_actions_4186647158_2
    MSYS: winsymlinks:nativestrict
Error: google-github-actions/auth failed with: retry function failed after 1 attempt: gitHub Actions did not inject $ACTIONS_ID_TOKEN_REQUEST_TOKEN or $ACTIONS_ID_TOKEN_REQUEST_URL into this job. This most likely means the GitHub Actions workflow permissions are incorrect, or this job is being run from a fork. For more information, please see https://docs.github.com/en/actions/security-guides/automatic-token-authentication#permissions-for-the-github_token

Additional comments:

#203 is an example PR where this issue happened
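For context, google-github-actions/auth relies on GitHub's OIDC token, which requires the id-token permission; GitHub does not make that token available to pull_request runs from forks, so the step fails there even when the permission is declared. The relevant permission block looks like this:

permissions:
  contents: read
  id-token: write   # needed for Workload Identity auth; not available to PRs from forks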
