weaveworks / prom-aggregation-gateway

An aggregating push gateway for Prometheus

License: Mozilla Public License 2.0

Makefile 4.19% Shell 43.17% Go 17.66% Python 17.49% HCL 13.66% Dockerfile 2.23% Starlark 1.59%

prom-aggregation-gateway's Introduction

🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨

Maintenance of this project has moved. No more contributions or issues are being accepted in this repo. If you would like to send a PR or file a bug, please go here:

https://github.com/zapier/prom-aggregation-gateway

🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨

Prometheus Aggregation Gateway

Prometheus Aggregation Gateway is an aggregating push gateway for Prometheus. As opposed to the official Prometheus Pushgateway, this service aggregates the sample values it receives.

  • Counters where all labels match are added up.
  • Histograms are added up; if bucket boundaries are mismatched then the result has the union of all buckets and counts are given to the lowest bucket that fits.
  • Gauges are also added up (but this may not make any sense).
  • Summaries are discarded.

How to use

Send metrics in the Prometheus text format to /metrics/.

E.g. if you have the program running locally:

echo 'http_requests_total{method="post",code="200"} 1027' | curl --data-binary @- http://localhost/metrics/
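
If you prefer to push from Go without a client library, a minimal sketch that POSTs the same text-format sample (assuming the gateway is listening on localhost, port 80, as in the curl example above) could look like this; pushing the same counter sample twice results in the scraped value being the sum:

package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Same sample as the curl example above, in the Prometheus text format.
	payload := `http_requests_total{method="post",code="200"} 1027` + "\n"

	resp, err := http.Post("http://localhost/metrics/", "text/plain", strings.NewReader(payload))
	if err != nil {
		fmt.Println("push failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("gateway responded:", resp.Status)
}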

Now you can push your metrics using your favorite Prometheus client.

E.g. in Python using prometheus/client_python:

from prometheus_client import CollectorRegistry, Counter, push_to_gateway
registry = CollectorRegistry()
counter = Counter('some_counter', "A counter", registry=registry)
counter.inc()
push_to_gateway('localhost', job='my_job_name', registry=registry)

Then have your Prometheus scrape metrics at /metrics.

Ready-built images

Available on Docker Hub as weaveworks/prom-aggregation-gateway.

Motivation

According to https://prometheus.io/docs/practices/pushing/:

The Pushgateway never forgets series pushed to it and will expose them to Prometheus forever...

The latter point is especially relevant when multiple instances of a job differentiate their metrics in the Pushgateway via an instance label or similar.

This restriction makes the Prometheus Pushgateway inappropriate for the use case of accepting metrics from a client-side web app, so we created this one to aggregate counters from multiple senders.

Prom-aggregation-gateway presents a similar API, but does not attempt to be a drop-in replacement.

JS Client Library

See https://github.com/weaveworks/promjs/ for a JS client library for Prometheus that can be used from within a web app.

Getting Help

If you have any questions about, feedback for, or problems with prom-aggregation-gateway:

Weaveworks follows the CNCF Code of Conduct. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a Weaveworks project maintainer, or Alexis Richardson ([email protected]).

Your feedback is always welcome!

prom-aggregation-gateway's People

Contributors

bboreham, jpellizzari, leth, martinbaillie, monadic, rade, rndstr, sontek, sootysec, tomwilkie


prom-aggregation-gateway's Issues

Metrics never being cleaned up generates Memory and CPU performance issues

Unlike the normal prom gateway, PAG doesn't keep track of all metrics in memory, just the last version of the merged metric. That reduces memory usage and makes it possible to handle heavy metric input loads, like the ones generated by browser-side apps.

Even with that "merge and keep the last value" optimization, since metrics are never cleaned up, PAG will eventually exhaust memory/CPU and blow up, as has happened a couple of times at my company. As we have Cortex keeping track of metrics, PAG getting restarted every now and then is not a huge problem, but before it blows up we see an increase in the number of "bad requests", which makes us lose some good metrics until it restarts.


There's no need to keep the metrics in PAG forever, as they are constantly scraped and stored in Prometheus or Cortex long-lived storage, so we should have a way to detect and remove old metrics from memory.
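
A rough sketch of one possible approach: record the last push time per family and periodically drop stale ones. This is hypothetical; the struct and field names below are illustrative stand-ins, not the ones actually used in main.go.

package main

import (
	"sync"
	"time"

	dto "github.com/prometheus/client_model/go"
)

// aggregate is an illustrative stand-in for the gateway's internal state.
type aggregate struct {
	mtx      sync.Mutex
	families map[string]*dto.MetricFamily
	lastPush map[string]time.Time // would be updated on every successful push
}

// gc drops families that have not been pushed within maxAge.
func (a *aggregate) gc(maxAge time.Duration) {
	a.mtx.Lock()
	defer a.mtx.Unlock()
	for name, t := range a.lastPush {
		if time.Since(t) > maxAge {
			delete(a.families, name)
			delete(a.lastPush, name)
		}
	}
}

func main() {
	a := &aggregate{
		families: map[string]*dto.MetricFamily{},
		lastPush: map[string]time.Time{},
	}
	// Sweep every minute, forgetting families idle for more than 10 minutes.
	go func() {
		for range time.Tick(time.Minute) {
			a.gc(10 * time.Minute)
		}
	}()
	select {} // stand-in for the gateway's real HTTP serving loop
}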

Cleanup needed

This project needs a big cleanup to remove code that is unnecessary for the open-source community:

E.g.:

  • .circleci/
  • aggate-build/
  • tools/
  • Makefile

I can provide a pull request if necessary.

Architecture suggestion for publishing metrics to Prometheus from many ephemeral workers

Hello. We have an execution model which is a typical producer-consumer pattern with a queue/topic in the middle. Currently, the queue holds work of the same type from multiple customers/tenants. The consumers/workers are non-HTTP applications that pop a message from the queue and execute the work. These consumers are Kubernetes pods spun up by a Deployment and configured to autoscale based on the work available on the queue. We would like to track at least two metrics about worker performance and backlog burn-up:

  • number of executions processed for each customer/tenant
  • number of success or failures for each customer/tenant

What is the best way to publish metrics from these ephemeral workers? We were trying to send them through the Prometheus Pushgateway, but its design philosophy means that for us it:
a) can result in metric overwrites from multiple workers/pods if they try to use the same job/group name
b) can result in garbage building up if the job/group name is based on the instance/pod name, as pods come and go over longer periods of time

We could additionally introduce a mini HTTP server in each consumer and expose a scrape endpoint. That is possibly a bit of overkill, but it would work. Please advise.
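
Since this gateway adds up counters from any number of senders, one option is for each worker to POST a text-format counter increment when it finishes a message, and let Prometheus scrape the aggregated totals from the gateway. The sketch below assumes the gateway is reachable in-cluster at http://prom-aggregation-gateway/metrics/; the metric and label names are made up.

package main

import (
	"fmt"
	"net/http"
	"strings"
)

// pushWorkerResult reports one processed message for a tenant by POSTing
// text-format counters to the aggregation gateway, which sums them across
// all worker pods. Metric and label names here are illustrative.
func pushWorkerResult(gatewayURL, tenant, status string) error {
	body := fmt.Sprintf(
		"# TYPE worker_executions_total counter\n"+
			"worker_executions_total{tenant=%q,status=%q} 1\n",
		tenant, status)
	resp, err := http.Post(gatewayURL, "text/plain", strings.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("push failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// For example, after successfully handling a message for tenant "acme":
	if err := pushWorkerResult("http://prom-aggregation-gateway/metrics/", "acme", "success"); err != nil {
		fmt.Println(err)
	}
}

Because the gateway sums matching series, the pods don't need unique job/instance names, which avoids both the overwrite and the garbage build-up problems described above.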

image

MetricFamily has no metrics for type:SUMMARY

When I examine the GET /metrics endpoint, I see this in the output:

An error has occurred during metrics encoding:

MetricFamily has no metrics: name:"usage_summary" help:"Report Usage Summary" type:SUMMARY

I've identified the sender: it is a cronjob that runs every ~24h and sends the summary type, so I see this error only sometimes. If I had to guess, the nil here is the episodic cause/effect. Is this something that could be handled more gracefully with an empty struct?

Or maybe it is possible to suppress the error in the /metrics output?

As it is now, the error effectively denies service to any user of the aggregation gateway's stats, since consumers encounter the invalid format in the endpoint output and fail.
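
One possible mitigation (a sketch against the handler loop shown in the "superfluous response.WriteHeader call" issue further down this page, not the project's actual fix) would be to skip families that currently contain no metrics before encoding, so a single empty SUMMARY family cannot break the whole scrape:

for _, name := range metricNames {
	mf := a.families[name]
	if mf == nil || len(mf.GetMetric()) == 0 {
		// Nothing to expose for this family yet; skip it instead of
		// letting the encoder fail with "MetricFamily has no metrics".
		continue
	}
	if err := enc.Encode(mf); err != nil {
		http.Error(w, "An error has occurred during metrics encoding:\n\n"+err.Error(), http.StatusInternalServerError)
		return
	}
}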

Question: how to communicate securely with the gateway

Sorry, but I can't ask this via Slack, so I hope it is fine to raise it here: how could we communicate securely with the gateway from the client side, to prevent people from sending bogus or intentionally wrong/bad data to the gateway using promjs or anything else, given that it is all client-side?

Proper Gauge support

Right now gauges don't work for gathering metrics. A gauge is implemented as a counter with no real way to create one/reset one.

As of now I don't see how this would work in this aggregation gateway, as gauges should be able to be individually set (i.e. not aggregated). For example, if client1 writes value 5 to a gauge, you don't want client2 overwriting it with 6 when they write their value.

So maybe this is by design, but is there any way to do gauges?

Problem when pushing using golang client - invalid metric name

Version: github.com/prometheus/client_golang v1.5.1

Code:

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/push"
)

func main() {
	pusher := push.New(
		"http://localhost:9091",
		"foo-job",
	).Gatherer(prometheus.DefaultGatherer)

	if err := pusher.Push(); err != nil {
		fmt.Println(err)
	}
}

Error:

unexpected status code 400 while pushing to http://localhost:9091/metrics/job/foo-job: text format parsing error in line 1: invalid metric name

Histogram aggregation (runtime error: invalid memory address or nil pointer dereference)

Hi,

If I push histogram metrics with only the le label, then everything works fine.
Example:
curl -d $'# TYPE test_v1 histogram\ntest_v1_bucket{le="25"} 0\ntest_v1_bucket{le="50"} 2\ntest_v1_bucket{le="80"} 2\ntest_v1_bucket{le="100"} 3\ntest_v1_bucket{le="150"} 3\ntest_v1_bucket{le="200"} 3\ntest_v1_bucket{le="250"} 3\ntest_v1_bucket{le="300"} 3\ntest_v1_bucket{le="400"} 3\ntest_v1_bucket{le="500"} 3\ntest_v1_bucket{le="800"} 3\ntest_v1_bucket{le="+Inf"} 3\ntest_v1_count 3\ntest_v1_sum 152.2200000108569\n' http://localhost/metrics/

I can execute this several times and all the bucket counts are aggregated.

But if, together with le, I have some custom labels in my metrics, then the data is pushed fine the first time, but the second call raises an error:
curl -d $'# TYPE test_v2 histogram\ntest_v2_bucket{le="25",instance="a"} 0\ntest_v2_bucket{le="50",instance="a"} 2\ntest_v2_bucket{le="80",instance="a"} 2\ntest_v2_bucket{le="100",instance="a"} 3\ntest_v2_bucket{le="150",instance="a"} 3\ntest_v2_bucket{le="200",instance="a"} 3\ntest_v2_bucket{le="250",instance="a"} 3\ntest_v2_bucket{le="300",instance="a"} 3\ntest_v2_bucket{le="400",instance="a"} 3\ntest_v2_bucket{le="500",instance="a"} 3\ntest_v2_bucket{le="800",instance="a"} 3\ntest_v2_bucket{le="+Inf",instance="a"} 3\ntest_v2_count 3\ntest_v2_sum 152.2200000108569\n' http://localhost/metrics/

2021/01/21 19:25:49 http: panic serving 172.17.0.1:41728: runtime error: invalid memory address or nil pointer dereference
goroutine 53 [running]:
net/http.(*conn).serve.func1(0xc0001a4780)
	/usr/local/go/src/net/http/server.go:1769 +0x139
panic(0x6e8760, 0x9b96d0)
	/usr/local/go/src/runtime/panic.go:522 +0x1b5
main.mergeMetric(0xc000000004, 0xc0000a7aa0, 0xc0001ee060, 0xc00000e0d0)
	/go/src/github.com/weaveworks/prom-aggregation-gateway/cmd/prom-aggregation-gateway/main.go:106 +0x2d6
main.mergeFamily(0xc0000e46e0, 0xc0000e47d0, 0xc00001bbb8, 0x7, 0xc0001be160)
	/go/src/github.com/weaveworks/prom-aggregation-gateway/cmd/prom-aggregation-gateway/main.go:149 +0x431
main.(*aggate).parseAndMerge(0xc0000d0320, 0x7f286b4ff008, 0xc0000a2d40, 0x0, 0x0)
	/go/src/github.com/weaveworks/prom-aggregation-gateway/cmd/prom-aggregation-gateway/main.go:227 +0x2e0
main.main.func1(0x7b09e0, 0xc0001b8b60, 0xc0001b1000)
	/go/src/github.com/weaveworks/prom-aggregation-gateway/cmd/prom-aggregation-gateway/main.go:271 +0x186
net/http.HandlerFunc.ServeHTTP(0xc0000d0340, 0x7b09e0, 0xc0001b8b60, 0xc0001b1000)
	/usr/local/go/src/net/http/server.go:1995 +0x44
net/http.(*ServeMux).ServeHTTP(0x9c5cc0, 0x7b09e0, 0xc0001b8b60, 0xc0001b1000)
	/usr/local/go/src/net/http/server.go:2375 +0x1d6
net/http.serverHandler.ServeHTTP(0xc0000d2d00, 0x7b09e0, 0xc0001b8b60, 0xc0001b1000)
	/usr/local/go/src/net/http/server.go:2774 +0xa8
net/http.(*conn).serve(0xc0001a4780, 0x7b1420, 0xc0000a2cc0)
	/usr/local/go/src/net/http/server.go:1878 +0x851
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2884 +0x2f4

Handle metrics deletion

It would be nice to handle metrics deletion, to avoid restarting the service in order to "forget" old metrics.

BTW, the documentation in the Motivation section is misleading: it seems to imply that weaveworks/prom-aggregation-gateway can forget series whereas prometheus/pushgateway can't.
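
For what it's worth, a deletion endpoint could look roughly like the sketch below. This is a hypothetical illustration, not a patch: the gateway does not currently implement it, and the struct and field names are made up.

package main

import (
	"net/http"
	"strings"
	"sync"

	dto "github.com/prometheus/client_model/go"
)

// aggregate is an illustrative stand-in for the gateway's internal state;
// the real struct in main.go is shaped differently.
type aggregate struct {
	mtx      sync.Mutex
	families map[string]*dto.MetricFamily
}

// deleteHandler handles DELETE /metrics/{family}: it drops one aggregated
// family from memory so it disappears from the next scrape.
func (a *aggregate) deleteHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodDelete {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	name := strings.TrimPrefix(r.URL.Path, "/metrics/")
	a.mtx.Lock()
	defer a.mtx.Unlock()
	if _, ok := a.families[name]; !ok {
		http.Error(w, "unknown metric family", http.StatusNotFound)
		return
	}
	delete(a.families, name)
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	a := &aggregate{families: map[string]*dto.MetricFamily{}}
	http.HandleFunc("/metrics/", a.deleteHandler)
	_ = http.ListenAndServe(":80", nil)
}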

Using Go Modules

Are you interested in using Go modules for the build?
I can provide a pull request for that.

Unresponsive after second POST with metric labels

After a second POST request for a metric with labels, the gateway becomes unresponsive.

Reproduction steps:

  1. Send a POST to the gateway with a metric with labels:
curl -v -X POST \
-d '
# HELP ui_page_render_errors Number of times a page fails to render
# TYPE ui_page_render_errors counter
ui_page_render_errors{path="/org/:orgId"} 1
' \
"http://localhost:9000/api/ui/metrics"
  2. You should get a 200 OK
  3. Do the same request again
  4. The gateway does not respond to this request or further requests

Writing metrics without any labels seems to work fine.

Profiling suggests it is hung in a main.lablesLessThan function:

(pprof) top
23000ms of 23070ms total (99.70%)
Dropped 1 node (cum <= 115.35ms)
Showing top 10 nodes out of 16 (cum >= 22360ms)
      flat  flat%   sum%        cum   cum%
   10780ms 46.73% 46.73%    22360ms 96.92%  main.lablesLessThan
    8820ms 38.23% 84.96%     8820ms 38.23%  runtime.memeqbody
    2760ms 11.96% 96.92%     2760ms 11.96%  runtime.eqstring
     640ms  2.77% 99.70%      640ms  2.77%  runtime.kevent
         0     0% 99.70%    22360ms 96.92%  main.(*aggate).parseAndMerge
         0     0% 99.70%    22360ms 96.92%  main.main.func1
         0     0% 99.70%    22360ms 96.92%  main.mergeFamily
         0     0% 99.70%    22360ms 96.92%  net/http.(*ServeMux).ServeHTTP
         0     0% 99.70%    22360ms 96.92%  net/http.(*conn).serve
         0     0% 99.70%    22360ms 96.92%  net/http.HandlerFunc.ServeHTTP
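
If the comparison does not define a strict total order over label sets, sort/merge loops over them can spin forever. For illustration only (this is not the gateway's actual code or its eventual fix), a comparison that does impose a total order might look like:

package main

import "fmt"

// labelPair is a stand-in for the name/value pairs in a parsed metric.
type labelPair struct{ name, value string }

// labelsLessThan reports whether label set a sorts strictly before b,
// comparing label name, then value, and falling back to length so that
// one set being a prefix of the other still orders consistently.
func labelsLessThan(a, b []labelPair) bool {
	for i := 0; i < len(a) && i < len(b); i++ {
		if a[i].name != b[i].name {
			return a[i].name < b[i].name
		}
		if a[i].value != b[i].value {
			return a[i].value < b[i].value
		}
	}
	return len(a) < len(b)
}

func main() {
	x := []labelPair{{"path", "/org/:orgId"}}
	y := []labelPair{{"path", "/org/:orgId"}, {"status", "200"}}
	fmt.Println(labelsLessThan(x, y), labelsLessThan(y, x)) // true false
}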

Docker Scan reporting vulnerabilities

Docker Scan is reporting security vulnerabilities due to the version of alpine being deployed.

❯ docker scan weaveworks/prom-aggregation-gateway:master-c4415bbe

Testing weaveworks/prom-aggregation-gateway:master-c4415bbe...

✗ Low severity vulnerability found in openssl/libcrypto1.1
  Description: Inadequate Encryption Strength
  Info: https://snyk.io/vuln/SNYK-ALPINE310-OPENSSL-1075742
  Introduced through: openssl/[email protected], openssl/[email protected], apk-tools/[email protected], libtls-standalone/[email protected]
  From: openssl/[email protected]
  From: openssl/[email protected] > openssl/[email protected]
  From: apk-tools/[email protected] > openssl/[email protected]
  and 4 more...
  Fixed in: 1.1.1j-r0

✗ Medium severity vulnerability found in openssl/libcrypto1.1
  Description: NULL Pointer Dereference
  Info: https://snyk.io/vuln/SNYK-ALPINE310-OPENSSL-1051928
  Introduced through: openssl/[email protected], openssl/[email protected], apk-tools/[email protected], libtls-standalone/[email protected]
  From: openssl/[email protected]
  From: openssl/[email protected] > openssl/[email protected]
  From: apk-tools/[email protected] > openssl/[email protected]
  and 4 more...
  Fixed in: 1.1.1i-r0

✗ Medium severity vulnerability found in openssl/libcrypto1.1
  Description: NULL Pointer Dereference
  Info: https://snyk.io/vuln/SNYK-ALPINE310-OPENSSL-1075740
  Introduced through: openssl/[email protected], openssl/[email protected], apk-tools/[email protected], libtls-standalone/[email protected]
  From: openssl/[email protected]
  From: openssl/[email protected] > openssl/[email protected]
  From: apk-tools/[email protected] > openssl/[email protected]
  and 4 more...
  Fixed in: 1.1.1j-r0

✗ Medium severity vulnerability found in openssl/libcrypto1.1
  Description: NULL Pointer Dereference
  Info: https://snyk.io/vuln/SNYK-ALPINE310-OPENSSL-1089243
  Introduced through: openssl/[email protected], openssl/[email protected], apk-tools/[email protected], libtls-standalone/[email protected]
  From: openssl/[email protected]
  From: openssl/[email protected] > openssl/[email protected]
  From: apk-tools/[email protected] > openssl/[email protected]
  and 4 more...
  Fixed in: 1.1.1k-r0

✗ Medium severity vulnerability found in musl/musl
  Description: Out-of-bounds Write
  Info: https://snyk.io/vuln/SNYK-ALPINE310-MUSL-1042764
  Introduced through: musl/[email protected], busybox/[email protected], alpine-baselayout/[email protected], openssl/[email protected], openssl/[email protected], zlib/[email protected], apk-tools/[email protected], libtls-standalone/[email protected], busybox/[email protected], musl/[email protected], pax-utils/[email protected], libc-dev/[email protected]
  From: musl/[email protected]
  From: busybox/[email protected] > musl/[email protected]
  From: alpine-baselayout/[email protected] > musl/[email protected]
  and 10 more...
  Fixed in: 1.1.22-r4

✗ High severity vulnerability found in openssl/libcrypto1.1
  Description: Integer Overflow or Wraparound
  Info: https://snyk.io/vuln/SNYK-ALPINE310-OPENSSL-1075741
  Introduced through: openssl/[email protected], openssl/[email protected], apk-tools/[email protected], libtls-standalone/[email protected]
  From: openssl/[email protected]
  From: openssl/[email protected] > openssl/[email protected]
  From: apk-tools/[email protected] > openssl/[email protected]
  and 4 more...
  Fixed in: 1.1.1j-r0

✗ High severity vulnerability found in openssl/libcrypto1.1
  Description: Improper Certificate Validation
  Info: https://snyk.io/vuln/SNYK-ALPINE310-OPENSSL-1089244
  Introduced through: openssl/[email protected], openssl/[email protected], apk-tools/[email protected], libtls-standalone/[email protected]
  From: openssl/[email protected]
  From: openssl/[email protected] > openssl/[email protected]
  From: apk-tools/[email protected] > openssl/[email protected]
  and 4 more...
  Fixed in: 1.1.1k-r0

✗ High severity vulnerability found in busybox/busybox
  Description: Improper Handling of Exceptional Conditions
  Info: https://snyk.io/vuln/SNYK-ALPINE310-BUSYBOX-1090151
  Introduced through: busybox/[email protected], alpine-baselayout/[email protected], busybox/[email protected]
  From: busybox/[email protected]
  From: alpine-baselayout/[email protected] > busybox/[email protected]
  From: busybox/[email protected]
  Fixed in: 1.30.1-r5

✗ High severity vulnerability found in apk-tools/apk-tools
  Description: Out-of-bounds Read
  Info: https://snyk.io/vuln/SNYK-ALPINE310-APKTOOLS-1246341
  Introduced through: apk-tools/[email protected]
  From: apk-tools/[email protected]
  Fixed in: 2.10.6-r0

✗ Critical severity vulnerability found in apk-tools/apk-tools
  Description: Out-of-bounds Read
  Info: https://snyk.io/vuln/SNYK-ALPINE310-APKTOOLS-1534688
  Introduced through: apk-tools/[email protected]
  From: apk-tools/[email protected]
  Fixed in: 2.10.7-r0

[question] Gateway aggregates data based on what?

Prometheus Aggregation Gateway is a aggregating push gateway for Prometheus. As opposed to the official Prometheus Pushgateway, this service aggregates the sample values it receives.

This is great, but how is the data aggregated? In what dimensions, what slices, what buckets?

What is the default, and is there a way to configure the aggregation in a custom way?

Also, are there any Docker images and instructions on how to use them?

prom-aggregation-gateway

Hi,
I am using PromJS in my Angular application to generate metrics, and I am not sure how to push those to the Prometheus gateway. Can I use this library for pushing the metrics?
How do I use this library in Angular? I can't run "npm install prom-aggregation-gateway"; it throws an error:
npm ERR! code E404
npm ERR! 404 Not Found: prom-aggregation-gateway@latest

No response body for prom pushgateway

I used the default prom pushgateway before moving to prom-aggregation-gateway, with the https://github.com/siimon/prom-client lib for pushing metrics to the pushgateway. The response body that came back in the callback was not empty.
[screenshot: 2020-03-17 16:03:33]

After I moved to prom-aggregation-gateway, everything works OK, but the response body is '' (an empty string). How can this be fixed?

Zapier has an actively maintained fork

Hello! I was wondering what should be done because there is a modernized/active fork of this over at:
https://github.com/zapier/prom-aggregation-gateway

Some of the updated features:

  • Updated docker images and published them to https://ghcr.io/zapier/prom-aggregation-gateway
  • Helm charts are published to https://zapier.github.io/prom-aggregation-gateway/
  • Rebuilt the whole CI/CD pipeline using an Earthfile so the entire system can be run locally and matches what is run in GitHub Actions
  • Improved overall testing which includes skaffold for running the helm charts in a kubernetes cluster
  • Added job_label logic so metrics can pass a job_label parameter through the API, preventing metric clobbering from different service_jobs with the same metric_names (added support for labels as part of the URL, i.e. /metrics/job///)
  • Refactored the web framework to use Gin, gaining the ability to expose metrics to Prometheus plus CORS and BasicAuth support
  • Added service-level metrics for tracking the operational side of prom-agg-gateway, which can be used for defining SLOs, alerts, etc.
  • Created a private endpoint for liveness, readiness, and internal metrics to go along with the public endpoint

Should this repo point people to it from the README? Or should we try to get the zapier updates merged back in?

Lots of duplicate metric messages

Cortex logs these all the time, coming from the ui-server pods.
It always seems the new value is a very low figure, e.g. 0.

example:

ts=2018-01-16T19:51:15.813722857Z caller=log.go:108 level=error org_id=1 msg="push error" err="rpc error: code = Code(400) desc = sample with repeated timestamp but different value for series ui_request_seconds_count_bucket{instance=\"ui-metrics-1640188189-jk2tg\", job=\"monitoring/ui-metrics\", le=\"5000\", monitor=\"prod\", node=\"ip-172-20-1-230.ec2.internal\", path=\"api/flux/v4/ping\"}; last value: 1, incoming value: 0"
ts=2018-01-16T19:51:16.042612072Z caller=log.go:108 level=error org_id=2 msg="push error" err="rpc error: code = Code(400) desc = sample with repeated timestamp but different value for series ui_request_seconds_count_sum{instance=\"ui-metrics-1362350778-9lg9z\", job=\"monitoring/ui-metrics\", monitor=\"dev\", node=\"ip-172-20-1-176.ec2.internal\", path=\"api/flux/v6/history\"}; last value: 5948, incoming value: 0"

go client doesn't work

trying the example code from go client doc: https://pkg.go.dev/github.com/prometheus/client_golang/prometheus/push?utm_source=godoc#Pusher.Push

It works with the normal Pushgateway, but if I point the URL at prom-aggregation-gateway, I get this error:

Could not push to Pushgateway: unexpected status code 400 while pushing to http://localhost:8080/metrics/job/ping_test: text format parsing error in line 1: expected float as value, got ""

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/push"
)

func main() {
	completionTime := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "db_backup_last_completion_timestamp_seconds",
		Help: "The timestamp of the last successful completion of a DB backup.",
	})
	completionTime.SetToCurrentTime()
	if err := push.New("http://pushgateway:9091", "ping_test").
		Collector(completionTime).
		Grouping("db", "customers").
		Push(); err != nil {
		fmt.Println("Could not push completion time to Pushgateway:", err)
	}
}
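
A possible workaround (untested against this gateway, so treat it as an assumption) is to bypass the push client's format negotiation and POST the gathered families as plain text, which is the same thing the curl examples in the README do:

package main

import (
	"bytes"
	"fmt"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/common/expfmt"
)

func main() {
	completionTime := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "db_backup_last_completion_timestamp_seconds",
		Help: "The timestamp of the last successful completion of a DB backup.",
	})
	reg := prometheus.NewRegistry()
	reg.MustRegister(completionTime)
	completionTime.SetToCurrentTime()

	// Encode the gathered families in the text exposition format.
	// Note: newer prometheus/common releases deprecate FmtText in favour of
	// a constructor; adjust to whatever your version provides.
	mfs, err := reg.Gather()
	if err != nil {
		fmt.Println("gather failed:", err)
		return
	}
	var buf bytes.Buffer
	enc := expfmt.NewEncoder(&buf, expfmt.FmtText)
	for _, mf := range mfs {
		if err := enc.Encode(mf); err != nil {
			fmt.Println("encode failed:", err)
			return
		}
	}

	// POST the text payload to the aggregation gateway's push endpoint.
	resp, err := http.Post("http://localhost:8080/metrics/", "text/plain", &buf)
	if err != nil {
		fmt.Println("push failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("gateway responded:", resp.Status)
}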

Issue:http: superfluous response.WriteHeader call

Hi,

I found what seems to be an error in the following code in main.go:

func (a *aggate) handler(w http.ResponseWriter, r *http.Request) {
	contentType := expfmt.Negotiate(r.Header)
	w.Header().Set("Content-Type", string(contentType))
	enc := expfmt.NewEncoder(w, contentType)

       ......
	for _, name := range metricNames {
		if err := enc.Encode(a.families[name]); err != nil {
			http.Error(w, "An error has occurred during metrics encoding:\n\n"+err.Error(), http.StatusInternalServerError)
			return
		}
	}

	// TODO reset gauges
}

You can see that metricNames is iterated one by one, and the first successful enc.Encode(a.families[name]) call writes the response header. However, if an error occurs before the program has gone through all the metricNames items, err will not be nil and the program will execute http.Error(...), which calls WriteHeader again. That produces the following issue:

http: superfluous response.WriteHeader call

The real error never gets printed.
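
One way to avoid the second WriteHeader (a sketch adapted from the excerpt above, not the project's actual fix; it additionally needs "bytes" imported) is to encode into a buffer first, so headers and status are written only once the whole response is known to be good:

func (a *aggate) handler(w http.ResponseWriter, r *http.Request) {
	contentType := expfmt.Negotiate(r.Header)

	// Encode into a buffer instead of writing straight to w, so that an
	// encoding error can still produce a clean 500 via http.Error.
	var buf bytes.Buffer
	enc := expfmt.NewEncoder(&buf, contentType)

       ......
	for _, name := range metricNames {
		if err := enc.Encode(a.families[name]); err != nil {
			http.Error(w, "An error has occurred during metrics encoding:\n\n"+err.Error(), http.StatusInternalServerError)
			return
		}
	}

	w.Header().Set("Content-Type", string(contentType))
	w.Write(buf.Bytes())

	// TODO reset gauges
}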
