
prometheus-proxy's Introduction

Prometheus Proxy


Prometheus is an excellent systems monitoring and alerting toolkit, which uses a pull model for collecting metrics. The pull model is problematic when a firewall separates a Prometheus server and its metrics endpoints. Prometheus Proxy enables Prometheus to reach metrics endpoints running behind a firewall and preserves the pull model.

The prometheus-proxy runtime comprises 2 services:

  • proxy: runs in the same network domain as the Prometheus server (outside the firewall) and proxies calls from Prometheus to the agents behind the firewall.
  • agent: runs in the same network domain as all the monitored hosts/services/apps (inside the firewall). It maps the scraping queries coming from the proxy to the actual /metrics scraping endpoints of the hosts/services/apps.

Here's a simplified network diagram of how the deployed proxy and agent work:

network diagram

Endpoints running behind a firewall require a prometheus-agent (the agent) to be run inside the firewall. An agent can run as a stand-alone server, embedded in another Java server, or as a Java agent. Agents connect to a prometheus-proxy (the proxy) and register the paths for which they will provide data. One proxy can work with one or many agents.

Requirements

Requires Java 11 or newer.

CLI Usage

Download the proxy and agent uber-jars from here.

Start a proxy with:

java -jar prometheus-proxy.jar

Start an agent with:

java -jar prometheus-agent.jar -Dagent.proxy.hostname=mymachine.local --config https://raw.githubusercontent.com/pambrose/prometheus-proxy/master/examples/myapps.conf

If the prometheus-proxy were running on a machine named mymachine.local and the agent.pathConfigs value in the myapps.conf config file had the contents:

agent {
  pathConfigs: [
    {
      name: "App1 metrics"
      path: app1_metrics
      url: "http://app1.local:9100/metrics"
    },
    {
      name: "App2 metrics"
      path: app2_metrics
      url: "http://app2.local:9100/metrics"
    },
    {
      name: "App3 metrics"
      path: app3_metrics
      url: "http://app3.local:9100/metrics"
    }
  ]
}

then the prometheus.yml scrape_config would target the three apps with:

  • http://mymachine.local:8080/app1_metrics
  • http://mymachine.local:8080/app2_metrics
  • http://mymachine.local:8080/app3_metrics

If the endpoints were restricted with basic-auth or bearer-token authentication, you could either include the basic-auth credentials in the URL (http://user:pass@hostname/metrics) or configure them with basic_auth/bearer_token in the scrape config.

The prometheus.yml file would include:

scrape_configs:
  - job_name: 'app1 metrics'
    metrics_path: '/app1_metrics'
    bearer_token: 'eyJ....hH9rloA'
    static_configs:
      - targets: [ 'mymachine.local:8080' ]
  - job_name: 'app2 metrics'
    metrics_path: '/app2_metrics'
    basic_auth:
        username: 'user'
        password: 's3cr3t'
    static_configs:
      - targets: [ 'mymachine.local:8080' ]
  - job_name: 'app3 metrics'
    metrics_path: '/app3_metrics'
    static_configs:
      - targets: [ 'mymachine.local:8080' ]
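As a quick manual check (assuming the proxy is reachable at mymachine.local:8080 and an agent has registered app1_metrics, as in the example above), you could fetch a proxied path directly:

curl http://mymachine.local:8080/app1_metrics

If the agent is connected and can reach http://app1.local:9100/metrics, the proxy returns that endpoint's metrics; otherwise it responds with an error status such as a 503.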

Docker Usage

The docker images are available via:

docker pull pambrose/prometheus-proxy:1.22.0
docker pull pambrose/prometheus-agent:1.22.0

Start a proxy container with:

docker run --rm -p 8082:8082 -p 8092:8092 -p 50051:50051 -p 8080:8080 \
        --env ADMIN_ENABLED=true \
        --env METRICS_ENABLED=true \
        pambrose/prometheus-proxy:1.22.0

Start an agent container with:

docker run --rm -p 8083:8083 -p 8093:8093 \
        --env AGENT_CONFIG='https://raw.githubusercontent.com/pambrose/prometheus-proxy/master/examples/simple.conf' \
        pambrose/prometheus-agent:1.22.0

Using the config file simple.conf, the proxy and the agent metrics would be available from the proxy on localhost at the paths registered in that config.
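Since the containers above were started with METRICS_ENABLED=true and their metrics ports published, you could also sanity-check the proxy's and agent's own metrics endpoints directly (default ports and paths per the option tables below):

curl http://localhost:8082/metrics   # proxy metrics
curl http://localhost:8083/metrics   # agent metrics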

If you want to use a local config file with a docker container (instead of the above HTTP-served config file), use the docker mount option. Assuming the config file prom-agent.conf is in your current directory, run an agent container with:

docker run --rm -p 8083:8083 -p 8093:8093 \
    --mount type=bind,source="$(pwd)"/prom-agent.conf,target=/app/prom-agent.conf \
    --env AGENT_CONFIG=prom-agent.conf \
    pambrose/prometheus-agent:1.22.0

Note: The WORKDIR of the proxy and agent images is /app, so make sure to use /app as the base directory in the target for --mount options.

Embedded Agent Usage

If you are running a JVM-based program, you can run with the agent embedded directly in your app and not use an external agent:

EmbeddedAgentInfo agentInfo = startAsyncAgent("configFile.conf", true);

Configuration

The proxy and agent use the Typesafe Config library for configuration. Highlights include:

  • support for files in three formats: Java properties, JSON, and a human-friendly JSON superset (HOCON)
  • config files can be local files or URLs
  • config values can come from CLI options, environment vars, Java system properties, and/or config files
  • config files can reference environment variables

All the proxy and agent properties are described here. The only required argument is an agent config value, which should have an agent.pathConfigs value.
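For example, a config file can supply a default and let an environment variable override it using standard HOCON substitution (a minimal sketch; the PROXY_HOST variable name is purely illustrative):

agent {
  proxy.hostname: "localhost"
  # If PROXY_HOST is set in the environment, it overrides the value above
  proxy.hostname: ${?PROXY_HOST}
  pathConfigs: [
    {
      name: "App1 metrics"
      path: app1_metrics
      url: "http://app1.local:9100/metrics"
    }
  ]
}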

Proxy CLI Options

Options | ENV VAR / Property | Default | Description
------- | ------------------ | ------- | -----------
--config, -c | PROXY_CONFIG | | Agent config file or url
--port, -p | PROXY_PORT / proxy.http.port | 8080 | Proxy listen port
--agent_port, -a | AGENT_PORT / proxy.agent.port | 50051 | gRPC listen port for agents
--admin, -r | ADMIN_ENABLED / proxy.admin.enabled | false | Enable admin servlets
--admin_port, -i | ADMIN_PORT / proxy.admin.port | 8092 | Admin servlets port
--debug, -b | DEBUG_ENABLED / proxy.admin.debugEnabled | false | Enable proxy debug servlet on admin port
--metrics, -e | METRICS_ENABLED / proxy.metrics.enabled | false | Enable proxy metrics
--metrics_port, -m | METRICS_PORT / proxy.metrics.port | 8082 | Proxy metrics listen port
--sd_enabled | SD_ENABLED / proxy.service.discovery.enabled | false | Service discovery endpoint enabled
--sd_path | SD_PATH / proxy.service.discovery.path | "discovery" | Service discovery endpoint path
--sd_target_prefix | SD_TARGET_PREFIX / proxy.service.discovery.targetPrefix | "http://localhost:8080/" | Service discovery target prefix
--tf-disabled | TRANSPORT_FILTER_DISABLED / proxy.transportFilterDisabled | false | Transport filter disabled
--cert, -t | CERT_CHAIN_FILE_PATH / proxy.tls.certChainFilePath | | Certificate chain file path
--key, -k | PRIVATE_KEY_FILE_PATH / proxy.tls.privateKeyFilePath | | Private key file path
--trust, -s | TRUST_CERT_COLLECTION_FILE_PATH / proxy.tls.trustCertCollectionFilePath | | Trust certificate collection file path
--version, -v | | | Print version info and exit
--usage, -u | | | Print usage message and exit
-D | | | Dynamic property assignment

Agent CLI Options

Options | ENV VAR / Property | Default | Description
------- | ------------------ | ------- | -----------
--config, -c | AGENT_CONFIG | | Agent config file or url (required)
--proxy, -p | PROXY_HOSTNAME / agent.proxy.hostname | | Proxy hostname (can include :port)
--name, -n | AGENT_NAME / agent.name | | Agent name
--admin, -r | ADMIN_ENABLED / agent.admin.enabled | false | Enable admin servlets
--admin_port, -i | ADMIN_PORT / agent.admin.port | 8093 | Admin servlets port
--debug, -b | DEBUG_ENABLED / agent.admin.debugEnabled | false | Enable agent debug servlet on admin port
--metrics, -e | METRICS_ENABLED / agent.metrics.enabled | false | Enable agent metrics
--metrics_port, -m | METRICS_PORT / agent.metrics.port | 8083 | Agent metrics listen port
--consolidated, -o | CONSOLIDATED / agent.consolidated | false | Enable multiple agents per registered path
--timeout | SCRAPE_TIMEOUT_SECS / agent.scrapeTimeoutSecs | 15 | Scrape timeout (seconds)
--max_retries | SCRAPE_MAX_RETRIES / agent.scrapeMaxRetries | 0 | Scrape maximum retries (0 disables scrape retries)
--chunk | CHUNK_CONTENT_SIZE_KBS / agent.chunkContentSizeKbs | 32 | Threshold for chunking data to the proxy and buffer size (KBs)
--gzip | MIN_GZIP_SIZE_BYTES / agent.minGzipSizeBytes | 1024 | Minimum size for content to be gzipped (bytes)
--tf-disabled | TRANSPORT_FILTER_DISABLED / agent.transportFilterDisabled | false | Transport filter disabled
--trust_all_x509 | TRUST_ALL_X509_CERTIFICATES / agent.http.enableTrustAllX509Certificates | false | Disable SSL verification for agent https endpoints
--cert, -t | CERT_CHAIN_FILE_PATH / agent.tls.certChainFilePath | | Certificate chain file path
--key, -k | PRIVATE_KEY_FILE_PATH / agent.tls.privateKeyFilePath | | Private key file path
--trust, -s | TRUST_CERT_COLLECTION_FILE_PATH / agent.tls.trustCertCollectionFilePath | | Trust certificate collection file path
--override | OVERRIDE_AUTHORITY / agent.tls.overrideAuthority | | Override authority (for testing)
--version, -v | | | Print version info and exit
--usage, -u | | | Print usage message and exit
-D | | | Dynamic property assignment

Misc notes:

  • If you want to customize the logging, include the java arg -Dlogback.configurationFile=/path/to/logback.xml
  • JSON config files must have a .json suffix
  • Java Properties config files must have a .properties or .prop suffix
  • HOCON config files must have a .conf suffix
  • Option values are evaluated in the order: CLI options, environment vars, and finally config file values
  • Property values can be set as a java -D arg or as a proxy/agent jar -D arg (see the example below)
  • For more information about the proxy service discovery options, see the Prometheus documentation
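For example, enabling agent metrics via a property could look like either of the following (a hypothetical invocation; myapps.conf stands in for your own config file):

java -Dagent.metrics.enabled=true -jar prometheus-agent.jar --config myapps.conf
java -jar prometheus-agent.jar -Dagent.metrics.enabled=true --config myapps.conf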

Admin Servlets

These admin servlets are available when the admin servlets are enabled:

  • /ping
  • /threaddump
  • /healthcheck
  • /version

The admin servlets can be enabled with the ADMIN_ENABLED environment var, the --admin CLI option, or with the proxy.admin.enabled and agent.admin.enabled properties.

The debug servlet can be enabled with the DEBUG_ENABLED environment var, the --debug CLI option, or with the proxy.admin.debugEnabled and agent.admin.debugEnabled properties. The debug servlet requires that the admin servlets are enabled. It is available at /debug on the admin port.

Descriptions of the servlets are here. The path names can be changed in the configuration file. To disable an admin servlet, assign its property path to "".
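For example, with the proxy's admin servlets enabled and the admin port published (as in the Docker command above), a quick liveness check might be:

curl http://localhost:8092/ping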

Adding TLS to Agent-Proxy Connections

Agents connect to a proxy using gRPC. gRPC supports TLS with or without mutual authentication. The necessary certificate and key file paths can be specified via CLI args, environment variables and configuration file settings.

The gRPC docs describe how to set up TLS. The repo includes the certificates and keys necessary to test TLS support.

Running TLS without mutual authentication requires these settings:

  • certChainFilePath and privateKeyFilePath on the proxy
  • trustCertCollectionFilePath on the agent

Running TLS with mutual authentication requires these settings:

  • certChainFilePath, privateKeyFilePath and trustCertCollectionFilePath on the proxy
  • certChainFilePath, privateKeyFilePath and trustCertCollectionFilePath on the agent
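As an illustration, the no-mutual-auth case maps onto the config properties like this (a minimal sketch with made-up file names; see examples/tls-no-mutual-auth.conf in the repo for a working example):

proxy {
  tls {
    certChainFilePath: "testing/certs/server.pem"
    privateKeyFilePath: "testing/certs/server.key"
  }
}

agent {
  tls {
    trustCertCollectionFilePath: "testing/certs/ca.pem"
  }
}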

Running with TLS

Run a proxy and an agent with TLS (no mutual auth) using the included testing certs and keys with:

java -jar prometheus-proxy.jar --config examples/tls-no-mutual-auth.conf
java -jar prometheus-agent.jar --config examples/tls-no-mutual-auth.conf

Run a proxy and an agent docker container with TLS (no mutual auth) using the included testing certs and keys with:

docker run --rm -p 8082:8082 -p 8092:8092 -p 50440:50440 -p 8080:8080 \
    --mount type=bind,source="$(pwd)"/testing/certs,target=/app/testing/certs \
    --mount type=bind,source="$(pwd)"/examples/tls-no-mutual-auth.conf,target=/app/tls-no-mutual-auth.conf \
    --env PROXY_CONFIG=tls-no-mutual-auth.conf \
    --env ADMIN_ENABLED=true \
    --env METRICS_ENABLED=true \
    pambrose/prometheus-proxy:1.22.0

docker run --rm -p 8083:8083 -p 8093:8093 \
    --mount type=bind,source="$(pwd)"/testing/certs,target=/app/testing/certs \
    --mount type=bind,source="$(pwd)"/examples/tls-no-mutual-auth.conf,target=/app/tls-no-mutual-auth.conf \
    --env AGENT_CONFIG=tls-no-mutual-auth.conf \
    --env PROXY_HOSTNAME=mymachine.lan:50440 \
    --name docker-agent \
    pambrose/prometheus-agent:1.22.0

Note: The WORKDIR of the proxy and agent images is /app, so make sure to use /app as the base directory in the target for --mount options.

Scraping HTTPS Endpoints

Disable SSL verification for agent https endpoints with the TRUST_ALL_X509_CERTIFICATES environment var, the --trust_all_x509 CLI option, or the agent.http.enableTrustAllX509Certificates property.
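For example, either of the following should have the same effect (a sketch with a hypothetical nodes.conf; exact flag syntax is per the Agent CLI Options table above):

java -jar prometheus-agent.jar --trust_all_x509 --config nodes.conf
java -jar prometheus-agent.jar -Dagent.http.enableTrustAllX509Certificates=true --config nodes.conf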

Scraping a Prometheus Instance

It's possible to scrape an existing Prometheus server using the /federate endpoint. This enables using the existing service discovery features already built into Prometheus.

An example config can be found in federate.conf.
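A pathConfig for federation might look roughly like the following (a hypothetical sketch, not the contents of federate.conf; note that Prometheus's /federate endpoint requires at least one match[] selector):

agent {
  pathConfigs: [
    {
      name: "Federated Prometheus"
      path: federated_metrics
      url: "http://prometheus.local:9090/federate?match[]={job=\"node\"}"
    }
  ]
}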

Nginx Support

To use the prometheus-proxy with nginx as a reverse proxy, disable the transport filter with the TRANSPORT_FILTER_DISABLED environment var, the --tf-disabled CLI option, or the agent.transportFilterDisabled/proxy.transportFilterDisabled properties. Agents and the proxy must run with the same transportFilterDisabled value.

When using transportFilterDisabled, you will not see agent contexts immediately removed from the proxy when agents are terminated. Instead, agent contexts are removed from the proxy after they age out from inactivity. The maximum age is controlled by the proxy.internal.maxAgentInactivitySecs value, which defaults to 1 minute.

An example nginx conf file is here and an example agent/proxy conf file is here.
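As a rough idea of the shape of such a setup (a hypothetical sketch, not the repo's example file), nginx can reverse-proxy the agent-to-proxy gRPC traffic with grpc_pass:

server {
    # agents connect here instead of directly to the proxy's gRPC port
    listen 50051 http2;
    location / {
        grpc_pass grpc://prometheus-proxy-host:50051;
    }
}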

Grafana

Grafana dashboards for the proxy and agent are here.

Related Links

prometheus-proxy's People

Contributors

elektro-wolle, mejlholm, pambrose, rakhbari, vincedom


prometheus-proxy's Issues

log4j vulnerability / CVE-2021-44228

Hi, not an expert in Java, so I'm wondering - does the inclusion of a single log4j class' bytecode expose this project to this newly discovered critical vulnerability?

To me, it doesn't seem like it does, feel free to close the issue in that case.
If it does, could you advise on how to mitigate it? Thanks in advance.

$ jar tf prometheus-proxy.jar | grep log4j
ch/qos/logback/classic/log4j/
ch/qos/logback/classic/log4j/XMLLayout.class

$ jar tf prometheus-agent.jar | grep log4j
ch/qos/logback/classic/log4j/
ch/qos/logback/classic/log4j/XMLLayout.class

Missing ScrapeRequestWrapper, timeout after 5 seconds

Missing ScrapeRequestWrapper ERROR, encountered when the source metrics exceeds 5 seconds.

Agent Side

Agent Config

{
  name: aaa
  path: aaa_metrics
  url: "http://aaa.local:1111/metrics"
},

Direct access

time curl http://aaa.local:1111/metrics
real 0m5.689s
user 0m0.003s
sys 0m0.003s

Proxy Side

prometheus.yml

- job_name: 'aaa'
  scrape_interval: 2m
  metrics_path: '/aaa_metrics'
  scrape_timeout: 90s
  static_configs:
    - targets: ['proxy.local:8080']

Direct Access

time curl http://proxy.local:8080/aaa_metrics
< HTTP/1.1 503 Service Unavailable
< Date: Wed, 29 Apr 2020 01:42:45 GMT
< Server: Application/debug ktor/debug
< Cache-Control: must-revalidate,no-cache,no-store
< cache-control: must-revalidate,no-cache,no-store
< Content-Length: 0
< Content-Type: text/plain; charset=UTF-8
<

real 0m5.016s
user 0m0.003s
sys 0m0.007s

ERROR

01:26:04.142 ERROR [ScrapeRequestManager.kt:46] - Missing ScrapeRequestWrapper for scrape_id: 53 [grpc-default-executor-4]
01:28:04.292 ERROR [ScrapeRequestManager.kt:46] - Missing ScrapeRequestWrapper for scrape_id: 56 [grpc-default-executor-4]
01:30:04.236 ERROR [ScrapeRequestManager.kt:46] - Missing ScrapeRequestWrapper for scrape_id: 59 [grpc-default-executor-4]
01:32:04.084 ERROR [ScrapeRequestManager.kt:46] - Missing ScrapeRequestWrapper for scrape_id: 62 [grpc-default-executor-4]
01:32:57.767 ERROR [ScrapeRequestManager.kt:46] - Missing ScrapeRequestWrapper for scrape_id: 65 [grpc-default-executor-4]

Prometheus proxy uses a lot of memory

I am running a lot of deployments with the same architecture, with a prometheus that scrapes remote systems through a prometheus proxy.
The thing that is strange is the amount of memory that prometheus proxy uses. Here's an example:

prometheus-k8s-0                      15m          693Mi
prometheus-proxy-69945d644-w4wzm      17m          2995Mi

Why would the proxy use much more memory than even prometheus?
Could this be a memory leak?

question about java version

Hi,
Is there any requirement on the Java version?
It works when I run it with Docker, but it fails when run with the command: java -jar prometheus-agent.jar --config nodes.conf.
It produces this error:
06:04:46.631 INFO [BaseOptions.kt:169] - trustCertCollectionFilePath: [main]
06:04:46.632 INFO [AgentOptions.kt:121] - agent.internal.cioTimeoutSecs: 90.0s [main]
06:04:46.632 INFO [AgentOptions.kt:122] - agent.scrapeTimeoutSecs: 300s [main]
Exception in thread "main" java.lang.NoClassDefFoundError: java/net/http/HttpConnectTimeoutException
	at io.prometheus.Agent.<init>(Agent.kt:69)
	at io.prometheus.Agent.<init>(Agent.kt:57)
	at io.prometheus.Agent$Companion.startSyncAgent(Agent.kt:278)
	at io.prometheus.Agent$Companion.main(Agent.kt:269)
	at io.prometheus.Agent.main(Agent.kt)
Caused by: java.lang.ClassNotFoundException: java.net.http.HttpConnectTimeoutException
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 5 more
[root@fuel prometheus-proxy]#

endpoints are based on https

Hi Team
I see that there is a failure when endpoints are using https; how can we resolve this?

[AgentHttpService.kt:172] - fetchScrapeUrl() sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target - https://Endpoint-host1:8082/metrics [DefaultDispatcher-worker-2]
sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
at java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
at java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297)
at java.base/sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:434)
at java.base/sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:306)
at java.base/sun.security.validator.Validator.validate(Validator.java:264)
at java.base/sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:313)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:233)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:110)
at io.ktor.network.tls.TLSClientHandshake.handleCertificatesAndKeys(TLSClientHandshake.kt:233)
at io.ktor.network.tls.TLSClientHandshake.negotiate(TLSClientHandshake.kt:164)
at io.ktor.network.tls.TLSClientHandshake$negotiate$1.invokeSuspend(TLSClientHandshake.kt)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)

Agent cannot reconnect to proxy due to invalid agentId

I am running into an issue where, after some time, the proxy agent disconnects from the proxy and cannot reconnect.

This is what I get as logs from the agent container:

09:00:24.203 INFO  [Agent.kt:194] - Waited 3.00s to reconnect [Agent Unnamed-cf3b11648a1e]
09:00:24.203 INFO  [AgentGrpcService.kt:122] - Creating gRPC stubs [Agent Unnamed-cf3b11648a1e]
09:00:24.203 INFO  [GrpcDsl.kt:48] - Creating connection for gRPC server at syneto-scrappy-metrics.my.syneto.eu:50051 using plaintext [Agent Unnamed-cf3b11648a1e]
09:00:24.204 INFO  [Agent.kt:133] - Resetting agentId [Agent Unnamed-cf3b11648a1e]
09:00:24.204 INFO  [AgentGrpcService.kt:147] - Connecting to proxy at syneto-scrappy-metrics.my.syneto.eu:50051 using plaintext... [Agent Unnamed-cf3b11648a1e]
09:00:24.241 INFO  [AgentClientInterceptor.kt:51] - Assigned agentId: 5 to Agent{agentId=5, agentName=Unnamed-cf3b11648a1e, proxyHost=syneto-scrappy-metrics.my.syneto.eu:50051, adminService=AdminService{port=8093, paths=[/ping, /version, /healthcheck, /threaddump]}, metricsService=MetricsService{port=8083, path=/metrics}} [grpc-default-executor-377]
09:00:24.251 INFO  [AgentGrpcService.kt:149] - Connected to proxy at syneto-scrappy-metrics.my.syneto.eu:50051 using plaintext [Agent Unnamed-cf3b11648a1e]
09:00:24.278 INFO  [Agent.kt:181] - Disconnected from proxy at syneto-scrappy-metrics.my.syneto.eu:50051 after invalid response registerAgent() - Invalid agentId: 5 [Agent Unnamed-cf3b11648a1e]

Restarting the agent several times does not fix the issue, and the log is the same.

However, after I restart the prometheus proxy, the agent reconnects properly:

09:01:18.204 INFO  [AgentGrpcService.kt:147] - Connecting to proxy at syneto-scrappy-metrics.my.syneto.eu:50051 using plaintext... [Agent Unnamed-cf3b11648a1e]
09:01:19.385 INFO  [AgentClientInterceptor.kt:51] - Assigned agentId: 1 to Agent{agentId=1, agentName=Unnamed-cf3b11648a1e, proxyHost=syneto-scrappy-metrics.my.syneto.eu:50051, adminService=AdminService{port=8093, paths=[/ping, /version, /healthcheck, /threaddump]}, metricsService=MetricsService{port=8083, path=/metrics}} [grpc-default-executor-377]
09:01:19.651 INFO  [AgentGrpcService.kt:149] - Connected to proxy at syneto-scrappy-metrics.my.syneto.eu:50051 using plaintext [Agent Unnamed-cf3b11648a1e]
09:01:19.821 INFO  [AgentPathManager.kt:67] - Registered http://172.16.254.2:9272/metrics as /esxi-g6fn04700be2 [Agent Unnamed-cf3b11648a1e]
09:01:19.859 INFO  [AgentPathManager.kt:67] - Registered http://172.16.254.2:9100/metrics as /node-g6fn04700be2 [Agent Unnamed-cf3b11648a1e]
09:01:19.873 INFO  [Agent.kt:221] - Heartbeat scheduled to fire after 5.00s of inactivity [DefaultDispatcher-worker-2]

All configurations are vanilla, and I could not find anything that shows where the problem might be.

I tried to reproduce the problem by restarting both the agent and the proxy multiple times, but the issue does not show immediately after proxy restart. The only way to reproduce it is to wait (did not check exactly how long, but less than a day). Once the issue shows up, the only way to reconnect the agent is by restarting the server.

WebSockets Support

While creating a POC of using the proxy, I have run into a couple of platform issues with supporting gRPC. Have you considered WebSockets as an additional transport mechanism?

Would you be open to having it added? How much effort do you think it would be to add?

Prometheus-proxy not working for me

I have set up an agent and a proxy, but the agent is not able to communicate with the proxy. Between the agent and the proxy, the firewall is open for port 8080 (verified by running nginx on port 8080). Any help in correcting the configuration files would be very much appreciated.

agent:
16:15:00.512 INFO [Agent.kt:171] - Waited 3.00s to reconnect [Agent Unnamed-monitoring]
16:15:00.512 INFO [AgentGrpcService.kt:142] - Connecting to proxy at :8080 using plaintext... [Agent Unnamed-monitoring]
16:15:00.513 INFO [AgentGrpcService.kt:149] - Cannot connect to proxy at :8080 using plaintext - StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason [Agent Unnamed-monitoring]

Proxy:
16:13:56.243 INFO [Proxy.kt:219] - Version: 1.6.3 Release Date: 12/21/19 [main]
16:13:56.480 INFO [GrpcDsl.kt:84] - Listening for gRPC traffic on port 50051 using plaintext [main]
16:13:56.712 INFO [GenericService.kt:183] - Adding service ProxyGrpcService{serverType=Netty, port=50051} [main]
16:13:56.712 INFO [GenericService.kt:183] - Adding service ProxyHttpService{port=8080} [main]
16:13:56.747 INFO [Log.java:169] - Logging initialized @1163ms to org.eclipse.jetty.util.log.Slf4jLog [main]
16:13:56.842 INFO [GenericService.kt:183] - Adding service AdminService{port=8092, paths=[/ping, /version, /healthcheck, /threaddump]} [main]
16:13:56.843 INFO [GenericService.kt:103] - Enabling Dropwizard metrics [main]
16:13:56.846 INFO [GenericService.kt:106] - Enabling JMX metrics [main]
16:13:56.849 INFO [GenericService.kt:183] - Adding service MetricsService{port=8082, path=/metrics} [main]
16:13:57.006 INFO [GenericService.kt:126] - Zipkin reporter service disabled [main]
16:13:57.009 INFO [GenericService.kt:183] - Adding service Proxy{proxyPort=8080, adminService=AdminService{port=8092, paths=[/ping, /version, /healthcheck, /threaddump]}, metricsService=MetricsService{port=8082, path=/metrics}} [main]
16:13:57.102 INFO [GenericServiceListener.kt:31] - Starting Proxy{proxyPort=8080, adminService=AdminService{port=8092, paths=[/ping, /version, /healthcheck, /threaddump]}, metricsService=MetricsService{port=8082, path=/metrics}} [main]
16:13:57.105 INFO [GenericServiceListener.kt:31] - Starting MetricsService{port=8082, path=/metrics} [Proxy]
16:13:57.211 INFO [GenericServiceListener.kt:31] - Starting AdminService{port=8092, paths=[/ping, /version, /healthcheck, /threaddump]} [Proxy]
16:13:57.214 INFO [GenericServiceListener.kt:32] - Running MetricsService{port=8082, path=/metrics} [MetricsService STARTING]
16:13:57.373 INFO [GenericServiceListener.kt:32] - Running AdminService{port=8092, paths=[/ping, /version, /healthcheck, /threaddump]} [AdminService STARTING]
16:13:57.379 INFO [GenericServiceListener.kt:31] - Starting ProxyGrpcService{serverType=Netty, port=50051} [Proxy]
16:13:57.484 INFO [GenericServiceListener.kt:31] - Starting ProxyHttpService{port=8080} [Proxy]
16:13:57.483 INFO [GenericServiceListener.kt:32] - Running ProxyGrpcService{serverType=Netty, port=50051} [ProxyGrpcService STARTING]
16:13:57.496 INFO [GenericService.kt:183] - Adding service AgentContextCleanupService{max inactivity secs=15, pause secs=10} [Proxy]
16:13:57.494 INFO [GenericServiceListener.kt:32] - Running ProxyHttpService{port=8080} [ProxyHttpService STARTING]
16:13:57.497 INFO [GenericServiceListener.kt:31] - Starting AgentContextCleanupService{max inactivity secs=15, pause secs=10} [Proxy]
16:13:57.498 INFO [GenericServiceListener.kt:32] - Running Proxy{proxyPort=8080, adminService=AdminService{port=8092, paths=[/ping, /version, /healthcheck, /threaddump]}, metricsService=MetricsService{port=8082, path=/metrics}} [Proxy]
16:13:57.499 INFO [GenericService.kt:138] - All Proxy services healthy [Proxy]
16:13:57.498 INFO [GenericServiceListener.kt:32] - Running AgentContextCleanupService{max inactivity secs=15, pause secs=10} [AgentContextCleanupService]

agent command line : java -jar prometheus-agent.jar -Dagent.proxy.hostname=168.61.46.213:8080 --config /opt/promproxy/apps.conf

proxy docker:
docker run -d --name promproxy -p 8082:8082 -p 8092:8092 -p 50051:50051 -p 8080:8080 \
    --env ADMIN_ENABLED=true \
    --env METRICS_ENABLED=true \
    pambrose/prometheus-agent:1.6.3

apps.conf
agent {
  pathConfigs: [
    {
      name: sandboxvm
      path: sandboxvm_metrics
      url: "http://localhost:9100/metrics"
    },
    {
      name: app2
      path: app2_metrics
      url: "http://app2.local:9100/metrics"
    },
    {
      name: app3
      path: app3_metrics
      url: "http://app3.local:9100/metrics"
    }
  ]
}

Unable to handle metrics data larger than grpc default max message size

I have many docker containers running on a host, and the /metrics endpoint exposed by cAdvisor is over 4 MB (the default max message size in gRPC). I would like to be able to configure the max message size on the proxy. (I'm guessing around here: https://github.com/pambrose/prometheus-proxy/blob/master/src/main/kotlin/io/prometheus/proxy/ProxyGrpcService.kt#L76)

Related issues: grpc/grpc-java#3996

Error on the agent

prometheus-agent_1  | 15:40:21.667 ERROR [AgentGrpcService.kt:249] - Error in writeResponsesToProxyUntilDisconnected(): CANCELLED HTTP/2 error code: CANCEL

Stacktrace on the proxy

prometheus-proxy_1  | WARNING: Exception processing message
prometheus-proxy_1  | io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: gRPC message exceeds maximum size 4194304: 4678937
prometheus-proxy_1  | 	at io.grpc.Status.asRuntimeException(Status.java:524)
prometheus-proxy_1  | 	at io.grpc.internal.MessageDeframer.processHeader(MessageDeframer.java:387)
prometheus-proxy_1  | 	at io.grpc.internal.MessageDeframer.deliver(MessageDeframer.java:267)
prometheus-proxy_1  | 	at io.grpc.internal.MessageDeframer.deframe(MessageDeframer.java:177)
prometheus-proxy_1  | 	at io.grpc.internal.AbstractStream$TransportState.deframe(AbstractStream.java:193)
prometheus-proxy_1  | 	at io.grpc.internal.AbstractServerStream$TransportState.inboundDataReceived(AbstractServerStream.java:266)
prometheus-proxy_1  | 	at io.grpc.netty.NettyServerStream$TransportState.inboundDataReceived(NettyServerStream.java:252)
prometheus-proxy_1  | 	at io.grpc.netty.NettyServerHandler.onDataRead(NettyServerHandler.java:478)
prometheus-proxy_1  | 	at io.grpc.netty.NettyServerHandler.access$800(NettyServerHandler.java:101)
prometheus-proxy_1  | 	at io.grpc.netty.NettyServerHandler$FrameListener.onDataRead(NettyServerHandler.java:787)
prometheus-proxy_1  | 	at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:292)
prometheus-proxy_1  | 	at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:48)
prometheus-proxy_1  | 	at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:422)
...

prometheus-agent fails to connect to the proxy

Hi!
I am opening this issue because I am having trouble getting the prometheus agent and proxy to work together.
Use case:
Installing the Prometheus agent and proxy on OpenShift 4.11, which is something I have been able to do.

Proxy Deployment

kind: Deployment
apiVersion: apps/v1
metadata:
  name: prometheus-proxy
  namespace: prometheus-proxy
  labels:
    app: prometheus-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-proxy
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: prometheus-proxy
    spec:
      containers:
        - name: prometheus-proxy
          image: 'pambrose/prometheus-proxy:1.15.0'
          ports:
            - name: proxy-port
              containerPort: 8080
              protocol: TCP
            - name: agent-port
              containerPort: 50051
              protocol: TCP
            - name: admin-port
              containerPort: 8092
              protocol: TCP
            - name: metrics-port
              containerPort: 8082
              protocol: TCP
          resources: {}

This is working perfectly

Prometheus Agent Deployment

kind: Deployment
apiVersion: apps/v1
metadata:
  resourceVersion: '36073133'
  name: prometheus-agent
  namespace: prometheus-proxy
  labels:
    app: prometheus-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-agent
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: prometheus-agent
    spec:
      containers:
        - name: prometheus-agent
          image: 'pambrose/prometheus-agent:1.15.0'
          ports:
            - name: metrics-port
              containerPort: 8083
              protocol: TCP
            - name: admin-port
              containerPort: 8093
              protocol: TCP
          env:
            - name: PROXY_HOSTNAME
              value: '172.30.213.173:50051'
            - name: ADMIN_ENABLED
              value: 'true'
            - name: METRICS_ENABLED
              value: 'true'
            - name: DEBUG_ENABLED
              value: 'true'
            - name: AGENT_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: conf
                  key: agent.conf
          resources: {}

My issue is that the agent is crashing repeatedly. When I looked at it and tried to understand what was going on, it seemed like the agent was not able to connect to the proxy. I tried to test the connection using netcat from the agent ("nc -vz 172.30.213.173 50051"), but the agent could not connect to the proxy. On the other hand, I tried the same thing from another pod in another namespace and was able to connect to the proxy.

I have also tried to deploy both of them in the same pod, so that the proxy hostname would simply be localhost. Even then the netcat command failed, i.e. the agent was not able to connect to the proxy.

This the agent config file

agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http://172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http://172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http://172.16.1.22:9537/metrics"
    }
  ]
}

Prometheus Agent logs

14:13:56.588 INFO  [Agent.kt:285] - 

     $$$$$$$\                                                $$\     $$\
     $$  __$$\                                               $$ |    $$ |
     $$ |  $$ | $$$$$$\   $$$$$$\  $$$$$$\$$$$\   $$$$$$\  $$$$$$\   $$$$$$$\   $$$$$$\  $$\   $$\  $$$$$$$\
     $$$$$$$  |$$  __$$\ $$  __$$\ $$  _$$  _$$\ $$  __$$\ \_$$  _|  $$  __$$\ $$  __$$\ $$ |  $$ |$$  _____|
     $$  ____/ $$ |  \__|$$ /  $$ |$$ / $$ / $$ |$$$$$$$$ |  $$ |    $$ |  $$ |$$$$$$$$ |$$ |  $$ |\$$$$$$\
     $$ |      $$ |      $$ |  $$ |$$ | $$ | $$ |$$   ____|  $$ |$$\ $$ |  $$ |$$   ____|$$ |  $$ | \____$$\
     $$ |      $$ |      \$$$$$$  |$$ | $$ | $$ |\$$$$$$$\   \$$$$  |$$ |  $$ |\$$$$$$$\ \$$$$$$  |$$$$$$$  |
     \__|      \__|       \______/ \__| \__| \__| \_______|   \____/ \__|  \__| \_______| \______/ \_______/
     
     
     
                                    $$$$$$\                                  $$\
                                   $$  __$$\                                 $$ |
                                   $$ /  $$ | $$$$$$\   $$$$$$\  $$$$$$$\  $$$$$$\
                                   $$$$$$$$ |$$  __$$\ $$  __$$\ $$  __$$\ \_$$  _|
                                   $$  __$$ |$$ /  $$ |$$$$$$$$ |$$ |  $$ |  $$ |
                                   $$ |  $$ |$$ |  $$ |$$   ____|$$ |  $$ |  $$ |$$\
                                   $$ |  $$ |\$$$$$$$ |\$$$$$$$\ $$ |  $$ |  \$$$$  |
                                   \__|  \__| \____$$ | \_______|\__|  \__|   \____/
                                             $$\   $$ |
                                             \$$$$$$  |
                                              \______/

 [main]
14:13:56.791 INFO  [Agent.kt:286] - Version: 1.15.0 Release Date: 12/14/22 [main]
14:13:56.844 ERROR [BaseOptions.kt:264] - Exception: IO - agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
: agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.conf: java.io.FileNotFoundException: agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.conf (No such file or directory), agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.json: java.io.FileNotFoundException: agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.json (No such file or directory), agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.properties: java.io.FileNotFoundException: agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.properties (No such file or directory) [main]
com.typesafe.config.ConfigException$IO: agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
: agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.conf: java.io.FileNotFoundException: agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.conf (No such file or directory), agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.json: java.io.FileNotFoundException: agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.json (No such file or directory), agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.properties: java.io.FileNotFoundException: agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.properties (No such file or directory)
	at com.typesafe.config.impl.SimpleIncluder.fromBasename(SimpleIncluder.java:236)
	at com.typesafe.config.impl.ConfigImpl.parseFileAnySyntax(ConfigImpl.java:138)
	at com.typesafe.config.ConfigFactory.parseFileAnySyntax(ConfigFactory.java:845)
	at io.prometheus.common.BaseOptions.readConfig(BaseOptions.kt:259)
	at io.prometheus.common.BaseOptions.readConfig(BaseOptions.kt:192)
	at io.prometheus.common.BaseOptions.parseOptions(BaseOptions.kt:131)
	at io.prometheus.agent.AgentOptions.<init>(AgentOptions.kt:74)
	at io.prometheus.Agent$Companion.startSyncAgent(Agent.kt:288)
	at io.prometheus.Agent$Companion.main(Agent.kt:279)
	at io.prometheus.Agent.main(Agent.kt)
Caused by: com.typesafe.config.ConfigException$IO: agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.conf: java.io.FileNotFoundException: agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.conf (No such file or directory)
	at com.typesafe.config.impl.Parseable.parseValue(Parseable.java:190)
	at com.typesafe.config.impl.Parseable.parseValue(Parseable.java:174)
	at com.typesafe.config.impl.Parseable.parse(Parseable.java:152)
	at com.typesafe.config.impl.SimpleIncluder.fromBasename(SimpleIncluder.java:185)
	... 9 common frames omitted
Caused by: java.io.FileNotFoundException: agent {
  pathConfigs: [
    {
      name: "Kubelet1 metrics"
      path: kubelet1_metrics
      url: "http:/172.16.1.20:9537/metrics"
    },
    {
      name: "Kubelet2 metrics"
      path: kubelet2_metrics
      url: "http:/172.16.1.21:9537/metrics"
    },
    {
      name: "Kubelet metrics"
      path: kubelet3_metrics
      url: "http:/172.16.1.22:9537/metrics"
    }
  ]
}
.conf (No such file or directory)
	at java.base/java.io.FileInputStream.open0(Native Method)
	at java.base/java.io.FileInputStream.open(FileInputStream.java:216)
	at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
	at com.typesafe.config.impl.Parseable$ParseableFile.reader(Parseable.java:629)
	at com.typesafe.config.impl.Parseable.reader(Parseable.java:99)
	at com.typesafe.config.impl.Parseable.rawParseValue(Parseable.java:233)
	at com.typesafe.config.impl.Parseable.parseValue(Parseable.java:180)
	... 12 common frames omitted

Prometheus Proxy Log

12:51:49.237 INFO  [Proxy.kt:225] - 

     $$$$$$$\                                                $$\     $$\
     $$  __$$\                                               $$ |    $$ |
     $$ |  $$ | $$$$$$\   $$$$$$\  $$$$$$\$$$$\   $$$$$$\  $$$$$$\   $$$$$$$\   $$$$$$\  $$\   $$\  $$$$$$$\
     $$$$$$$  |$$  __$$\ $$  __$$\ $$  _$$  _$$\ $$  __$$\ \_$$  _|  $$  __$$\ $$  __$$\ $$ |  $$ |$$  _____|
     $$  ____/ $$ |  \__|$$ /  $$ |$$ / $$ / $$ |$$$$$$$$ |  $$ |    $$ |  $$ |$$$$$$$$ |$$ |  $$ |\$$$$$$\
     $$ |      $$ |      $$ |  $$ |$$ | $$ | $$ |$$   ____|  $$ |$$\ $$ |  $$ |$$   ____|$$ |  $$ | \____$$\
     $$ |      $$ |      \$$$$$$  |$$ | $$ | $$ |\$$$$$$$\   \$$$$  |$$ |  $$ |\$$$$$$$\ \$$$$$$  |$$$$$$$  |
     \__|      \__|       \______/ \__| \__| \__| \_______|   \____/ \__|  \__| \_______| \______/ \_______/
     
     
     
                                   $$$$$$$\
                                   $$  __$$\
                                   $$ |  $$ | $$$$$$\   $$$$$$\  $$\   $$\ $$\   $$\
                                   $$$$$$$  |$$  __$$\ $$  __$$\ \$$\ $$  |$$ |  $$ |
                                   $$  ____/ $$ |  \__|$$ /  $$ | \$$$$  / $$ |  $$ |
                                   $$ |      $$ |      $$ |  $$ | $$  $$<  $$ |  $$ |
                                   $$ |      $$ |      \$$$$$$  |$$  /\$$\ \$$$$$$$ |
                                   \__|      \__|       \______/ \__/  \__| \____$$ |
                                                                           $$\   $$ |
                                                                           \$$$$$$  |
                                                                            \______/

 [main]
12:51:49.461 INFO  [Proxy.kt:226] - Version: 1.15.0 Release Date: 12/14/22 [main]
12:51:49.519 INFO  [ProxyOptions.kt:57] - proxyHttpPort: 8080 [main]
12:51:49.519 INFO  [ProxyOptions.kt:61] - proxyAgentPort: 50051 [main]
12:51:49.520 INFO  [ProxyOptions.kt:68] - sdEnabled: false [main]
12:51:49.520 INFO  [ProxyOptions.kt:75] - sdPath: discovery [main]
12:51:49.520 INFO  [ProxyOptions.kt:82] - sdTargetPrefix: http://localhost:8080/ [main]
12:51:49.521 INFO  [BaseOptions.kt:139] - adminEnabled: false [main]
12:51:49.521 INFO  [BaseOptions.kt:145] - adminPort: 8092 [main]
12:51:49.521 INFO  [BaseOptions.kt:151] - metricsEnabled: false [main]
12:51:49.522 INFO  [BaseOptions.kt:163] - metricsPort: 8082 [main]
12:51:49.522 INFO  [BaseOptions.kt:169] - transportFilterDisabled: false [main]
12:51:49.523 INFO  [BaseOptions.kt:157] - debugEnabled: false [main]
12:51:49.523 INFO  [BaseOptions.kt:175] - certChainFilePath:  [main]
12:51:49.523 INFO  [BaseOptions.kt:181] - privateKeyFilePath:  [main]
12:51:49.524 INFO  [BaseOptions.kt:187] - trustCertCollectionFilePath:  [main]
12:51:49.524 INFO  [ProxyOptions.kt:95] - proxy.internal.scrapeRequestTimeoutSecs: 90 [main]
12:51:49.524 INFO  [ProxyOptions.kt:96] - proxy.internal.staleAgentCheckPauseSecs: 10 [main]
12:51:49.525 INFO  [ProxyOptions.kt:97] - proxy.internal.maxAgentInactivitySecs: 60 [main]
12:51:49.610 INFO  [GrpcDsl.kt:90] - Listening for gRPC traffic on port 50051 using plaintext [main]
12:51:49.729 INFO  [GenericService.kt:186] - Adding service ProxyGrpcService{serverType=Netty, port=50051} [main]
12:51:49.730 INFO  [GenericService.kt:186] - Adding service ProxyHttpService{port=8080} [main]
12:51:49.730 INFO  [GenericService.kt:119] - Metrics service disabled [main]
12:51:49.730 INFO  [GenericService.kt:127] - Zipkin reporter service disabled [main]
12:51:49.731 INFO  [GenericService.kt:186] - Adding service Proxy{proxyPort=8080, adminService=Disabled, metricsService=Disabled} [main]
12:51:49.747 INFO  [GenericServiceListener.kt:29] - Starting Proxy{proxyPort=8080, adminService=Disabled, metricsService=Disabled} [main]
12:51:49.751 INFO  [GenericServiceListener.kt:29] - Starting ProxyGrpcService{serverType=Netty, port=50051} [Proxy]
12:51:49.832 INFO  [GenericServiceListener.kt:30] - Running ProxyGrpcService{serverType=Netty, port=50051} [ProxyGrpcService STARTING]
12:51:49.832 INFO  [GenericServiceListener.kt:29] - Starting ProxyHttpService{port=8080} [Proxy]
12:51:49.841 INFO  [ApplicationEngineEnvironmentReloading.kt:160] - Autoreload is disabled because the development mode is off. [DefaultDispatcher-worker-1]
12:51:49.885 INFO  [ProxyHttpConfig.kt:130] - Not adding /discovery service discovery endpoint [DefaultDispatcher-worker-1]
12:51:49.890 INFO  [BaseApplicationEngine.kt:64] - Application started in 0.328 seconds. [DefaultDispatcher-worker-1]
12:51:49.890 INFO  [CallLogging.kt:30] - Application started: io.ktor.server.application.Application@454058a5 [DefaultDispatcher-worker-1]
12:51:49.913 INFO  [BaseApplicationEngine.kt:76] - Responding at http://0.0.0.0:8080 [DefaultDispatcher-worker-2]
12:51:49.913 INFO  [GenericServiceListener.kt:30] - Running ProxyHttpService{port=8080} [ProxyHttpService STARTING]
12:51:49.915 INFO  [GenericService.kt:186] - Adding service AgentContextCleanupService{max inactivity secs=60, pause secs=10} [Proxy]
12:51:49.915 INFO  [GenericServiceListener.kt:29] - Starting AgentContextCleanupService{max inactivity secs=60, pause secs=10} [Proxy]
12:51:49.916 INFO  [GenericServiceListener.kt:30] - Running AgentContextCleanupService{max inactivity secs=60, pause secs=10} [Proxy]
12:51:49.916 INFO  [GenericServiceListener.kt:30] - Running Proxy{proxyPort=8080, adminService=Disabled, metricsService=Disabled} [Proxy]
12:51:49.916 INFO  [GenericService.kt:139] - All Proxy services healthy [Proxy]
13:13:44.739 WARN  [DefaultPromise.java:581] - An exception was thrown by io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete() [grpc-nio-worker-ELG-3-1]
java.lang.NullPointerException: Parameter specified as non-null is null: method io.prometheus.proxy.ProxyServerTransportFilter.transportTerminated, parameter attributes
	at io.prometheus.proxy.ProxyServerTransportFilter.transportTerminated(ProxyServerTransportFilter.kt)
	at io.grpc.internal.ServerImpl$ServerTransportListenerImpl.transportTerminated(ServerImpl.java:455)
	at io.grpc.netty.NettyServerTransport.notifyTerminated(NettyServerTransport.java:207)
	at io.grpc.netty.NettyServerTransport.access$100(NettyServerTransport.java:51)
	at io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete(NettyServerTransport.java:141)
	at io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete(NettyServerTransport.java:134)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
	at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)
	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
	at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1164)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:755)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:731)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.handleWriteError(AbstractChannel.java:950)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:933)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:354)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372)
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750)
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:742)
	at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:728)
	at io.grpc.netty.AbstractNettyHandler.sendInitialConnectionWindow(AbstractNettyHandler.java:114)
	at io.grpc.netty.AbstractNettyHandler.handlerAdded(AbstractNettyHandler.java:78)
	at io.grpc.netty.NettyServerHandler.handlerAdded(NettyServerHandler.java:381)
	at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:938)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609)
	at io.netty.channel.DefaultChannelPipeline.replace(DefaultChannelPipeline.java:572)
	at io.netty.channel.DefaultChannelPipeline.replace(DefaultChannelPipeline.java:515)
	at io.grpc.netty.ProtocolNegotiators$GrpcNegotiationHandler.userEventTriggered(ProtocolNegotiators.java:919)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:332)
	at io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:324)
	at io.grpc.netty.ProtocolNegotiators$ProtocolNegotiationHandler.fireProtocolNegotiationEvent(ProtocolNegotiators.java:1090)
	at io.grpc.netty.ProtocolNegotiators$WaitUntilActiveHandler.protocolNegotiationEventTriggered(ProtocolNegotiators.java:1005)
	at io.grpc.netty.ProtocolNegotiators$ProtocolNegotiationHandler.userEventTriggered(ProtocolNegotiators.java:1061)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:332)
	at io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:324)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:332)
	at io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913)
	at io.grpc.netty.WriteBufferingAndExceptionHandler.handlerAdded(WriteBufferingAndExceptionHandler.java:62)
	at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:938)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609)
	at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:223)
	at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:381)
	at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:370)
	at io.grpc.netty.NettyServerTransport.start(NettyServerTransport.java:153)
	at io.grpc.netty.NettyServer$1.initChannel(NettyServer.java:290)
	at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:129)
	at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:112)
	at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:938)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609)
	at io.netty.channel.DefaultChannelPipeline.access$100(DefaultChannelPipeline.java:46)
	at io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1463)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1115)
	at io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:650)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:514)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:429)
	at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:486)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:503)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)
13:17:10.236 WARN  [DefaultPromise.java:581] - An exception was thrown by io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete() [grpc-nio-worker-ELG-3-2]
java.lang.NullPointerException: Parameter specified as non-null is null: method io.prometheus.proxy.ProxyServerTransportFilter.transportTerminated, parameter attributes
	at io.prometheus.proxy.ProxyServerTransportFilter.transportTerminated(ProxyServerTransportFilter.kt)
	at io.grpc.internal.ServerImpl$ServerTransportListenerImpl.transportTerminated(ServerImpl.java:455)
	at io.grpc.netty.NettyServerTransport.notifyTerminated(NettyServerTransport.java:207)
	at io.grpc.netty.NettyServerTransport.access$100(NettyServerTransport.java:51)
	at io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete(NettyServerTransport.java:141)
	at io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete(NettyServerTransport.java:134)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
	at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)
	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
	at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1164)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:755)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:731)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:620)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:105)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:174)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)
13:20:15.873 WARN  [DefaultPromise.java:581] - An exception was thrown by io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete() [grpc-nio-worker-ELG-3-3]
java.lang.NullPointerException: Parameter specified as non-null is null: method io.prometheus.proxy.ProxyServerTransportFilter.transportTerminated, parameter attributes
	at io.prometheus.proxy.ProxyServerTransportFilter.transportTerminated(ProxyServerTransportFilter.kt)
	at io.grpc.internal.ServerImpl$ServerTransportListenerImpl.transportTerminated(ServerImpl.java:455)
	at io.grpc.netty.NettyServerTransport.notifyTerminated(NettyServerTransport.java:207)
	at io.grpc.netty.NettyServerTransport.access$100(NettyServerTransport.java:51)
	at io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete(NettyServerTransport.java:141)
	at io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete(NettyServerTransport.java:134)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
	at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)
	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
	at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1164)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:755)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:731)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:620)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:105)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:174)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)
14:06:48.094 WARN  [DefaultPromise.java:581] - An exception was thrown by io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete() [grpc-nio-worker-ELG-3-4]
java.lang.NullPointerException: Parameter specified as non-null is null: method io.prometheus.proxy.ProxyServerTransportFilter.transportTerminated, parameter attributes
	at io.prometheus.proxy.ProxyServerTransportFilter.transportTerminated(ProxyServerTransportFilter.kt)
	at io.grpc.internal.ServerImpl$ServerTransportListenerImpl.transportTerminated(ServerImpl.java:455)
	at io.grpc.netty.NettyServerTransport.notifyTerminated(NettyServerTransport.java:207)
	at io.grpc.netty.NettyServerTransport.access$100(NettyServerTransport.java:51)
	at io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete(NettyServerTransport.java:141)
	at io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete(NettyServerTransport.java:134)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
	at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)
	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
	at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1164)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:755)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:731)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.handleWriteError(AbstractChannel.java:950)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:933)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:354)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372)
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750)
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:742)
	at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:728)
	at io.grpc.netty.AbstractNettyHandler.sendInitialConnectionWindow(AbstractNettyHandler.java:114)
	at io.grpc.netty.AbstractNettyHandler.handlerAdded(AbstractNettyHandler.java:78)
	at io.grpc.netty.NettyServerHandler.handlerAdded(NettyServerHandler.java:381)
	at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:938)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609)
	at io.netty.channel.DefaultChannelPipeline.replace(DefaultChannelPipeline.java:572)
	at io.netty.channel.DefaultChannelPipeline.replace(DefaultChannelPipeline.java:515)
	at io.grpc.netty.ProtocolNegotiators$GrpcNegotiationHandler.userEventTriggered(ProtocolNegotiators.java:919)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:332)
	at io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:324)
	at io.grpc.netty.ProtocolNegotiators$ProtocolNegotiationHandler.fireProtocolNegotiationEvent(ProtocolNegotiators.java:1090)
	at io.grpc.netty.ProtocolNegotiators$WaitUntilActiveHandler.protocolNegotiationEventTriggered(ProtocolNegotiators.java:1005)
	at io.grpc.netty.ProtocolNegotiators$ProtocolNegotiationHandler.userEventTriggered(ProtocolNegotiators.java:1061)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:332)
	at io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:324)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:332)
	at io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913)
	at io.grpc.netty.WriteBufferingAndExceptionHandler.handlerAdded(WriteBufferingAndExceptionHandler.java:62)
	at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:938)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609)
	at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:223)
	at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:381)
	at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:370)
	at io.grpc.netty.NettyServerTransport.start(NettyServerTransport.java:153)
	at io.grpc.netty.NettyServer$1.initChannel(NettyServer.java:290)
	at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:129)
	at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:112)
	at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:938)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609)
	at io.netty.channel.DefaultChannelPipeline.access$100(DefaultChannelPipeline.java:46)
	at io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1463)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1115)
	at io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:650)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:514)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:429)
	at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:486)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:503)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)
14:09:01.445 WARN  [DefaultPromise.java:581] - An exception was thrown by io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete() [grpc-nio-worker-ELG-3-5]
java.lang.NullPointerException: Parameter specified as non-null is null: method io.prometheus.proxy.ProxyServerTransportFilter.transportTerminated, parameter attributes
	at io.prometheus.proxy.ProxyServerTransportFilter.transportTerminated(ProxyServerTransportFilter.kt)
	at io.grpc.internal.ServerImpl$ServerTransportListenerImpl.transportTerminated(ServerImpl.java:455)
	at io.grpc.netty.NettyServerTransport.notifyTerminated(NettyServerTransport.java:207)
	at io.grpc.netty.NettyServerTransport.access$100(NettyServerTransport.java:51)
	at io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete(NettyServerTransport.java:141)
	at io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete(NettyServerTransport.java:134)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
	at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)
	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
	at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1164)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:755)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:731)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.handleWriteError(AbstractChannel.java:950)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:933)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:354)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372)
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750)
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:742)
	at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:728)
	at io.netty.handler.codec.http2.Http2ConnectionHandler.onError(Http2ConnectionHandler.java:658)
	at io.grpc.netty.AbstractNettyHandler.exceptionCaught(AbstractNettyHandler.java:94)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
	at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1377)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
	at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:907)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:177)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)
14:11:53.223 WARN  [DefaultPromise.java:581] - An exception was thrown by io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete() [grpc-nio-worker-ELG-3-6]
java.lang.NullPointerException: Parameter specified as non-null is null: method io.prometheus.proxy.ProxyServerTransportFilter.transportTerminated, parameter attributes
	at io.prometheus.proxy.ProxyServerTransportFilter.transportTerminated(ProxyServerTransportFilter.kt)
	at io.grpc.internal.ServerImpl$ServerTransportListenerImpl.transportTerminated(ServerImpl.java:455)
	at io.grpc.netty.NettyServerTransport.notifyTerminated(NettyServerTransport.java:207)
	at io.grpc.netty.NettyServerTransport.access$100(NettyServerTransport.java:51)
	at io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete(NettyServerTransport.java:141)
	at io.grpc.netty.NettyServerTransport$1TerminationNotifier.operationComplete(NettyServerTransport.java:134)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
	at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)
	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
	at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1164)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:755)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:731)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.handleWriteError(AbstractChannel.java:950)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:933)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:354)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372)
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750)
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:742)
	at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:728)
	at io.grpc.netty.AbstractNettyHandler.sendInitialConnectionWindow(AbstractNettyHandler.java:114)
	at io.grpc.netty.AbstractNettyHandler.handlerAdded(AbstractNettyHandler.java:78)
	at io.grpc.netty.NettyServerHandler.handlerAdded(NettyServerHandler.java:381)
	at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:938)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609)
	at io.netty.channel.DefaultChannelPipeline.replace(DefaultChannelPipeline.java:572)
	at io.netty.channel.DefaultChannelPipeline.replace(DefaultChannelPipeline.java:515)
	at io.grpc.netty.ProtocolNegotiators$GrpcNegotiationHandler.userEventTriggered(ProtocolNegotiators.java:919)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:332)
	at io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:324)
	at io.grpc.netty.ProtocolNegotiators$ProtocolNegotiationHandler.fireProtocolNegotiationEvent(ProtocolNegotiators.java:1090)
	at io.grpc.netty.ProtocolNegotiators$WaitUntilActiveHandler.protocolNegotiationEventTriggered(ProtocolNegotiators.java:1005)
	at io.grpc.netty.ProtocolNegotiators$ProtocolNegotiationHandler.userEventTriggered(ProtocolNegotiators.java:1061)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:332)
	at io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:324)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:332)
	at io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913)
	at io.grpc.netty.WriteBufferingAndExceptionHandler.handlerAdded(WriteBufferingAndExceptionHandler.java:62)
	at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:938)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609)
	at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:223)
	at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:381)
	at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:370)
	at io.grpc.netty.NettyServerTransport.start(NettyServerTransport.java:153)
	at io.grpc.netty.NettyServer$1.initChannel(NettyServer.java:290)
	at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:129)
	at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:112)
	at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:938)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609)
	at io.netty.channel.DefaultChannelPipeline.access$100(DefaultChannelPipeline.java:46)
	at io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1463)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1115)
	at io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:650)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:514)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:429)
	at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:486)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:503)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)

Did I miss something? Can I get some help to move forward on my case?
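
For reference, the NullPointerException above occurs because transportTerminated is declared with a non-null attributes parameter on the Kotlin side, while gRPC can invoke it with null. A minimal sketch of a null-tolerant filter, assuming only the standard io.grpc.ServerTransportFilter API (this is an illustration, not the project's actual fix):

import io.grpc.Attributes
import io.grpc.ServerTransportFilter

// Illustrative filter: tolerate a null Attributes value instead of letting
// Kotlin's non-null parameter check throw a NullPointerException.
class NullTolerantTransportFilter : ServerTransportFilter() {
  override fun transportTerminated(transportAttrs: Attributes?) {
    if (transportAttrs == null) {
      // gRPC may report termination before transportReady supplied attributes;
      // in that case there is no per-transport state to clean up.
      return
    }
    // ...normal per-transport cleanup (e.g., removing the agent context) here...
  }
}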

Unable to use HTTP_PROXY and HTTPS_PROXY env variables

I have a use case where I am using an external proxy to pull the metrics, and I am currently setting both HTTP_PROXY and HTTPS_PROXY while running the agent.

docker run -p 8083:8083 -p 8093:8093 --mount type=bind,source="$(pwd)"/prom-agent.conf,target=/app/prom-agent.conf --env HTTP_PROXY=https://proxium-us-east-1.aws-dev.abc.com:3128 --env HTTPS_PROXY=https://proxium-us-east-1.aws-dev.abc.com:3128 --env AGENT_CONFIG=prom-agent.conf --env DEBUG_ENABLED=true --network host -d pambrose/prometheus-agent:1.21.0

I opened the required ports on the host and destination machines and validated that connectivity to the proxy from the agent node works fine.

When I try to start the agent with the above variables, it seems unable to read them.

10:58:20.304 INFO [AgentGrpcService.kt:186] - Cannot connect to proxy at prom-proxy.aws-dev.abc.com:50051 using plaintext - StatusException: UNAVAILABLE: io exception [Agent Unnamed-ip-100-120-14-241.ec2.internal]
10:58:23.296 INFO [Agent.kt:211] - Waited 3s to reconnect [Agent Unnamed-ip-100-120-14-241.ec2.internal]
10:58:23.297 INFO [AgentGrpcService.kt:175] - Connecting to proxy at prom-proxy.aws-dev.abc.com:50051 using plaintext... [Agent Unnamed-ip-100-120-14-241.ec2.internal]

I configured the proxy hostname in agent.conf:
proxy.hostname = prom-proxy.aws-dev.abc.com

Not sure what I am missing here, but any help would be appreciated.

Thanks
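
One thing worth checking (an assumption, not a confirmed fix): the JVM generally ignores the HTTP_PROXY/HTTPS_PROXY environment variables and instead reads the standard Java proxy system properties. A sketch that passes those properties through the JVM-standard JAVA_TOOL_OPTIONS variable, reusing the hostnames from the command above; whether the agent's gRPC channel then honors them depends on gRPC's proxy detection, so treat this as something to try rather than a guaranteed fix:

docker run -p 8083:8083 -p 8093:8093 \
        --mount type=bind,source="$(pwd)"/prom-agent.conf,target=/app/prom-agent.conf \
        --env AGENT_CONFIG=prom-agent.conf \
        --env JAVA_TOOL_OPTIONS="-Dhttp.proxyHost=proxium-us-east-1.aws-dev.abc.com -Dhttp.proxyPort=3128 -Dhttps.proxyHost=proxium-us-east-1.aws-dev.abc.com -Dhttps.proxyPort=3128" \
        --network host -d pambrose/prometheus-agent:1.21.0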

Prometheus Proxy through Nginx reverse proxy

Hi

I have exactly the same issue as #29 with the latest release, v1.14.1. I created a minimal example to reproduce the problem here (https://github.com/vincedom/prom-proxy-nginx/tree/main).

I also investigated it more deeply. The problem seems to be that, when going through nginx, io.prometheus.proxy.ProxyServerTransportFilter->transportReady and io.prometheus.proxy.ProxyServerTransportFilter->transportTerminated are invoked for each API call.
So when the agent calls connectAgent, the agentContext is created at the beginning of the call and removed at the end, and when the agent then calls registerAgent there is an error because the agentContext is missing.
I made my example "work" by removing the call to proxy.removeAgentContext in transportTerminated. Of course this is not a real fix, since it creates memory leaks, but it demonstrates that this is indeed the problem.
I can't go further since I know neither Kotlin nor gRPC.

In my view, your project aims to make Prometheus usable through a firewall, or more generally in installations that are not easily reachable, so it would make sense to have a robust connection that can go through a proxy and a reverse proxy.

Thanks in advance, and thanks for your great work.
Regards

Vincent
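
For anyone reproducing this behind nginx, a minimal gRPC pass-through sketch is included below. The server name, certificate paths, and upstream address are placeholders; the long read/send timeouts are there so nginx does not tear down the agent's long-lived stream between calls. This illustrates the kind of configuration involved and is not a confirmed fix for the transportTerminated behavior described above:

server {
    listen 443 ssl http2;
    server_name prom-proxy.example.com;

    ssl_certificate     /etc/nginx/certs/proxy.crt;
    ssl_certificate_key /etc/nginx/certs/proxy.key;

    location / {
        # Forward gRPC traffic to the prometheus-proxy gRPC port.
        grpc_pass grpc://prometheus-proxy:50051;

        # Generous timeouts so the long-lived agent stream is not closed between scrapes.
        grpc_read_timeout 1h;
        grpc_send_timeout 1h;
    }
}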

Too many open files

I am facing an issue where prometheus-proxy stops working after a few hours due to "too many open files". After increasing the number of file descriptors it runs for longer, but it still stops working after a few hours. Is this a known issue?

Thank you

First node works, second does not...

I have two nodes configured, and I am using wmi_exporter for both. The first node is up and I can see all of its metrics. Everything looked great, but then I added the second node and for some reason it just will not come up. I can't make much sense of it.

I am getting these errors in the proxy logs:

15:46:30.350 ERROR [ScrapeRequestManager.kt:46] - Missing ScrapeRequestWrapper for scrape_id: 91 [grpc-default-executor-1]
15:46:40.123 ERROR [ScrapeRequestManager.kt:46] - Missing ScrapeRequestWrapper for scrape_id: 93 [grpc-default-executor-1]
15:46:49.936 ERROR [ScrapeRequestManager.kt:46] - Missing ScrapeRequestWrapper for scrape_id: 95 [grpc-default-executor-1]
15:46:59.862 ERROR [ScrapeRequestManager.kt:46] - Missing ScrapeRequestWrapper for scrape_id: 97 [grpc-default-executor-1]

Agent logs look good:

15:44:12.266 INFO  [GenericServiceListener.kt:30] - Running AdminService{port=8093, paths=[/ping, /version, /healthcheck, /threaddump]} [AdminService STARTING]
15:44:12.266 INFO  [GenericService.kt:136] - All Agent services healthy [AdminService STARTING]
15:44:12.573 INFO  [AgentGrpcService.kt:144] - Connected to proxy at [IP removed]:50051 using plaintext [Agent Unnamed-prometheus-agent]
15:44:12.697 INFO  [AgentPathManager.kt:65] - Registered http://10.100.61.63:9182/metrics as /bgr-rds02_metrics [Agent Unnamed-prometheus-agent]
15:44:12.723 INFO  [AgentPathManager.kt:65] - Registered http://10.100.61.61:9182/metrics as /bgr-rds01_metrics [Agent Unnamed-prometheus-agent]
15:44:12.767 INFO  [Agent.kt:194] - Heartbeat scheduled to fire after 5.00s of inactivity [DefaultDispatcher-worker-1]

prometheus.yml

  - job_name: 'bgr-rds02'
    metrics_path: '/bgr-rds02_metrics'
    static_configs:
      - targets: ['prometheus-proxy:8080']

  - job_name: 'bgr-rds01'
    metrics_path: '/bgr-rds01_metrics'
    static_configs:
      - targets: ['prometheus-proxy:8080']

prom-agent.conf

proxy {
  admin.enabled: true
  metrics.enabled: true
}

agent {
  proxy.hostname = ${HOSTNAME}
  admin.enabled: true
  metrics.enabled: true

  pathConfigs: [
    {
      name: "bgr-rds02"
      path: bgr-rds02_metrics
      url: "http://10.100.61.63:9182/metrics"
    }
    {
      name: "bgr-rds01"
      path: bgr-rds01_metrics
      url: "http://10.100.61.61:9182/metrics"
    }
  ]
}
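
When one path works and the other does not, it can help to hit both proxy paths directly with curl and compare, bypassing Prometheus entirely (a debugging sketch using the hostnames and IPs from the configs above):

# Scrape each registered path through the proxy:
curl -v http://prometheus-proxy:8080/bgr-rds01_metrics
curl -v http://prometheus-proxy:8080/bgr-rds02_metrics

# Scrape the exporters directly to rule out the exporters themselves:
curl -v http://10.100.61.61:9182/metrics
curl -v http://10.100.61.63:9182/metrics

If the direct exporter scrape is slow for only one node, the proxy's "Missing ScrapeRequestWrapper" errors may simply mean the scrape response arrived after a timeout had already expired.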

Can prometheus-proxy be used to proxy metrics from an external source?

Here's an example. I'm trying to use Flagger within my K8s cluster to get progressive rollouts. It can integrate with Istio to grab metrics from it via Prometheus. However, unless I define a bunch of custom metrics (instead of the built-in ones), it does not seem able to handle a metrics source that needs secret-based access (fluxcd/flagger#1671).

I'd like to have a proxy inside the cluster, without needing auth, that proxies to my external Prometheus, using a secret. Thus, Flagger can just talk to the proxy, without needing the secret.

Is this possible with prometheus-proxy?

Is it possible to publish this for use in Spring Boot projects?

Hi Paul,

I'd love to use the agent embedded within a Spring Boot application of mine; however, the "javax.servlet" dependency is causing me some headaches, and I can't exclude it in order to use the version that comes with Spring. Is there any way you could publish the project for use in other applications?

Eclipse Jetty DoS Vulnerability (GHSA-8mpp-f3f7-xc28)

The bundled Jetty version has a DoS vulnerability, as can be seen here.
I use the Docker version of prometheus-proxy v1.14.2, and it bundles Jetty 9.4.49.v20220914.
The recommended fix is to update to 10.0.10 or 11.0.10, which are patched versions.
Hope this helps!

Question: Labels in discovery api

Hi,

I have been playing around a little with prometheus-proxy and I am planning to use it in one scenario.
I have not found anything about this in the documentation, but maybe I have missed it.
Is it possible to add labels to the output of the discovery API? The best solution would be to add labels to the agent and have them exposed in the discovery API, if possible.

Something like:

"labels": {
  "__scheme__": "http",
  "__shard": "Default",
  "farm": "DockerHosts",
  "job": "cadvisor",
  "project": "Docker",
  "environment": "Prod"
},
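
As a workaround until labels are part of the discovery output, static labels can be attached on the Prometheus side. A sketch, assuming a hypothetical path registered as /cadvisor_metrics on the proxy (the label values are illustrative):

scrape_configs:
  - job_name: 'cadvisor'
    metrics_path: '/cadvisor_metrics'
    static_configs:
      - targets: [ 'prometheus-proxy:8080' ]
        labels:
          farm: 'DockerHosts'
          project: 'Docker'
          environment: 'Prod'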

Service Discovery wrongly generated

Hello

I have a problem setting up service discovery.

My generated file is as follows:

cat /opt/prometheus/targets.json
[
    {
        "targets": [
            "http://prometheus-proxy:8080/sandbox_ops_cadvisor_metrics"
        ]
    }
]

And in my prometheus logs I have an error like this:

prometheus_1        | ts=2022-11-23T10:45:59.899Z caller=scrape.go:477 level=error component="scrape manager" scrape_pool=tmp msg="Creating target failed" err="instance 7 in group /etc/prometheus/targets.json:0: \"http://prometheus-proxy:8080/sandbox_ops_cadvisor_metrics\" is not a valid hostname"

Looking at the documentation, I found that the discovery file should be as follows:

[
    {
        "targets": [
            "prometheus-proxy:8080"
        ],
        "labels": {
            "__metrics_path__": "sandbox_ops_cadvisor_metrics"
        }
    }
]

Can you confirm that the problem comes from the generated service discovery file?

Regards.
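
For reference, a discovery file in the second form (host target plus __metrics_path__ label) is consumed with a standard file_sd_configs block; a minimal sketch, assuming the file lives at /etc/prometheus/targets.json:

scrape_configs:
  - job_name: 'proxied'
    file_sd_configs:
      - files:
          - /etc/prometheus/targets.json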

Forwarding match[] params

Hi

I'm trying out the proxy for federating multiple Prometheus instances across network boundaries.

I have a fairly simple POC up and running locally, looking something like this:

(diagram of the local POC setup omitted)

My question is: does the proxy support forwarding query parameters through to the federate endpoint?

To explore the question, I have been playing around with the following config.

If I configure my agent like this, with just the federate endpoint configured, it doesn't seem to pass through the query param from the Prometheus config. I have even tried accessing promprox-proxy:8080/local_prom?match[]={__name__=~'..*'} in the browser and I get no metrics.

agent {
  pathConfigs: [
    {
      name: local_prom_metrics
      path: local_prom
      url: "http://prometheus:9090/federate"
    }
  ]
}

And the corresponding prometheus.yml:

scrape_configs:
  - job_name: 'proxied-metrics'
    params:
      'match[]':
        - '{__name__=~"..*"}'
    metrics_path: '/local_prom'
    static_configs:
      - targets: ['promprox-proxy:8080']

But if I change the agent config to this (or some variant of it):

agent {
  pathConfigs: [
    {
      name: local_prom_metrics
      path: local_prom
      url: "http://prometheus:9090/federate?match[]={__name__=~'..*'}"
    }
  ]
}

I then begin to get metrics through the proxy (without changing the Prometheus config at all).

How many apps does one prometheus-agent support?

Hi,
I'm using your product, and I have almost 200 apps, e.g. app1_metrics, app2_metrics, ..., app189_metrics.
I find that some data is missing:
curl http://mymachine.local:8080/app1_metrics --> OK
curl http://mymachine.local:8080/app30_metrics --> OK
curl http://mymachine.local:8080/app123_metrics --> 404 Not Found

Of course, those numbers are just examples; I don't know the exact cutoff.
So I want to ask: how many apps can one prometheus-agent support?
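
To see how many of the registered paths actually resolve, a quick shell sketch over the same URL pattern used above (adjust the hostname and the count to your setup):

for i in $(seq 1 200); do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://mymachine.local:8080/app${i}_metrics")
  echo "app${i}_metrics -> ${code}"
done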

Timeout config parameter

Hi,
Currently I have a prometheus-proxy in us-east-1 and am trying to pull CloudWatch metrics from us-east-1 in the same account.
When I scrape the CloudWatch data, the scrape is slow (cloudwatch_exporter_scrape_duration_seconds 11.337143927), whereas when I scrape the current instance's metrics using node_exporter (port 9100) I get the data immediately.

When I try to access these metrics through the proxy, I am able to get the node_exporter data (9100) without any issue. However, I am not able to get the CloudWatch data through the agent, as I see that it is timing out.

Could you please advise whether there is a workaround for this, or whether there is a timeout configuration on the proxy side?

Thanks in advance.
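
On the Prometheus side, a slow CloudWatch scrape can at least be given more headroom with scrape_timeout, which must not exceed scrape_interval. A sketch, assuming a hypothetical /cloudwatch_metrics path registered on the proxy:

scrape_configs:
  - job_name: 'cloudwatch'
    metrics_path: '/cloudwatch_metrics'
    scrape_interval: 60s
    scrape_timeout: 50s
    static_configs:
      - targets: [ 'prometheus-proxy:8080' ]

The proxy and agent also expose their own timeout settings in their HOCON configuration; the exact keys are best taken from the project's configuration reference rather than guessed here.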

Support for config file includes

Can prometheus-proxy support including path configs from separate files?

In my case:

scrape_configs:
  - job_name: "test"
    honor_labels: true
    file_sd_configs:
    - refresh_interval: 1m
      files:
      - "/etc/prometheus/myconfig/*.yml"

I want to use config files like this; can prometheus-proxy support it?

For example:

agent {
  pathConfigs: [
     some_path_A,
     some_path_B,
  ]
}
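
The agent's .conf files are HOCON, and HOCON itself supports include statements and the += array-append syntax, so splitting path configs across files may be achievable at the config level. A sketch under that assumption (the file names and entries are hypothetical, and whether the agent's config loading resolves includes this way is not confirmed here):

# prom-agent.conf
agent {
  admin.enabled: true
  metrics.enabled: true
}

include "paths/group-a.conf"
include "paths/group-b.conf"

# paths/group-a.conf
agent.pathConfigs += {
  name: "service A"
  path: service_a_metrics
  url: "http://service-a.local:9100/metrics"
}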

Prometheus-agent cache

Is there any cache in the prometheus-agent to store scraped data if the connection between the agent and the proxy is lost?

Explanation of Service Discovery

Could you explain what exactly the service discovery endpoint exposes? Does it expose the services behind the agents, or the agents themselves, or something else?

Thank you.
