
mojolicious-plugin-prometheus's People

Contributors

christopherraa, doojonio, lammel, marcusramberg, tyldum


mojolicious-plugin-prometheus's Issues

Collectors are run on every request

Currently prometheus->render() is called in an after_render hook. This means that for each and every request all collectors are run and Prometheus renders/caches the stats. Effectively this makes /metrics quite performant, while each request to the application gets latency equal to the sum of the collectors' execution times.

I plan on making a change so that prometheus->render is called only for /metrics. This will make that endpoint slower, but it removes the per-request latency mentioned above.
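A minimal sketch of the planned change, assuming a Net::Prometheus instance in $prometheus (route wiring is illustrative, not the plugin's actual code):

```perl
# Sketch: run collectors and render only when /metrics is scraped,
# instead of in an after_render hook on every request.
use Mojolicious::Lite;
use Net::Prometheus;

my $prometheus = Net::Prometheus->new;

get '/metrics' => sub {
  my $c = shift;
  # Collectors execute here, once per scrape, not once per request.
  $c->render(text => $prometheus->render, format => 'txt');
};

app->start;
```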

Comments welcome.

Add support for pluggable Storable backends

The addition of IPC was great and works well for a single node with multiple workers, but I'm working on dockerized microservices deployed on Kubernetes, so I really need a distributed storage solution (like Redis). I'm happy to maintain my own version of this plugin, but I thought it would be a nice addition for others who might need the same functionality.
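A hypothetical backend interface, sketched only to show the shape such pluggability could take (the package name and methods are invented, and a Redis client object with set/get methods, e.g. the Redis CPAN module, is assumed):

```perl
package My::Prometheus::Backend::Redis;  # hypothetical name
use Mojo::Base -base;

# Same idea as the existing IPC shared-memory storage, but backed by
# Redis so metrics can be shared across containers/pods.
has 'redis';    # a connected Redis client (e.g. Redis->new)

# Persist a serialized metric blob under a key
sub store {
  my ($self, $key, $value) = @_;
  $self->redis->set($key, $value);
}

# Retrieve a previously stored metric blob
sub fetch {
  my ($self, $key) = @_;
  return $self->redis->get($key);
}

1;
```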

Differentiate worker-specific metrics and global / on-demand metrics

Some metrics are worker-specific, such as resource usage and request stats. There is, however, another category of stats that are inherently not tied to a specific worker process: stats for global things where there is only one correct "truth" at any one time. An example would be Minion task statistics, as these have one correct value at any given moment, no matter which worker is serving a request to /metrics. The current behaviour is broken in this regard.

If you collect such global stats and add a label noting which worker served the stat ({worker => $$}), you do not end up with duplicated metric lines, as they render as something like this:

minion_inactive{worker="1234"} 5
minion_finished{worker="1234"} 10
minion_failed{worker="1234"} 1
minion_inactive{worker="4455"} 1
minion_finished{worker="4455"} 8
minion_failed{worker="4455"} 0

Thus the lines are not duplicates per se, but there is no way to tell which of them is correct and which is outdated based on the available information.

You could also choose not to tag with the worker id, but that would give you the following output, which would most certainly be bonkers:

minion_inactive 5
minion_finished 10
minion_failed 1
minion_inactive 1
minion_finished 8
minion_failed 0

So, we need a way to handle the need for uniqueness in metrics for those metrics that are not tied to specific workers in any way.

I am currently working on an implementation that will support this.
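One possible shape for such an implementation, sketched with Minion as the example: update the global gauges only in the /metrics action, so whichever worker serves the scrape reports the single authoritative value. The gauge name and wiring here are illustrative, and $prometheus is assumed to be the plugin's Net::Prometheus instance:

```perl
# Illustrative sketch: a global metric updated only at scrape time.
my $minion_inactive = $prometheus->new_gauge(
  name => 'minion_inactive',
  help => 'Number of inactive Minion jobs',
);

$app->routes->get('/metrics' => sub {
  my $c     = shift;
  my $stats = $c->minion->stats;    # one authoritative value, not per-worker
  $minion_inactive->set($stats->{inactive_jobs});
  $c->render(text => $prometheus->render, format => 'txt');
});
```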

Test failures on FreeBSD

On my FreeBSD smokers the test suite fails:

#   Failed test 'content is similar'
#   at t/basic.t line 16.
#                   '# HELP http_request_duration_seconds Summary request processing time
# # TYPE http_request_duration_seconds histogram
# http_request_duration_seconds_count{method="GET"} 1
# http_request_duration_seconds_sum{method="GET"} 0.000416
# http_request_duration_seconds_bucket{method="GET",le="0.005"} 1
# http_request_duration_seconds_bucket{method="GET",le="0.01"} 1
# http_request_duration_seconds_bucket{method="GET",le="0.025"} 1
# http_request_duration_seconds_bucket{method="GET",le="0.05"} 1
# http_request_duration_seconds_bucket{method="GET",le="0.075"} 1
# http_request_duration_seconds_bucket{method="GET",le="0.1"} 1
# http_request_duration_seconds_bucket{method="GET",le="0.25"} 1
# http_request_duration_seconds_bucket{method="GET",le="0.5"} 1
# http_request_duration_seconds_bucket{method="GET",le="0.75"} 1
# http_request_duration_seconds_bucket{method="GET",le="1"} 1
# http_request_duration_seconds_bucket{method="GET",le="2.5"} 1
# http_request_duration_seconds_bucket{method="GET",le="5"} 1
# http_request_duration_seconds_bucket{method="GET",le="7.5"} 1
# http_request_duration_seconds_bucket{method="GET",le="10"} 1
# http_request_duration_seconds_bucket{method="GET",le="+Inf"} 1
# # HELP http_requests_total How many HTTP requests processed, partitioned by status code and HTTP method.
# # TYPE http_requests_total counter
# http_requests_total{method="GET",code="200"} 1
# '
#     doesn't match '(?^:process_cpu_seconds_total)'
# Looks like you failed 1 test of 7.
t/basic.t ......... 
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/7 subtests 
#   Failed test 'content is similar'
#   at t/custom_path.t line 11.
#                   '# HELP http_request_duration_seconds Summary request processing time
# # TYPE http_request_duration_seconds histogram
# # HELP http_requests_total How many HTTP requests processed, partitioned by status code and HTTP method.
# # TYPE http_requests_total counter
# '
#     doesn't match '(?^:process_cpu_seconds_total)'
# Looks like you failed 1 test of 4.
t/custom_path.t ... 
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/4 subtests 

However, everything is fine on the Linux smokers.

Inconsistencies in indenting

Some of the files in the project indent by 2 spaces, some by 4. Should this perhaps be normalized so the same scheme is used throughout?
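If normalization is wanted, a shared perltidy configuration could enforce one scheme. A minimal example .perltidyrc (the 2-space choice here is just an example, not a decision):

```
-i=2     # indent with 2 spaces
-ci=2    # continuation indentation of 2 spaces
```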

SUGGESTION: use method signatures

In my opinion, method signatures make Perl code more pleasant to read. Have signatures been left out for backwards compatibility with ancient Perls?
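For reference, signatures require Perl 5.20+ (and were experimental until 5.36); recent Mojolicious versions can enable them via a Mojo::Base flag. A sketch of what the plugin's entry point could look like:

```perl
use Mojo::Base 'Mojolicious::Plugin', -signatures;

# With signatures, the unpacking boilerplate disappears:
sub register ($self, $app, $config) {
  # ...instead of: my ($self, $app, $config) = @_;
}
```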

Stats are rendered in duplicates

Snippet from a test-run:

# HELP process_open_fds Number of open file handles
# TYPE process_open_fds gauge
process_open_fds{worker="101781"} 13
process_open_fds{worker="101781"} 13
# HELP process_resident_memory_bytes Resident memory size in bytes
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes{worker="101781"} 43929600
process_resident_memory_bytes{worker="101781"} 44195840
# HELP process_start_time_seconds Unix epoch time the process started at
# TYPE process_start_time_seconds gauge
process_start_time_seconds{worker="101781"} 1645466999.71
process_start_time_seconds{worker="101781"} 1645466999.71
# HELP process_virtual_memory_bytes Virtual memory size in bytes
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes{worker="101781"} 55885824
process_virtual_memory_bytes{worker="101781"} 55877632

So at least the process collector is rendered twice. This isn't optimal, especially since some metrics (e.g. process_resident_memory_bytes) are even rendered with two different values for the same metric.

Data not aggregated in prefork mode

When running under hypnotoad the data does not get aggregated for the application instance; instead each worker responds with a different value for http_requests_total. This is a problem, since the whole point of this is to aggregate data at the application level.

Missing defaults for duration_buckets

The documentation specifies that duration_buckets should have a default of (0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10). Such is not the case in the code.

Is the code or the doc at fault?
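Until that is resolved, passing the documented buckets explicitly makes behaviour match the docs either way. A sketch using the plugin's documented duration_buckets option:

```perl
# Explicitly pass the buckets the documentation claims as defaults.
$app->plugin('Prometheus',
  duration_buckets =>
    [0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1, 2.5, 5, 7.5, 10],
);
```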

Possible bug in after_dispatch hook

The after_dispatch hook runs $c->res->content->asset->size when logging the response size. However, if the response is HTTP multipart then asset() does not exist; only Mojo::Content::Single has ->asset(). The method ->is_multipart() does exist on Mojo::Content, so checking for this is possible.
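A sketch of a guard along those lines (variable names illustrative):

```perl
# Only Mojo::Content::Single has ->asset, so guard with ->is_multipart.
my $content = $c->res->content;
my $size = $content->is_multipart
  ? $c->res->body_size           # defined for any Mojo::Content subclass
  : $content->asset->size;
```

Since body_size is available on any response, it may even be usable unconditionally, which would make the branch unnecessary.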

Makefile.PL lacks Net::Prometheus requirement

I see the requirement in META.yml, but it's missing from Makefile.PL's PREREQ_PM. So the test suite fails, at least if CPAN.pm was used for the installation:

...
Can't locate object method "new_histogram" via package "Net::Prometheus" at /usr/home/eserte/.cpan/build/2017122021/Mojolicious-Plugin-Prometheus-0.9.2-2/blib/lib/Mojolicious/Plugin/Prometheus.pm line 20.
t/basic.t ......... 
Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run 
...
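The fix would presumably be a PREREQ_PM entry in Makefile.PL mirroring META.yml; a sketch (the version numbers are placeholders, META.yml is authoritative):

```perl
# Makefile.PL excerpt: declare the runtime dependency so CPAN.pm installs it.
PREREQ_PM => {
  'Mojolicious'     => 0,
  'Net::Prometheus' => 0,
},
```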

"get_sereal_encoder" is not exported by the Sereal module

On at least one of my smoker systems the test suite fails, possibly because an older Sereal version is installed:

#   Failed test 'use Mojolicious::Plugin::Prometheus;'
#   at t/00_compile.t line 4.
#     Tried to use 'Mojolicious::Plugin::Prometheus'.
#     Error:  "get_sereal_decoder" is not exported by the Sereal module
#  "get_sereal_encoder" is not exported by the Sereal module
# Can't continue after import errors at /home/cpansand/.cpan/build/2020041300/Mojolicious-Plugin-Prometheus-1.3.1-PSgRSv/blib/lib/Mojolicious/Plugin/Prometheus.pm line 176.
# BEGIN failed--compilation aborted at /home/cpansand/.cpan/build/2020041300/Mojolicious-Plugin-Prometheus-1.3.1-PSgRSv/blib/lib/Mojolicious/Plugin/Prometheus.pm line 176.
# Compilation failed in require at t/00_compile.t line 4.
# BEGIN failed--compilation aborted at t/00_compile.t line 4.
# Looks like you failed 1 test of 1.
t/00_compile.t ...... 
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests 
"get_sereal_decoder" is not exported by the Sereal module
 "get_sereal_encoder" is not exported by the Sereal module
Can't continue after import errors at /home/cpansand/.cpan/build/2020041300/Mojolicious-Plugin-Prometheus-1.3.1-PSgRSv/blib/lib/Mojolicious/Plugin/Prometheus.pm line 176.
BEGIN failed--compilation aborted at /home/cpansand/.cpan/build/2020041300/Mojolicious-Plugin-Prometheus-1.3.1-PSgRSv/blib/lib/Mojolicious/Plugin/Prometheus.pm line 176, <DATA> line 2231.
Compilation failed in require at (eval 273) line 1, <DATA> line 2231.
t/basic.t ........... 
Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run 
... (etc) ...

Ambiguous wording in documentation

The documentation says "In addition to exporting the default ...". Is it worth considering using "exposes" instead, since "exporting" has a special meaning in Perl?

Add basics to .gitignore

Currently the project does not have a .gitignore file. It would be useful to have one so that various temporary files never get added by accident. Will create a PR for this.
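A minimal .gitignore covering the usual ExtUtils::MakeMaker build artifacts might look like this (the exact entries are a suggestion):

```
/blib/
/pm_to_blib
/Makefile
/Makefile.old
/MYMETA.json
/MYMETA.yml
*.tar.gz
```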

Prereq version for Net::Prometheus

On some of my smokers the test suite fails like this:

Can't locate object method "new_histogram" via package "Net::Prometheus" at /usr/home/eserte/.cpan/build/2017122121/Mojolicious-Plugin-Prometheus-1.0.2-2/blib/lib/Mojolicious/Plugin/Prometheus.pm line 26.
t/basic.t ........... 
Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run 
... (etc.) ...

This seems to happen if Net::Prometheus is too old. Statistical analysis:

****************************************************************
Regression 'mod:Net::Prometheus'
****************************************************************
Name           	       Theta	      StdErr	 T-stat
[0='const']    	     -0.0000	      0.0000	  -0.91
[1='eq_0.02']  	      0.0000	      0.0000	   0.79
[2='eq_0.03']  	      1.0000	      0.0000	5775607831695252.00
[3='eq_0.05']  	      1.0000	      0.0000	8020760728480842.00

R^2= 1.000, N= 32, K= 4
****************************************************************
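If the regression is read correctly (failures with 0.02, passes with 0.03 and later), declaring a minimum version would avoid this; a sketch for Makefile.PL:

```perl
# Pin the oldest Net::Prometheus known to provide new_histogram.
PREREQ_PM => {
  'Net::Prometheus' => '0.03',   # 0.02 fails per the regression above
},
```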
