dipstick's People

Contributors

fralalonde, grippy, mixalturek, rafalgoslawski, vorner


dipstick's Issues

Accumulate metrics

Hello, I am a newbie to Rust and have one question about your crate:
Is there any way to accumulate count metrics?

For example: if I send metrics every 3 seconds, and I count 10 in the first 3 seconds and 20 in the next 3 seconds, is there a way to report 30 after 6 seconds? I didn't find this in either the handbook or the examples... Sorry in advance if the question is silly...
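
For illustration, the accumulation asked about here can be sketched outside of dipstick with a counter that is never reset on flush (all names below are hypothetical, not dipstick API):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical sketch, not dipstick API: a counter whose reported total keeps
// accumulating across flushes instead of being reset each time.
struct RunningCounter {
    total: AtomicU64,
}

impl RunningCounter {
    fn new() -> Self {
        RunningCounter { total: AtomicU64::new(0) }
    }

    fn count(&self, v: u64) {
        self.total.fetch_add(v, Ordering::Relaxed);
    }

    // Reading the total does not reset it, so values accumulate.
    fn flush(&self) -> u64 {
        self.total.load(Ordering::Relaxed)
    }
}

fn main() {
    let counter = RunningCounter::new();
    counter.count(10); // first 3-second window
    counter.count(20); // second 3-second window
    assert_eq!(counter.flush(), 30); // accumulated total after 6 seconds
}
```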

New metric type gauge supplier

It would be great to have a gauge that can call a function to obtain the value to report. Current gauges use a push model; this would be a pull model. Inspired by https://metrics.dropwizard.io/3.2.3/manual/core.html#gauges.

Use cases:

  • various buffer sizes
  • dynamic thread pool sizes
  • system and process metrics, partially from /proc filesystem
    • application uptime
    • RAM allocated by process
    • number of threads
    • number of open files
  • many more

Reporting of the application uptime using the existing API:

// Setup
let app_start_time = std::time::Instant::now();
let bucket: AtomicBucket = ...;
let uptime = bucket.gauge("uptime");

// Manually schedule a timer to periodically call this code
uptime.value(app_start_time.elapsed().as_millis());

Proposal for the new API. (Fights with the borrow checker and multi-threading may occur; this is only to convey the idea.)

// Setup
let app_start_time = std::time::Instant::now();
let bucket: AtomicBucket = ...;
// The callback is evaluated on each reporting (or somehow internally scheduled, if enabled).
bucket.gauge_supplier("uptime", || app_start_time.elapsed().as_millis());

I plan to implement the feature soon, either directly in Dipstick or as an extension. If you like it, I will be happy to contribute. I have already looked at the code, and it would probably require significant internal changes due to the push model in InputMetric.write(), so I'm asking in advance...
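
A minimal sketch of the pull model, independent of dipstick's internals (the type and method names here are assumptions, not the proposed final API):

```rust
// Hypothetical sketch of a pull-model gauge: the value is produced by a
// callback evaluated at report time rather than pushed by the application.
struct GaugeSupplier {
    supplier: Box<dyn Fn() -> u64 + Send + Sync>,
}

impl GaugeSupplier {
    fn new(supplier: impl Fn() -> u64 + Send + Sync + 'static) -> Self {
        GaugeSupplier { supplier: Box::new(supplier) }
    }

    // Would be called by the reporting/flush machinery, not by application code.
    fn report(&self) -> u64 {
        (self.supplier)()
    }
}

fn main() {
    let start = std::time::Instant::now();
    let uptime = GaugeSupplier::new(move || start.elapsed().as_millis() as u64);
    // Each report re-evaluates the callback:
    let _value = uptime.report();
}
```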

Exporting Result at the crate root can cause conflicts with std::result::Result

Filing this just because I had to clean up a lot of code in my own project after not noticing that the Result other areas of my code relied upon was actually dipstick::Result rather than the std Result.

The following line in lib.rs can cause issues for users (like me) who are basically copying and pasting from the examples which have use dipstick::*;

src/lib.rs

pub use crate::error::Result;

This causes my functions that return Result to now expect dipstick::Result rather than the built-in one.

My suggestion would be to remove pub from that declaration or otherwise stop exporting it. If users need the Result type, they can just as easily reference or bring in dipstick::error::Result.
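
A minimal reproduction of the conflict, using a stand-in module instead of dipstick:

```rust
// Stand-in for a crate that re-exports its own Result alias at the root.
mod mylib {
    pub type Result<T> = std::result::Result<T, Box<dyn std::error::Error>>;
    pub fn frob() {}
}

// A glob import like `use dipstick::*;` silently shadows the prelude's Result:
use mylib::*;

// This signature now means mylib::Result<()>, i.e. the boxed-error alias,
// not std::result::Result<(), E> as the author may have intended.
fn do_work() -> Result<()> {
    frob();
    Ok(())
}

fn main() {
    assert!(do_work().is_ok());
}
```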

New metric type fixed histogram

It would be nice to have at least basic support for histograms. I know it's listed as a non-goal, but that note probably referred to expensive computation of various percentiles (which are in fact not histograms). https://en.wikipedia.org/wiki/Histogram

A histogram built on top of a small set of Counters with fixed, predefined ranges should be quite simple to implement using the API Dipstick already provides, while still being quite powerful and very fast.

// Ranges and counters for: 'less', 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 'more'
let histogram = metrics.histogram("hist", Histogram::unit_step(0, 10));

// Ranges and counters for: 'less', 0-2, 3-5, 6-8, 9-11, 'more'
let histogram = metrics.histogram("hist", Histogram::const_step(0, 11, 3));

// Ranges and counters for: 'less', 0, 1, 2-4, 5-9, 10-49, 50-99, 100-999, 'more'
let histogram = metrics.histogram("hist", Histogram::steps(vec![0, 1, 2, 5, 10, 50, 100, 1000]));
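
The bucket selection for such fixed steps can be sketched with plain atomics, independent of dipstick (the Histogram constructors above are the proposed, not existing, API; everything below is illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Sketch of a fixed-range histogram over predefined steps; names hypothetical.
struct FixedHistogram {
    steps: Vec<u64>,        // lower bounds of the named ranges
    counts: Vec<AtomicU64>, // counts[0] = 'less', counts[steps.len()] = 'more'
}

impl FixedHistogram {
    fn new(steps: Vec<u64>) -> Self {
        let counts = (0..=steps.len()).map(|_| AtomicU64::new(0)).collect();
        FixedHistogram { steps, counts }
    }

    fn record(&self, value: u64) {
        // The number of steps <= value picks the bucket:
        // 0 -> 'less', steps.len() -> 'more', otherwise the range
        // starting at steps[idx - 1].
        let idx = self.steps.partition_point(|&s| s <= value);
        self.counts[idx].fetch_add(1, Ordering::Relaxed);
    }

    fn snapshot(&self) -> Vec<u64> {
        self.counts.iter().map(|c| c.load(Ordering::Relaxed)).collect()
    }
}

fn main() {
    // Ranges: 'less', 0, 1, 2-4, 5-9, 10-49, 50-99, 100-999, 'more'
    let hist = FixedHistogram::new(vec![0, 1, 2, 5, 10, 50, 100, 1000]);
    hist.record(3);    // falls into the 2-4 bucket
    hist.record(1500); // falls into 'more'
    assert_eq!(hist.snapshot()[3], 1);
    assert_eq!(hist.snapshot()[8], 1);
}
```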

What do you think? I'm ready to contribute, I'm asking in advance...

Panic at already BorrowMutError

I was doing some load testing on a small Rust application which uses async-std. At some load threshold, dipstick causes the application to panic.

I'm basically just using the statsd interface and only have a few counters in the application. I don't have a small test case, since this happens in the full application, but if the stack trace below isn't sufficient, I can share the whole (open source) application with details on how to run the load test.

    let metrics = Arc::new(Statsd::send_to(&settings.global.metrics.statsd)
        .expect("Failed to create Statsd recorder")
        .named("hotdog")
        .metrics());
thread 'async-std/executor' panicked at 'already borrowed: BorrowMutError', /rustc/b8cedc00407a4c56a3bda1ed605c6fc166655447/src/libcore/cell.rs:878:9
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:77
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:59
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1052
   5: std::io::Write::write_fmt
             at src/libstd/io/mod.rs:1426
   6: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:62
   7: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:49
   8: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:204
   9: std::panicking::default_hook
             at src/libstd/panicking.rs:224
  10: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:472
  11: rust_begin_unwind
             at src/libstd/panicking.rs:380
  12: core::panicking::panic_fmt
             at src/libcore/panicking.rs:85
  13: core::option::expect_none_failed
             at src/libcore/option.rs:1199
  14: dipstick::output::statsd::StatsdScope::print
  15: <dipstick::output::statsd::StatsdScope as dipstick::core::output::OutputScope>::new_metric::{{closure}}
  16: <dipstick::core::locking::LockingOutput as dipstick::core::input::InputScope>::new_metric::{{closure}}
  17: dipstick::core::input::Counter::count
  18: <std::future::GenFuture<T> as core::future::future::Future>::poll
  19: async_task::raw::RawTask<F,R,S,T>::run
  20: std::thread::local::LocalKey<T>::with
  21: async_std::task::executor::pool::main_loop
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread 'main' panicked at 'LockingOutput: "PoisonError { inner: .. }"', /home/tyler/.cargo/registry/src/github.com-1ecc6299db9ec823/dipstick-0.7.11/src/core/locking.rs:44:26
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:77
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:59
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1052
   5: std::io::Write::write_fmt
             at src/libstd/io/mod.rs:1426
   6: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:62
   7: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:49
   8: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:204
   9: std::panicking::default_hook
             at src/libstd/panicking.rs:224
  10: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:472
  11: rust_begin_unwind
             at src/libstd/panicking.rs:380
  12: core::panicking::panic_fmt
             at src/libcore/panicking.rs:85
  13: core::option::expect_none_failed
             at src/libcore/option.rs:1199
  14: <dipstick::core::locking::LockingOutput as dipstick::core::input::InputScope>::new_metric::{{closure}}
  15: dipstick::core::input::Counter::count
  16: <std::future::GenFuture<T> as core::future::future::Future>::poll
  17: std::thread::local::LocalKey<T>::with
  18: std::thread::local::LocalKey<T>::with
  19: async_std::task::block_on::block_on
  20: hotdog::main
  21: std::rt::lang_start::{{closure}}
  22: std::rt::lang_start_internal::{{closure}}
             at src/libstd/rt.rs:52
  23: std::panicking::try::do_call
             at src/libstd/panicking.rs:305
  24: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:86
  25: std::panicking::try
             at src/libstd/panicking.rs:281
  26: std::panic::catch_unwind
             at src/libstd/panic.rs:394
  27: std::rt::lang_start_internal
             at src/libstd/rt.rs:51
  28: main
  29: __libc_start_main
  30: <unknown>
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Add build flag to disable labels

Metric labels have a small runtime performance impact even when they're not used:
With labels:

test aggregate::bucket::bench::aggregate_counter   ... bench:          17 ns/iter (+/- 0)
test aggregate::bucket::bench::aggregate_marker    ... bench:          10 ns/iter (+/- 1)
test core::bench::time_bench_direct_dispatch_event ... bench:          10 ns/iter (+/- 0)
test core::proxy::bench::proxy_marker_to_aggregate ... bench:          29 ns/iter (+/- 0)
test core::proxy::bench::proxy_marker_to_void      ... bench:          28 ns/iter (+/- 1)

Without labels:

test aggregate::bucket::bench::aggregate_counter   ... bench:          13 ns/iter (+/- 0)
test aggregate::bucket::bench::aggregate_marker    ... bench:           8 ns/iter (+/- 1)
test core::bench::time_bench_direct_dispatch_event ... bench:           8 ns/iter (+/- 0)
test core::proxy::bench::proxy_marker_to_aggregate ... bench:          22 ns/iter (+/- 0)
test core::proxy::bench::proxy_marker_to_void      ... bench:          22 ns/iter (+/- 4)

This is possibly due to the extra parameter being passed around. A build flag could disable labels to regain the lost performance when they're not needed.
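
One way such a flag could work (the `labels` feature name below is made up): gate the label type on a Cargo feature so the disabled variant is a zero-sized struct the compiler can optimize away:

```rust
// Hypothetical sketch: with the (made-up) `labels` feature disabled, Labels
// becomes a zero-sized type, so passing it as a parameter costs nothing.
#[cfg(feature = "labels")]
pub type Labels = std::collections::HashMap<String, String>;

#[cfg(not(feature = "labels"))]
#[derive(Default, Clone)]
pub struct Labels;

fn write_metric(_value: u64, _labels: Labels) {
    // The extra parameter disappears at the ABI level when Labels is zero-sized.
}

fn main() {
    // Compiled without the feature, Labels occupies no space at all:
    assert_eq!(std::mem::size_of::<Labels>(), 0);
    write_metric(1, Labels::default());
}
```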

Per-metric sampling for statsd

It isn't possible to set up "local" sampling for a particular counter/timer/... in addition to the "global" one. Errors are typically rare, and applying sampling to them would cause very unwanted information loss.

This would require significant API changes just for statsd, but it's also possible via external configuration (see avast/metrics#22).

Full Prometheus support (pull-based HTTP endpoint)

Hello Francis,

So, I use a lot of Prometheus lately - mostly with the Go and Python clients. If I were to contribute "Full Prometheus support" to this crate, how would you envision that?

  • There's https://github.com/pingcap/rust-prometheus which is based on the upstream Prometheus protobufs - would you want to introduce a dependency, or redo the Prometheus metric layer/exposition format in this code from scratch?
  • "Real" Prometheus is a pull model, as you describe in the README (No backend for "pull" metrics yet. Should at least provide tiny-http listener capability) - is this something you're interested in? It's more "the right way" than the Pushgateway. I'm not sure how this fits the dipstick model, e.g.:
METRICS.target(Prometheus::listen_on("localhost:8080/metrics").expect("serving").metrics());
COUNTER.count(32);

I can start some preliminary commits to get the ball rolling.
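
However the listener ends up being implemented, the pull endpoint ultimately renders a snapshot in the Prometheus text exposition format, which can be sketched without any dependency (the function name is illustrative):

```rust
// Sketch: render a metrics snapshot in the Prometheus text exposition format.
// A pull-model listener would serve this string on GET /metrics.
fn render_exposition(metrics: &[(&str, f64)]) -> String {
    let mut out = String::new();
    for (name, value) in metrics {
        out.push_str(&format!("# TYPE {} gauge\n", name));
        out.push_str(&format!("{} {}\n", name, value));
    }
    out
}

fn main() {
    let body = render_exposition(&[("app_counter", 32.0)]);
    assert!(body.contains("app_counter 32"));
    print!("{}", body);
}
```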

Impossible removal of observed metrics, resource leaks

Re-registering a new metric under an already used name leaves all of them active. If a metric is registered periodically or on a recurring action, this behavior results in a kind of resource leak. Once added, metrics can't be removed or unregistered; the API doesn't support that.

Code to reproduce, using dipstick/examples/observer.rs:

    metrics.observe(metrics.gauge("uptime"), |_| 5).on_flush();
    metrics.observe(metrics.gauge("uptime"), |_| 3).on_flush();
    metrics.observe(metrics.gauge("uptime"), |_| 10).on_flush();

A value of 6, which never actually existed and is just the average of the three, is then reported forever.

process.heartbeat 1
process.threads 4
process.uptime 6
process.heartbeat 1
process.threads 4
process.uptime 6
process.heartbeat 1
process.threads 4
process.uptime 6
...

I would prefer behavior similar to this one - replace the existing metric with a new one.
5e35e68
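
The preferred replace-on-reregister semantics can be sketched with a registry keyed by metric name (the types are stand-ins, not dipstick's):

```rust
use std::collections::HashMap;

// Sketch of the proposed behavior: observers are keyed by metric name, so
// re-registering under the same name replaces the previous callback instead
// of leaving all of them active.
#[derive(Default)]
struct Observers {
    by_name: HashMap<String, Box<dyn Fn() -> i64>>,
}

impl Observers {
    fn observe(&mut self, name: &str, op: impl Fn() -> i64 + 'static) {
        // HashMap::insert drops any previously registered observer.
        self.by_name.insert(name.to_string(), Box::new(op));
    }

    fn flush(&self) -> Vec<(String, i64)> {
        self.by_name.iter().map(|(k, f)| (k.clone(), f())).collect()
    }
}

fn main() {
    let mut observers = Observers::default();
    observers.observe("uptime", || 5);
    observers.observe("uptime", || 3);
    observers.observe("uptime", || 10); // replaces the two above
    assert_eq!(observers.flush(), vec![("uptime".to_string(), 10)]);
}
```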

Negative values cause panics

Hi there, negative values passed to the API calls cause the code to panic. We are using the latest dipstick = "~0.6", but I found similar code in master as well.

counter.count(-1);
gauge.value(-1);
timer.interval_us(-1);

The library expects only values from the unsigned range to be passed in and internally operates with the u64 type. This assumption is surely wrong for counter and gauge, where negative values are fully valid. I'm unsure about timer, but a runtime panic in production wouldn't be pleasant ;-)

impl Counter {
    /// Record a value count.
    pub fn count<V: ToPrimitive>(&self, count: V) {
        self.inner.write(count.to_u64().unwrap(), labels![])    // <<<<< e.g. here
    }
}
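
The failure mode can be reproduced without dipstick; `count.to_u64().unwrap()` behaves essentially like an unchecked `u64::try_from`:

```rust
// Minimal reproduction: converting a negative value to u64 and unwrapping
// panics, just like `count.to_u64().unwrap()` in the snippet above.
fn to_metric_value(v: i64) -> u64 {
    u64::try_from(v).unwrap() // panics for any v < 0
}

// One non-panicking alternative (a design choice, not dipstick's current
// behavior): clamp negatives to zero, or switch the internal type to i64.
fn to_metric_value_clamped(v: i64) -> u64 {
    u64::try_from(v).unwrap_or(0)
}

fn main() {
    assert_eq!(to_metric_value(1), 1);
    assert_eq!(to_metric_value_clamped(-1), 0);
    assert!(std::panic::catch_unwind(|| to_metric_value(-1)).is_err());
}
```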

Heartbeat metric

For network push output, add an option to send "heartbeat" metric that will let downstream analytics know that the server is alive.

Pre-populated metrics for statsd and similar push models

Hello

I'm not sure this would belong into dipstick itself, so I'm asking what you think about it.

With the statsd collector, and likely with other push ones too, if a metric is created but never used, it isn't sent at all. This can happen with a rare error metric. The downside is that any plotting frontend (like Grafana) doesn't know about the metric, so creating alerts and graphs is „blind", which is prone to typos, etc. ‒ and if the error eventually occurs, it can be missed because of a typo.

The hack we use is adding an .init method to all the metric types and calling it once we create the metric, to send one instance of some inert value (e.g. 0 for a counter).

Would it make sense to upstream it here? Or even do it automatically, without calling the method? Or do it only for the push-model sinks?
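
The described hack looks roughly like this (Counter here is a toy stand-in that records sent values, and .init() is the hypothetical extension, not dipstick API):

```rust
// Toy counter standing in for a push-model metric: it just records sent values.
#[derive(Default)]
struct Counter {
    sent: std::cell::RefCell<Vec<u64>>,
}

impl Counter {
    fn count(&self, v: u64) {
        self.sent.borrow_mut().push(v);
    }
}

// The workaround: an extension that pushes one inert value on creation so the
// backend (and e.g. Grafana) learns the metric name immediately.
trait InitMetric: Sized {
    fn init(self) -> Self;
}

impl InitMetric for Counter {
    fn init(self) -> Self {
        self.count(0); // inert value for a counter
        self
    }
}

fn main() {
    let errors = Counter::default().init();
    // Even before the first real error, the metric has been sent once:
    assert_eq!(*errors.sent.borrow(), vec![0]);
}
```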

Add buffer template

Extend output template functionality to allow multi-metric formats with header/footer (JSON, XML, etc)

combine app_metric! and app_"metric_type"! macros to allow block declaration

go from

    app_metric!(Aggregate, TEST_METRICS, DIPSTICK_METRICS.with_prefix("test_prefix"));
    app_marker!(Aggregate, TEST_METRICS, {
        M1: "failed",
        M2: "success",
    });

to something like

    app_metric!(Aggregate, TEST_METRICS, DIPSTICK_METRICS.with_prefix("test_prefix"), {
        marker(M1: "failed"),
        counter(M2: "success"),
    });

Custom Logger templates

Allow customizing the to_log and to_stdout with custom templates.

  • Templates receive metric name and value
  • Templates may be used in buffered or unbuffered output
  • Buffered output should have header / trailer templates
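
The per-metric part of such templates could be as simple as a closure from name and value to the formatted line (a sketch of the idea, not an API proposal):

```rust
// Sketch: a log-line template as a boxed closure over metric name and value.
type Template = Box<dyn Fn(&str, u64) -> String + Send + Sync>;

fn default_template() -> Template {
    Box::new(|name, value| format!("{} {}", name, value))
}

fn json_template() -> Template {
    // A custom template producing one JSON object per metric.
    Box::new(|name, value| format!("{{\"metric\":\"{}\",\"value\":{}}}", name, value))
}

fn main() {
    assert_eq!(default_template()("test.timer_a", 1), "test.timer_a 1");
    assert_eq!(
        json_template()("test.timer_a", 1),
        "{\"metric\":\"test.timer_a\",\"value\":1}"
    );
}
```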

Broken Result type, missing Send/Sync in v0.7.1

Dipstick's Result seems broken; the boxed error type should also be Send + Sync. Please also consider returning concrete error types instead of a boxed trait object.

/// Just put any error in a box.
pub type Result<T> = result::Result<T, Box<Error>>;
let statsd = Statsd::send_to((config.statsd_host.as_ref(), config.statsd_port))?;
error[E0277]: the size for values of type `dyn std::error::Error` cannot be known at compilation time
  --> src/metrics.rs:46:22
   |
46 |         let statsd = Statsd::send_to((config.statsd_host.as_ref(), config.statsd_port))?;
   |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ doesn't have a size known at compile-time
   |
   = help: the trait `std::marker::Sized` is not implemented for `dyn std::error::Error`
   = note: to learn more, visit <https://doc.rust-lang.org/book/second-edition/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait>
   = note: required because of the requirements on the impl of `std::error::Error` for `std::boxed::Box<dyn std::error::Error>`
   = note: required because of the requirements on the impl of `failure::Fail` for `std::boxed::Box<dyn std::error::Error>`
   = note: required because of the requirements on the impl of `std::convert::From<std::boxed::Box<dyn std::error::Error>>` for `failure::error::Error`
   = note: required by `std::convert::From::from`

error[E0277]: `dyn std::error::Error` cannot be sent between threads safely
  --> src/metrics.rs:46:22
   |
46 |         let statsd = Statsd::send_to((config.statsd_host.as_ref(), config.statsd_port))?;
   |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `dyn std::error::Error` cannot be sent between threads safely
   |
   = help: the trait `std::marker::Send` is not implemented for `dyn std::error::Error`
   = note: required because of the requirements on the impl of `std::marker::Send` for `std::ptr::Unique<dyn std::error::Error>`
   = note: required because it appears within the type `std::boxed::Box<dyn std::error::Error>`
   = note: required because of the requirements on the impl of `failure::Fail` for `std::boxed::Box<dyn std::error::Error>`
   = note: required because of the requirements on the impl of `std::convert::From<std::boxed::Box<dyn std::error::Error>>` for `failure::error::Error`
   = note: required by `std::convert::From::from`

error[E0277]: `dyn std::error::Error` cannot be shared between threads safely
  --> src/metrics.rs:46:22
   |
46 |         let statsd = Statsd::send_to((config.statsd_host.as_ref(), config.statsd_port))?;
   |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `dyn std::error::Error` cannot be shared between threads safely
   |
   = help: the trait `std::marker::Sync` is not implemented for `dyn std::error::Error`
   = note: required because of the requirements on the impl of `std::marker::Sync` for `std::ptr::Unique<dyn std::error::Error>`
   = note: required because it appears within the type `std::boxed::Box<dyn std::error::Error>`
   = note: required because of the requirements on the impl of `failure::Fail` for `std::boxed::Box<dyn std::error::Error>`
   = note: required because of the requirements on the impl of `std::convert::From<std::boxed::Box<dyn std::error::Error>>` for `failure::error::Error`
   = note: required by `std::convert::From::from`

error: aborting due to 3 previous errors

For more information about this error, try `rustc --explain E0277`.
error: Could not compile `urlite`.
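
The suggested fix, sketched against std only: box the error as dyn Error + Send + Sync, which is what downstream `?` conversions (e.g. into failure::Error) require:

```rust
use std::error::Error;

// The suggested alias: the boxed error is Send + Sync, so the `?` operator can
// convert it into thread-safe error types downstream.
pub type Result<T> = std::result::Result<T, Box<dyn Error + Send + Sync>>;

// Hypothetical stand-in for a fallible constructor like Statsd::send_to.
fn send_to(addr: &str) -> Result<String> {
    if addr.is_empty() {
        return Err("empty address".into()); // &str converts into the box
    }
    Ok(addr.to_string())
}

fn main() {
    assert!(send_to("localhost:8125").is_ok());
    assert!(send_to("").is_err());
    // The whole Result is now Send, so it can cross thread boundaries:
    fn assert_send<T: Send>(_: &T) {}
    assert_send(&send_to("localhost:8125"));
}
```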

Example with statsd not compiling

https://github.com/fralalonde/dipstick/blob/master/examples/statsd_nosampling.rs

error[E0599]: no function or associated item named `send_to` found for type `dipstick::Statsd` in the current scope
  --> src/main.rs:10:9
   |
10 |         Statsd::send_to("localhost:8125")
   |         ^^^^^^^^^^^^^^^ function or associated item not found in `dipstick::Statsd`

error: aborting due to previous error

For more information about this error, try `rustc --explain E0599`.
error: Could not compile `metrics`.

To learn more, run the command again with --verbose.

Process finished with exit code 101

Example of a pull strategy

Hi,

I have been looking through the documentation and examples, but I cannot find a way to output the metrics based on a pull strategy. Since I am writing a library, what I am looking for is a way to iterate over the list of metrics, exposing their values to the hosting app. Is there a way to achieve this?

Thank you in advance.

Publish aggregated metrics on request over HTTP

  • Using minimal dependencies (i.e. httparse) and a single thread, allow taking peek and reset snapshots of aggregated global metric stats.

REST-style API:

  • Line-delimited and/or JSON output encoding, following the client's Accept: header if specified
  • Use request subpath to limit queried metrics
  • Use HTTP GET for non-destructive reads, and DELE (???) for resets

A single server should allow querying multiple aggregators (e.g. for 1m / 5m / 15m setups).

Feature: CancelGuard

Hello

I've noticed that when I have something cancelable (e.g. a CancelHandle, or anything that implements Cancel), I often want to wrap it in a guard that cancels on drop.

I wonder if it makes sense to:

  • Introduce a CancelGuard<C: Cancel> wrapper that calls self.0.cancel() in its Drop.
  • Add a (provided) method to Cancel to wrap itself into the wrapper.

I can send the pull request if you like the idea.
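
A sketch of the proposed wrapper, with the Cancel trait reduced to its essence and a toy handle to demonstrate the drop behavior:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Reduced form of the trait in question.
trait Cancel {
    fn cancel(&self);
}

// The proposed guard: cancels the wrapped handle when dropped.
struct CancelGuard<C: Cancel>(C);

impl<C: Cancel> Drop for CancelGuard<C> {
    fn drop(&mut self) {
        self.0.cancel();
    }
}

// A toy handle to demonstrate the guard.
struct FlagHandle(Arc<AtomicBool>);

impl Cancel for FlagHandle {
    fn cancel(&self) {
        self.0.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let cancelled = Arc::new(AtomicBool::new(false));
    {
        let _guard = CancelGuard(FlagHandle(cancelled.clone()));
        // ... scheduled task runs while the guard is alive ...
    } // guard dropped here, cancel() is called automatically
    assert!(cancelled.load(Ordering::SeqCst));
}
```

A provided method on Cancel wrapping self into the guard would then be a one-liner on top of this.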

Fails to compile under newest nightly

Hello

When trying to compile with today's nightly (rustc 1.36.0-nightly (d35181ad8 2019-05-20)), the compilation fails with:

error[E0283]: type annotations required: cannot resolve `std::string::String: std::convert::AsRef<_>`
   --> src/output/prometheus.rs:150:41
    |
150 |         match minreq::get(self.push_url.as_ref())
    |     

AFAIK these „needs to be more specific" errors are considered allowed breakage / an acceptable exception to the stability promise ☹. So I'm not sure it's worth reporting to rustc (I'll try anyway), but it might be worth adding some type annotations in dipstick too, just in case.

Timer default output format

The handbook states that the "Timer's default output format is milliseconds"; however, when using a timer with an atomic bucket and regular flushes to stdout, the output seems to be in microseconds.

extern crate dipstick;

use std::time::Duration;
use std::io;
use dipstick::*;

fn main() {
    let metrics = AtomicBucket::new().add_prefix("test");
    metrics.set_drain(Stream::write_to(io::stdout()));
    metrics.flush_every(Duration::from_secs(1));
    let timer = metrics.timer("timer_a");
    timer.interval_us(1);
    loop {
    }
}

yields:

test.timer_a 1

Can't compile without default features

Hello

If I turn off the default features, dipstick (master and 0.7.12) fails to compile:

error[E0308]: mismatched types
   --> src/output/graphite.rs:106:26
    |
106 |         self.flush_inner(buf)
    |                          ^^^ expected struct `std::sync::RwLockWriteGuard`, found enum `std::result::Result`
    |
    = note: expected struct `std::sync::RwLockWriteGuard<'_, _>`
                 found enum `std::result::Result<std::sync::RwLockWriteGuard<'_, _>, std::sync::PoisonError<std::sync::RwLockWriteGuard<'_, std::string::String>>>`

I suspect it's about using std vs parking_lot mutexes, but I haven't dug into it yet.

Add relative counters and markers

Differentiate from the existing "absolute" Counters and Markers. Relative counters allow negative values. Relative markers can be decremented as well as incremented. The global MetricValue type alias and Bucket scores need to change from u64 (unsigned) to isize (signed). Still limited to integers because of Rust's std atomics support.
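
The point of the proposal is the change of the internal type, which can be sketched with std atomics (AtomicI64 here; the issue proposes isize; all names are illustrative):

```rust
use std::sync::atomic::{AtomicI64, Ordering};

// Sketch of a "relative" counter: backed by a signed atomic, so it accepts
// negative deltas, unlike the existing u64-based absolute counters.
struct RelativeCounter {
    value: AtomicI64,
}

impl RelativeCounter {
    fn new() -> Self {
        RelativeCounter { value: AtomicI64::new(0) }
    }

    fn count(&self, delta: i64) {
        self.value.fetch_add(delta, Ordering::Relaxed);
    }

    fn get(&self) -> i64 {
        self.value.load(Ordering::Relaxed)
    }
}

fn main() {
    let in_flight = RelativeCounter::new();
    in_flight.count(5);  // increment
    in_flight.count(-3); // decrement, valid for a relative counter
    assert_eq!(in_flight.get(), 2);
}
```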

API changes between 0.6 and 0.7, proposal for more improvements

I'm trying to update the Dipstick dependency to 0.7.1 in our project. There are a lot of great API improvements, but I like some parts of 0.6 more. A migration guide with a few items would help a lot during the switch.

The nice .with_name() has been renamed to the strange .add_prefix(). .with_name() was clear: it extended an existing name with a new subname. The current .add_prefix() is quite hard to use. Does it prefix the existing name or add a suffix? I don't understand this change; it looks like an implementation detail leaked into the API.

AppMetrics was the "core" object of the library in 0.6. It's now called AtomicBucket in our use case. What about adding a type alias for it? We are using Monitor; Metrics or something else would also be good.

pub(crate) type Monitor = AppMetrics<Arc<Scoreboard>>; // 0.6
pub(crate) type Monitor = AtomicBucket; // 0.7

There is no clean "entry point" for the library, which is otherwise very good. There are many of them in the examples (listed below); which one should be preferred? It's extremely hard for newcomers. Finding AtomicBucket was difficult even for me, who had already been using 0.6 for some time. I know it's in the example at crates.io, but who would expect AtomicBucket?! What about introducing a builder or helper struct called Dipstick and starting all library uses with it?

  • Stream::write_to()
  • AtomicBucket::new()
  • MultiInputScope::new()
  • Graphite::send_to()
  • MultiInput::input()
  • MultiOutput::output()
  • Prometheus::send_json_to()
  • dipstick::Log::log_to().input()
  • Statsd::send_to()
  • Stream::to_stderr()

Observe effectively unimplementable for out-of-crate types

Hello

I have the spirit-dipstick library, whose purpose is to set up dipstick (its outputs, etc) from configuration files. I wrap the AtomicBucket type into a newtype and delegate to it.

However, it is impossible to do so with the Observe trait, because:

  • The return type of observe is ObserveWhen<Self, _>.
  • ObserveWhen only has methods if that Self is WithAttributes.
  • WithAttributes is not exposed outside of the crate.

I'd see two possible solutions:

  • Add a type ObserveWhen to the Observe trait and return that. I'll then be able to return ObserveWhen<AtomicBucket, _> instead of ObserveWhen<Self, _>.
  • Make WithAttributes public and therefore implementable by out-of-crate types.

I can send a PR, I'd just like to know if you agree with either of these solutions.
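
The first solution can be sketched with a much-reduced Observe trait (the callback is boxed here to avoid generics over F; the real trait is generic, and all types below are stand-ins):

```rust
// Much-reduced sketch of the first proposed solution: the return type becomes
// an associated type, so a delegating newtype can name the inner crate's type.
trait Observe {
    type When;
    fn observe(&self, op: Box<dyn Fn() -> i64 + Send + Sync>) -> Self::When;
}

// Stand-ins for dipstick's AtomicBucket and ObserveWhen.
struct AtomicBucket;
struct ObserveWhen(Box<dyn Fn() -> i64 + Send + Sync>);

impl ObserveWhen {
    fn eval(&self) -> i64 {
        (self.0)()
    }
}

impl Observe for AtomicBucket {
    type When = ObserveWhen;
    fn observe(&self, op: Box<dyn Fn() -> i64 + Send + Sync>) -> ObserveWhen {
        ObserveWhen(op)
    }
}

// The out-of-crate newtype can now implement Observe purely by delegation,
// without needing access to WithAttributes:
struct Wrapper(AtomicBucket);

impl Observe for Wrapper {
    type When = <AtomicBucket as Observe>::When;
    fn observe(&self, op: Box<dyn Fn() -> i64 + Send + Sync>) -> Self::When {
        self.0.observe(op)
    }
}

fn main() {
    let wrapped = Wrapper(AtomicBucket);
    let when = wrapped.observe(Box::new(|| 7));
    assert_eq!(when.eval(), 7);
}
```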

Purpose of Instant in the Observe's Fn(Instant) callback

What is the purpose of the Instant parameter in Observe's callback function? No example uses it; the closure is always written as |_|. I found that Instant::now() is passed in by the Scheduler, but I'm still curious about the expected use cases. A short note in observe()'s docs would be great.

/// Schedule a recurring task
pub trait Observe {
    /// Provide a source for a metric's values.
    fn observe<F>(
        &self,
        metric: impl Deref<Target = InputMetric>,
        operation: F,
    ) -> ObserveWhen<Self, F>
    where
        F: Fn(Instant) -> MetricValue + Send + Sync + 'static,
        Self: Sized;
}

Panic from `dipstick` (seemingly during flushing of metrics)

I got a panic coming from the dipstick code. I wasn't able to reproduce it; it has happened only once so far.

thread '<unnamed>' panicked at 'attempt to subtract with overflow', /Users/idubrov/.cargo/registry/src/github.com-1ecc6299db9ec823/dipstick-0.6.5/src/scores.rs:100:36
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::_print
             at libstd/sys_common/backtrace.rs:71
   2: std::panicking::default_hook::{{closure}}
             at libstd/sys_common/backtrace.rs:59
             at libstd/panicking.rs:380
   3: std::panicking::default_hook
             at libstd/panicking.rs:396
   4: std::panicking::begin_panic
             at libstd/panicking.rs:576
   5: std::panicking::begin_panic
             at libstd/panicking.rs:537
   6: std::panicking::try::do_call
             at libstd/panicking.rs:521
   7: std::panicking::try::do_call
             at libstd/panicking.rs:497
   8: <core::ops::range::Range<Idx> as core::fmt::Debug>::fmt
             at libcore/panicking.rs:71
   9: <core::ops::range::Range<Idx> as core::fmt::Debug>::fmt
             at libcore/panicking.rs:51
  10: dipstick::scores::Scoreboard::snapshot
             at /Users/idubrov/.cargo/registry/src/github.com-1ecc6299db9ec823/dipstick-0.6.5/src/scores.rs:100
  11: dipstick::aggregate::aggregate::{{closure}}::{{closure}}::{{closure}}
             at /Users/idubrov/.cargo/registry/src/github.com-1ecc6299db9ec823/dipstick-0.6.5/src/aggregate.rs:50
  12: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &'a mut F>::call_once
             at /Users/travis/build/rust-lang/rust/src/libcore/ops/function.rs:271
  13: <core::option::Option<T>>::map
             at /Users/travis/build/rust-lang/rust/src/libcore/option.rs:404
  14: <core::iter::FlatMap<I, U, F> as core::iter::iterator::Iterator>::next
             at /Users/travis/build/rust-lang/rust/src/libcore/iter/mod.rs:2446
  15: <alloc::vec::Vec<T> as alloc::vec::SpecExtend<T, I>>::from_iter
             at /Users/travis/build/rust-lang/rust/src/liballoc/vec.rs:1801
  16: <alloc::vec::Vec<T> as core::iter::traits::FromIterator<T>>::from_iter
             at /Users/travis/build/rust-lang/rust/src/liballoc/vec.rs:1713
  17: core::iter::iterator::Iterator::collect
             at /Users/travis/build/rust-lang/rust/src/libcore/iter/iterator.rs:1303
  18: dipstick::aggregate::aggregate::{{closure}}::{{closure}}
             at /Users/idubrov/.cargo/registry/src/github.com-1ecc6299db9ec823/dipstick-0.6.5/src/aggregate.rs:50
  19: <dipstick::core::ControlScopeFn<M>>::flush
             at /Users/idubrov/.cargo/registry/src/github.com-1ecc6299db9ec823/dipstick-0.6.5/src/core.rs:136
  20: <dipstick::core::ControlScopeFn<M> as core::ops::drop::Drop>::drop
             at /Users/idubrov/.cargo/registry/src/github.com-1ecc6299db9ec823/dipstick-0.6.5/src/core.rs:155
  21: core::ptr::drop_in_place
             at /Users/travis/build/rust-lang/rust/src/libcore/ptr.rs:59
  22: core::ptr::drop_in_place
             at /Users/travis/build/rust-lang/rust/src/libcore/ptr.rs:59

It looks to me like the cause is that the "now" timestamp stored in self.scores[0] around scores.rs:100 somehow became larger than the "now" captured by calling time::precise_time_ns().
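
Assuming the subtraction at scores.rs:100 computes an elapsed time from two u64 nanosecond timestamps, the panic could be avoided with a saturating subtraction (a sketch of a defensive fix, not the actual dipstick code):

```rust
// Sketch of a defensive fix: if a racy read leaves the stored timestamp ahead
// of "now", report 0 elapsed instead of panicking (the overflow check that
// produced this panic is active in debug builds).
fn elapsed_ns(now: u64, start: u64) -> u64 {
    now.saturating_sub(start)
}

fn main() {
    assert_eq!(elapsed_ns(100, 60), 40);
    assert_eq!(elapsed_ns(60, 100), 0); // plain `-` would have panicked here
}
```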
