
Comments (6)

fralalonde avatar fralalonde commented on May 12, 2024

Thanks for your input. I'm glad you raised these points, as I've struggled with naming for quite a while and never seemed able to come up with a truly consistent way of doing things. My goal was to find fluent, meaningful method names for common use cases, hence send_to and the like.

The with_name / add_prefix choice in particular was a hard call. I wanted to convey the idea that the namespace gets appended to rather than entirely replaced. Also, the "prefix" is a prefix of the eventual metric names, but the added prefix is itself appended to the existing prefixes, actually making it a suffix of the existing namespace. Finally, to be consistent, the with_* convention would have to be extended to other properties, such as with_cache, with_buffer, etc.
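The "append, don't replace" semantics can be sketched with a toy namespace type. This is purely illustrative: the Namespace struct and its methods are hypothetical stand-ins for the behavior described above, not dipstick's actual internals.

```rust
// Toy namespace showing how each add_prefix call appends a segment
// rather than overwriting the existing namespace.
#[derive(Clone, Debug, PartialEq)]
struct Namespace(Vec<String>);

impl Namespace {
    fn new() -> Self {
        Namespace(Vec::new())
    }

    // Appends a segment: the new segment is a prefix of future metric
    // names, but a suffix of the namespace built so far.
    fn add_prefix(mut self, segment: &str) -> Self {
        self.0.push(segment.to_string());
        self
    }

    // The full metric name is the joined namespace plus the leaf name.
    fn qualify(&self, leaf: &str) -> String {
        let mut parts = self.0.clone();
        parts.push(leaf.to_string());
        parts.join(".")
    }
}

fn main() {
    let ns = Namespace::new().add_prefix("app").add_prefix("cache");
    // "cache" prefixes the metric name "hits", yet suffixes "app".
    assert_eq!(ns.qualify("hits"), "app.cache.hits");
    println!("{}", ns.qualify("hits"));
}
```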

Some structs were not meant to be instantiated directly:

  • Multi* should be created using some form of and_then on a first output
  • Proxy would usually be instantiated from within the metrics!() macro
  • Cache* is just a decorator

Following the previous points, AtomicBucket is a special case and might be better represented by AtomicBucket::aggregate() instead of new(). It is called AtomicBucket because there could be a non-atomic version of it in the future, and I thought Bucket was better than Aggregator, which is long-ish and sounds like the name of a basement-dwelling death metal band.
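The "Multi* via and_then" idea above can be sketched in plain Rust. All names here (Output, Multi, and_then) are hypothetical stand-ins for the pattern being described; they are not dipstick's real API.

```rust
use std::cell::RefCell;

// A combined output is never constructed by name; chaining two
// outputs with `and_then` yields one that fans writes out to both.
trait Output {
    fn emit(&self, name: &str, value: u64);
}

// A trivial output that records emitted lines for inspection.
struct VecOutput {
    lines: RefCell<Vec<String>>,
}

impl Output for VecOutput {
    fn emit(&self, name: &str, value: u64) {
        self.lines.borrow_mut().push(format!("{}={}", name, value));
    }
}

// The Multi struct exists, but callers never name it directly.
struct Multi<A: Output, B: Output>(A, B);

impl<A: Output, B: Output> Output for Multi<A, B> {
    fn emit(&self, name: &str, value: u64) {
        self.0.emit(name, value);
        self.1.emit(name, value);
    }
}

// `and_then` builds the Multi from an existing first output.
fn and_then<A: Output, B: Output>(first: A, second: B) -> Multi<A, B> {
    Multi(first, second)
}

fn main() {
    let a = VecOutput { lines: RefCell::new(Vec::new()) };
    let b = VecOutput { lines: RefCell::new(Vec::new()) };
    let both = and_then(a, b);
    both.emit("requests", 3);
    assert_eq!(both.0.lines.borrow()[0], "requests=3");
    assert_eq!(both.1.lines.borrow()[0], "requests=3");
}
```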

from dipstick.

mixalturek avatar mixalturek commented on May 12, 2024

Regarding with_name and add_prefix... what about just named()? We use it in our Java/Scala library and it has been well accepted.

https://github.com/avast/metrics#naming-of-monitors


mixalturek avatar mixalturek commented on May 12, 2024

I have just finished the update to 0.7; our setup method is below. The rest of the code operates with add_prefix/counters/timers/... - nothing special. Our application writes to StatsD, but it aggregates all metrics internally and pushes them every minute, so StatsD is close to being only a 1:1 transport channel.

The issues with the API that I see from our use case's point of view:

  • I didn't expect that AtomicBucket would be the "core" object of the library.
  • It's impossible to use the ? operator for error propagation, #35.
  • I didn't expect the cached() method to be on the StatsD object; I searched for it in AtomicBucket for a long time and for a while thought the feature had been removed entirely. I'm still quite uncertain what exactly it does. Its connection to the StatsD object goes against my mental model of how it works (without looking deeply at the code).
  • add_prefix() uses a fluent API, but set_stats() and set_drain() do not; I'm unsure why the difference.
```rust
pub use dipstick::{AtomicBucket as Monitor, Counter, Gauge, InputScope, Prefixed, Timer};
use dipstick::{
    CachedOutput, CancelHandle, Flush, InputKind, MetricName, MetricValue, ScheduleFlush,
    ScoreType, Statsd,
};

    fn statsd_metrics(config: &MetricsConfig) -> Result<Self, Error> {
        let statsd = Statsd::send_to((config.statsd_host.as_ref(), config.statsd_port))
            // TODO: https://github.com/fralalonde/dipstick/issues/35
            .map_err(|e| err_msg(format!("Creation of StatsD connector failed: {}", e)))?
            .cached(100);

        let monitor = Monitor::new().add_prefix(config.prefix.clone());
        monitor.set_stats(Self::selected_stats);
        monitor.set_drain(statsd);
        let cancel_handle = Some(monitor.flush_every(config.flush_period));

        Ok(Metrics {
            monitor,
            cancel_handle,
        })
    }
```


fralalonde avatar fralalonde commented on May 12, 2024

cached() does not exist on AtomicBucket because it is not required. To track scores, the bucket internally maintains a list of name-deduplicated metrics, which is exactly what the Cache decorator provides for non-persistent outputs. For example, a Statsd output does not remember that you asked it for a Counter metric named "bananas" and would rebuild a new Counter every time you asked for it.
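The point about stateless outputs can be sketched in plain Rust. The types below (RawOutput, CachedOutput, Counter) are hypothetical illustrations of the decorator pattern being described, not dipstick's actual implementation.

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// A metric handle; repeatedly constructing these is the cost the
// cache decorator avoids.
#[derive(Clone)]
struct Counter {
    name: String,
}

// A stateless output, like Statsd: it builds a fresh handle on
// every request and remembers nothing.
struct RawOutput {
    builds: RefCell<usize>, // counts constructions, for demonstration
}

impl RawOutput {
    fn counter(&self, name: &str) -> Counter {
        *self.builds.borrow_mut() += 1; // rebuilt every single time
        Counter { name: name.to_string() }
    }
}

// The cache decorator memoizes handles by name, mimicking the
// name-deduplicated list a bucket keeps internally.
struct CachedOutput {
    inner: RawOutput,
    cache: RefCell<HashMap<String, Counter>>,
}

impl CachedOutput {
    fn counter(&self, name: &str) -> Counter {
        self.cache
            .borrow_mut()
            .entry(name.to_string())
            .or_insert_with(|| self.inner.counter(name))
            .clone()
    }
}

fn main() {
    let cached = CachedOutput {
        inner: RawOutput { builds: RefCell::new(0) },
        cache: RefCell::new(HashMap::new()),
    };
    let c = cached.counter("bananas");
    cached.counter("bananas");
    assert_eq!(c.name, "bananas");
    // Two lookups, but only one underlying construction.
    assert_eq!(*cached.inner.builds.borrow(), 1);
}
```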

Also, the cache is only useful when metric names are dynamically generated at runtime, to avoid the cost of re-creating metrics of the same name. With static metrics, whether declared through the metrics!() macro or programmatically, caching only adds a slight overhead and provides no benefit.


mixalturek avatar mixalturek commented on May 12, 2024

We use both static, programmatically declared metrics wherever possible - counters of requests, errors, latency, etc. - and dynamic ones, e.g. for HTTP status codes in responses.

We don't use the fully static metrics!(); it works only for the simplest cases. Separately monitored multiple instances of the same struct are simply impossible with it. Imagine, for example, multiple HTTP handlers (/endpoint/a, /endpoint/b, ...) and a generic MonitoredHandler wrapper that tracks the number of requests, in/out bytes, latency, status codes, etc.
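The per-instance pattern described above can be sketched as follows. MonitoredHandler and its methods are hypothetical stand-ins for the kind of wrapper being discussed, not code from the library or from Avast's application.

```rust
use std::cell::Cell;

// One generic wrapper, instantiated once per endpoint, each instance
// owning its own prefixed counters. A single static metrics!()
// declaration could not distinguish the instances.
struct MonitoredHandler {
    prefix: String,
    requests: Cell<u64>,
}

impl MonitoredHandler {
    fn new(prefix: &str) -> Self {
        MonitoredHandler {
            prefix: prefix.to_string(),
            requests: Cell::new(0),
        }
    }

    fn handle(&self) {
        // ... actual request handling would go here ...
        self.requests.set(self.requests.get() + 1);
    }

    // Each instance reports under its own namespace.
    fn report(&self) -> String {
        format!("{}.requests={}", self.prefix, self.requests.get())
    }
}

fn main() {
    let a = MonitoredHandler::new("endpoint_a");
    let b = MonitoredHandler::new("endpoint_b");
    a.handle();
    a.handle();
    b.handle();
    assert_eq!(a.report(), "endpoint_a.requests=2");
    assert_eq!(b.report(), "endpoint_b.requests=1");
}
```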


fralalonde avatar fralalonde commented on May 12, 2024

Closing this as the new names have been merged.

