
Comments (10)

florianl commented on June 18, 2024

That should be most CPU profiles, right?

Most of my experiments are with CPU profiles.
For profiles that focus on lock contention or memory allocation, the situation might be slightly different. I can imagine that for such profiles the leaf frame is similar more often. But the described problem should be the same in this case as well, if memory allocations or lock acquisitions happen in stacks with a high number of frames.

from oteps.

athre0z commented on June 18, 2024

Yeah, this is definitely tree-ish: we're essentially trying to encode a flamegraph tree efficiently. For optimal density we'd probably want some sort of prefix tree structure. That being said, I'm not sure whether we're willing to pay the compute price of maintaining one in the profiler.

The algorithm that I had in mind for use with the repeated fields falls more into the "simple and hopefully good enough" category: define some chunk size, split traces into chunks of that size, and then keep a hash-LRU of chunks that we've seen previously. This should provide a good amount of dedup at very little compute / memory overhead. Implementations that wish to roll something fancier can do that as well.
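A rough sketch of what this could look like (Python for illustration; the class name, chunk size, and index assignment are all hypothetical, not part of any proposed spec):

```python
from collections import OrderedDict

CHUNK_SIZE = 4  # assumed chunk size; tunable


def split_chunks(trace, size=CHUNK_SIZE):
    """Split a stack trace (a list of frame ids) into fixed-size chunks."""
    return [tuple(trace[i:i + size]) for i in range(0, len(trace), size)]


class ChunkDedup:
    """LRU cache of previously seen chunks, keyed by the chunk itself."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.cache = OrderedDict()  # chunk -> assigned index
        self.next_index = 0

    def encode(self, trace):
        """Return a list of (index, is_new) pairs, one per chunk.

        is_new tells the caller whether the chunk's frames must be
        emitted into the message, or only the index reference.
        """
        out = []
        for chunk in split_chunks(trace):
            if chunk in self.cache:
                self.cache.move_to_end(chunk)  # refresh LRU position
                out.append((self.cache[chunk], False))
            else:
                if len(self.cache) >= self.capacity:
                    self.cache.popitem(last=False)  # evict least recently used
                idx = self.next_index
                self.next_index += 1
                self.cache[chunk] = idx
                out.append((idx, True))
        return out
```

With this, a second occurrence of the same trace resolves entirely to previously seen chunk indices, so only the references need to be serialized.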


arminru commented on June 18, 2024

cc @open-telemetry/profiling-maintainers @open-telemetry/profiling-approvers


mtwo commented on June 18, 2024

Comment from the OTel maintainer meeting: could / should this be moved to a comment on the current Profiling PR in the OTLP repository?


florianl commented on June 18, 2024

This issue is linked in https://github.com/open-telemetry/opentelemetry-proto/pull/534/files#r1561128746. As this particular issue is relevant to the specification, I opened it in this repository.


felixge commented on June 18, 2024

Thanks for raising this.

In particular for deep stack traces with a high number of similar frames and where only leaf frames are different,

That should be most CPU profiles, right? @petethepig IIRC you had some benchmarks that showed the efficiency of this new encoding of stack traces. Did you use realistic CPU profiling data?

If this new approach is not a clear win in the majority of situations, we should remove it.


athre0z commented on June 18, 2024

We could simply make both locations_start_index and locations_length repeated fields: this would allow implementations to de-duplicate prefixes and should be even more efficient than just listing all indices all the time.

For example if you had two traces that only vary in the leaf:

trace_foo:
  0) libc_entry_point
  1) main
  2) run_my_app
  3) do_fancy_stuff
  4) do_boring_stuff
  5) strcpy
  6) strlen

trace_bar:
  0) libc_entry_point
  1) main
  2) run_my_app
  3) do_fancy_stuff
  4) do_boring_stuff
  5) strcpy
  6) memcpy

Then you could create locations like so:

locations:
  0) libc_entry_point
  1) main
  2) run_my_app
  3) do_fancy_stuff
  4) do_boring_stuff
  5) strcpy
  6) strlen
  7) memcpy

And then encode the reference like this:

trace_foo:
  locations_start_index: [0]
  locations_length: [7]

trace_bar:
  locations_start_index: [0, 7]
  locations_length: [6, 1]
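A minimal illustration of how a writer could produce this encoding (hypothetical Python; this simplistic version only reuses a prefix sitting at the start of the locations table, whereas a real implementation would want something closer to a trie or the chunk-hashing scheme discussed earlier in the thread):

```python
def encode_with_prefix_reuse(trace, locations):
    """Encode a stack trace as parallel (start_index, length) spans over a
    shared locations table, reusing the longest shared prefix and appending
    any frames not covered yet. Illustrative only."""
    # Find the longest prefix of `trace` already stored contiguously
    # at the beginning of `locations`.
    prefix = 0
    while (prefix < len(trace) and prefix < len(locations)
           and trace[prefix] == locations[prefix]):
        prefix += 1

    starts, lengths = [], []
    if prefix:
        starts.append(0)
        lengths.append(prefix)

    # Append the remaining (non-shared) frames to the table.
    rest = trace[prefix:]
    if rest:
        starts.append(len(locations))
        lengths.append(len(rest))
        locations.extend(rest)

    return starts, lengths
```

Feeding it trace_foo and then trace_bar from above reproduces exactly the `[0], [7]` and `[0, 7], [6, 1]` spans shown in the example.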


felixge commented on June 18, 2024

@athre0z interesting idea! Do you have an algorithm in mind for encoding the data in this way?

A bit of a meta comment: I think it's difficult to evaluate different stack trace encoding schemas without some alignment on how we value encoding vs decoding efficiency, compression, as well as overall complexity. Additionally I suspect that we're reinventing well-known tree encoding formats here (the above looks trie-ish?), and that there is a lot more prior art that we could explore.


felixge commented on June 18, 2024

Your algorithm sounds like it could work nicely. That being said, I see two paths forward:

  1. Try out new ideas like yours and make sure we get the evaluation right this time (it seems like we didn't the first time around).
  2. Go back to pprof's encoding, which is not ideal, but is simpler to encode/decode and keeps us more compatible with pprof.

What do you think?


athre0z commented on June 18, 2024

I don't really have a strong opinion on this. Intuitively I'd guess that this location list may make up a significant portion of message size, but these things tend to be hard to guess. Makes me wish for a protobuf message size profiler that attributes size consumed to message fields. Bonus points if it could also do it for compressed message size!
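For what it's worth, attributing uncompressed bytes to top-level field numbers is not hard to prototype straight from the protobuf wire format; a hypothetical sketch (no recursion into nested messages, no compressed-size attribution):

```python
def field_sizes(buf):
    """Attribute serialized bytes of a protobuf wire-format message to its
    top-level field numbers. Rough sketch: groups and nested messages are
    counted as opaque blobs against their field number."""
    def varint(i):
        # Decode a base-128 varint starting at offset i.
        v = shift = 0
        while True:
            b = buf[i]
            i += 1
            v |= (b & 0x7F) << shift
            shift += 7
            if not b & 0x80:
                return v, i

    sizes = {}
    i = 0
    while i < len(buf):
        start = i
        tag, i = varint(i)
        field, wtype = tag >> 3, tag & 7
        if wtype == 0:        # varint
            _, i = varint(i)
        elif wtype == 1:      # fixed 64-bit
            i += 8
        elif wtype == 2:      # length-delimited (bytes, string, message)
            n, i = varint(i)
            i += n
        elif wtype == 5:      # fixed 32-bit
            i += 4
        else:
            raise ValueError("unsupported wire type %d" % wtype)
        sizes[field] = sizes.get(field, 0) + (i - start)
    return sizes
```

Run against a serialized profile, this would at least answer whether the location indices dominate the uncompressed message size; the compressed-size attribution is the genuinely hard part.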

Whether "keeping more compatible with pprof" is a priority, IMHO, depends on whether Google decides to donate the format to OTel or not. If pprof keeps evolving independently, then we'll find ourselves in a C/C++ kind of situation where newer C versions gained features that C++ doesn't have, and it'll just be pure pain to somehow keep things compatible. In that case I'd prefer to intentionally break compatibility to avoid the misconception of interoperability without transpiling.

