
Comments (4)

doug-explorys commented on July 30, 2024

A lighter-weight version of this request is logging that details the processing status of each thread. For example, we just debugged an issue with our R&D cluster where two nodes were acting flaky, and I'm pretty sure the Phoenix client was hanging specifically on those two nodes. Through trial and error we figured it out, but it would have been nice to have logs along the lines of this:

[timestamp][request-id] Starting query "select count(*) from myTable"
[timestamp][request-id][thread-id] starting against RS-X (for each thread)
[timestamp][request-id][thread-id] ending in XXX (ms)
[timestamp][request-id] Finished in YYY (ms)

What this would have uncovered is that we had N region servers, and N-2 requests were completing.

Icing on the cake: if a query times out, tell me which RSs it's waiting on.


jtaylor-sfdc commented on July 30, 2024

We should look at the Zipkin-based monitoring that Elliott Clark is doing for HBase here. It needs to aggregate/roll up the costs, but if it did that, it would be a sweet way to monitor perf.


jyates commented on July 30, 2024

I was thinking we could use Hadoop metrics2 to manage the metrics for a given request (both scans and inserts). What you really want in metrics tooling is:

  1. Async collection
  2. Non-blocking writes
  3. Flexible writers
  4. Dropping extra metrics if the queue becomes too full

metrics2 gives us all of that. Also, there are good reference implementations, for instance the hadoop code itself (here and here) as well as the new HBase metrics system.
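For concreteness, a minimal sketch of what an annotated metrics2 source could look like; the class and metric names are made up for illustration and aren't actual phoenix code:

    import org.apache.hadoop.metrics2.annotation.Metric;
    import org.apache.hadoop.metrics2.annotation.Metrics;
    import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
    import org.apache.hadoop.metrics2.lib.MutableCounterLong;
    import org.apache.hadoop.metrics2.lib.MutableRate;

    // Hypothetical metrics source; names are illustrative, not Phoenix APIs.
    @Metrics(name = "PhoenixRequests", context = "phoenix")
    public class PhoenixRequestMetrics {
      // metrics2 instantiates these fields when the source is registered
      @Metric("Bytes sent to region servers") MutableCounterLong bytesSent;
      @Metric("Per-request round-trip time (ms)") MutableRate requestTime;

      public static PhoenixRequestMetrics register() {
        // starts the shared, asynchronous collection system
        DefaultMetricsSystem.initialize("phoenix");
        return DefaultMetricsSystem.instance().register(
            "PhoenixRequests", "Per-request phoenix metrics",
            new PhoenixRequestMetrics());
      }

      // called from instrumented client code; publishing happens off-thread
      public void update(long bytes, long elapsedMillis) {
        bytesSent.incr(bytes);
        requestTime.add(elapsedMillis);
      }
    }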

We can then use this to keep stats on phoenix in phoenix tables. By instrumenting the phoenix methods correctly we can gather things like number of bytes sent, method times, region/region server response times, etc. You would then publish these metrics to a phoenix sink that writes back to a phoenix table (and possibly updates a local stats cache too); a sketch of such a sink follows.
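A sink along those lines might look like the following sketch. The table name, schema, and JDBC URL are assumptions, and the SubsetConfiguration import is the commons-configuration class Hadoop 2.x uses; metrics2 instantiates the sink from hadoop-metrics2.properties like any other plugin:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    import org.apache.commons.configuration.SubsetConfiguration;
    import org.apache.hadoop.metrics2.AbstractMetric;
    import org.apache.hadoop.metrics2.MetricsRecord;
    import org.apache.hadoop.metrics2.MetricsSink;

    // Hypothetical sink; assumes a PHOENIX_METRICS(TS, RECORD, METRIC, VALUE)
    // table already exists. Not an actual Phoenix class.
    public class PhoenixMetricsSink implements MetricsSink {
      private Connection conn;

      @Override
      public void init(SubsetConfiguration conf) {
        try {
          conn = DriverManager.getConnection(
              conf.getString("jdbcUrl", "jdbc:phoenix:localhost"));
        } catch (SQLException e) {
          throw new RuntimeException("Failed to connect to phoenix", e);
        }
      }

      @Override
      public void putMetrics(MetricsRecord record) {
        String sql =
            "UPSERT INTO PHOENIX_METRICS (TS, RECORD, METRIC, VALUE) VALUES (?, ?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
          for (AbstractMetric metric : record.metrics()) {
            ps.setLong(1, record.timestamp());
            ps.setString(2, record.name());
            ps.setString(3, metric.name());
            ps.setDouble(4, metric.value().doubleValue());
            ps.executeUpdate();
          }
        } catch (SQLException e) {
          throw new RuntimeException("Failed to write metrics", e);
        }
      }

      @Override
      public void flush() {
        try {
          conn.commit(); // phoenix batches mutations until commit
        } catch (SQLException e) {
          throw new RuntimeException("Failed to flush metrics", e);
        }
      }
    }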

The only interesting bits are then:

  1. Tracking method calls from the client to the server
  2. Creating a clean abstraction around dynamic variables

The latter is just good engineering. The former can be solved by tagging each method call with a UUID (similar to how Zipkin would track the same request). Stats from the client and the server would then eventually end up in the same phoenix stats table, which is then queryable.

The tricky bit then becomes updating the stats table with metrics in a way that lets you do a rollup later to reconstruct history. Since you know the query id, you can correlate it between the clients and servers. This also gives you perfect timing, as you know the operation order (and you could get smarter when you parallelize things by having "sub" parts that get their own UUID but correlate to the original request, e.g. UUID 1234 splits into 1234-1, 1234-2, 1234-3; see the sketch below).
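A toy sketch of that id scheme (all names are illustrative, nothing here is actual phoenix code):

    import java.util.UUID;
    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical request id with child ids for parallel sub-parts.
    public final class RequestId {
      private final String id;
      private final AtomicInteger children = new AtomicInteger();

      private RequestId(String id) { this.id = id; }

      // tag the top-level request, e.g. "1234..."
      public static RequestId newRootId() {
        return new RequestId(UUID.randomUUID().toString());
      }

      // tag each parallel sub-part, e.g. "1234-1", "1234-2", ...
      public RequestId newChildId() {
        return new RequestId(id + "-" + children.incrementAndGet());
      }

      // attached to every RPC / log line / metrics record so that
      // client-side and server-side stats correlate in the stats table
      @Override public String toString() { return id; }
    }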

I started working through some simple, toy examples of using metrics2 for logging (simple and with dynamic method calls). It's nothing fancy and shouldn't be used directly for phoenix, but it might be helpful to someone trying to figure out how the metrics2 stuff all works.


jyates commented on July 30, 2024

Simple prototype is up at github.com/jyates/phoenix/tree/tracing. It traces mutations from the client to the server through the indexing path, writes them to the sink (which writes them to a phoenix table), and includes a simple reader to rebuild the traces.

See the end-to-end test for a full example.

