Simple JVM Profiler Using StatsD and Other Metrics Backends

License: MIT License

statsd-jvm-profiler's Introduction

statsd-jvm-profiler

statsd-jvm-profiler is a JVM agent profiler that sends profiling data to StatsD. Inspired by riemann-jvm-profiler, it was primarily built for profiling Hadoop jobs, but can be used with any JVM process.

Read the blog post that introduced statsd-jvm-profiler on Code as Craft, Etsy's engineering blog.

Also check out the blog post reflecting on the experience of open-sourcing the project.

Mailing List

There is a mailing list for this project at https://groups.google.com/forum/#!forum/statsd-jvm-profiler. If you have questions or suggestions for the project, send them there!

Installation

You will need the statsd-jvm-profiler JAR on the machine where the JVM will be running. If you are profiling Hadoop jobs, that means the JAR will need to be on all of the datanodes.

The JAR can be built with mvn package. You will need a relatively recent Maven (at least Maven 3).

statsd-jvm-profiler is available in Maven Central:

<dependency>
  <groupId>com.etsy</groupId>
  <artifactId>statsd-jvm-profiler</artifactId>
  <version>2.0.0</version>
</dependency>

If you would like an uberjar containing all of the dependencies instead of the standard JAR, use the jar-with-dependencies classifier:

<dependency>
  <groupId>com.etsy</groupId>
  <artifactId>statsd-jvm-profiler</artifactId>
  <version>2.0.0</version>
  <classifier>jar-with-dependencies</classifier>
</dependency>

Usage

The profiler is enabled using the JVM's -javaagent argument. You are required to specify at least the StatsD host and port number to use. You can also specify the prefix for metrics and a whitelist of packages to be included in the CPU profiling. Arguments can be specified like so:

-javaagent:/usr/etsy/statsd-jvm-profiler/statsd-jvm-profiler.jar=server=hostname,port=num

You should use the uberjar when starting the profiler in this manner so that all the profiler's dependencies are available.

The profiler can also be loaded dynamically (after the JVM has already started), but this technique requires relying on Sun's tools.jar, meaning it's an implementation-specific solution that might not work for all JVMs. For more information see the Dynamic Loading section.

An example of setting up Cascading/Scalding jobs to use the profiler can be found in the example directory.
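
For plain Hadoop MapReduce jobs that go through ToolRunner/GenericOptionsParser, one way to attach the agent is through the task JVM options. The following is only a sketch: the property names assume Hadoop 2.x, and the job class, JAR path, and StatsD host are placeholders.

hadoop jar my-job.jar com.example.MyJob -Dmapreduce.map.java.opts="-javaagent:/usr/etsy/statsd-jvm-profiler/statsd-jvm-profiler.jar=server=statsd.example.com,port=8125" -Dmapreduce.reduce.java.opts="-javaagent:/usr/etsy/statsd-jvm-profiler/statsd-jvm-profiler.jar=server=statsd.example.com,port=8125"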

Global Options

Name Meaning
server The hostname to which the reporter should send data (required)
port The port number for the server to which the reporter should send data (required)
prefix The prefix for metrics (optional, defaults to statsd-jvm-profiler)
packageWhitelist Colon-delimited whitelist of packages to include (optional, defaults to include everything)
packageBlacklist Colon-delimited blacklist of packages to exclude (optional, defaults to exclude nothing)
profilers Colon-delimited list of profiler class names (optional, defaults to CPUTracingProfiler and MemoryProfiler)
reporter Class name of the reporter to use (optional, defaults to StatsDReporter)
httpServerEnabled Determines if the embedded HTTP server should be started. (optional, defaults to true)
httpPort The port on which to bind the embedded HTTP server (optional, defaults to 5005). If this port is already in use, the next free port will be taken.
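
Putting several of these options together, a full agent argument might look like the following (the host, port, prefix, and package name are placeholders):

-javaagent:statsd-jvm-profiler.jar=server=statsd.example.com,port=8125,prefix=myapp.profiler,packageWhitelist=com.example.myapp,profilers=MemoryProfiler:CPUTracingProfiler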

Embedded HTTP Server

statsd-jvm-profiler embeds an HTTP server to support simple interactions with the profiler while it is in operation. You can configure the port on which this server runs with the httpPort option. You can disable it altogether using the httpServerEnabled=false argument.

Endpoint Usage
/profilers List the currently enabled profilers
/isRunning List the running profilers. This should be the same as /profilers.
/disable/:profiler Disable the profiler specified by :profiler. The name must match what is returned by /profilers.
/errors List the past 10 errors from the running profilers and reporters.
/status/profiler/:profiler Displays a status message with the number of recorded stats for the requested profiler.
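
For example, assuming the default httpPort of 5005, you could interact with the server using curl (the profiler name below is illustrative; use a name as returned by /profilers):

curl http://localhost:5005/profilers
curl http://localhost:5005/errors
curl http://localhost:5005/disable/MemoryProfiler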

Reporters

statsd-jvm-profiler supports multiple backends. StatsD is the default, but InfluxDB is also supported. You can select the backend to use by passing the reporter argument to the profiler; StatsDReporter and InfluxDBReporter are the supported values.

Some reporters may require additional arguments.

StatsDReporter

This reporter does not have any additional arguments.

InfluxDBReporter

Name Meaning
username The username with which to connect to InfluxDB (required)
password The password with which to connect to InfluxDB (required)
database The database to which to write metrics (required)
tagMapping A mapping of tag names from the metric prefix (optional, defaults to no mapping)
useHttps A flag indicating whether an HTTPS connection should be used (optional, defaults to false)
Tag Mapping

InfluxDB 0.9 supports tagging measurements and querying based on those tags. statsd-jvm-profiler uses these tags to support richer querying of the produced data. For compatibility with other metric backends, the tags are extracted from the metric prefix.

If the tagMapping argument is not defined, only the prefix tag will be added, with the value of the entire prefix.

tagMapping should be a period-delimited set of tag names. It must have the same number of components as prefix, or an exception will be thrown. Each component of tagMapping is the name of a tag; the component in the corresponding position of prefix will be its value.

If you do not want to include a component of prefix as a tag, use the special name SKIP in tagMapping for that position.
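
For example, the following agent arguments (a sketch with placeholder values; note that prefix and tagMapping both have five components) would tag measurements with username, job, and stage while skipping the first two prefix components:

-javaagent:statsd-jvm-profiler.jar=server=influxdb.example.com,port=8086,reporter=InfluxDBReporter,username=profiler,password=profiler,database=profiler,prefix=bigdata.profiler.myuser.myjob.stage1,tagMapping=SKIP.SKIP.username.job.stage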

Profilers

statsd-jvm-profiler offers three profilers: MemoryProfiler, CPUTracingProfiler, and CPULoadProfiler.

The metrics from all of these profilers will be prefixed with the value of the prefix argument or its default value, statsd-jvm-profiler.

You can enable specific profilers through the profilers argument like so:

  1. Memory metrics only: profilers=MemoryProfiler
  2. CPU Tracing metrics only: profilers=CPUTracingProfiler
  3. JVM/System CPU load metrics only: profilers=CPULoadProfiler

Default value: profilers=MemoryProfiler:CPUTracingProfiler

Garbage Collector and Memory Profiler: MemoryProfiler

This profiler will record:

  1. Heap and non-heap memory usage
  2. Number of GC pauses and GC time

Assuming you use the default prefix of statsd-jvm-profiler, the memory usage metrics will be under statsd-jvm-profiler.heap and statsd-jvm-profiler.nonheap, and the GC metrics will be under statsd-jvm-profiler.gc.

Memory and GC metrics are reported once every 10 seconds.

CPU Tracing Profiler: CPUTracingProfiler

This profiler records the time spent in each function across all Threads.

Assuming you use the default prefix of statsd-jvm-profiler, the CPU time metrics will be under statsd-jvm-profiler.cpu.trace.

The CPU time is sampled every millisecond, but only reported every 10 seconds. The CPU time metrics represent the total time spent in that function.

Profiling a long-running process or a lot of processes simultaneously will produce a lot of data, so be careful with the capacity of your StatsD instance. The packageWhitelist and packageBlacklist arguments can be used to limit the number of functions that are reported. Any function whose stack trace contains a function in one of the whitelisted packages will be included.
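
For example, to trace only your own code while excluding a noisy subpackage (the package names here are placeholders):

-javaagent:statsd-jvm-profiler.jar=server=statsd.example.com,port=8125,packageWhitelist=com.example.myapp:com.example.lib,packageBlacklist=com.example.myapp.generated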

The visualization directory contains some utilities for visualizing the output of this profiler.

JVM And System CPU Load Profiler: CPULoadProfiler

This profiler will record the JVM's and the overall system's CPU load, if the JVM is capable of providing this information.

Assuming you use the default prefix of statsd-jvm-profiler, the JVM CPU load metrics will be under statsd-jvm-profiler.cpu.jvm, and the system CPU load will be under statsd-jvm-profiler.cpu.system.

The reported metrics are percentages in the range [0, 100] with one decimal place of precision.

CPU load metrics are sampled and reported once every 10 seconds.

Important notes:

  • This profiler is not enabled by default. To enable it, use the argument profilers=CPULoadProfiler.
  • This profiler relies on a Sun/Oracle-specific JMX bean that might not be available in other JVMs. Even if you are using the right JVM, there is no guarantee this bean will remain available in the future.
  • The minimum JVM version that provides this information is Java 7.
  • See com.sun.management.OperatingSystemMXBean for more information (a small sketch of reading this bean follows this list).
  • If the JVM doesn't support the required operations, the metrics above won't be reported at all.
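
For reference, a minimal standalone Java sketch of reading that bean looks like the following. This is for illustration only and is not the profiler's actual implementation; it simply shows the Sun/Oracle-specific subtype the CPULoadProfiler depends on.

import java.lang.management.ManagementFactory;

public class CpuLoadCheck {
    public static void main(String[] args) {
        // The standard bean is always available; the com.sun.management subtype is not guaranteed
        java.lang.management.OperatingSystemMXBean bean = ManagementFactory.getOperatingSystemMXBean();
        if (bean instanceof com.sun.management.OperatingSystemMXBean) {
            com.sun.management.OperatingSystemMXBean sunBean = (com.sun.management.OperatingSystemMXBean) bean;
            // Both methods return a value in [0.0, 1.0], or a negative value if unavailable
            System.out.println("JVM CPU load (%): " + sunBean.getProcessCpuLoad() * 100);
            System.out.println("System CPU load (%): " + sunBean.getSystemCpuLoad() * 100);
        } else {
            System.out.println("com.sun.management.OperatingSystemMXBean is not available on this JVM");
        }
    }
}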

Dynamic Loading of Agent

  1. Make sure you have tools.jar available on your classpath during compilation and runtime. This JAR is usually found under the lib folder of the JAVA_HOME directory for Oracle Java installations.
  2. Make sure the statsd-jvm-profiler JAR is available at runtime.
  3. During your application bootstrap process, do the following:
  val jarPath: String = s"$ABSOLUTE_PATH_TO/com.etsy.statsd-jvm-profiler-$VERSION.jar"
  val agentArgs: String = s"server=$SERVER,port=$PORT"
  attachJvmAgent(jarPath, agentArgs)

  def attachJvmAgent(profilerJarPath: String, agentArgs: String): Unit = {
    // The runtime MX bean name has the form "pid@hostname"; extract the pid of this JVM
    val nameOfRunningVM: String = java.lang.management.ManagementFactory.getRuntimeMXBean.getName
    val p: Int = nameOfRunningVM.indexOf('@')
    val pid: String = nameOfRunningVM.substring(0, p)

    try {
      // Attach to this JVM and load the profiler agent (requires tools.jar on the classpath)
      val vm: com.sun.tools.attach.VirtualMachine = com.sun.tools.attach.VirtualMachine.attach(pid)
      vm.loadAgent(profilerJarPath, agentArgs)
      vm.detach()
      LOGGER.info("Dynamically loaded StatsD JVM Profiler Agent...")
    } catch {
      case e: Exception => LOGGER.warn(s"Could not dynamically load StatsD JVM Profiler Agent ($profilerJarPath)", e)
    }
  }
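
When compiling and running code like the above outside of an IDE, you typically need to put tools.jar on the classpath yourself, for example (a sketch assuming an Oracle/OpenJDK 7 or 8 layout and a hypothetical AttachAgent.scala source file):

scalac -classpath "$JAVA_HOME/lib/tools.jar" AttachAgent.scala
scala -classpath "$JAVA_HOME/lib/tools.jar:." AttachAgent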

Contributing

Contributions are highly encouraged! Check out the contribution guidelines.

Any ideas you have are welcome, but check out some ideas for contributions.

statsd-jvm-profiler's People

Contributors

ajsquared, danosipov, dossett, noamshaish, stickperson

statsd-jvm-profiler's Issues

Profilers Exit on Exceptions

Occasionally I get errors with my influxdb metrics writer like this:

java.lang.RuntimeException: timeout
at org.influxdb.impl.InfluxDBErrorHandler.handleError(InfluxDBErrorHandler.java:19)
at retrofit.RestAdapter$RestHandler.invoke(RestAdapter.java:242)
at org.influxdb.impl.$Proxy0.writePoints(Unknown Source)
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:156)
at com.etsy.statsd.profiler.reporter.InfluxDBReporter.recordGaugeValues(InfluxDBReporter.java:66)

While this is unfortunate, and probably means I need to batch more, record less frequently, or scale up InfluxDB, it would be nice if the profilers continued running. This could be done by adding some exception handling here:

https://github.com/etsy/statsd-jvm-profiler/blob/master/src/main/java/com/etsy/statsd/profiler/worker/ProfilerWorkerThread.java#L19

Also, nice work guys. This is a super awesome project!

graphite_dump.py isn't liking current stats format

Running curl to see what's been dumped into graphite shows the following:
curl "http://127.0.0.1/metrics/expand?query=stats.gauges.statsd-jvm-profiler.cpu.trace.*&leavesOnly=1"

{"results": ["stats.gauges.statsd-jvm-profiler.cpu.trace.1", "stats.gauges.statsd-jvm-profiler.cpu.trace.72", "stats.gauges.statsd-jvm-profiler.cpu.trace.73", "stats.gauges.statsd-jvm-profiler.cpu.trace.74", "stats.gauges.statsd-jvm-profiler.cpu.trace.75", "stats.gauges.statsd-jvm-profiler.cpu.trace.77", "stats.gauges.statsd-jvm-profiler.cpu.trace.org-apache-commons-daemon-support-DaemonLoader-load-200"]}

Running graphite_dump.py against the same server bails out like this:
python2.7 ./graphite_dump.py -o 127.0.0.1 -s 12:40_20160316 -e 12:41_20160316 -p stats.gauges.statsd-jvm-profiler.cpu.trace

Traceback (most recent call last):
File "./graphite_dump.py", line 83, in
results = get_tree(host, args.prefix, args.start, args.end)
File "./graphite_dump.py", line 49, in get_tree
(min, max) = get_bounds(host, prefix)
File "./graphite_dump.py", line 34, in get_bounds
bounds = [int(bound.replace(prefix + '.', '')) for bound in json_results['results']]
ValueError: invalid literal for int() with base 10: 'org-apache-commons-daemon-support-DaemonLoader-load-200'

Looking at the code, it expects each stat to be prefix + '.' followed by a number, but the value it's choking on uses '-' as the separator.

statsd-jvm-profiler configuration

Hi guys, we are passing statsd-jvm-profiler as a Java agent.
Three profilers (memory metrics, CPU tracing, and CPU load metrics) are enabled, but only the first two are working; the third one shows '0', which means no stats are being reported for it.
The code is:
2.1.1. jar=server=localhost,port=8125,prefix=applicationname, profilers=MemoryProfiler:CPUTracingProfiler:CPULoadProfiler -jar......

The application we are using is a microservices application.

Java version is 8

Kindly help out with this. Thanks in Advance.

Release with vertx version 3.x

I am trying to use statsd-jvm-profiler with my Spark application. There is a mismatch in the Jackson library: vertx 2.4.1 in statsd-jvm-profiler uses an old version (2.2.2), while my application uses Jackson 2.8.
The Spark cluster does not seem to accept jars with different versions of the same library.
Is there a plan to upgrade the statsd profiler to the newer vertx 3.x?

Problems with dashboard.

Andrew,

there is one more issue with the dashboard. Here is the stacktrace - see below.
Feel free to ask for any data that you might need (e.g. I can send you a backup of the InfluxDB data or anything like that). Just give me instructions on how to get it to you.

Also feel free to give me any debug scripts to temporarily place instead of normal ones (to dump my data on the screen - I will paste it here).

Thanks.

GET /scripts/bootstrap.min.js 304 3.740 ms - -
GET /css/style.css 200 430.211 ms - -
/opt/influxdb-dashboard/public/scripts/influxdb.js:48
var series = seriesNames[0];
^
TypeError: Cannot read property '0' of undefined
at /opt/influxdb-dashboard/public/scripts/influxdb.js:48:26
at /opt/influxdb-dashboard/node_modules/influx/index.js:181:14
at /opt/influxdb-dashboard/node_modules/influx/index.js:64:14
at InfluxRequest._request (/opt/influxdb-dashboard/node_modules/influx/lib/InfluxRequest.js:97:12)
at InfluxRequest._parseCallback (/opt/influxdb-dashboard/node_modules/influx/lib/InfluxRequest.js:115:19)
at Request._callback (/opt/influxdb-dashboard/node_modules/influx/lib/InfluxRequest.js:107:10)
at self.callback (/opt/influxdb-dashboard/node_modules/influx/node_modules/request/request.js:197:22)
at Request.emit (events.js:107:17)
at Request.onRequestError (/opt/influxdb-dashboard/node_modules/influx/node_modules/request/request.js:854:8)
at ClientRequest.emit (events.js:107:17)

Set Up Checkstyle

Set up checkstyle for the project and run it in Travis to ensure a consistent code style.

Missing hostname in metric

Hostnames are missing in the generated metrics. If the plugin is deployed on multiple JVMs across hosts and the data is fed into the same StatsD instance, the metrics are not useful due to overlap.

Error on flamegraph output

I ran the profiler, and see the CPU trace metrics in graphite. However, when I ran graphite_dump.py, it fails with this error:

$ /usr/local/graphite_dump.py -o 127.0.0.1 -s 20:25_20150312 -e 20:50_20150312 -p stats.gauges.bigdata.profiler.hadoop.com.shazam.TitleExtractor.C83612C60A954CB4A442C56597331C5E.1.cpu.trace
Traceback (most recent call last):
  File "/usr/local/graphite_dump.py", line 82, in <module>
    format_output(args.prefix, results)
  File "/usr/local/graphite_dump.py", line 67, in format_output
    print '%s %d' % (format_metric(metric, prefix), value)
TypeError: %d format: a number is required, not NoneType

Adding a null check to value on line 67 produces no output. Let me know if the error is in my StatsD/Graphite setup (I've used a default configuration)

Enhanced Memory Metrics

Hello all. I'm a big user of NewRelic but I'm also interested in getting a lot of the JVM stats it provides into a more open metrics solution (graphite/influxdb/cloudwatch). This project exposes some of those metrics but there are a few pieces that are missing.

First and easiest would be class (un)loading tracking, which can be accessed via http://docs.oracle.com/javase/7/docs/api/java/lang/management/ClassLoadingMXBean.html

Second and a bit more involved but also a bit more valuable would be memory pool tracking (old gen, eden, survivor) which I think can be accessed via http://docs.oracle.com/javase/7/docs/api/java/lang/management/MemoryPoolMXBean.html

Would these additions be valuable to this tool and in keeping with its vision? I wanted to ask before jumping in on any implementation work.

Generate Separate Uberjar

Generating only an uberjar can cause trouble for users that rely on different versions of libraries included in the uberjar. Instead we should produce separate standard and uber jars.

graphite time ranges seem broken

So after solving the bounds-finding issue and sorting out a few issues where empty results weren't properly handled, I went and tested gathering profiling on a specific map/reduce run. I generated carbonite-formatted timestamps for beginning and ending of the run, and enabled the profiling.

Afterwards, I found that running graphite_dump.py generated no results when using the start and stop timestamps generated before and after the map/reduce job. I verified that all computers had synced time and same timezone. The stat prefixes used for this run were brand new, so no stats had been under these trees in graphite prior to this run.

I was able to extract stats by expanding the time range; using a multi-day time range w/ start time of 01:00 - 23:00 got me stats, but I'm not sure how to find what exact time range these stats are tied to.

Why use a gauge for time measures?

Hi,
I wonder why you use a gauge for recording the GC run time, as opposed to a count or a timing measure. Does this not mean one value will be overwritten by the next measure, and the total time might be lost?

recordGaugeValue("gc." + gcMXBean.getName() + ".time", time);
recordGaugeValue("gc." + gcMXBean.getName() + ".runtime", runtime);

Any response or insight would be appreciated! Thanks!

Graph New Metrics

Pull requests #18 and #19 added a lot of new metrics, and it would be good to include those in the dashboard.

Problems writing directly to InfluxDB

I'm getting this problem:
Exception in thread "Thread-1" java.lang.NullPointerException
at org.influxdb.impl.TimeUtil.toTimePrecision(TimeUtil.java:21)
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:155)
at com.etsy.statsd.profiler.reporter.InfluxDBReporter.recordGaugeValues(InfluxDBReporter.java:69)
at com.etsy.statsd.profiler.Profiler.recordGaugeValues(Profiler.java:73)
at com.etsy.statsd.profiler.profilers.MemoryProfiler.recordStats(MemoryProfiler.java:108)
at com.etsy.statsd.profiler.profilers.MemoryProfiler.flushData(MemoryProfiler.java:49)
at com.etsy.statsd.profiler.worker.ProfilerShutdownHookWorker.run(ProfilerShutdownHookWorker.java:22)
at java.lang.Thread.run(Thread.java:744)

Steps to reproduce:

  1. download http://sourceforge.net/projects/suprfractalthng/ - the jar file name is superfractalthing_0.8.3.jar
  2. install InfluxDB on another machine
  3. run this set of commands:

export _JAVA_OPTIONS='-javaagent:/var/lib/statsd-jvm-profiler-0.8.1-SNAPSHOT.jar=server=192.168.56.101,port=8086,reporter=InfluxDBReporter,database=monitoring,username=monitoring,password=monitoring,profiler=CPUProfilers'

java -jar superfractalthing_0.8.3.jar

(I've built the profiler by git cloning your repository first, then "maven package", then copied to /var/lib/; then I created a database called "monitoring" with the same username and password in the InfluxDB).

  4. Play with the app, then close the app, and you will get an exception.

Old dependencies

There is nice "HTTP server" functionality which seems to be configurable; however, it brings in an old version of vertx as a dependency, which in turn pulls old versions of fasterxml and netty into the classpath as a byproduct. When I run Spring Boot 2 with the agent, which also has netty and fasterxml dependencies, this predictably causes conflicts, and it throws a verify error here: https://github.com/spring-projects/spring-framework/blob/master/spring-web/src/main/java/org/springframework/http/converter/json/Jackson2ObjectMapperBuilder.java#L742

I would propose using a simpler HTTP server there that brings in no extra dependencies.

Bad rendering of Flame Graph in Firefox (it is OK in Chrome)

Hi, Andrew.

See what I get in Firefox: http://screencast.com/t/7XHV32R5
Now look what I get in Chrome: http://screencast.com/t/CnwS6tjsOlY

Steps to reproduce:

  1. download the application is the same as in my previous test - http://sourceforge.net/projects/suprfractalthng/

  2. run commands

export _JAVA_OPTIONS='-javaagent:/var/lib/statsd-jvm-profiler-0.8.3-SNAPSHOT.jar=server=192.168.56.101,port=8086,reporter=InfluxDBReporter,database=monitoring,username=root,password=root,prefix=bigdata.profiler.v1.v2.v3.v4.v5,tagMapping=SKIP.SKIP.username.job.flow.stage.phase'

java -jar superfractalthing_0.8.3.jar

  3. Click on some Mandelbrot sets, and then let the app wait for 10-15 minutes, until it generates those two "waiting" stacktraces.

P.S.
By the way, if you could give a hint on how to get the SVG file (instead of an HTML page with D3JS code), I would be very happy. It would be nice to have an influxdb_dump.py, just to get a file with the traces and then pass it to the native Flame Graph builder script (flamegraph.pl the_dumped_file > some.svg).

Upgrade Vertx

There is a much newer version of Vertx available.

Problems with injecting the profiler to Spark Application

Hello,

I am running spark-submit as follows

spark-submit --packages com.etsy:statsd-jvm-profiler:2.1.0 --deploy-mode cluster --master spark://14.16.47.27:7077 --class par.met.TS --conf spark.executor.extraJavaOptions=-Xss100m --conf "spark.executor.extraJavaOptions=-javaagent:statsd-jvm-profiler-2.1.0-jar-with-dependencies.jar=server=myInfluxServer,port=8086,reporter=InfluxDBReporter,database=profiler,username=profiler,password=profiler" --num-executors 3 myAppJar

but executors keep failing with the following messages:

stderr:
Error opening zip file or JAR manifest missing : statsd-jvm-profiler-2.1.0-jar-with-dependencies.jar

stdout:
Error occurred during initialization of VM
agent library failed to init: instrument

Any help appreciated.
Thank you,
Giannis

Upgrading VertX to 3.x makes statsd-jvm-profiler require 1.8 JVM

Not sure if you were aware of this, but the 3.x series of the vertx libraries requires Java 1.8, and I don't think many Hadoop clusters are being run under JVM 1.8. Despite your pom specifying a maven-compiler-plugin target of 1.7, the vertx artifact that gets pulled in is 1.8-only and will bomb on any 1.7 JVM. To avoid replacing the JVMs on my Hadoop nodes, I was able to recompile statsd-jvm-profiler by cloning the repo and reverting the commit that upgraded vertx, but I thought you might want to be aware of this issue.

Out of memory in a mapreduce job (when launched with profiler)

I am not sure that this happens due to the profiler, but without the profiler the same job with the same input data worked fine. I see from the stacktrace that CPUTraces.getDataToFlush is running and gets interrupted. Don't you think this could be the reason for the memory consumption?

Here is the screenshot of my memory parameters:
http://postimg.org/image/8sdq8l6al/

2015-07-21 10:12:43,981 INFO [main] org.apache.gora.mapreduce.GoraRecordReader: gora.buffer.read.limit = 10000
2015-07-21 10:12:44,329 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:983)
at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:401)
at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.(MapTask.java:695)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:767)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

2015-07-21 10:12:44,383 ERROR [Thread-2] org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[Thread-2,5,main] threw an Throwable, but we are shutting down, so ignoring this
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
at java.util.HashMap$KeyIterator.next(HashMap.java:956)
at com.etsy.statsd.profiler.util.CPUTraces.getDataToFlush(CPUTraces.java:45)
at com.etsy.statsd.profiler.profilers.CPUProfiler.recordMethodCounts(CPUProfiler.java:116)
at com.etsy.statsd.profiler.profilers.CPUProfiler.flushData(CPUProfiler.java:71)
at com.etsy.statsd.profiler.worker.ProfilerShutdownHookWorker.run(ProfilerShutdownHookWorker.java:22)
at java.lang.Thread.run(Thread.java:745)

Support InfluxDB 0.9

We'll need the client libraries updated first. influxdb-java is ready but node-influx is not.
