BigGraphite

BigGraphite is a storage layer for timeseries data. It integrates with Graphite as a plugin.

For usage information and how to contribute, please see CONTRIBUTING.md.

Usage

See USAGE.md and CONFIGURATION.md.

Backends

There is only one supported backend that provides all features: Cassandra, whose design is described in CASSANDRA_DESIGN.md.

A second backend stores metadata only, in Elasticsearch; see ELASTICSEARCH_DESIGN.md. With it, Cassandra can store the data points while Elasticsearch stores the metric metadata.

Code structure

  • biggraphite.accessor exposes the public API to store/retrieve metrics
  • biggraphite.metadata_cache implements a machine-local cache using LMDB so that one does not need a round-trip for each call to accessor
  • biggraphite.plugins.* implements integration with Carbon and Graphite
  • biggraphite.drivers.* implements the storage backends (e.g. Cassandra-specific code)

Disclaimer

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Contributors

adericbourg, biox, camathieu, dbxfb, erebe, fedj, geobeau, hdost, iksaif, informatiq, jfwm2, melchiormoulin, mycroft, natbraun, rbizos, rclaude, roguelazer, rudy-6-4, rveznaver, thib17, unbrice

Issues

Add indexes on uuid and name

For directories and metrics we should also index the name, and use it as a prefix when it makes sense (most of the time when the user is auto-completing).
For metrics we should also index the uuid to make it faster to retrieve the metric name from a point.

AttributeError: 'NoneType' object has no attribute 'get_metric'

Traceback (most recent call last):
  File "/opt/graphite/webapp/graphite/render/datalib.py", line 201, in fetchData
    seriesList = _fetchData(pathExpr,startTime, endTime, requestContext, seriesList)
  File "/opt/graphite/webapp/graphite/render/datalib.py", line 116, in _fetchData
    fetches = [(node, node.fetch(startTime, endTime)) for node in matching_nodes if node.is_leaf]
  File "/opt/graphite/webapp/graphite/node.py", line 30, in fetch
    return self.reader.fetch(startTime, endTime)
  File "/opt/graphite/lib/python2.7/site-packages/biggraphite/plugins/graphite.py", line 85, in fetch
    self.__refresh_metric()
  File "/opt/graphite/lib/python2.7/site-packages/biggraphite/plugins/graphite.py", line 70, in __refresh_metric
    self._metric = self._metadata_cache.get_metric(self._metric_name)
AttributeError: 'NoneType' object has no attribute 'get_metric'
Thu Sep 29 12:57:01 2016 :: Exception encountered in <POST http://graphite.preprod.crto.in/render>
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 115, in get_response
    response = callback(request, *callback_args, **callback_kwargs)
  File "/opt/graphite/webapp/graphite/render/views.py", line 113, in renderView
    seriesList = evaluateTarget(requestContext, target)
  File "/opt/graphite/webapp/graphite/render/evaluator.py", line 8, in evaluateTarget
    result = evaluateTokens(requestContext, tokens)
  File "/opt/graphite/webapp/graphite/render/evaluator.py", line 29, in evaluateTokens
    return evaluateTokens(requestContext, tokens.expression, replacements)
  File "/opt/graphite/webapp/graphite/render/evaluator.py", line 54, in evaluateTokens
    args = [evaluateTokens(requestContext, arg, replacements) for arg in tokens.call.args]
  File "/opt/graphite/webapp/graphite/render/evaluator.py", line 29, in evaluateTokens
    return evaluateTokens(requestContext, tokens.expression, replacements)
  File "/opt/graphite/webapp/graphite/render/evaluator.py", line 54, in evaluateTokens
    args = [evaluateTokens(requestContext, arg, replacements) for arg in tokens.call.args]
  File "/opt/graphite/webapp/graphite/render/evaluator.py", line 29, in evaluateTokens
    return evaluateTokens(requestContext, tokens.expression, replacements)
  File "/opt/graphite/webapp/graphite/render/evaluator.py", line 54, in evaluateTokens
    args = [evaluateTokens(requestContext, arg, replacements) for arg in tokens.call.args]
  File "/opt/graphite/webapp/graphite/render/evaluator.py", line 29, in evaluateTokens
    return evaluateTokens(requestContext, tokens.expression, replacements)
  File "/opt/graphite/webapp/graphite/render/evaluator.py", line 54, in evaluateTokens
    args = [evaluateTokens(requestContext, arg, replacements) for arg in tokens.call.args]
  File "/opt/graphite/webapp/graphite/render/evaluator.py", line 29, in evaluateTokens
    return evaluateTokens(requestContext, tokens.expression, replacements)
  File "/opt/graphite/webapp/graphite/render/evaluator.py", line 45, in evaluateTokens
    return fetchData(requestContext, expression)
  File "/opt/graphite/webapp/graphite/render/datalib.py", line 207, in fetchData
    raise e
AttributeError: 'NoneType' object has no attribute 'get_metric'

Add support for quotas

It would be neat to support

  • maximum number of child metrics per directory
  • maximum number of points per directory

This could be implemented with counters in a quota table, and by teaching FSCK (#60) to recompute the counters when they are wrong.

Add statistics on used metrics

It would be nice to have a 'stats' table that could, for example, record the 'last_access_time' of a metric. Each access could update it with a 10% probability, which would keep the overhead low while still giving us an idea of rarely used metrics.
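A hypothetical sketch of such a probabilistic update; the `stats` dict stands in for the proposed table, and all names are illustrative:

```python
import random

def maybe_record_access(stats, metric_name, now, probability=0.1):
    """Update last_access_time with the given probability.

    Updating only a fraction of accesses keeps the write load on the
    'stats' table low while still flagging rarely used metrics.
    """
    if random.random() < probability:
        stats[metric_name] = now
        return True
    return False
```

Over many accesses, roughly one in ten would refresh the timestamp.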

Add the ability to have a TTL for metric names

It's nice to be able to automatically purge unused metrics and directories. We should be able to delete unused metric names after 8 days and automatically clean up empty directories.

An idea would be to set TTLs in the metadata cache and in the database. To make this work, we would have to fetch the TTL from the database when caching an entry. LMDB (used by the metadata cache) doesn't seem to support TTLs, so this might be an issue.
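As a workaround for the missing TTL support in LMDB, an expiry timestamp could be stored next to each cached value; a minimal sketch (a plain dict stands in for the LMDB environment, and all names are hypothetical):

```python
import time

class TTLCache:
    """Emulate per-key TTLs on a store that has none by keeping an
    expiry timestamp alongside each value and treating expired entries
    as cache misses."""

    def __init__(self, ttl_seconds):
        self._ttl = ttl_seconds
        self._store = {}  # stand-in for the LMDB environment

    def put(self, name, metadata, now=None):
        now = time.time() if now is None else now
        self._store[name] = (now + self._ttl, metadata)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry is None:
            return None
        expiry, metadata = entry
        if now >= expiry:
            # Expired: drop the entry and report a miss so it gets
            # re-fetched (with a fresh TTL) from the database.
            del self._store[name]
            return None
        return metadata
```

With an 8-day TTL this would purge unused metric names on the schedule described above.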

SASI Index performance issues

CC: @dpanth3r

With ~2M rows and Cassandra 3.9.0

SELECT name FROM biggraphite_metadata.metrics WHERE parent LIKE 'criteo.%' AND component_3 = '__END__' ALLOW FILTERING;

slow (~3s)

SELECT name FROM biggraphite_metadata.metrics WHERE component_0 = 'criteo' AND component_3 = '__END__' AND parent LIKE 'criteo.%' ALLOW FILTERING;

slow (~3s)

SELECT name FROM biggraphite_metadata.metrics WHERE component_0 = 'criteo' AND component_3 = '__END__' ALLOW FILTERING;

fast (<1s)

SELECT name FROM biggraphite_metadata.metrics WHERE parent = 'criteo.foo.bar.foo.bar.foo.bar.foo.' AND component_10 = '__END__' ALLOW FILTERING;

slow (>3s)

SELECT name FROM biggraphite_metadata.metrics WHERE parent = 'criteo.foo.bar.foo.bar.foo.bar.foo.';

fast (<1s)

cli: bgutil repair

Metadata can be made inconsistent:

  • After a Cassandra-level issue, we can miss some of the parents in the directories table
  • After a Cassandra-level issue or after a failed metric creation, we can have empty directories

A tool should fetch metadata and update them.

Improve _CassandraAccessor.__glob_names performance

Looking at current thread pool usage, this kind of read seems to be using most of our CPU.
Looking at http://xf.iksaif.net/bordel/trace, a single query also seems to be doing a lot of work!

It remains to be seen whether this is fine for listing metrics, but it is clearly the bottleneck when creating them.

  • See if we can make this query less expensive.
  • Do not use self.glob_directory_names() when we can simply do a select on the primary key.

This is a big issue not only when we get a ton of new metrics, but also when carbon loses its cache, because only the cache is checked (a miss in the cache should probably trigger a read in the DB).

cli: bgutil du <glob>

Like '$ du -h'. Display disk usage for a glob (simply multiply the number of metrics by the maximum number of points in the retention policy).
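A sketch of the suggested computation; `bytes_per_point` is an assumed average on-disk size, not a measured figure:

```python
def estimate_disk_usage(metric_count, retention_points, bytes_per_point=16):
    """Upper-bound disk usage for a glob: number of metrics multiplied by
    the maximum number of points in the retention policy."""
    return metric_count * retention_points * bytes_per_point

def human_readable(num_bytes):
    """Render a byte count like `du -h` would."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if num_bytes < 1024:
            return "%.1f%s" % (num_bytes, unit)
        num_bytes /= 1024.0
    return "%.1fPiB" % num_bytes
```

For example, 10 metrics retained at 1 point per minute for a day (1440 points each) would be reported as roughly 225 KiB.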

cache expiration should not happen on the main thread

Currently we expire the cache in the main thread, which can lead to disk I/O and additional database traffic when writing points (slowing everything down). Instead, we should refresh it in the background.

Improve metric creation performances

Metric creations should be completely asynchronous. In the carbon plugin, if the metric is not locally cached, put it in a queue but continue and write the points directly. It is not clear whether it is better to do that in exists() or in create(). The important point is that point writes are not blocked by metric creation; it is totally fine if new metrics only appear after a few minutes.
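The queue-based approach could look like the following sketch; `create_fn` stands in for the accessor's creation call, and the class and method names are hypothetical:

```python
import queue
import threading

class AsyncMetricCreator:
    """Enqueue metric creations so point writes are never blocked on the
    database; a background worker drains the queue."""

    def __init__(self, create_fn):
        self._queue = queue.Queue()
        self._create_fn = create_fn
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def create_soon(self, metric_name):
        # Called from the write path: O(1), never touches the database.
        self._queue.put(metric_name)

    def _run(self):
        while True:
            name = self._queue.get()
            if name is None:  # sentinel used to stop the worker
                break
            self._create_fn(name)

    def stop(self):
        self._queue.put(None)
        self._worker.join()
```

The write path then only enqueues, and new metrics become visible once the worker catches up.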

Better metadata cache

  • Add an in-memory metadata cache for dev/tests
  • Allow the use of django's cache (when running along with django)
  • Add ways to choose the cache (BG_CACHE=) and some cache settings (max size)

Handle graphite-like patterns in the drivers

Currently the drivers only support raw wildcards. This doesn't make good use of our indexes and makes real-life queries quite slow (a lot of them have prefixes before wildcards). We already use the "parent" column for prefix queries, but we could do the same for each component.

  • On queries such as foo.bar* or foo.*bar*, the driver should receive all the information, even if it cannot always optimize (on Cassandra, one could switch some components to 'CONTAINS' queries).
  • On queries with square brackets, it might be interesting to expand them before querying the driver, not after.

CC @dpanth3r, @rveznaver
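The square-bracket expansion could be done before the pattern reaches the driver; a minimal sketch (simple character classes only, ranges like [0-9] are deliberately left out):

```python
import itertools
import re

def expand_square_brackets(pattern):
    """Expand character classes like foo.[ab]z into foo.az and foo.bz so
    the driver receives simpler patterns it can index directly."""
    parts = re.split(r"\[([^\]]+)\]", pattern)
    # Even indices are literal fragments, odd indices are bracket contents.
    choices = [
        [part] if i % 2 == 0 else list(part)
        for i, part in enumerate(parts)
    ]
    return ["".join(combo) for combo in itertools.product(*choices)]
```

Each expanded alternative can then be resolved with the existing prefix machinery.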

Issues with scaleToSeconds(1) and higher stages

Looks like scaleToSeconds(1) isn't returning the right values when called on higher stages. Add unit tests to validate that Reader.fetch() returns valid answers for metrics with multiple stages.

Also tweak find_stage_for_ts() to choose the best stage based on both start_time and end_time, and check whether it would make sense to have overlapping stages.
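A hypothetical sketch of stage selection driven by the request window; the Stage layout and names are assumptions, not the actual BigGraphite types:

```python
import collections

# Assumed representation: each stage keeps `points` samples at
# `precision` seconds per sample, covering points * precision seconds.
Stage = collections.namedtuple("Stage", ["precision", "points"])

def find_stage_for_ts(stages, start_time, end_time, now):
    """Pick the finest-precision stage whose retention window still
    contains start_time; fall back to the longest-retention stage.
    (The issue above asks to also weigh end_time; this sketch only
    checks coverage of the window's start.)"""
    for stage in sorted(stages, key=lambda s: s.precision):
        oldest_covered = now - stage.precision * stage.points
        if start_time >= oldest_covered:
            return stage
    return max(stages, key=lambda s: s.precision * s.points)
```

With a minute stage kept one day and an hourly stage kept 30 days, a query starting an hour ago would hit the minute stage, while one starting a week ago would fall through to the hourly stage.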

repair(): Too many metrics

Traceback (most recent call last):
  File "/opt/graphite/pypy/bin/bgutil", line 11, in <module>
    load_entry_point('biggraphite==0.5', 'console_scripts', 'bgutil')()
  File "/opt/graphite/pypy/site-packages/biggraphite/cli/bgutil.py", line 74, in main
    opts.func(accessor, opts)
  File "/opt/graphite/pypy/site-packages/biggraphite/cli/command_repair.py", line 76, in run
    end_key=opts.end_key)
  File "/opt/graphite/pypy/site-packages/biggraphite/drivers/cassandra.py", line 1241, in repair
    if parent_dir and self.has_directory(parent_dir):
  File "/opt/graphite/pypy/site-packages/biggraphite/drivers/cassandra.py", line 869, in has_directory
    return list(self.glob_directory_names(directory))
  File "/opt/graphite/pypy/site-packages/biggraphite/drivers/cassandra.py", line 940, in _extract_results
    (self.max_metrics_per_pattern)
TooManyMetrics: Query bla.[9E67CD0952164CFA9CCD0115ADAD49F8%2F7B88566A65814F1DBE27D63DC0518633]%bli%2F\(1%2F1\)%20%2E%2E%2E11-08%2Fhour%3D22%2Fplatform%3DUS.queue.long-jobs on directories yields more than 5000 results

drivers: reduce write load

When we expire data points (most writes), we update consecutive points within one row. By using an unlogged batch, we can update all of these points in one Cassandra update.

However, we should not put the different stage levels in the same batch. Instead, we can skip updating a retention level when its value did not change.
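A sketch of the grouping step: consecutive offsets within a row are coalesced so that each group can then be sent as a single unlogged batch (with the Python Cassandra driver, a BatchStatement using BatchType.UNLOGGED). The function name and the (offset, value) shape are illustrative:

```python
def group_consecutive_points(points):
    """Group points whose offsets are consecutive within a row, so each
    group becomes one batched write instead of one statement per point."""
    groups = []
    current = []
    for offset, value in sorted(points):
        if current and offset != current[-1][0] + 1:
            groups.append(current)
            current = []
        current.append((offset, value))
    if current:
        groups.append(current)
    return groups
```

Non-consecutive points still end up in separate writes, so only genuinely adjacent updates are coalesced.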

cli: bg-wsgi

A simple HTTP API to read points and list metrics. It must be enough to act as a remote storage for Prometheus.

Move the offset by two bytes to the left

The current offset is an int, hence 4 bytes.
However, we round the timestamp, so the offset will never be more than 1,000.
Two bytes are enough, since an unsigned short can hold values up to 65,535.

DoD:

  • move the offset by two bytes to the left
  • make sure everything works properly
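A quick check of the space saving with struct: an unsigned short holds values up to 65,535, comfortably above the 1,000 bound mentioned above:

```python
import struct

# The offset currently travels as a 4-byte int; a 2-byte unsigned short
# is enough since the offset never exceeds 1,000.
as_int = struct.pack(">I", 999)    # 4 bytes
as_short = struct.pack(">H", 999)  # 2 bytes, same value round-trips
```

The saving is two bytes per stored point.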

Connections stale on mutations

When applying mutations, the connection to the cluster can go stale because of the exception below. This is particularly annoying when using a consistency level higher than ONE (such as LOCAL_QUORUM): when this happens, Cassandra responds nothing to the client.

This is related to this issue.
The problem is rather hard to reproduce, as no particular query or data configuration triggers it.

I will have to spend some time on it to dig further.

Dec 22 08:30:08 cstars07e01-par cassandra[15245]: WARN  [TracingStage:1] 2016-12-22 08:30:08,632 AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread Thread[TracingStage:1,5,main]: {}
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: java.lang.AssertionError: null
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at io.netty.util.Recycler$WeakOrderQueue.<init>(Recycler.java:225) ~[netty-all-4.0.39.Final.jar:4.0.39.Final]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at io.netty.util.Recycler$DefaultHandle.recycle(Recycler.java:180) ~[netty-all-4.0.39.Final.jar:4.0.39.Final]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at io.netty.util.Recycler.recycle(Recycler.java:141) ~[netty-all-4.0.39.Final.jar:4.0.39.Final]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.utils.btree.BTree$Builder.recycle(BTree.java:839) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.utils.btree.BTree$Builder.build(BTree.java:1092) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.db.partitions.PartitionUpdate.build(PartitionUpdate.java:587) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.db.partitions.PartitionUpdate.maybeBuild(PartitionUpdate.java:577) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.db.partitions.PartitionUpdate.holder(PartitionUpdate.java:388) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.db.partitions.AbstractBTreePartition.unfilteredIterator(AbstractBTreePartition.java:177) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.db.partitions.AbstractBTreePartition.unfilteredIterator(AbstractBTreePartition.java:172) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:779) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:389) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:249) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:581) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.db.Keyspace.applyNotDeferrable(Keyspace.java:440) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.db.Mutation.apply(Mutation.java:223) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.db.Mutation.apply(Mutation.java:237) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1416) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2640) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_91]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) ~[apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.concurrent.SEPExecutor.maybeExecuteImmediately(SEPExecutor.java:194) [apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.service.StorageProxy.performLocally(StorageProxy.java:1410) [apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.service.StorageProxy.sendToHintedEndpoints(StorageProxy.java:1317) [apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.service.StorageProxy$2.apply(StorageProxy.java:140) [apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1100) [apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:634) [apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.tracing.TraceStateImpl.mutateWithCatch(TraceStateImpl.java:122) [apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.tracing.TraceStateImpl$1.runMayThrow(TraceStateImpl.java:109) [apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) [apache-cassandra-3.10-af35eb6.jar:3.10-af35eb6]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626) [na:1.8.0_91]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_91]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
Dec 22 08:30:08 cstars07e01-par cassandra[15245]: at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]

__generate_normal_names_queries: IndexError: pop from empty list

Traceback (most recent call last):
  File "/opt/graphite/webapp/graphite/render/datalib.py", line 201, in fetchData
    seriesList = _fetchData(pathExpr,startTime, endTime, requestContext, seriesList)
  File "/opt/graphite/webapp/graphite/render/datalib.py", line 116, in _fetchData
    fetches = [(node, node.fetch(startTime, endTime)) for node in matching_nodes if node.is_leaf]
  File "/opt/graphite/webapp/graphite/storage.py", line 47, in find
    for node in finder.find_nodes(query):
  File "/opt/graphite/pypy/site-packages/biggraphite/plugins/graphite.py", line 177, in find_nodes
    metric_names, directories = glob_utils.graphite_glob(self.accessor(), query.pattern)
  File "/opt/graphite/pypy/site-packages/biggraphite/glob_utils.py", line 203, in graphite_glob
    metrics = accessor.glob_metric_names(graphite_glob)
  File "/opt/graphite/pypy/site-packages/biggraphite/drivers/cassandra.py", line 899, in glob_metric_names
    return self.__glob_names("metrics", glob)
  File "/opt/graphite/pypy/site-packages/biggraphite/drivers/cassandra.py", line 915, in __glob_names
    queries = self.__generate_normal_names_queries(table, components)
  File "/opt/graphite/pypy/site-packages/biggraphite/drivers/cassandra.py", line 986, in __generate_normal_names_queries
    idx, count = entry.pop()
IndexError: pop from empty list

Experiment with compression and other ways to save space

  • Look at the current benefits of compression
  • See if compression can be tweaked (snappy, block size, ..)
  • Check if double delta encoding could improve the compression ratio
  • Check if committing the count value could let us save space
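A sketch of the double-delta idea from the list above: regularly spaced timestamps encode to mostly zeros, which standard compressors handle very well. The encoding here is a simplified illustration, not the storage format:

```python
def delta_of_deltas(timestamps):
    """Double-delta encode a sorted list of timestamps: keep the first
    value and first delta, then store only changes between deltas."""
    if len(timestamps) < 2:
        return list(timestamps)
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return [timestamps[0], deltas[0]] + [
        b - a for a, b in zip(deltas, deltas[1:])
    ]
```

A series sampled every 60 seconds becomes a header followed by zeros, which is where the compression-ratio gain would come from.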

Make tox config more flexible

We should be able to use tox's {posargs} to pass a file/class/method selector, to avoid running the whole test suite when iterating (I cannot bear the 100-second wait for everything plus the Cassandra tests when I just want Cassandra).
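A possible tox.ini fragment, assuming a pytest-style runner (adapt the command to the project's actual test runner):

```ini
[testenv]
# Everything after `--` on the tox command line replaces {posargs}, e.g.:
#   tox -e py27 -- tests/test_cassandra.py
commands = pytest {posargs}
```

With no extra arguments, the full suite still runs as before.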

Can't create metrics when a single Cassandra server is down

Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console] 'Error from server: code=2200 [Invalid query] message="SERIAL is not supported as conditional update commit consistency. Use ANY if you mean "make sure it is accepted but I don\'t care how many replicas commit it for non-SERIAL reads""'
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console] Unhandled Error
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]         Traceback (most recent call last):
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]           File "/opt/graphite/pypy/site-packages/twisted/python/threadpool.py", line 191, in _worker
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]             result = context.call(ctx, function, *args, **kwargs)
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]           File "/opt/graphite/pypy/site-packages/twisted/python/context.py", line 118, in callWithContext
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]             return self.currentContext().callWithContext(ctx, func, *args, **kw)
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]           File "/opt/graphite/pypy/site-packages/twisted/python/context.py", line 81, in callWithContext
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]             return func(*args,**kw)
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]           File "/opt/graphite/lib/carbon/writer.py", line 145, in writeForever
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]             writeCachedDataPoints()
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]         --- <exception caught here> ---
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]           File "/opt/graphite/lib/carbon/writer.py", line 108, in writeCachedDataPoints
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]             state.database.create(metric, archiveConfig, xFilesFactor, aggregationMethod)
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]           File "/opt/graphite/pypy/site-packages/biggraphite/plugins/carbon.py", line 103, in create
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]             self.cache().create_metric(metric)
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]           File "/opt/graphite/pypy/site-packages/biggraphite/metadata_cache.py", line 138, in create_metric
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]             self.__accessor.create_metric(metric)
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]           File "/opt/graphite/pypy/site-packages/biggraphite/drivers/cassandra.py", line 616, in create_metric
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]             self._execute(statement, args)
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]           File "/opt/graphite/pypy/site-packages/biggraphite/drivers/cassandra.py", line 532, in _execute
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]             result = self.__session.execute(*args, **kwargs)
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]           File "/opt/graphite/pypy/site-packages/cassandra/cluster.py", line 1998, in execute
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]             return self.execute_async(query, parameters, trace, custom_payload, timeout, execution_profile, paging_state).result()
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]           File "/opt/graphite/pypy/site-packages/cassandra/cluster.py", line 3781, in result
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]             raise self._final_exception
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console]         cassandra.InvalidRequest: Error from server: code=2200 [Invalid query] message="SERIAL is not supported as conditional update commit consistency. Use ANY if you mean "make sure it is accepted but I don't care how many replicas commit it for non-SERIAL reads""
Sep 29 06:20:39 graphite-global-cache-bg05-am5 carbon-cache-0[15072]: [console] 'Error creating //biggraphite/cassandra:biggraphite/criteo....'
