veepee-oss / influxdb-relay

This project forked from influxdata/influxdb-relay


Service to replicate InfluxDB data for high availability.

License: MIT License

Python 27.03% Go 63.53% Shell 7.92% Dockerfile 1.52%

influxdb-relay's People

Contributors

abronan, alxdm, beckettsean, camskkz, damoun, j3ffrw, jarisukanen, jkielbaey, joelegasse, mark-rushakoff, moul, nathanielc, pauldix, rockyluke, rossmcdonald, simcap, toddboom


influxdb-relay's Issues

Error message RPM install

The RPM installs and the service works, but an error message appears on install and also on some systemctl actions.

$> rpm -ivh  /root/influxdb-srelay-latest.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:influxdb-srelay-latest-1         ################################# [100%]
Created symlink from /etc/systemd/system/influxdb-srelay.service to /usr/lib/systemd/system/influxdb-srelay.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/influxdb-srelay.service to /usr/lib/systemd/system/influxdb-srelay.service.
Failed to execute operation: Too many levels of symbolic links
warning: %posttrans(influxdb-srelay-latest-1.x86_64) scriptlet failed, exit status 1


$>   systemctl enable influxdb-srelay
Failed to execute operation: Too many levels of symbolic links

Add monitoring on influxdb-relay

WHAT:
I want to generate an alert if influxdb-relay can't connect to InfluxDB or can't write to it. Currently a /health endpoint is provided, but Prometheus can't consume it. It would be helpful if someone could add a standard /metrics endpoint (up, retry count, etc.) to the relay.

Thanks

Install/build issue

There seems to be a problem installing influxdb-relay lately. I was able to install it successfully a few weeks ago, but trying now returns the following error:

ubuntu@relay-test-1804:~$ go get -u github.com/vente-privee/influxdb-relay
package github.com/influxdata/influxdb/pkg/escape: code in directory /home/ubuntu/go/src/github.com/influxdata/influxdb/pkg/escape expects import "github.com/influxdata/influxdb/v2/pkg/escape

I've tried fresh installs of Ubuntu 20.04 and 18.04 and get the same error message on both.

There has been some discussion of a similar issue in influxdata#79, but nothing mentioned there seems to work.

Any thoughts?

http: invalid Read on closed Body when using /admin endpoint

When I try to create a database (or run a similar admin command):

curl -u username:password -POST "http://localhost:8086/admin" --data-urlencode "q=create database testdb"

I get the following error:

2018/12/20 16:42:02 Problem posting to relay "influx-http" backend "db02": Post http://10.10.10.102:8086/query: http: invalid Read on closed Body

Ubuntu 18.04.1 LTS
go version go1.10.4 linux/amd64

unable to handle more request

Hi,
I am getting errors when trying to push more than 50 records.

After 10-15 records:
..............................
204 No Content
204 No Content
429 Too Many Requests
"Too Many Requests"
429 Too Many Requests
"Too Many Requests"
429 Too Many Requests
........................................................

Thanks.
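A 429 Too Many Requests response suggests the relay's rate limiting is kicking in. The sample configs in this fork expose rate-limit and burst-limit options on the [[http]] section; raising them may help (the values below are illustrative, not recommendations):

```toml
[[http]]
name = "example-http-influxdb"
bind-addr = "0.0.0.0:9096"
# Illustrative values; tune to your ingest rate.
rate-limit = 10000
burst-limit = 10000
```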

example string to add users

Hi
Could someone provide the syntax to add a username and password via curl?

I thought it would be something like this:
curl -X POST "http://loadbalancer:9096/admin" --data-urlencode 'q=CREATE USER test WITH PASSWORD 'testing' WITH ALL PRIVILEGES'

Error in sending data from Telegraf to Influxdb-relay

Hello, I am trying to send data from Telegraf to influxdb-relay and from there to InfluxDB, i.e. Telegraf -> influxdb-relay -> InfluxDB. I have configured the influxdb-relay IP in telegraf.conf as follows:

[[outputs.influxdb]]
urls = ["http://172.29.29.12:9096/write"]

But Telegraf is not able to write to influxdb-relay; I am getting the error "Database creation Failed: post http://172.29.29.12:9096/query?q=CREATE DATABASE getsockopt: connection refused".

Does anyone know the solution? Can anyone suggest the correct way to let Telegraf send data to influxdb-relay?

Any help is much appreciated. Thank you

Add enhanced influxdb-relay queries logging

I would like to control who is querying my databases and how many resources each query consumes.

A good library for this component would be zerolog:

https://github.com/rs/zerolog

https://github.com/rs/zerolog#integration-with-nethttp

I propose logging the following fields:

For all query endpoints (/prom,/write/,/query/,etc)

  • Client IP
  • User
  • URI (/query, /write/ etc)
  • influx database
  • measurement(s) (requires inspecting the body for /write and the query for /query)
  • retention policy
  • amount of data sent
  • response time
  • precision

Specific for /query

  • Time range queried (from/to)
  • InfluxDB server the query was sent to
  • Referer
  • grafana_refresh_time (a small Grafana refresh interval can crash the database; this information is located inside the Referer)

The latest version does not support buffer recovery to the failed node

My configuration looks like this:

InfluxDB && Prometheus

[[http]]
name = "example-http-influxdb"
bind-addr = "0.0.0.0:9096"
#health-timeout-ms = 100
#rate-limit = 10000
#burst-limit = 10000
#default-ping-response = 200

[[http.output]]
name = "dev-1"
location="http://10.17.14.5:8086/"
endpoints = {write="/write", ping="/ping", query="/query"}
timeout="10s"
#skip-tls-verification = false
buffer-size-mb = 1000
max-batch-kb = 1000
max-delay-interval = "5s"
[[http.output]]
name="dev-2"
location="http://10.17.14.6:8086/"
endpoints = {write="/write", ping="/ping", query="/query"}
timeout="10s"
buffer-size-mb = 1000
max-batch-kb = 1000
max-delay-interval = "5s"

While inserting data:
curl -i -XPOST 'http://localhost:9096/write?db=db2' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 '$date''

Once one of my nodes goes down and later recovers, influxdb-relay only sends two rows to the failed node; the others are dropped.
The healthy node:

select count(*) from cpu_load_short
name: cpu_load_short
time count_value
0 10
select count(*) from cpu_load_short
name: cpu_load_short
time count_value
0 20

The failed node:

select count(*) from cpu_load_short
name: cpu_load_short
time count_value
0 10
select count(*) from cpu_load_short
ERR: Post http://localhost:8086/query?chunked=true&db=db2&epoch=ns&q=select+count%28%2A%29+from+cpu_load_short: dial tcp [::1]:8086: connect: connection refused
select count(*) from cpu_load_short
name: cpu_load_short
time count_value
0 12

Influxdb-Relay does not create db's

This is possibly a feature or a side effect of the design, but it does not appear that influxdb-relay will create databases in InfluxDB. If this is intended, I recommend updating the caveats section of the documentation.

TLS verification always seems to be performed

I am running the relay as a Docker container, and no matter how I configure skip-tls-verification, it always validates the certificates.

This is the relevant part of the config file:

[[http.output]]
name = "influxdb01"
location = "https://Some_Silly_Cert_Influx01:8086/"
skip-tls-verification = true
endpoints = {write="/write", write_prom="/api/v1/prom/write", ping="/ping", query="/query"}
timeout = "10s"
λ curl -G "http://127.0.0.1:9090/health"
{"status":"critical","problem":{"influxdb01":"KO. Get https://Some_Silly_Cert_Influx01:8086/ping: x509: certificate signed by unknown authority","influxdb02":"KO. Get https://Some_Silly_Cert_Influx02:8086/ping: x509: certificate signed by unknown authority"}}

Any help?

influxdb-relay intermittently stops sending data to InfluxDB

Use Case:

I have 5 Telegraf instances sending data to a single influxdb-relay. Each instance is on a different machine (including influxdb-relay and influxdb itself). My single Influxdb-relay instance is forwarding data to two InfluxDB servers.

Issue:

From my testing, Influxdb-relay appears to handle up to 3 Telegraf streams just fine. Once a 4th or 5th is added, the input streams simply stop. If I restart the Telegraf instances that stop showing up in InfluxDB, the streams begin again. However, another stream will eventually stop.

I've attached two screenshots from Chronograf of this occurring.

[screenshots: lost01, lost02]

InfluxDBClientError: 400: "unable to parse points"

Hello,
I use openstack-monasca components to write data to InfluxDB through influxdb-relay, but I encounter an error. The message is too sparse to tell which metrics lead to it. Can anyone help me? Many thanks.

[screenshot]

Query via load balancer

Hi

Just started using your application for HA, and it seems to be working well so far. However, I am now setting up Grafana to query the database, and when I use the load balancer as the data source I get errors (502 and 404). I am querying through the load balancer itself. Can you please advise if there is something I am doing wrong here?

Thanks

/status endpoint does not return valid JSON, /health does

As /health returns valid JSON, I'd expect /status to do so too. This is relevant if you want to monitor the relay with, for example, Telegraf's httpjson input.

# curl -s http://127.0.0.1:9096/status | jq .
""status": {"local-influxdb":{"buffering":"0","maxSize":"52428800","size":"0"},"teleflow-influxdb":{"buffering":"0","maxSize":"52428800","size":"0"}}"

# curl -s http://127.0.0.1:9096/health | jq .
{
  "healthy": {
    "teleflow-influxdb": "OK. Time taken 10.001475ms",
    "local-influxdb": "OK. Time taken 391.016µs"
  },
  "status": "healthy"
}

Question - InfluxDB 1.6 Support?

I see that this fork is the most current fork for InfluxDB-Relay. Despite this, I have been unable to get the relay working with InfluxDB 1.6. Whenever I start the relay, it appears to hang at "Starting relays...".

Add metrics regarding buffer usage

Hi,

First of all, thanks guys for this great tool and for sharing it.

One feature of influxdb-relay is to buffer queries until the InfluxDB instance is back up. This buffer is critical: if it gets full, we lose queries and data. However, we have no visibility at all into the usage of this buffer.

It would be great if we could have Prometheus metrics on buffer usage. Ideally, we would need:

  1. The total size of the buffer
  2. The size usage of the buffer

And we would need this, per influxdb instance on which queries are relayed.

Thanks

Why does influxdb-relay open TCP port 8088?

Hello,

I see that influxdb-relay uses TCP port 8088...

bash-5.0# netstat -noap | grep tcp
...
tcp6       0      0 :::9096                 :::*                    LISTEN      1/influxdb-relay     off (0.00/0/0)
tcp6       0      0 :::8088                 :::*                    LISTEN      1/influxdb-relay     off (0.00/0/0)

There is nothing in the documentation about this. It seems that it is hardcoded in the following file:
https://github.com/strike-team/influxdb-relay/blob/master/metric/metric.go.

Unfortunately, this overlaps with a port used by InfluxDB (https://docs.influxdata.com/influxdb/v1.7/administration/ports/), so if you run both on the same node, or in Docker containers on the host network, they conflict and only one service can run.

Can we somehow disable this service, or make this port configurable?
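Making the listener configurable would be a small change; a sketch with a hypothetical flag name (not an existing influxdb-relay option), where an empty address disables the listener:

```go
package main

import (
	"flag"
	"fmt"
)

// parseMetricsAddr sketches how the hardcoded :8088 internal metrics
// listener could be made configurable. The flag name is hypothetical;
// an empty address means "disabled".
func parseMetricsAddr(args []string) (addr string, enabled bool) {
	fs := flag.NewFlagSet("influxdb-relay", flag.ContinueOnError)
	a := fs.String("metrics-addr", ":8088", "internal metrics listen address (empty to disable)")
	fs.Parse(args)
	return *a, *a != ""
}

func main() {
	addr, enabled := parseMetricsAddr([]string{"-metrics-addr", ":9100"})
	fmt.Println(addr, enabled) // :9100 true

	_, enabled = parseMetricsAddr([]string{"-metrics-addr", ""})
	fmt.Println(enabled) // false
}
```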

Infinite buffering

I'm opening this following : influxdata#71

The problem is that when an 'impossible' request sits in the retry buffer, it will basically loop and fail forever. This is actually expected behavior, and two situations can arise:

  • One wants the request to actually go through and update the database (for example, creating a database that does not yet exist)
  • One does not want the request to go through, but would like to remove it from the buffer

The first situation is not related to the relay, but the second one is. Before ed7f5d8 it was not possible to flush the buffer without restarting the relay. This was not very handy, so we implemented a route that flushes the retry buffer and drops the 'bad' requests.

One can therefore monitor the logs and query the flushing route when too many requests are buffered.

connection: connection refused

influxdb-relay version 3.0.1

I am experiencing an issue with one of the clusters which appears to be a memory issue. I have attached part of the log

Dec 10 16:59:59  influxd: ts=2019-12-10T16:59:59.925185Z lvl=info msg="Opened file" log_id=0JdBtiS0000 engine=tsm
1 service=filestore path=/var/lib/influxdb/data/server_one/autogen/226/000000002-000000002.tsm id=0 duration=1.436ms
Dec 10 16:59:59  influxd: ts=2019-12-10T16:59:59.925289Z lvl=info msg="Reading file" log_id=0JdBtiS0000 engine=ts
m1 service=cacheloader path=/var/lib/influxdb/wal/server_one/autogen/226/_00007.wal size=10488166
Dec 10 17:00:00  influxdb-relay: 2019/12/10 17:00:00 Problem posting to relay "datacollector" backend "": Post http://10.0.4.247:7086/write?db=server_two: dial tcp 10.0.4.247:7086: connect: connection refused

Restarting the services (influxdb-relay and influxdb) doesn't appear to help; looking at netstat, the InfluxDB port isn't listening. I had to upgrade the box's memory to recover and get all ports listening again. I am not 100% sure this is an issue with the relay, but I thought I'd report it here first.

No changes had taken place on the system for a few days, possibly 7 days.

Is it unnecessary to maintain consistency between InfluxDB replicas?

Hi,
I read the source code recently. If I'm not mistaken, every relay server writes data to all InfluxDB replicas, and there is no sync operation among them. So there is no need for a sync operation when using InfluxDB this way, right?
As I'm not familiar with the usage scenario, could anyone help me clear up this confusion?

Is the InfluxDB /query endpoint supported?

Does the relay support reads as well as writes, or is it necessary to have a load balancer redirect /query to the backend endpoints? It's unclear from the docs. I have a successful configuration for writes, but neither Grafana nor the influx CLI works for reads. Both work directly against the backend, however.

My config is:

-- toml --

InfluxDB

[[http]]
name = "xxxx-influxdb-relay"
bind-addr = "0.0.0.0:9096"
default-ping-response = 200
health-timeout-ms = 10000

[[http.output]]
name="influxdb01"
location = "http://xxxx:yyy/"
endpoints = {write="/write", ping="/ping?verbose=true", query="/query"}
timeout="10s"
buffer-size-mb = 100
max-batch-kb = 50
max-delay-interval = "5s"

EOF

Influxdb-Relay doesn't report status to Telegraf

This is also probably a side effect of how influxdb-relay is designed, but if you use Telegraf to send data to InfluxDB via the relay, Telegraf will assume the transmission failed because it does not receive anything back from influxdb-relay.

The only real effect of this is filling up the Telegraf log with erroneous "could not connect to database!" error messages. Either way, this should also be mentioned in the caveats section.

Getting error while trying to write data

Hi Team,

I am trying to write the same data both through influxdb-relay and directly to InfluxDB. Writing directly works, but writing through the relay fails with the following log:

2020-01-13T11:11:48.256797028Z unable to parse 'Use\ of\ uninitialized\ value\ in\ printf\ at\ /usr/lib64/nagios/plugins/check_nwc_health\ line\ 67217.': missing fields
(the line above is repeated 12 times with successive timestamps)
2020-01-13T11:11:48.256878565Z unable to parse '\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ': missing fields
(the line above is repeated 6 times with successive timestamps)

Http panic error while serving requests

  • ~200 servers running telegraf-1.12.2-1.x86_64 with a fairly standard configuration
  • influxdb-relay (version 3.0.1) built with go-1.12.10
  • influxdb-1.7.8 on the same host

Config

[[http]]
name = "influxdb01-http-relay"
bind-addr = "0.0.0.0:9096"

[[http.output]]
name = "influxdb01"
location = "http://server01:8086/"
endpoints = {write="/write", write_prom="/api/v1/prom/write", ping="/ping", query="/query"}
timeout = "10s"

Every few minutes, errors appear in the log

Nov 5 22:02:10 server01 influxdb-relay: 2019/11/05 22:02:10 http: panic serving serverXXX:37880: runtime error: slice bounds out of range
Nov 5 22:02:10 server01 influxdb-relay: goroutine 28766 [running]:
Nov 5 22:02:10 server01 influxdb-relay: net/http.(*conn).serve.func1(0xc001411400)
Nov 5 22:02:10 server01 influxdb-relay: /usr/local/go/src/net/http/server.go:1769 +0x139
Nov 5 22:02:10 server01 influxdb-relay: panic(0x719160, 0xa10af0)
Nov 5 22:02:10 server01 influxdb-relay: /usr/local/go/src/runtime/panic.go:522 +0x1b5
Nov 5 22:02:10 server01 influxdb-relay: bytes.Replace(0xc00032e71d, 0x18, 0x96e3, 0x9e5584, 0x2, 0x2, 0x9e5583, 0x1, 0x1, 0x1, ...)
Nov 5 22:02:10 server01 influxdb-relay: /usr/local/go/src/bytes/bytes.go:788 +0x5bf
Nov 5 22:02:10 server01 influxdb-relay: github.com/veepee-moc/influxdb-relay/vendor/github.com/influxdata/influxdb/models.unescapeTag(0xc00032e71d, 0x18, 0x96e3, 0xc00032e718, 0x4, 0x96e8)
Nov 5 22:02:10 server01 influxdb-relay: /root/go/src/github.com/veepee-moc/influxdb-relay/vendor/github.com/influxdata/influxdb/models/points.go:1256 +0x17e
Nov 5 22:02:10 server01 influxdb-relay: github.com/veepee-moc/influxdb-relay/vendor/github.com/influxdata/influxdb/models.walkTags(0xc00032e6f1, 0x44, 0x970f, 0xc000bbd778)
Nov 5 22:02:10 server01 influxdb-relay: /root/go/src/github.com/veepee-moc/influxdb-relay/vendor/github.com/influxdata/influxdb/models/points.go:1517 +0x229
Nov 5 22:02:10 server01 influxdb-relay: github.com/veepee-moc/influxdb-relay/vendor/github.com/influxdata/influxdb/models.parseTags(0xc00032e6f1, 0x44, 0x970f, 0xc00032e6f1, 0xa, 0x970f)
Nov 5 22:02:10 server01 influxdb-relay: /root/go/src/github.com/veepee-moc/influxdb-relay/vendor/github.com/influxdata/influxdb/models/points.go:1560 +0x12b
Nov 5 22:02:10 server01 influxdb-relay: github.com/veepee-moc/influxdb-relay/vendor/github.com/influxdata/influxdb/models.(*point).Tags(0xc001edcd20, 0xc00032e6f1, 0xa, 0x970f)
Nov 5 22:02:10 server01 influxdb-relay: /root/go/src/github.com/veepee-moc/influxdb-relay/vendor/github.com/influxdata/influxdb/models/points.go:1469 +0x5c
Nov 5 22:02:10 server01 influxdb-relay: github.com/veepee-moc/influxdb-relay/relay.(*httpBackend).validateRegexps(0xc00009a180, 0xc000429c00, 0xb0, 0xb1, 0x1a, 0x0)
Nov 5 22:02:10 server01 influxdb-relay: /root/go/src/github.com/veepee-moc/influxdb-relay/relay/http.go:338 +0x16c
Nov 5 22:02:10 server01 influxdb-relay: github.com/veepee-moc/influxdb-relay/relay.(*HTTP).handleStandard(0xc00007b1e0, 0x7ee0e0, 0xc000101960, 0xc00119b400, 0xbf688fac846acaa4, 0x22fcf3e61efad, 0xa1d900)
Nov 5 22:02:10 server01 influxdb-relay: /root/go/src/github.com/veepee-moc/influxdb-relay/relay/http_handlers.go:304 +0x6c2
Nov 5 22:02:10 server01 influxdb-relay: github.com/veepee-moc/influxdb-relay/relay.(*HTTP).bodyMiddleWare.func1(0xc00007b1e0, 0x7ee0e0, 0xc000101960, 0xc00119b400, 0xbf688fac846acaa4, 0x22fcf3e61efad, 0xa1d900)
Nov 5 22:02:10 server01 influxdb-relay: /root/go/src/github.com/veepee-moc/influxdb-relay/relay/http_middlewares.go:42 +0xfa
Nov 5 22:02:10 server01 influxdb-relay: github.com/veepee-moc/influxdb-relay/relay.(*HTTP).queryMiddleWare.func1(0xc00007b1e0, 0x7ee0e0, 0xc000101960, 0xc00119b400, 0xbf688fac846acaa4, 0x22fcf3e61efad, 0xa1d900)
Nov 5 22:02:10 server01 influxdb-relay: /root/go/src/github.com/veepee-moc/influxdb-relay/relay/http_middlewares.go:60 +0x152
Nov 5 22:02:10 server01 influxdb-relay: github.com/veepee-moc/influxdb-relay/relay.(*HTTP).logMiddleWare.func1(0xc00007b1e0, 0x7ee0e0, 0xc000101960, 0xc00119b400, 0xbf688fac846acaa4, 0x22fcf3e61efad, 0xa1d900)
Nov 5 22:02:10 server01 influxdb-relay: /root/go/src/github.com/veepee-moc/influxdb-relay/relay/http_middlewares.go:23 +0x81
Nov 5 22:02:10 server01 influxdb-relay: github.com/veepee-moc/influxdb-relay/relay.(*HTTP).rateMiddleware.func1(0xc00007b1e0, 0x7ee0e0, 0xc000101960, 0xc00119b400, 0xbf688fac846acaa4, 0x22fcf3e61efad, 0xa1d900)
Nov 5 22:02:10 server01 influxdb-relay: /root/go/src/github.com/veepee-moc/influxdb-relay/relay/http_middlewares.go:72 +0x10d
Nov 5 22:02:10 server01 influxdb-relay: github.com/veepee-moc/influxdb-relay/relay.(*HTTP).ServeHTTP(0xc00007b1e0, 0x7ee0e0, 0xc000101960, 0xc00119b400)
Nov 5 22:02:10 server01 influxdb-relay: /root/go/src/github.com/veepee-moc/influxdb-relay/relay/http.go:197 +0xd7
Nov 5 22:02:10 server01 influxdb-relay: net/http.serverHandler.ServeHTTP(0xc000194000, 0x7ee0e0, 0xc000101960, 0xc00119b400)
Nov 5 22:02:10 server01 influxdb-relay: /usr/local/go/src/net/http/server.go:2774 +0xa8
Nov 5 22:02:10 server01 influxdb-relay: net/http.(*conn).serve(0xc001411400, 0x7eea60, 0xc000aa7780)
Nov 5 22:02:10 server01 influxdb-relay: /usr/local/go/src/net/http/server.go:1878 +0x851
Nov 5 22:02:10 server01 influxdb-relay: created by net/http.(*Server).Serve
Nov 5 22:02:10 server01 influxdb-relay: /usr/local/go/src/net/http/server.go:2884 +0x2f4

What could be the reason for this?

Help: docker seems stuck on start up

Thank you for the Docker image, which lets me try your project on Windows. However, the application seems to get stuck on startup without further details.

So when I run docker run -v {{ConfigPath}}/influxdb-relay.conf:/etc/influxdb-relay/influxdb-relay.conf -p 9096:9096 --rm vptech/influxdb-relay:latest, I got following output:

2019/05/17 15:56:26 starting relays...
2019/05/17 15:56:26 starting UDP relay "example-udp" on 127.0.0.1:9096

That is, no extra logs are available, and the container still seems to be running. However, when I run curl -X GET "http://127.0.0.1:9096/health" to test the container, I get "curl: (52) Empty reply from server". The container doesn't seem to have any logs either.

I am also running two InfluxDB containers on a Mac, which I can access from Windows. So I thought there might be a networking issue between the relay container on Windows and the DB containers on the Mac, but pinging works fine using docker exec 9c6484081b17 ping atl-2156-osx.kabbage.com:

PING atl-2156-osx.kabbage.com (10.15.200.110): 56 data bytes
64 bytes from 10.15.200.110: seq=0 ttl=37 time=70.311 ms
64 bytes from 10.15.200.110: seq=1 ttl=37 time=5.607 ms
64 bytes from 10.15.200.110: seq=2 ttl=37 time=10.209 ms
64 bytes from 10.15.200.110: seq=3 ttl=37 time=4.672 ms

Is there anywhere I may have set it up wrong? There are no extra logs available from the relay container.
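One likely cause, given the startup log: the only relay configured is a UDP relay bound to 127.0.0.1:9096. curl speaks HTTP, and a 127.0.0.1 bind inside the container is unreachable through Docker's published port, so an empty reply is expected. An HTTP section bound to all interfaces (key names follow this project's sample configs) should behave better:

```toml
[[http]]
name = "example-http"
# 127.0.0.1 inside a container is not reachable via `-p 9096:9096`;
# bind to 0.0.0.0 instead.
bind-addr = "0.0.0.0:9096"
```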

data is relayed when database does not exist in InfluxDB

Hi strike-team,

Thanks for maintaining this project.

I am experiencing one issue/behavior that I believe is not correct. I see that the data is not buffered but relayed when InfluxDB is up but the database does not exist. I think influxdb-relay should check whether the database exists and only then relay data. Otherwise, the data is "black-holed" and the point of the buffering feature is lost.

One example use case: I have multiple nodes which receive data from influxdb-relay. When one node fails, the relay starts buffering. When I recover the node, my InfluxDB instance still does not have any databases created; I would like to first recover my databases from backup and then get the buffered data from influxdb-relay. However, as soon as InfluxDB is up, influxdb-relay sends the data and clears the buffer, even though the database has not yet been created on the InfluxDB instance.

many thanks!
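A pre-flight check along these lines is straightforward to sketch: query SHOW DATABASES on the backend and only drain the buffer if the target database is listed. The response shape below follows InfluxDB 1.x's /query JSON; this is an illustration, not this project's code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// showDatabasesResult mirrors the relevant part of InfluxDB 1.x's
// response to `q=SHOW DATABASES` on /query.
type showDatabasesResult struct {
	Results []struct {
		Series []struct {
			Values [][]string `json:"values"`
		} `json:"series"`
	} `json:"results"`
}

// databaseExists reports whether name is listed in a SHOW DATABASES
// response body. A relay could run this check against a recovered
// backend before draining its buffer to it.
func databaseExists(body []byte, name string) (bool, error) {
	var r showDatabasesResult
	if err := json.Unmarshal(body, &r); err != nil {
		return false, err
	}
	for _, res := range r.Results {
		for _, s := range res.Series {
			for _, row := range s.Values {
				if len(row) > 0 && row[0] == name {
					return true, nil
				}
			}
		}
	}
	return false, nil
}

func main() {
	body := []byte(`{"results":[{"series":[{"name":"databases","columns":["name"],"values":[["_internal"],["mydb"]]}]}]}`)
	ok, _ := databaseExists(body, "mydb")
	fmt.Println(ok) // true
}
```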

InfluxDB HA problem

Hello, I use influxdb-relay as a proxy with three InfluxDB instances as backends, and a persister pod writes metrics to the relay. I tested the following situation: I shut down the influxdb2 service so the relay cannot access influxdb2. In my opinion, influxdb-relay should still write successfully to influxdb0 and influxdb1 and return a success message. But in fact it does not; it returns an error, which causes the persister pod to crash.

[screenshot]

The persister error logs are as follows:
[screenshot]

And the relay debug messages are as follows:
[screenshot]

influxdb-relay issue, node1 suddenly stops and unable to connect

Hi ,

I'm using influxdb-relay with an AWS load balancer, and suddenly one of the nodes stops responding.
Detailed logs are given below.

  1. With journalctl (sudo journalctl -f -u influxdb-relay), the output is:
     Problem posting to relay "influxdb-http" backend "node1": Post http://myhostip:8086/write?db=mydb: dial tcp myhostip:8086: getsockopt: connection refused
  2. When trying to connect with the influx shell:
     Failed to connect to http://localhost:8086 Please check your connection settings and ensure 'influxd' is running.
  3. When trying to exec into the influxdb container, even though the container is already running:

sudo docker exec -it influxdb  /bin/bash
rpc error: code = 2 desc = containerd: container not found
sudo docker ps -a
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS                        PORTS                    NAMES
cb4628d66364        influxdb                 "/entrypoint.sh in..."   3 months ago        Up 5 days                     0.0.0.0:8086->8086/tcp   influxdb

So now my questions are

  1. What is the issue with node1? Why is it not running?
  2. Even if node1 comes back after a restart or after fixing the issue, how long will it take to sync with node2? We are using this in our production environment, so it could otherwise lead to inconsistency.

Thanks,
Rohan

Netdata Support

I am trying to set up influxdb-relay for Netdata.

I'm referring to https://github.com/vente-privee/influxdb-relay/blob/master/docs/architecture.md

However, my netdata writes to port 2003 on my single server and same has influx on 8086 (default configs)
I'd like to setup the architecture as specified in the doc, but I'm unable to separate out the calls from graphite to influx (2 relays - 2 influx db) directly.

The only solution that I could think of was to push Netdata output directly to a HA system (HAPROXY or NGINX) and the reroute that to 2 relays and then from there to 2 graphites on the influx instances.

Does this work? Anyone else have a better solution?

Is the buffering function not supported when using influxdb as input?

I use Prometheus as input, with two InfluxDBs on the back end. When I deliberately stopped one of the InfluxDBs and started it again 10 minutes later, those ten minutes of data were not restored to that InfluxDB.

Did I misunderstand the buffering?

I used a configuration file just like sample_buffered.conf and set the type to prometheus.

Question: influxdb-relay benchmark

Hi Guys,
I'm trying to use inch to benchmark influxdb-relay, but inch doesn't seem to support the /write path and returns 404. Has anyone tested this, or can anyone suggest a tool for benchmarking influxdb-relay?
https://github.com/influxdata/inch

[root@xxx bin]# ss -tunlp
Netid State      Recv-Q Send-Q                                                          Local Address:Port                                                                         Peer Address:Port              
udp   UNCONN     0      0                                                                   127.0.0.1:323                                                                                     *:*                   users:(("chronyd",pid=849,fd=1))
udp   UNCONN     0      0                                                                          :::8082                                                                                   :::*                   users:(("macmnsvc",pid=2208,fd=24))
udp   UNCONN     0      0                                                                         ::1:323                                                                                    :::*                   users:(("chronyd",pid=849,fd=2))
tcp   LISTEN     0      128                                                                         *:80                                                                                      *:*                   users:(("haproxy",pid=48356,fd=5))
tcp   LISTEN     0      128                                                                         *:22                                                                                      *:*                   users:(("sshd",pid=1099,fd=3))
tcp   LISTEN     0      128                                                                         *:8087                                                                                    *:*                   users:(("haproxy",pid=48356,fd=6))
tcp   LISTEN     0      128                                                                        :::8081                                                                                   :::*                   users:(("macmnsvc",pid=2208,fd=23))
tcp   LISTEN     0      128                                                                        :::8086                                                                                   :::*                   users:(("influxdb-relay",pid=33032,fd=3))
tcp   LISTEN     0      128                                                                        :::22                                                                                     :::*                   users:(("sshd",pid=1099,fd=4))
tcp   LISTEN     0      128                                                                        :::12800                                                                                  :::*                   users:(("bootstrap_agent",pid=20707,fd=7))
tcp   LISTEN     0      128                                                                        :::12801                                                                                  :::*                   users:(("backup_agent_ma",pid=20687,fd=7))
[root@tng3159 bin]# ./inch -v -c 8 -b 10000 -t 1,5000,1 -p 100000 -consistency any -v
Host: http://localhost:8086
Concurrency: 8
Virtual Hosts: 0
Measurements: 1
Tag cardinalities: [1 5000 1]
Points per series: 100000
Total series: 5000
Total points: 500000000
Total fields per point: 1
Batch Size: 10000
Database: stress (Shard duration: 7d)
Write Consistency: any
unexpected status code: 404
