logplex's Introduction

Logplex [DEPRECATED]

This project is officially retired and no longer maintained.

Logplex is a distributed syslog log router, able to merge and redistribute multiple incoming streams of syslog logs to individual subscribers.

A typical logplex installation is a cluster of distributed Erlang nodes connected in a mesh, with one or more redis instances (which can be sharded). The cluster may or may not sit behind a load balancer or proxy, but ideally any node can be contacted at any time.

Applications running on their own node or server send their log messages either to a local syslog or through log shuttle, which then forwards them to an instance of a logplex router.

On the other end of the spectrum, consumers may subscribe to a logplex instance, which will merge the incoming streams of log messages and forward them to the subscriber. Alternatively, a consumer may register a given endpoint (say, a database behind the proper API), and logplex nodes will push messages to that endpoint as they come in.
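
For illustration only, registering such an endpoint (a drain) might look like the sketch below. The /v2/channels/<id>/drains route, the {"url": ...} payload, and the credentials are assumptions on my part rather than something this README documents, so check the stream management documentation in doc/ for the authoritative interface.

$ # Hypothetical sketch: attach an HTTPS drain to channel 1, using the
$ # local:password credentials created in the Data setup section below.
$ curl -d '{"url": "https://example.com/logs"}' \
  http://local:password@localhost:8001/v2/channels/1/drains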

For more details, see the stream management documentation in doc/.

Erlang Version Requirements

As of Logplex v93, Logplex requires Erlang 18. Logplex is currently tested against OTP-18.1.3.
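
One quick way to check which OTP release your local erl reports (a convenience check, not part of the project tooling):

$ erl -noshell -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().'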

Prior versions of Logplex are designed to run on R16B03 and 17.x.

Development

Local development

build

$ ./rebar3 as public compile

develop

run

$ INSTANCE_NAME=`hostname` \
  LOGPLEX_CONFIG_REDIS_URL="redis://localhost:6379" \
  LOGPLEX_REDGRID_REDIS_URL="redis://localhost:6379" \
  LOCAL_IP="127.0.0.1" \
  LOGPLEX_COOKIE=123 \
  LOGPLEX_AUTH_KEY=123 \
  erl -name logplex@`hostname` -pa ebin -env ERL_LIBS deps -s logplex_app -setcookie ${LOGPLEX_COOKIE} -config sys

test

Given an empty local redis (v2.6ish):

$ ./rebar3 as public,test compile
$ INSTANCE_NAME=`hostname` \
  LOGPLEX_CONFIG_REDIS_URL="redis://localhost:6379" \
  LOGPLEX_SHARD_URLS="redis://localhost:6379" \
  LOGPLEX_REDGRID_REDIS_URL="redis://localhost:6379" \
  LOCAL_IP="127.0.0.1" \
  LOGPLEX_COOKIE=123 \
  ERL_LIBS=`pwd`/deps/:$ERL_LIBS \
  ct_run -spec logplex.spec -pa ebin

Runs the common test suite for logplex.

Docker development

develop

Requires a working install of Docker and Docker Compose. Follow the installation steps outlined at docs.docker.com.

docker-compose build         # Run once
docker-compose run compile   # Run every time source files change
docker-compose up logplex    # Run logplex post-compilation

To connect to the above logplex Erlang shell:

docker exec -it logplex_logplex_1 bash -c "TERM=xterm bin/connect"

test

docker-compose run test

Data setup

create creds

1> logplex_cred:store(logplex_cred:grant('full_api', logplex_cred:grant('any_channel', logplex_cred:rename(<<"Local-Test">>, logplex_cred:new(<<"local">>, <<"password">>))))).
ok

hit healthcheck

$ curl http://local:password@localhost:8001/healthcheck
{"status":"normal"}

create a channel

$ curl -d '{"tokens": ["app"]}' http://local:password@localhost:8001/channels
{"channel_id":1,"tokens":{"app":"t.feff49f1-4d55-4c9e-aee1-2d2b10e69b42"}}

post a log msg

$ curl -v \
-H "Content-Type: application/logplex-1" \
-H "Logplex-Msg-Count: 1" \
-d "116 <134>1 2012-12-10T03:00:48.123456Z erlang t.feff49f1-4d55-4c9e-aee1-2d2b10e69b42 console.1 - - Logsplat test message 1" \
http://local:password@localhost:8601/logs
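
The leading number in the request body is the length in bytes of the syslog frame that follows it (octet-counted framing). If you change the message, a small sketch like the following recomputes the prefix for you (assuming bash and an ASCII-only message):

$ MSG='<134>1 2012-12-10T03:00:48.123456Z erlang t.feff49f1-4d55-4c9e-aee1-2d2b10e69b42 console.1 - - Logsplat test message 1'
$ curl -v \
  -H "Content-Type: application/logplex-1" \
  -H "Logplex-Msg-Count: 1" \
  -d "${#MSG} ${MSG}" \
  http://local:password@localhost:8601/logs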

create a log session

$ curl -d '{"channel_id": "1"}' http://local:password@localhost:8001/v2/sessions
{"url":"/sessions/9d53bf70-7964-4429-a589-aaa4df86fead"}

fetch logs for session

$ curl http://local:password@localhost:8001/sessions/9d53bf70-7964-4429-a589-aaa4df86fead
2012-12-10T03:00:48Z+00:00 app[console.1]: test message 1

Supervision Tree

logplex_app
  logplex_sup
    logplex_db
    config_redis (redo)
    logplex_drain_sup
      logplex_http_drain
      logplex_tcpsyslog_drain
      logplex_tlssyslog_drain
    nsync
    redgrid
    logplex_realtime
      redo
    logplex_stats
    logplex_tail
    logplex_redis_writer_sup (logplex_worker_sup)
      logplex_redis_writer
    logplex_shard
      redo
    logplex_api
    logplex_syslog_sup
      tcp_proxy_sup
        tcp_proxy
    logplex_logs_rest

Processes

logplex_db

Starts and supervises a number of ETS tables:

channels
tokens
drains
creds
sessions

config_redis

A redo redis client process connected to the logplex config redis.

logplex_drain_sup

An empty one_for_one supervisor. Supervises HTTP, TCP Syslog and TLS Syslog drain processes.

nsync

An nsync process connected to the logplex config redis. Callback module is nsync_callback.

Nsync is an Erlang redis replication client. It allows the logplex node to act as a redis slave and sync the logplex config redis data into memory.

redgrid

A redgrid process that registers the node in a central redis server to facilitate discovery by other nodes.

logplex_realtime

Captures realtime metrics about the running logplex node. These metrics are exported using folsom_cowboy and are available for consumption via HTTP.

Memory Usage information is available:

> curl -s http://localhost:5565/_memory | jq '.'
{
  "total": 27555464,
  "processes": 10818248,
  "processes_used": 10818136,
  "system": 16737216,
  "atom": 388601,
  "atom_used": 371948,
  "binary": 789144,
  "code": 9968116,
  "ets": 789128
}

As are general VM statistics:

> curl -s http://localhost:5565/_statistics | jq '.'
{
  "context_switches": 40237,
  "garbage_collection": {
    "number_of_gcs": 7676,
    "words_reclaimed": 20085443
  },
  "io": {
    "input": 9683207,
    "output": 2427112
  },
  "reductions": {
    "total_reductions": 6584440,
    "reductions_since_last_call": 6584440
  },
  "run_queue": 0,
  "runtime": {
    "total_run_time": 1140,
    "time_since_last_call": 1140
  },
  "wall_clock": {
    "total_wall_clock_time": 207960,
    "wall_clock_time_since_last_call": 207748
  }
}

Several custom logplex metrics are also exported via a special /_metrics endpoint:

> curl -s http://localhost:5565/_metrics | jq '.'
[
  "drain.delivered",
  "drain.dropped",
  "message.processed",
  "message.received"
]

These can then be queried individually:

> curl -s http://localhost:5565/_metrics/message.received | jq '.'
{
  "type": "gauge",
  "value": 1396
}

logplex_stats

Owns the logplex_stats ETS table. Prints channel, drain and system stats every 60 seconds.

logplex_tail

Maintains the logplex_tail ETS table that is used to register tail sessions.

logplex_redis_writer_sup

Starts a logplex_worker_sup process, registered as logplex_redis_writer_sup, that supervises logplex_redis_writer processes.

logplex_shard

Owns the logplex_shard_info ETS table. Starts a separate read and write redo client for each redis shard found in the logplex_shard_urls var.

logplex_api

Blocks waiting for nsync to finish replicating data into memory before starting a mochiweb acceptor that handles API requests for managing channels/tokens/drains/sessions.

logplex_syslog_sup

Supervises a tcp_proxy_sup process that supervises a tcp_proxy process that accepts syslog messages over TCP.

logplex_logs_rest

Starts a cowboy_tcp_transport process and serves as the callback for processing HTTP log input.

Realtime Metrics

Logplex can send realtime metrics to Redis via pubsub and to a drain channel as logs. The following metrics are currently logged in this fashion:

* `message_received`
* `message_processed`
* `drain_delivered`
* `drain_dropped`

To log these metrics to an internal drain channel, you'll need to set the INTERNAL_METRICS_CHANNEL_ID environment variable to a drain token that has already been created.
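
A minimal sketch of doing that before starting the node (the value below is just the example token created in the Data setup section above; use whichever token you intend to log metrics against):

$ export INTERNAL_METRICS_CHANNEL_ID="t.feff49f1-4d55-4c9e-aee1-2d2b10e69b42"
$ # ...then start logplex as shown under Development.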

logplex's People

Contributors

agnaite, amerine, apg, archaelus, bgentry, bigkevmcd, cyx, danp, eiri, evanmcc, ferd, getong, heroku-mirror, jkvor, jorinvo, jttyeung, mgomes, mikehale, mkrysiak, nolman, omarkj, ravipudi, ricardochimal, srid, svc-scm, technomancy, tsloughter, voidlock, ypaq, yyamano

logplex's Issues

Adding a new ENV in cache_os_envvars() makes it required

{"init terminating in do_boot",{{app_start_failed,logplex,{bad_return,{{logplex_app,start,[normal,[]]},{'EXIT',{{missing_config,internal_metrics_channel_token},[{logplex_app,config,1,[{file,"src/logplex_app.erl"},{line,141}]},{logplex_app,'-cache_os_envvars/1-lc$^0/1-0-',1,[{file,"src/logplex_app.erl"},{line,109}]},{logplex_app,cache_os_envvars,1,[{file,"src/logplex_app.erl"},{line,109}]},{logplex_app,cache_os_envvars,0,[{file,"src/logplex_app.erl"},{line,85}]},{logplex_app,start,2,[{file,"src/logplex_app.erl"},{line,58}]},{application_master,start_supervisor,3,[{file,"application_master.erl"},{line,328}]},{application_master,start_the_app,5,[{file,"application_master.erl"},{line,310}]},{application_master,start_it_new,7,[{file,"application_master.erl"},{line,296}]}]}}}}},[{logplex_app,start_ok,3,[{file,"src/logplex_app.erl"},{line,261}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}

If this variable isn't in the ENV, the app crashes. However, if I don't specify it in cache_os_envvars(), it never gets cached and is hence unavailable to the rest of the app.

logplex_api_v3_SUITE / fetch_channel_logs test flake

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
logplex_api_v3_SUITE:'-fetch_channel_logs/1-fun-4-' failed on line 588
Reason: assertEqual
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
----------------------------------------------------
2018-02-27 19:46:28.365
PUT url="http://localhost:8002/v3/channels/app-e21238d4-fa15-4155-8f4c-d4b2623620e3", opts=[{headers,
                                                                                             [{"Authorization",
                                                                                               "Basic e693a548-85aa-43ee-a6b8-19abe6e63c89"}]},
                                                                                            {body,
                                                                                             <<"{\"tokens\":[\"token-3ceae1a3-1ac5-48c4-9264-dd2a40959b12\",\"token-d9af8f66-8ae2-41b1-9a89-1be0a4cfa72b\",\"token-a9843a78-7a6d-453c-9cff-05bc8ad15fa6\",\"token-a95914e0-94ba-43cb-9b51-812d774182b3\",\"token-20ffcc50-4105-44db-a600-ca86ed9e2e92\"]}">>},
                                                                                            {timeout,
                                                                                             10000}]
%%% logplex_api_v3_SUITE ==> fetch_channel_logs (group channel_logs): FAILED
%%% logplex_api_v3_SUITE ==> 
Failure/Error: ?assertEqual(match, re : run ( Line , Expected , [ { capture , none } ] ))
  expected: match
       got: nomatch
      line: 588

why aren't the ports open?

Hello,
I've got logplex running, no errors in the logs that I can see, but the only port opened by beam is port 9100. What happened to 8001 and 8601?? I'm running logplex on 2 ec2 servers and using rediscloud as the redis db. Any insight/help is appreciated.
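
For reference, one quick way to list the ports the beam process is actually listening on (a plain netstat check; adjust for your system):

$ netstat -tnlp 2> /dev/null | grep beam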

Building with rebar fails because of git.herokai.com:erlang_redis_pool.git

[ logplex master ] rebar --version
rebar 2.0.0 R13B04 20121109_020031 git 2.0.0-237-ga2fb8fd
[ logplex master ] rebar get-deps
==> logplex (get-deps)
Pulling redis from {git,"git@git.herokai.com:erlang_redis_pool.git","master"}
Cloning into 'redis'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
ERROR: git clone -n git@git.herokai.com:erlang_redis_pool.git redis failed with error: 128 and output:
Cloning into 'redis'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

ERROR: 'get-deps' failed while processing /Users/ryandotsmith/src/logplex: rebar_abort

Question: How to best deal with an app that produces too many logs?

I'm running my own instance of Logplex and I have a bunch of Elixir apps sending it logs. Normally all is well, but sometimes the (buggy) Elixir apps will go into an infinite crash loop and generate a TON of logs. The Logplex machine doesn't handle this very well: CPU goes to 100% and oddly enough ETS entries start disappearing among other things.

This happens even if there are no tails or drains on the channel.

My understanding is that logs are not throttled or rate limited when Logplex receives them and are instead only dropped when the logs are written out to tails, drains, redis, or the firehose. In my case though, this doesn't prevent the CPU from getting pegged or prevent weird ETS behavior.

I wanted to ask and find out if there is a proper way to deal with an app that produces waaaaay too many logs. Any advice is extremely appreciated! Thanks!

Is a redis cluster required?

Is a redis cluster required? Looking at the nsync callback, it would seem I need to set up a cluster in order to get a running instance? I can't seem to get past wait_for_nsync in the API server with a single redis instance. Are there any development flags or options to get going quickly with logplex?

Passing a tail parameter to /v2/sessions always tails, regardless of the value

Passing no tail parameter creates a non-tailing session:

curl http://user:pass@localhost:8001/v2/sessions -d '{"channel_id":"5", "num": "1"}'

While all of these tail:

curl http://user:pass@localhost:8001/v2/sessions -d '{"channel_id":"5", "num": "1", "tail": false}'
curl http://user:pass@localhost:8001/v2/sessions -d '{"channel_id":"5", "num": "1", "tail": "false"}'

curl http://user:pass@localhost:8001/v2/sessions -d '{"channel_id":"5", "num": "1", "tail": "true"}'
curl http://user:pass@localhost:8001/v2/sessions -d '{"channel_id":"5", "num": "1", "tail": true}

The API seems very picky: {"channel_id": 5} returns an error, doing curl -I http://user:pass@localhost:8001/session/:id 404's, and so on.

Add mechanism for consumer to provide its own token ID

A problem that we're seeing occasionally from core is that we're making provisioning requests for Heroku Postgres databases before an app has its log token back from Logplex (see heroku/core#1188). This is because we make that Logplex request async to speed up app creation, and especially for Bamboo, it's possible that a provisioning request can go out before the Logplex response comes back. It ends up causing some trouble for DoD, because they have nowhere to log to unless they pull this data back manually at some later time.

@mfine suggested that a potential solution might be for Logplex to take an optional parameter on any API endpoint that could result in the creation of a log token that would allow the consumer to supply its own token ID (because anyone can generate a UUID). This would solve our problem above because we could guarantee that a token ID always goes out with a provisioning request.

Any thoughts from Geoff/Routing as to whether this might be a good or bad idea?

/cc @mfine @will @heroku/routing @heroku/api

HTTP input returns 204 when procid is missing

If I include a blank field for procid, HTTP inputs return a 204. However, if I tail the log stream, the messages without the procid field are not found. I propose that your HTTP input API return a 4XX when the procid is missing since it is required for logs to make it through.

Syslog messages are missing a NILVALUE for the STRUCTURED-DATA field.

The logplex_syslog_utils:rfc5424 function used to format syslog messages https://github.com/heroku/logplex/blob/master/src/logplex_syslog_utils.erl#L34-L46 doesn't set a NILVALUE (-) for the STRUCTURED-DATA field.

The heroku log drains docs claim that logplex sends "syslog formatted messages".

However https://tools.ietf.org/html/rfc5424#section-6 defines SYSLOG-MSG as:

      SYSLOG-MSG      = HEADER SP STRUCTURED-DATA [SP MSG]

and STRUCTURED-DATA is defined as:

      STRUCTURED-DATA = NILVALUE / 1*SD-ELEMENT
      SD-ELEMENT      = "[" SD-ID *(SP SD-PARAM) "]"
      SD-PARAM        = PARAM-NAME "=" %d34 PARAM-VALUE %d34
      SD-ID           = SD-NAME
      PARAM-NAME      = SD-NAME
      PARAM-VALUE     = UTF-8-STRING ; characters '"', '\' and
                                     ; ']' MUST be escaped.
      SD-NAME         = 1*32PRINTUSASCII
                        ; except '=', SP, ']', %d34 (")

Making STRUCTURED-DATA a required part of the message.

Indeed in section 6.3: https://tools.ietf.org/html/rfc5424#section-6.3

   STRUCTURED-DATA can contain zero, one, or multiple structured data
   elements, which are referred to as "SD-ELEMENT" in this document.

   In case of zero structured data elements, the STRUCTURED-DATA field
   MUST contain the NILVALUE.

At the very least logplex/logdrains docs should clearly note this violation and specify the message format.

Better of course would be a fix and some upgrade path, but clearly the number of adhoc implementations of the logplex message format are going to make that difficult.

What's logplex's license?

Hey, I noticed you guys used to have the MIT license in the README file, until af62378. How do you guys intend to license it?
Thanks!

Protocol of 'log tail' api is unclear

The API for core/logplex appears to emit 'chunks', but there seems to be no defined meaning for each 'chunk'. Solving this will probably require some definition of a protocol, but I'll leave implementation ideas out of this :)

I've observed the following of a 'chunk' emitted on different occasions:

  • it's an empty string
  • it's a single event
  • it's multiple events delimited by newline

This observation was while writing some integration monitoring for logplex. I also dug into the Heroku::Client ruby code, and both 'empty string' and 'multiple events in a chunk' are worked round.

Push failed: Could not get a logplex token for this app

Receiving this message:
"Push failed: Could not get a logplex token for this app. Please try the request again." - on "Launching..." stage, while trying to push my project to heroku with 'git push heroku master'.
Any tips on solving this?
Thanks

Redis version and other deps

Is a particular version of redis required? Looking at nsync it seems you filter on 0001. Also, what sha1s/tags should your dependencies be pointing to (for nsync etc.)? I've replaced the stuff in rebar that references herokai with stuff publicly available, but I have no idea how far ahead you guys are, as deps aren't marked with a sha1 or a tag. Excuse the pun: http://www.12factor.net/dependencies

Unify nomenclature

  • what event producers call a token, a consumer calls a 'ps' (in API terms).
  • consumers use the term 'source' for something that an event producer has no analog for, as far as I can tell.

Producers and Consumers should use the same terms to avoid confusion.

POST /sessions should respect false/null `tail` param

Currently, POST /sessions with a tail parameter of any value, including null and false, will create a tail session. It would be nice if tail could be included with those values and not create a tailing session; then API consumers wouldn't have to go out of their way to remove it from requests.

logs with -num -5 returns 1500 lines

It seems that a negative number of requested log lines is not handled correctly. When a -5 is passed in, the full maximum of 1500 is passed back.

Opening based on heroku/core#947 based on the fact that Core just forwards the num parameter directly to Logplex. That ticket also contains the original support ticket that we opened the issue from.

failed_to_start_child, logplex_shard

Hello,
I'm seeing an error when I try to start logplex, not sure why. (sorry not an erlang guy)

{"init terminating in do_boot",{{app_start_failed,logplex,{{shutdown,{failed_to_start_child,logplex_shard,{{badmatch,{error,{error,<<23 bytes>>}}},[{logplex_shard,add_pool,1,[{file,"src/logplex_shard.erl"},{line,246}]},{logplex_shard,'-populate_info_table/3-lc$^0/1-0-',1,[{file,"src/logplex_shard.erl"},{line,230}]},{logplex_shard,populate_info_table,3,[{file,"src/logplex_shard.erl"},{line,230}]},{logplex_shard,init,1,[{file,"src/logplex_shard.erl"},{line,91}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}},{logplex_app,start,[normal,[]]}}},[{logplex_app,start_ok,3,[{file,"src/logplex_app.erl"},{line,252}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}

I have the LOGPLEX_SHARD_URLS env set to something like this

pid=<0.127.0> m=logplex_app ln=143 class=info at=update_running_config key=logplex_shard_urls value="redis://:<really long password>@<subdomain>.openredis.io:<port>"

Any insight?? thanks.

bin/get-deps development not working (publickey denied)

What is described in your README does not seem to work. It looks like this project uses dependencies that are not accessible (git.herokai.com, which looks like a heroku domain). Does this mean you guys are using non-open-source versions of the dependencies, which makes logplex unusable at this moment?

HTTP drains drop additional parameters.

Not sure if this is intended or not, but trying to add a drain like this:

https://foo.com/webhook?code_name=Logs&oauth=token

Results in:

Successfully added drain https://foo.com/webhook?code_name=Logs

Where the additional &oauth=token is silently dropped.

Log L10 errors accurately from http drains.

logplex_http_drain:drop_frame doesn't store the time/count of dropped logs, so we don't accurately generate L10 messages.

L10 messages are desirable for log destination providers as a way of working out where in the log pipeline log loss occurs.

Error L10 - No new line after msg drop log

142 <172>1 2014-11-11T08:24:25+00:00 host heroku logplex - Error L10 (output buffer overflow): 1 messages dropped since 2014-11-07T23:29:45+00:00.93 <45>1 2014-11-11T08:24:24.064853+00:00 host heroku web.1 - State changed from starting to up

This is what's delivered to my drain after a short downtime and dropped logs, but I think it should be two logs. Or should the error be appended to the message that was dropped?
I think it's just a missing newline after the logplex log.

logplex_channel:info is slow.

I kinda suspect the thing is doing table scans of drains/tokens - maybe this would be way faster with ordered_set tables?

Expiring redis spool key deletes channel from ets

I'm not sure if this is a bug or desired behavior. redis_helper:build_push_msg creates a ch:channelId:spool key in redis with an expiration, and when that key expires, nsync_callback:handle({cmd, "del", [<<"ch:", Suffix/binary>> | Args]}) is called, which deletes the channelid from ets. However, data associated with that channelId remains, such as tokens, drains, and the redis drain:channelId:data key. Is this desired? My guess is that the channelid should only be deleted from ets when a call is made to the API, not when a spool key expires.
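
A quick way to observe the spool key and its expiration (channel id 1 is just a placeholder, assuming the local config redis from the Development section):

$ redis-cli -p 6379 KEYS 'ch:1:*'
$ redis-cli -p 6379 TTL ch:1:spool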

Log lines out of order -- even when submitted IN ORDER

I've been dealing with this issue for a long time on Heroku and it makes it almost impossible to read through my logs to track down issues. Why are log lines from 1 dyno and 1 thread/process that are created in order, spit out of the logplex in a different order? Somewhere there is something not buffering correctly. I understand that different dynos/processes might be interleaved in the logplex, but here is an example of a single request that I'm trying to follow and it's all over the place if you look at the timestamps. There is no buffering happening on my end (STDOUT.sync=true etc). It's very clear that Completed 200 OK is supposed to come at the end (based on timestamp). Why would it be interleaved in the middle?

2015-02-20T20:39:27.864570+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] Started GET "/shop/fashion" for  at 2015-02-20 20:39:27 +0000
2015-02-20T20:39:27.908664+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   MOPED:   COMMAND      database=admin command={:ismaster=>1} runtime: 5.7000ms
2015-02-20T20:39:27.920864+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   MOPED:   COMMAND      database=admin command={:ismaster=>1} runtime: 8.7036ms
2015-02-20T20:39:27.926832+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] In overridden order controller helper /app/decorators/ /core_controller_helpers_order_decorator.rb
2015-02-20T20:39:27.992627+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]    ::Order Load (1.5ms)  SELECT  " _orders".* FROM " _orders"  WHERE " _orders"."user_id" = $1 AND " _orders"."completed_at" IS NULL  ORDER BY created_at DESC LIMIT 1  [["user_id", 2]]
2015-02-20T20:39:28.043291+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]    ::Store Load (1.0ms)  SELECT  " _stores".* FROM " _stores"  WHERE (url like '% %')  ORDER BY " _stores"."id" ASC LIMIT 1
2015-02-20T20:39:28.059925+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]    ::Order Load (1.4ms)  SELECT  " _orders".* FROM " _orders"  WHERE " _orders"."completed_at" IS NULL AND " _orders"."currency" = 'USD' AND " _orders"."guest_token" = 'emMw8Cbh87fN1lGOnmGHLw' AND " _orders"."store_id" = 1 AND " _orders"."user_id" = 2 LIMIT 1
2015-02-20T20:39:28.062099+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   CACHE (0.0ms)  SELECT " _line_items".* FROM " _line_items"  WHERE " _line_items"."order_id" IN (50703)  ORDER BY updated_at DESC
2015-02-20T20:39:27.887817+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] Processing by ShopController#category as HTML
2015-02-20T20:39:27.887842+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Parameters: {"category"=>"fashion"}
2015-02-20T20:39:27.902959+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] MOPED: Creating new connection pool for <Moped::Node resolved_address="  ">
2015-02-20T20:39:27.912154+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] MOPED: Creating new connection pool for <Moped::Node resolved_address=" ">
2015-02-20T20:39:27.926272+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   MOPED:   QUERY        database= _cluster collection= _users selector={"$query"=>{"_id"=>BSON::ObjectId('5232282c75bda27c2b000008')}, "$orderby"=>{:_id=>1}} flags=[] limit=-1 skip=0 batch_size=nil fields=nil runtime: 3.0719ms
2015-02-20T20:39:27.935691+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]    ::User Load (1.6ms)  SELECT  " _users".* FROM " _users"  WHERE " _users"."deleted_at" IS NULL AND " _users"."id" = 2  ORDER BY " _users"."id" ASC LIMIT 1
2015-02-20T20:39:28.038534+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]    ::LineItem Load (1.0ms)  SELECT " _line_items".* FROM " _line_items"  WHERE " _line_items"."order_id" IN (50703)  ORDER BY updated_at DESC
2015-02-20T20:39:28.046638+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]    ::Store Load (1.2ms)  SELECT  " _stores".* FROM " _stores"  WHERE " _stores"."default" = 't'  ORDER BY " _stores"."id" ASC LIMIT 1
2015-02-20T20:39:28.061002+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   CACHE (0.0ms)  SELECT  " _orders".* FROM " _orders"  WHERE " _orders"."user_id" = $1 AND " _orders"."completed_at" IS NULL  ORDER BY created_at DESC LIMIT 1  [["user_id", 2]]
2015-02-20T20:39:28.066773+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]    ::Order Load (0.7ms)  SELECT " _orders".* FROM " _orders"  WHERE " _orders"."user_id" = $1 AND " _orders"."completed_at" IS NULL AND (id!=50703)  [["user_id", 2]]
2015-02-20T20:39:28.070422+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   MOPED:   QUERY        database= _cluster collection= _sites selector={"$query"=>{}, "$orderby"=>{:_id=>1}} flags=[] limit=-1 skip=0 batch_size=nil fields=nil runtime: 2.0788ms
2015-02-20T20:39:28.194699+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   MOPED:   QUERY        database= _cluster collection= _contents selector={"$query"=>{"site_id"=>BSON::ObjectId('5230edc2e80e0f4508000002'), "state"=>"Finished", "publish_at"=>{"$lte"=>2015-02-20 20:39:28 UTC}, "expire_at"=>{"$not"=>{"$lte"=>2015-02-20 20:39:28 UTC}}, "categories"=>{"$all"=>[/^fashion$/i], "$size"=>1}, "$and"=>[{"_type"=>"ProductCategory"}]}, "$orderby"=>{"publish_at"=>-1}} flags=[] limit=-1 skip=0 batch_size=nil fields=nil runtime: 121.9047ms
2015-02-20T20:39:28.216997+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   MOPED:   QUERY        database= _cluster collection= _contents selector={"$query"=>{"state"=>"Finished", "publish_at"=>{"$lte"=>2015-02-20 20:39:28 UTC}, "expire_at"=>{"$not"=>{"$lte"=>2015-02-20 20:39:28 UTC}}}, "$orderby"=>{"updated_at"=>-1}} flags=[] limit=-1 skip=0 batch_size=nil fields=nil runtime: 3.3950ms
2015-02-20T20:39:28.236519+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] Read fragment views/cache_buster-6fb6d83bd654ca6e6a5a16872755f4af/b4c36000694893544f1c51a2afc72661/b4c36000694893544f1c51a2afc72661 (15.7ms)
2015-02-20T20:39:28.243188+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered shared/_metas.html.erb (1.1ms)
2015-02-20T20:39:28.269094+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered shared/_analytics_top.html.erb (0.9ms)
2015-02-20T20:39:28.299775+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered content/_issue_provocation.html.erb (2.5ms)
2015-02-20T20:39:28.299893+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered shared/_branding.html.erb (20.9ms)
2015-02-20T20:39:28.348042+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   MOPED:   QUERY        database= _cluster collection= _contents selector={"$query"=>{"state"=>"Finished", "publish_at"=>{"$lte"=>2015-02-20 20:39:28 UTC}, "expire_at"=>{"$not"=>{"$lte"=>2015-02-20 20:39:28 UTC}}, "_type"=>{"$in"=>["Apartment"]}}, "$orderby"=>{"publish_at"=>-1}} flags=[] limit=-1 skip=0 batch_size=nil fields=nil runtime: 36.1754ms
2015-02-20T20:39:28.363995+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered shared/_nav.html.erb (91.6ms)
2015-02-20T20:39:28.405038+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered email_subscribers/_form.html.erb (3.1ms)
2015-02-20T20:39:28.408777+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered shared/_share_modal.html.erb (1.1ms)
2015-02-20T20:39:28.414779+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered shared/_analytics_bottom.html.erb (3.4ms)
2015-02-20T20:39:28.416666+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] Completed 200 OK in 529ms (Views: 141.8ms | ActiveRecord: 32.7ms | Mongoid: 207.3ms | Solr: 0.0ms)
2015-02-20T20:39:28.198823+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   MOPED:   QUERY        database= _cluster collection= _sites selector={"$query"=>{}, "$orderby"=>{:_id=>1}} flags=[] limit=-1 skip=0 batch_size=nil fields=nil runtime: 2.0129ms
2015-02-20T20:39:28.217956+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] Cache read: digestor/shop/category/html
2015-02-20T20:39:28.217997+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] Cache generate: digestor/shop/category/html
2015-02-20T20:39:28.218024+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] Cache write: digestor/shop/category/html
2015-02-20T20:39:28.220453+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] Cache digest for shop/category.html: b4c36000694893544f1c51a2afc72661
2015-02-20T20:39:28.220477+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] Cache write: digestor/shop/category/html
2015-02-20T20:39:28.220612+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] Cache read: digestor/shop/category/html
2015-02-20T20:39:28.220656+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] Cache fetch_hit: digestor/shop/category/html
2015-02-20T20:39:28.220842+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ] Cache read: views/cache_buster-6fb6d83bd654ca6e6a5a16872755f4af/b4c36000694893544f1c51a2afc72661/b4c36000694893544f1c51a2afc72661
2015-02-20T20:39:28.236926+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered shop/category.html.erb within layouts/application (28.1ms)
2015-02-20T20:39:28.264855+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]    (0.8ms)  SELECT COUNT(*) FROM " _roles" INNER JOIN " _roles_users" ON " _roles"."id" = " _roles_users"."role_id" WHERE " _roles_users"."user_id" = $1 AND " _roles"."name" = 'retail'  [["user_id", 2]]
2015-02-20T20:39:28.294360+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   MOPED:   QUERY        database= _cluster collection= _contents selector={"$query"=>{"state"=>"Finished", "publish_at"=>{"$lte"=>2015-02-20 20:39:28 UTC}, "expire_at"=>{"$not"=>{"$lte"=>2015-02-20 20:39:28 UTC}}, "_type"=>{"$in"=>["Volume"]}}, "$orderby"=>{"publish_at"=>-1}} flags=[] limit=-1 skip=0 batch_size=nil fields=nil runtime: 13.4639ms
2015-02-20T20:39:28.360472+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   MOPED:   QUERY        database= _cluster collection= _contents selector={"$query"=>{"state"=>"Finished", "publish_at"=>{"$lte"=>2015-02-20 20:39:28 UTC}, "expire_at"=>{"$not"=>{"$lte"=>2015-02-20 20:39:28 UTC}}, "_type"=>{"$in"=>["SiteMessaging"]}}, "$orderby"=>{"publish_at"=>-1}} flags=[] limit=-1 skip=0 batch_size=nil fields=nil runtime: 10.8335ms
2015-02-20T20:39:28.392157+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered email_subscribers/_form.html.erb (21.0ms)
2015-02-20T20:39:28.392266+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered shared/_footer.html.erb (25.5ms)
2015-02-20T20:39:28.397998+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered shared/_minicart_tmpl.html.erb (1.9ms)
2015-02-20T20:39:28.405182+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   Rendered email_subscribers/_newsletter_bottombanner.html.erb (4.1ms)
2015-02-20T20:39:28.414438+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   CACHE (0.0ms)  SELECT COUNT(*) FROM " _roles" INNER JOIN " _roles_users" ON " _roles"."id" = " _roles_users"."role_id" WHERE " _roles_users"."user_id" = $1 AND " _roles"."name" = 'retail'  [["user_id", 2]]
2015-02-20T20:39:28.415787+00:00 app[web.1]: [817f4b05-4d0a-4458-8226-ce66d72cdcba] [ ]   CACHE (0.0ms)  SELECT COUNT(*) FROM " _roles" INNER JOIN " _roles_users" ON " _roles"."id" = " _roles_users"."role_id" WHERE " _roles_users"."user_id" = $1 AND " _roles"."name" = 'retail'  [["user_id", 2]]

Encrypted log transport

Ability for customers to receive logs via syslog over encrypted transport via stunnel or similar. Mentioned by Facebook InfoSec and a feature I'd personally like to see.

Public Logplex doesn't start

Hi,

I am trying to use the public-logplex branch.

The compilation worked well, with one warning

./rebar --config public.rebar.config get-deps compile
...
==> quoted (compile)
src/quoted.erl:none: Warning: this system is not configured for native-code compilation.

But logplex doesn't want to start :-/

$ ./bin/logplex 

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,sasl_safe_sup}
             started: [{pid,<0.41.0>},
                       {name,alarm_handler},
                       {mfargs,{alarm_handler,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,2000},
                       {child_type,worker}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,sasl_safe_sup}
             started: [{pid,<0.42.0>},
                       {name,overload},
                       {mfargs,{overload,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,2000},
                       {child_type,worker}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,sasl_sup}
             started: [{pid,<0.40.0>},
                       {name,sasl_safe_sup},
                       {mfargs,
                           {supervisor,start_link,
                               [{local,sasl_safe_sup},sasl,safe]}},
                       {restart_type,permanent},
                       {shutdown,infinity},
                       {child_type,supervisor}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,sasl_sup}
             started: [{pid,<0.43.0>},
                       {name,release_handler},
                       {mfargs,{release_handler,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,2000},
                       {child_type,worker}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
         application: sasl
          started_at: 'logplex@corinne-HP-Compaq-6730s'

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,inets_sup}
             started: [{pid,<0.49.0>},
                       {name,ftp_sup},
                       {mfargs,{ftp_sup,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,infinity},
                       {child_type,supervisor}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,httpc_profile_sup}
             started: [{pid,<0.52.0>},
                       {name,httpc_manager},
                       {mfargs,
                           {httpc_manager,start_link,
                               [default,only_session_cookies,inets]}},
                       {restart_type,permanent},
                       {shutdown,4000},
                       {child_type,worker}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,httpc_sup}
             started: [{pid,<0.51.0>},
                       {name,httpc_profile_sup},
                       {mfargs,
                           {httpc_profile_sup,start_link,
                               [[{httpc,{default,only_session_cookies}}]]}},
                       {restart_type,permanent},
                       {shutdown,infinity},
                       {child_type,supervisor}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,httpc_sup}
             started: [{pid,<0.53.0>},
                       {name,httpc_handler_sup},
                       {mfargs,{httpc_handler_sup,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,infinity},
                       {child_type,supervisor}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,inets_sup}
             started: [{pid,<0.50.0>},
                       {name,httpc_sup},
                       {mfargs,
                           {httpc_sup,start_link,
                               [[{httpc,{default,only_session_cookies}}]]}},
                       {restart_type,permanent},
                       {shutdown,infinity},
                       {child_type,supervisor}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,inets_sup}
             started: [{pid,<0.54.0>},
                       {name,httpd_sup},
                       {mfargs,{httpd_sup,start_link,[[]]}},
                       {restart_type,permanent},
                       {shutdown,infinity},
                       {child_type,supervisor}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,inets_sup}
             started: [{pid,<0.55.0>},
                       {name,tftp_sup},
                       {mfargs,{tftp_sup,start_link,[[]]}},
                       {restart_type,permanent},
                       {shutdown,infinity},
                       {child_type,supervisor}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
         application: inets
          started_at: 'logplex@corinne-HP-Compaq-6730s'

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,crypto_sup}
             started: [{pid,<0.60.0>},
                       {name,crypto_server},
                       {mfargs,{crypto_server,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,2000},
                       {child_type,worker}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
         application: crypto
          started_at: 'logplex@corinne-HP-Compaq-6730s'

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
         application: public_key
          started_at: 'logplex@corinne-HP-Compaq-6730s'

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,ssl_sup}
             started: [{pid,<0.66.0>},
                       {name,ssl_broker_sup},
                       {mfargs,{ssl_broker_sup,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,2000},
                       {child_type,supervisor}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,ssl_sup}
             started: [{pid,<0.67.0>},
                       {name,ssl_manager},
                       {mfargs,{ssl_manager,start_link,[[]]}},
                       {restart_type,permanent},
                       {shutdown,4000},
                       {child_type,worker}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,ssl_sup}
             started: [{pid,<0.68.0>},
                       {name,ssl_connection},
                       {mfargs,{ssl_connection_sup,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,4000},
                       {child_type,supervisor}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
         application: ssl
          started_at: 'logplex@corinne-HP-Compaq-6730s'

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,gproc_sup}
             started: [{pid,<0.73.0>},
                       {name,gproc},
                       {mfargs,{gproc,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,2000},
                       {child_type,worker}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,gproc_sup}
             started: [{pid,<0.74.0>},
                       {name,gproc_monitor},
                       {mfargs,{gproc_monitor,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,2000},
                       {child_type,worker}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,gproc_sup}
             started: [{pid,<0.75.0>},
                       {name,gproc_bcast},
                       {mfargs,{gproc_bcast,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,2000},
                       {child_type,worker}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
         application: gproc
          started_at: 'logplex@corinne-HP-Compaq-6730s'

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,ehmon_sup}
             started: [{pid,<0.80.0>},
                       {name,ehmon_report_srv},
                       {mfargs,{ehmon_report_srv,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,2000},
                       {child_type,worker}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
         application: ehmon
          started_at: 'logplex@corinne-HP-Compaq-6730s'

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
         application: ex_uri
          started_at: 'logplex@corinne-HP-Compaq-6730s'

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,redis_sup}
             started: [{pid,<0.86.0>},
                       {name,redis_pool_sup},
                       {mfargs,{redis_pool_sup,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,10000},
                       {child_type,worker}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
          supervisor: {local,redis_sup}
             started: [{pid,<0.87.0>},
                       {name,redis_pid_sup},
                       {mfargs,{redis_pid_sup,start_link,[]}},
                       {restart_type,permanent},
                       {shutdown,10000},
                       {child_type,worker}]

=PROGRESS REPORT==== 9-Jan-2013::09:30:22 ===
         application: redis
          started_at: 'logplex@corinne-HP-Compaq-6730s'
pid=<0.96.0> m=logplex_app ln=56 class=info at=start
{"init terminating in do_boot",{{app_start_failed,logplex,{bad_return,{{logplex_app,start,[normal,[]]},{'EXIT',{{missing_config,cookie},[{logplex_app,config,1},{logplex_app,set_cookie,0},{logplex_app,start,2},{application_master,start_supervisor,3},{application_master,start_the_app,5},{application_master,start_it_new,7}]}}}}},[{logplex_app,start_ok,3},{init,start_it,1},{init,start_em,1}]}}

Crash dump was written to: erl_crash.dump
init terminating in do_boot ()

I use the Ubuntu 64bit 12.04 LTS, with the following packages:

ii  erlang-base                            1:14.b.4-dfsg-1ubuntu1                  Erlang/OTP virtual machine and base applications
ii  erlang-crypto                          1:14.b.4-dfsg-1ubuntu1                  Erlang/OTP cryptographic modules
ii  erlang-dev                             1:14.b.4-dfsg-1ubuntu1                  Erlang/OTP development libraries and headers
ii  erlang-eunit                           1:14.b.4-dfsg-1ubuntu1                  Erlang/OTP module for unit testing
ii  erlang-inets                           1:14.b.4-dfsg-1ubuntu1                  Erlang/OTP Internet clients and servers
ii  erlang-mnesia                          1:14.b.4-dfsg-1ubuntu1                  Erlang/OTP distributed relational/object hybrid database
ii  erlang-public-key                      1:14.b.4-dfsg-1ubuntu1                  Erlang/OTP public key infrastructur
ii  erlang-runtime-tools                   1:14.b.4-dfsg-1ubuntu1                  Erlang/OTP runtime tracing/debugging tools
ii  erlang-ssl                             1:14.b.4-dfsg-1ubuntu1                  Erlang/OTP implementation of SSL
ii  erlang-syntax-tools                    1:14.b.4-dfsg-1ubuntu1                  Erlang/OTP modules for handling abstract Erlang syntax trees
...
ii  redis-server                           2:2.2.12-1build1                        Persistent key-value database with network interface

Do you have any idea ?

Thanks
Romain

Does not support Redis 2.8.x

I got it to compile with a couple of simple tweaks:

But it fails to run with this:

gen_server <0.150.0> terminated with reason: {error,vsn_not_supported}
CRASH REPORT Process <0.150.0> with 0 neighbours exited with reason: {error,vsn_not_supported} in gen_server:terminate/6 line 746
Supervisor logplex_sup had child nsync started with nsync:start_link([{callback,{nsync_callback,handle,[]}},{host,"127.0.0.1"},{port,6379},{pass,undefined}]) at <0.150.0> exit with reason {error,vsn_not_supported} in context child_terminated
gen_server <0.178.0> terminated with reason: {error,vsn_not_supported}
CRASH REPORT Process <0.178.0> with 0 neighbours exited with reason: {error,vsn_not_supported} in gen_server:terminate/6 line 746
** System running to use fully qualified hostnames **
** Hostname dds-492c55 is illegal **

I believe the hostname is not the issue for the vsn_not_supported error, right?

Are Syslog UDP Drains Fully Supported?

I've created a channel with 3 tokens (eg - name: "someapp", token: "t.some-uuid") and a single drain of the form udpsyslog://<hostname>:<port> and have netcat running, listening for udp logs (while true; do nc --telnet --verbose --udp -l -p <port>; done).

When I log to this channel using the token, I see nothing in netcat. Also, I see no udp/tcp traffic when running tcpdump on the port (sudo tcpdump -i any port <port> -A). If, instead, I have netcat running in tcp mode (the default) and create a regular syslog:// or http:// drain, I see log traffic immediately.

I'm using a rather old version of Logplex (v89), but I'm unsure if the udp-specific changes since then would solve this issue.

Thanks for taking a look!

Logplex User Agent is Logplex/unknown

The User-Agent header for http drains is supposed to be Logplex/(Some version string), and used to be based on the OTP logplex app env var git_branch. This doesn't seem to be set to anything anymore so Logplex is sending Logplex/unknown. It'd be good to use whatever version information is available for this so that receivers know any version fixups they should use when interpreting logplex POSTs.

Fail to reconnect to redis on timeout

I noticed that if you have the timeout attribute configured in redis (redis.conf), meaning the value is larger than 0 (0 is disabled), then when redis times out the logplex connection, logplex stops saving logs to the db but still responds with a 204 response code. This doesn't seem to affect channels or tokens though, only saving new logs.

The easy workaround for now is to set timeout to 0 in /etc/redis/redis.conf, but logplex should see the connection has been lost and reconnect.
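
For reference, the same workaround can be applied at runtime with redis-cli (this does not persist across a redis restart unless redis.conf is also changed):

$ redis-cli CONFIG GET timeout
$ redis-cli CONFIG SET timeout 0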

POST /v2/sessions channel_id type inconsistency

This interaction errors:

POST /v2/sessions
{ "channel_id": 12345, ... }

200
{ "url": "..." }

GET $url

400
'channel_id' missing

Changing channel_id to "12345" in the POST makes the flow work. As channel_id is json-encoded as a number everywhere else I've seen this is surprising. I think the least intrusive change would be to have this endpoint accept a numeric channel_id and coerce as necessary.
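
In curl terms, this is the variant that currently works (channel id 12345 is just a placeholder):

$ curl -d '{"channel_id": "12345"}' http://user:pass@localhost:8001/v2/sessions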

illegible error message when running bin/test

I have installed erlang (Erlang R13B03), redis-server and made sure
redis-server is running. However when I ran 'bin/test' I get some errors.

OS:
Ubuntu Server 10.04 LTS 64-bit

Steps:

  1. Install erlang, git-core, redis-server. Run redis-server
  2. Go to logplex dir and run $ ./bin/get-deps
  3. Run $ ./bin/test

Output:
vagrant@vagrantup:~/apps/logplex$ ./bin/test
./bin/test: 8: [[: not found
./bin/test: 12: [[: not found
./bin/test: 16: [[: not found
./bin/test: 20: [[: not found
ulimit: 22: error setting limit (Operation not permitted)
{"init terminating in do_boot",{'cannot get bootfile','release/logplex-1.0.boot'}}

Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
vagrant@vagrantup:~/apps/logplex$ netstat -tnlp 2> /dev/null | grep 6379
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN -

Here's the last bits from 'erl_crash.dump':
...
all
active
absoluteURI
abs_path
aborted
'EXIT'
'UP'
'DOWN'
undefined_lambda
undefined_function
nocatch
undefined
exit
error
throw
return
call
normal
timeout
infinity
fun
''
'$end_of_table'
'nonode@nohost'
'_'
true
false
=end

Observation:
There's no 'release/logplex-1.0.boot' file as the error says. Not sure how
and/or where to get it.

Request:
If you can provide at least a basic admin guide, it'd be much appreciated.
