vamp-gateway-agent's Introduction

Vamp Gateway Agent

Join the chat at https://gitter.im/magneticio/vamp

Based on the defined Vamp gateways, Vamp generates the HAProxy configuration and stores it in the KV store.

Vamp Gateway Agent:

  • reads the HAProxy configuration using confd
  • appends it to the base configuration haproxy.basic.cfg
  • if the new configuration is valid, reloads HAProxy with as little client traffic interruption as possible (see the sketch after these lists)

In addition to this, VGA also:

  • sends the HAProxy log to Elasticsearch using Filebeat
  • handles and recovers from ZooKeeper, etcd, Consul and Vault outages without interrupting the haproxy process and client requests
  • renews the Vault token if needed
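
A minimal sketch of the validate-then-reload step (a sketch only; file paths and flags are assumptions, not VGA's actual script):

#!/bin/sh
CFG=/usr/local/vamp/haproxy.cfg
PIDFILE=/var/run/haproxy.pid

# validate the freshly rendered configuration before touching the running process
if haproxy -c -f "$CFG"; then
  # -sf hands existing connections over from the old process, so client
  # traffic interruption stays minimal during the reload
  haproxy -D -f "$CFG" -p "$PIDFILE" -sf $(cat "$PIDFILE" 2>/dev/null)
else
  echo "invalid HAProxy configuration, keeping the current process" >&2
fi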

Usage

The following environment variables are mandatory:

  • VAMP_KEY_VALUE_STORE_TYPE <=> confd -backend
  • VAMP_KEY_VALUE_STORE_CONNECTION <=> confd -node
  • VAMP_KEY_VALUE_STORE_PATH <=> key used by confd
  • VAMP_ELASTICSEARCH_URL <=> e.g. http://elasticsearch:9200

Example:

docker run -e VAMP_KEY_VALUE_STORE_TYPE=zookeeper \
           -e VAMP_KEY_VALUE_STORE_CONNECTION=localhost:2181 \
           -e VAMP_KEY_VALUE_STORE_PATH=/vamp/gateways/haproxy/1.6 \
           -e VAMP_ELASTICSEARCH_URL=http://localhost:9200 \
           magneticio/vamp-gateway-agent:katana

Available Docker images can be found at Docker Hub.

Domain name resolver

To enable dnsmasq to resolve virtual hosts, pass the following environment variables to the Docker container:

  • VAMP_VGA_DNS_ENABLE Set to non-empty value to enable
  • VAMP_VGA_DNS_PORT Listening port, default: 5353
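
For example, extending the docker run command from above (the values for the two DNS variables are illustrative):

docker run -e VAMP_KEY_VALUE_STORE_TYPE=zookeeper \
           -e VAMP_KEY_VALUE_STORE_CONNECTION=localhost:2181 \
           -e VAMP_KEY_VALUE_STORE_PATH=/vamp/gateways/haproxy/1.6 \
           -e VAMP_ELASTICSEARCH_URL=http://localhost:9200 \
           -e VAMP_VGA_DNS_ENABLE=true \
           -e VAMP_VGA_DNS_PORT=5353 \
           magneticio/vamp-gateway-agent:katana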

Building Docker images

make targets:

  • version - display the version (tag)
  • clean - remove the temporary build directory ./target
  • purge - run clean and remove all images magneticio/vamp-gateway-agent:*
  • build - copy files to the ./target directory and build the image magneticio/vamp-gateway-agent:${version}
  • default - clean build
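
For example, a typical build cycle (targets as listed above):

make version     # display the version (tag)
make             # default target: clean build
make purge       # remove ./target and all magneticio/vamp-gateway-agent:* images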

Additional documentation and examples

vamp-gateway-agent's People

Contributors

bgokden, dennis-bell, dragoslav, itsmeijers, jzubielik, luciangabor, tymoteuszgach


vamp-gateway-agent's Issues

Sometimes sava returns 503 on DC/OS on Azure.

Sava sometimes randomly returns 503 for some resources, as in the screenshot below. It sometimes returns 200 and sometimes 503, even for the main HTML file.

[screenshot: 2016-10-29_1755]

I suspect something is happening with VGA, because it always returns 200 when I deploy the Sava image on Marathon directly. I'd be glad if you could let me know what I should check.

I deployed sava with the following blueprint, only changing the port from 9050 to 80.


---
name: sava:1.0
gateways:
  80: sava/webport
clusters:
  sava:
    services:
      breed:
        name: sava:1.0.0
        deployable: magneticio/sava:1.0.0
        ports:
          webport: 8080/http
      scale:
        cpu: 0.2       
        memory: 64MB
        instances: 1

I installed vamp on Azure DC/OS following this document.
http://vamp.io/documentation/installation/dcos/

I use 1 public agent and 2 private agents. So I have 3 VGAs.

Hi, I'm Vamp! How are you? 

RUNNING SINCE     29-10-2016 17:38:00
VERSION           0.9.0-65-g1ccc425
UI VERSION        0.9.0-107-g8186782
PERSISTENCE       elasticsearch
KEY VALUE STORE   zookeeper
GATEWAY DRIVER    haproxy 1.6.x
CONTAINER DRIVER  marathon
WORKFLOW DRIVER   marathon
Marathon Details

Version
1.3.0
$ dcos --version
dcoscli.version=0.4.12
dcos.version=1.8.4
dcos.commit=e64024af95b62c632c90b9063ed06296fcf38ea5
dcos.bootstrap-id=5b4aa43610c57ee1d60b4aa0751a1fb75824c083

simplify haproxy runit service

The haproxy runit service should only check whether HAProxy is running, or execute the reload script.
Currently the whole reload script is defined as the service, and it executes iptables and other things; that is too much to run every second.
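
A minimal sketch of what the haproxy runit service could be reduced to (a sketch only; the script path and config path are assumptions):

#!/bin/sh
# hypothetical /etc/service/haproxy/run: only supervise the haproxy process;
# iptables rules and config reloads belong in a separate reload script.
# -db keeps haproxy in the foreground so runit can supervise it.
exec haproxy -db -f /usr/local/vamp/haproxy.cfg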

Error building docker container

On both macOS and Linux, building the container with make fails.

On Linux:

make
docker pull magneticio/buildserver
Using default tag: latest
latest: Pulling from magneticio/buildserver
Digest: sha256:881c59f7429188e76843694732376627eba247dfa6b85a30922e79784a77a990
Status: Image is up to date for magneticio/buildserver:latest
docker run \
    --rm \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume /usr/bin/docker:/usr/bin/docker \
    --volume /home/thomas/VAMP/vamp-workflow-agent:/srv/src/go/src/github.com/magneticio/vamp-workflow-agent \
    --workdir=/srv/src/go/src/github.com/magneticio/vamp-workflow-agent \
    magneticio/buildserver \
        ./build.sh --build

┌───────────────────╴Vamp Buildserver╶───────────────────┄
│
│ Source code: https://github.com/magneticio/buildserver
│ Version    : 0.4.6 (latest)
│ Commit     : 3508aad14a7cc39ebee72870aecc6bd9d8326de6
│ Build date : 2017-02-24T14:29:50Z
│
└────────────────────────────────────────────────────────┄

tput: No value for $TERM and no -T specified
Makefile:23: recipe for target 'default' failed
make: *** [default] Error 2

On macOS (with Docker for Mac):

MacBook-Pro-6:vamp-gateway-agent olafmol$ make
docker pull magneticio/buildserver
Using default tag: latest
latest: Pulling from magneticio/buildserver
Digest: sha256:881c59f7429188e76843694732376627eba247dfa6b85a30922e79784a77a990
Status: Image is up to date for magneticio/buildserver:latest
docker run \
		--rm \
		--volume /var/run/docker.sock:/var/run/docker.sock \
		--volume /usr/local/bin/docker:/usr/bin/docker \
		--volume /Users/olafmol/vamp-gateway-agent:/srv/src \
		--workdir=/srv/src \
		magneticio/buildserver \
			./build.sh --build
docker: Error response from daemon: Mounts denied: 
The path /usr/local/bin/docker
is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
..
ERRO[0000] error getting events from daemon: net/http: request canceled 
make: *** [default] Error 125

confd can't find keys on DC/OS 1.8 when using zookeeper

bash-4.3# /etc/service/confd/run 
confd/run: looking for confd configuration and templates
confd/run: polling for changes
2017-01-26T10:22:53Z ip-10-0-6-184.eu-west-1.compute.internal /usr/bin/confd[232]: INFO Backend set to zookeeper
2017-01-26T10:22:53Z ip-10-0-6-184.eu-west-1.compute.internal /usr/bin/confd[232]: INFO Starting confd
2017-01-26T10:22:53Z ip-10-0-6-184.eu-west-1.compute.internal /usr/bin/confd[232]: INFO Backend nodes set to zk-1.zk:2181
2017-01-26T10:22:53Z ip-10-0-6-184.eu-west-1.compute.internal /usr/bin/confd[232]: ERROR template: haproxy.tmpl:39:2: executing "haproxy.tmpl" at <getv "/vamp/gateways...>: error calling getv: key does not exist

key exists:

core@ip-10-0-6-184 ~ $ ./zookeepercli -servers=zk-1.zk:2181 -c ls /vamp/gateways/haproxy/1.7
configuration

Marathon JSON:

{
  "id": "vamp/vamp-gateway-agent",
  "instances": 6,
  "cpus": 0.2,
  "mem": 256.0,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "magneticio/vamp-gateway-agent:katana",
      "network": "HOST",
      "privileged": true,
      "forcePullImage": true
    }
  },
  "env": {
  "VAMP_KEY_VALUE_STORE_TYPE": "zookeeper",
  "VAMP_KEY_VALUE_STORE_CONNECTION": "zk-1.zk:2181",
  "VAMP_KEY_VALUE_STORE_PATH": "/vamp/gateways/haproxy/1.7",
  "VAMP_PERSISTENCE_STORE_CONNECTION": "elasticsearch.marathon.mesos:9200"
  },
  "acceptedResourceRoles": [
    "slave_public",
    "*"
  ],
  "constraints": [
    [
      "hostname",
      "UNIQUE"
    ]
  ],
  "healthChecks": [
    {
      "protocol": "HTTP",
      "port": 1988,
      "path": "/health",
      "gracePeriodSeconds": 30,
      "intervalSeconds": 10,
      "timeoutSeconds": 5,
      "maxConsecutiveFailures": 0
    }
  ]
}

Define what minimal haproxy metrics we need to log

Currently we log all fields from HAProxy. At minimum we need three: 'ft' (frontend id, based on the gateway), 'Tt' (response time) and 'ST' (status code). We need to investigate whether these are enough to log and what needs to be done to implement this.
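
A sketch of what a minimal log-format line could look like in the HAProxy template, assuming the three fields above suffice (%ft, %Tt and %ST are HAProxy's standard log-format variables; the target file is illustrative):

# hypothetical: emit only the three fields as JSON from the frontend section
cat >> haproxy.cfg <<'EOF'
log-format '{"ft":"%ft","Tt":%Tt,"ST":%ST}'
EOF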

Dropping the agent binary altogether

The VGA binary does two things:

  • sending HAProxy logs
  • monitoring configuration changes and reloading HAProxy

These can be covered by #4 and #7.
It is important to test edge cases such as restarting key-value stores.

Update README

The latest katana builds don't support command line arguments, only environment variables. The README should be updated to reflect this.

Slightly related:

upgrade HAProxy configuration template to avoid missing-timeout warnings

The current 1.7.x HAProxy template doesn't set any timeouts, which triggers warnings. We should update the template to set sane default timeouts.

[WARNING] 107/211405 (33489) : config : missing timeouts for frontend 'virtual_hosts'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.

https://serverfault.com/questions/504308/by-what-criteria-do-you-tune-timeouts-in-ha-proxy-config
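
A sketch of what the template's defaults section could gain (the timeout values are assumptions to be tuned, per the link above):

# hypothetical: add non-zero defaults for the three timeouts the warning names
cat >> haproxy.cfg <<'EOF'
defaults
  timeout connect 5s
  timeout client  50s
  timeout server  50s
EOF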

HAProxy log shipping without Logstash

VGA's HAProxy logs written by busybox syslogd get truncated, and the correct log format is not picked up. It is unclear whether this is caused by busybox syslogd or by something else.

This causes Filebeat to ship incomplete logs to ES.

Also, Filebeat (and other Beats) needs GNU libc in order to work, which requires additional libraries in Alpine Docker containers. See https://github.com/frol/docker-alpine-glibc for a way to install glibc in Alpine.

Possible solution: patch Filebeat to read the HAProxy log socket, and/or build a custom log shipper sitting on top of the socket; see the sketch below.

This is needed for gathering the necessary metrics and events about connections to the different services, for display in the UI.
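
One possible shape for such a custom shipper (a sketch only; the socket and file paths are assumptions):

#!/bin/sh
# hypothetical: read HAProxy's syslog UNIX datagram socket directly and append
# to a plain file that Filebeat can tail, bypassing busybox syslogd entirely
socat -u UNIX-RECV:/var/run/haproxy.log.sock - >> /var/log/haproxy.log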

Go binaries

Hi,

The vamp-gateway-agent Go binaries do not work in Alpine, because Alpine ships musl instead of glibc. I think you have to build the agent with static linking so that the Go binary runs everywhere:
CGO_ENABLED=0 go build -v -a -installsuffix cgo

Regards,

Raul

VGA workflow doesn't correct the scale and throws a 409 status code

When changing the scale of the VGAs directly in Marathon/DC/OS, the VGA workflow script doesn't correct it, but throws a 409 status code:

2017/05/03 14:07:23.955393 http.go:90: Add new websocket client: 10
2017/05/03 14:07:23.955410 http.go:92: Number of connected clients: 1
2017/05/03 14:07:23.977016 http.go:154: Command [ 10 ] execution-history
2017/05/03 14:08:17.714249 run.go:52: Executing workflow by Node.js.
2017/05/03 14:08:17.714304 api.go:59: New execution:30
2017/05/03 14:08:17.783758 run.go:124: WORKFLOW - API options: {"host":"http://10.20.0.100:8080","path":"/api/v1","headers":{"Accept":"application/json","Content-Type":"application/json"},"cache":true,"namespace":"vamp"}
2017/05/03 14:08:17.791262 run.go:124: WORKFLOW - API GET /config
2017/05/03 14:08:17.818998 run.go:124: WORKFLOW - HTTP REQUEST [0] {"protocol":"http:","port":"8080","hostname":"10.20.0.100","method":"GET","headers":{"Accept":"application/json","Content-Type":"application/json"},"path":"/api/v1/vamp/config?page=1&flatten=true"}
2017/05/03 14:08:17.879703 run.go:124: WORKFLOW - HTTP RESPONSE [0] 200
2017/05/03 14:08:17.894147 run.go:124: WORKFLOW - HTTP REQUEST [0] {"protocol":"http:","port":"5050","hostname":"leader.mesos","method":"GET","headers":{},"path":"/master/slaves"}
2017/05/03 14:08:17.918708 run.go:124: WORKFLOW - HTTP RESPONSE [0] 200
2017/05/03 14:08:17.920770 run.go:124: WORKFLOW - checking if deployed: /vamp/vamp-gateway-agent
2017/05/03 14:08:17.921260 run.go:124: WORKFLOW - HTTP REQUEST [1] {"protocol":"http:","port":"8080","hostname":"marathon.mesos","method":"GET","headers":{},"path":"/v2/apps/vamp/vamp-gateway-agent"}
2017/05/03 14:08:17.927766 run.go:124: WORKFLOW - HTTP RESPONSE [1] 200
2017/05/03 14:08:17.928473 run.go:124: WORKFLOW - already deployed, checking number of instances...
2017/05/03 14:08:17.929166 run.go:124: WORKFLOW - deployed instances: 9
2017/05/03 14:08:17.929600 run.go:124: WORKFLOW - expected instances: 8
2017/05/03 14:08:17.930127 run.go:124: WORKFLOW - deploying...
2017/05/03 14:08:17.930740 run.go:124: WORKFLOW - HTTP REQUEST [2] {"protocol":"http:","port":"8080","hostname":"marathon.mesos","method":"POST","headers":{},"path":"/v2/apps"}
2017/05/03 14:08:17.939097 run.go:124: WORKFLOW - HTTP RESPONSE [2] 409
2017/05/03 14:08:17.939670 run.go:124: WORKFLOW - error - undefined
2017/05/03 14:08:17.941618 api.go:76: Finalized execution:30
2017/05/03 14:08:17.941638 run.go:86: Workflow execution took  : 227.313498ms
2017/05/03 14:08:17.941645 run.go:87: Workflow exit status code: 0
2017/05/03 14:09:17.726820 http.go:95: Remove websocket client: 10

The expected behaviour is that the VGA workflow always corrects the scale to the current number of nodes available in the Mesos cluster.

out of disk space after time

Running VGAs on DC/OS 1.8.x on Azure CS with Vamp 0.9.4, after a day the allocated disk space is full and errors are thrown:

stdout:
pidfile /us...>: write /usr/local/vamp/.haproxy.cfg445082482: no space left on device
2017-04-20T11:26:07Z dcos-agent-private-4A0F46EF000001 /usr/bin/confd[63939]: ERROR template: filebeat.tmpl:1:0: executing "filebeat.tmpl" at <filebeat.prospectors...>: write /usr/local/filebeat/.filebeat.yml907398874: no space left on device
2017-04-20T11:26:07Z dcos-agent-private-4A0F46EF000001 /usr/bin/confd[63939]: ERROR template: haproxy.tmpl:1:0: executing "haproxy.tmpl" at <global

error log:

runsv vamp-gateway-agent: warning: unable to write supervise/status.new: out of disk space
runsv vamp-gateway-agent: warning: unable to write supervise/status.new: out of disk space
runsv haproxy: warning: unable to write supervise/status.new: out of disk space
runsv haproxy: warning: unable to write supervise/status.new: out of disk space

Spread VGA ZK connections over multiple nodes

Currently our VGAs (and maybe other Vamp components as well) only hit one specific ZK node (often zk-1) while there are multiple ZK nodes (typically 5). It would be good to investigate whether spreading the connections over multiple ZK nodes increases the performance/stability of the entire system. Supposedly a comma-delimited list of ZK nodes can be passed in the connection options; see the example below.
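
For example (node names are illustrative):

-e VAMP_KEY_VALUE_STORE_CONNECTION=zk-1.zk:2181,zk-2.zk:2181,zk-3.zk:2181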

Improve 503 page

When a new gateway is added, VGA's HAProxy is reconfigured and reloaded before the upstream server is fully responding, and in that case the user is shown an HTTP 503 Service Unavailable.

As 503 is a temporary error, having a proper maintenance page for it will prevent search engines from delisting websites. See:
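
A sketch of how a custom 503 page could be wired into the template (the file path is an assumption; the error file must contain a complete raw HTTP response):

# hypothetical: serve a proper maintenance page instead of the stock 503
cat >> haproxy.cfg <<'EOF'
defaults
  errorfile 503 /etc/haproxy/errors/503-maintenance.http
EOF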

Hardcoded VAMP_PULSE_ELASTICSEARCH_URL

Since release 0.9.0, the Docker container magneticio/vamp:0.9.0-dcos searches on startup for:

http://elasticsearch-executor.elasticsearch.mesos:9200

This should be overridden by the environment variable

"VAMP_PULSE_ELASTICSEARCH_URL": "http://elasticsearch.service.consul:9200" (in our case)

This is not happening, so the container will not start the Vamp application.
I guess it is hardcoded somewhere.

HAProxy won't start

  • run the katana quick start
  • VGA will be up
  • HAProxy doesn't run inside the VGA container

If reload.sh haproxy.cfg is executed, HAProxy only shows its help message.

vamp-gateway-agent not starting

I downloaded the latest vamp-gateway-agent.
When starting the agent I get the following errors:

/opt/vamp/vamp-gateway-agent --logo=false --storeType=zookeeper --storeConnection=127.0.0.1:2181
15:18:13.618 main CRIT ==> No basic HAProxy configuration: /opt/vamp/haproxy.basic.cfg 
panic: No basic HAProxy configuration: /opt/vamp/haproxy.basic.cfg

goroutine 1 [running]:
github.com/op/go-logging.(*Logger).Panic(0x18bc8a00, 0x18b37f64, 0x3, 0x3)
        /home/travis/build/magneticio/vamp-dist/vamp-gateway-agent/target/go_path/src/github.com/op/go-logging/logger.go:182 +0x13b
main.main()
        /home/travis/build/magneticio/vamp-dist/vamp-gateway-agent/target/go_path/src/github.com/vamp-gateway-agent/main.go:74 +0x501

goroutine 6 [syscall]:
os/signal.loop()
        /home/travis/build/magneticio/vamp-dist/vamp-gateway-agent/target/go/src/os/signal/signal_unix.go:22 +0x1a
created by os/signal.init.1
        /home/travis/build/magneticio/vamp-dist/vamp-gateway-agent/target/go/src/os/signal/signal_unix.go:28 +0x36

Any hints?

Use Vamp Workflow Agent release approach

Travis should create the deliverable and upload it to Bintray; the Dockerfiles should have an ADD with the Bintray download URL, and the Docker Hub automated build should use the master branch.

changing deployment scale fails with HTTP 409 response and workflow error

Tested on DC/OS 1.9 with Katana (0.9.4 RC):

When suspending the VGA workflow, scaling the number of VGAs down to e.g. 2 (when there are 6 nodes available) and then restarting the VGA workflow, I see the following in the VGA workflow log; the scale effectively stays at 2 instead of scaling up to 6:

2017/04/06 12:32:36.133955 api.go:59: New execution:3
2017/04/06 12:32:36.225568 run.go:124: WORKFLOW - API options: {"host":"http://10.20.0.100:8080","path":"/api/v1","headers":{"Accept":"application/json","Content-Type":"application/json"},"cache":true,"namespace":"vamp"}
2017/04/06 12:32:36.226755 run.go:124: WORKFLOW - API GET /config
2017/04/06 12:32:36.237836 run.go:124: WORKFLOW - HTTP REQUEST [0] {"protocol":"http:","port":"8080","hostname":"10.20.0.100","method":"GET","headers":{"Accept":"application/json","Content-Type":"application/json"},"path":"/api/v1/vamp/config?page=1&flatten=true"}
2017/04/06 12:32:36.261885 run.go:124: WORKFLOW - HTTP RESPONSE [0] 200
2017/04/06 12:32:36.264969 run.go:124: WORKFLOW - HTTP REQUEST [0] {"protocol":"http:","port":"5050","hostname":"leader.mesos","method":"GET","headers":{},"path":"/master/slaves"}
2017/04/06 12:32:36.275250 run.go:124: WORKFLOW - HTTP RESPONSE [0] 200
2017/04/06 12:32:36.275964 run.go:124: WORKFLOW - checking if deployed: /vamp/vamp-gateway-agent
2017/04/06 12:32:36.276223 run.go:124: WORKFLOW - HTTP REQUEST [1] {"protocol":"http:","port":"8080","hostname":"marathon.mesos","method":"GET","headers":{},"path":"/v2/apps/vamp/vamp-gateway-agent"}
2017/04/06 12:32:36.284075 run.go:124: WORKFLOW - HTTP RESPONSE [1] 200
2017/04/06 12:32:36.284584 run.go:124: WORKFLOW - already deployed, checking number of instances...
2017/04/06 12:32:36.284738 run.go:124: WORKFLOW - deployed instances: 2
2017/04/06 12:32:36.284755 run.go:124: WORKFLOW - expected instances: 6
2017/04/06 12:32:36.284933 run.go:124: WORKFLOW - deploying...
2017/04/06 12:32:36.285204 run.go:124: WORKFLOW - HTTP REQUEST [2] {"protocol":"http:","port":"8080","hostname":"marathon.mesos","method":"POST","headers":{},"path":"/v2/apps"}
2017/04/06 12:32:36.296110 run.go:124: WORKFLOW - HTTP RESPONSE [2] 409
2017/04/06 12:32:36.296531 run.go:124: WORKFLOW - error - undefined
2017/04/06 12:32:36.298302 api.go:76: Finalized execution:3
2017/04/06 12:32:36.298315 run.go:86: Workflow execution took  : 164.360049ms
2017/04/06 12:32:36.298318 run.go:87: Workflow exit status code: 0
ostname":"marathon.mesos","method":"GET","headers":{},"path":"/v2/apps/vamp/vamp-gateway-agent"}
2017/04/06 12:32:36.284075 run.go:124: WORKFLOW - HTTP RESPONSE [1] 200
2017/04/06 12:32:36.284584 run.go:124: WORKFLOW - already deployed, checking number of instances...
2017/04/06 12:32:36.284738 run.go:124: WORKFLOW - deployed instances: 2
2017/04/06 12:32:36.284755 run.go:124: WORKFLOW - expected instances: 6
2017/04/06 12:32:36.284933 run.go:124: WORKFLOW - deploying...
2017/04/06 12:32:36.285204 run.go:124: WORKFLOW - HTTP REQUEST [2] {"protocol":"http:","port":"8080","hostname":"marathon.mesos","method":"POST","headers":{},"path":"/v2/apps"}
2017/04/06 12:32:36.296110 run.go:124: WORKFLOW - HTTP RESPONSE [2] 409
2017/04/06 12:32:36.296531 run.go:124: WORKFLOW - error - undefined
2017/04/06 12:32:36.298302 api.go:76: Finalized execution:3
2017/04/06 12:32:36.298315 run.go:86: Workflow execution took  : 164.360049ms
2017/04/06 12:32:36.298318 run.go:87: Workflow exit status code: 0

Redundant haproxy logging

Just noticed this being printed on my console using the quick-start:

{
       "message" => "<134>Sep 28 21:24:33 haproxy[48]: {\"ci\":\"127.0.0.1\",\"cp\":51972,\"t\":\"28/Sep/2016:21:24:33.158\",\"ft\":\"a7ad6869e65e9c047f956cf7d1b4d01a89eef486\",\"b\":\"o_a7ad6869e65e9c047f956cf7d1b4d01a89eef486\",\"s\":\"15273989f22771b96c85d496176bdf19add88120\",\"Tq\":0,\"Tw\":0,\"Tc\":0,\"Tr\":12,\"Tt\":12,\"ST\":404,\"B\":127,\"CC\":\"-\",\"CS\":\"-\",\"tsc\":\"----\",\"ac\":5,\"fc\":1,\"bc\":0,\"sc\":1,\"rc\":0,\"sq\":0,\"bq\":0,\"hr\":\"\",\"hs\":\"\",\"r\":\"GET /bla HTTP/1.1\"}\n",
      "@version" => "1",
    "@timestamp" => "2016-09-28T21:24:33.158Z",
          "type" => "haproxy",
          "host" => "172.17.0.1",
       "metrics" => "{\"ci\":\"127.0.0.1\",\"cp\":51972,\"t\":\"28/Sep/2016:21:24:33.158\",\"ft\":\"a7ad6869e65e9c047f956cf7d1b4d01a89eef486\",\"b\":\"o_a7ad6869e65e9c047f956cf7d1b4d01a89eef486\",\"s\":\"15273989f22771b96c85d496176bdf19add88120\",\"Tq\":0,\"Tw\":0,\"Tc\":0,\"Tr\":12,\"Tt\":12,\"ST\":404,\"B\":127,\"CC\":\"-\",\"CS\":\"-\",\"tsc\":\"----\",\"ac\":5,\"fc\":1,\"bc\":0,\"sc\":1,\"rc\":0,\"sq\":0,\"bq\":0,\"hr\":\"\",\"hs\":\"\",\"r\":\"GET /bla HTTP/1.1\"}",
            "ci" => "127.0.0.1",
            "cp" => 51972,
             "t" => "28/Sep/2016:21:24:33.158",
            "ft" => "a7ad6869e65e9c047f956cf7d1b4d01a89eef486",
             "b" => "o_a7ad6869e65e9c047f956cf7d1b4d01a89eef486",
             "s" => "15273989f22771b96c85d496176bdf19add88120",
            "Tq" => 0,
            "Tw" => 0,
            "Tc" => 0,
            "Tr" => 12,
            "Tt" => 12,
            "ST" => 404,
             "B" => 127,
            "CC" => "-",
            "CS" => "-",
           "tsc" => "----",
            "ac" => 5,
            "fc" => 1,
            "bc" => 0,
            "sc" => 1,
            "rc" => 0,
            "sq" => 0,
            "bq" => 0,
            "hr" => "",
            "hs" => "",
             "r" => "GET /bla HTTP/1.1"
}

This shows the information in a message key, in a metrics key, and on top of that all fields as individual keys. Why is this being stored 3 (!) times?
