flow's Introduction

[WIP] The Flow Framework


Intro

The Flow framework is a comprehensive library of primitive building blocks and tools that lets one design and build data relays of any complexity. Heavily inspired by electrical circuit engineering primitives, it provides a clear and well-defined approach to building message pipelines of any nature. One can think of Flow as LEGO in the world of data: a set of primitive reusable bricks that can be gathered into a sophisticated assembly.

Flow can be a great fit in a SOA environment. Its primitives can be combined with a service discovery solution, an external config provider, and so on; it can plug in a set of security checks and obfuscation rules, perform in-flight dispatching, implement complex aggregation logic, and more. It can also be a good replacement for existing sidecars: its high performance, modularity, and plugin system allow one to solve nearly any domain-specific messaging problem.

The ultimate goal of Flow is to turn a fairly complex low-level software problem into a logical map of data transition and transformation elements. There exists an extensive list of narrow-scoped relays, each dedicated to solving its own problem. In a bigger infrastructure this normally turns into the necessity of supporting a huge variety of daemons and sidecars, each with its own orchestration recipes, which limits knowledge sharing. Flow solves these problems by unifying the approach, making the knowledge base generic and transferable, and by shifting developers' minds from low-level engineering and system administration problems towards pure business-logic decision making.

Status of the Project

This project is in active development. This means some parts of it may look totally different in the future. Some ideas still need validation and battle testing. Changes come pretty rapidly.

This also means the project is looking for any kind of contribution. Ideas, suggestions, criticism, bugfixing, general interest, development: it would all be a tremendous help to Flow. The very first version was implemented for fun, to see how far the ultimate modularity idea can go. It went quite far :-) The project went public at a very early stage in the hope of attracting some attention and gathering people who might be interested in it, so that the right direction could be defined as early as possible.

So, if you have any interest in Flow, please do join the project. Don't hesitate to reach out to us if you have any questions or feedback. And enjoy hacking!

Milestones and a Bigger Picture

The short-term plans are defined as milestones. Milestones are described on GitHub and represent a sensible amount of work and progress. For now, the project milestones have no time constraints. A milestone is delivered and closed once all enlisted features are done. Each successfully finished milestone initiates a minor release version bump.

As for the bigger picture, the ambition of the project is to become a generic, mature framework for building sidecars. This might be a long road. In the meantime, the project focuses on 2 primary directions: the core direction and the plugin direction.

The core direction focuses on general system performance, bugfixing, common library interface enhancements, and missing generic features.

The plugin direction aims to implement as many 3rd-party integration connectors as needed. Among the nearest goals: Graphite, Redis, Kafka, Pulsar, Bookkeeper, etc. Connectors that end up in the flow-plugins repo should be reliable, configurable, and easily reusable.

Concepts

Flow comes with a very compact dictionary of terms which are widely used in this documentation.

First of all, Flow is here to pass some data around. A unit of data is a message. Every Flow program is a single pipeline, which is built of primitives: we call them links. Examples of links: a UDP receiver, a router, a multiplexer, etc. Links are connectable to each other, and the connecting elements are called connectors. Connectors are mono-directional: they pass messages in one direction, from link A to link B. In this case we say that A has an outcoming connector, and B has an incoming connector.

Links come with the semantics of connectability: some can have outcoming connectors only: we call them out-links, or receivers (this is where data comes into the pipeline), and some can have incoming connectors only: in-links, or sinks (where data leaves the pipeline). A receiver is a link that receives external messages and ingests them into the pipeline: a network listener, a pub-sub client, etc. A sink has the opposite purpose: to send messages somewhere else; this is where the lifecycle of a message ends. Examples of sinks: an HTTP sender, a Kafka ingestor, a log file dumper, etc. A pipeline is supposed to start with one or more receivers and end with one or more sinks. Generic in-out links are placed in the middle of the pipeline.

Links are gathered in a chain of isolated, self-contained elements. Every link has a set of methods to receive and pass messages. The custom logic is implemented inside the link body. A link knows nothing about its neighbours and should avoid any neighbour-specific logic.

Links and Connectors

Link connectability is polymorphic. Depending on what a link implements, it might have 0, 1, or more incoming connectors and 0, 1, or more outcoming ones.

Links might be of 5 major types:

  • Receiver (none-to-one)
  • One-to-one
  • One-to-many
  • Many-to-one
  • Sink (one-to-none)
  Receiver    One-to-one    One-to-many    Many-to-one    Sink
      O           |              |             \|/          |
      |           O              O              O           O
                  |             /|\             |

This might give an idea about a trivial pipeline:

   R (Receiver)
   |
   S (Sink)

In this configuration, the receiver gets messages from the outer world and forwards them to the sink. The latter takes care of sending them over, and this is effectively a trivial message lifecycle.

Some more examples of pipelines:

  Aggregator          Demultiplexer                Multi-stage Router

  R  R  R (Receivers)     R     (Receiver)                R (Receiver)
   \ | /                  |                               |
     M    (Mux)           D     (Demux)                   R (Router)
     |                  / | \                           /   \
     S    (Sink)       S  S  S  (Sinks)       (Buffer) B     D (Demux)
                                                       |     | \
                                               (Sinks) S     S   \
                                                                  R (Router)
                                                                / | \
                                                               S  S  S (Sinks)

In the examples above:

The aggregator is a set of receivers: they might listen on different transports, expose multiple endpoints, etc. All messages are piped into a single Mux link and collected by a sink.

The demultiplexer is the opposite: a single receiver gets all messages from the outer world and proxies them to a demux link, which sends copies to several distinct sinks.

The last one might be interesting as it is way closer to a real-life configuration: a single receiver gets messages and passes them to a router. The router decides where each message should be directed and chooses one of the branches. The left branch is quite simple, but it contains an extra link: a buffer. If a message submission fails somewhere down the pipe (no matter where), it is retried by the buffer. The right branch starts with a demultiplexer, where one of the directions is a trivial sink, and the other is another router, which might use a routing key different from the one used by the upper router. And this ends up with a sophisticated setup of 3 more sinks.

A pipeline is defined using these basic link types. Links define corresponding methods in order to expose connectors:

  • ConnectTo(flow.Link)
  • LinkTo([]flow.Link)
  • RouteTo(map[string]flow.Link)

Here comes one important remark about connectors: RouteTo defines OR logic, where a message is dispatched to at most 1 link (hence the connectors are named using keys, and the message is never replicated). LinkTo, on the opposite side, defines AND logic: a message is dispatched to 0 or more links (the message is replicated).
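
To make these semantics concrete, here is a minimal wiring sketch in Go. ConnectTo, LinkTo, and RouteTo are the documented connector methods; every constructor below is a hypothetical placeholder rather than the actual flow API, and error handling is omitted:

  // Hypothetical constructors; only the connector methods are documented.
  rcv := udp.New("localhost:3101")   // receiver (none-to-one)
  dmx := demux.New()                 // demultiplexer (one-to-many)
  rtr := router.New("dest")          // router keyed on the "dest" meta attribute
  sinkA := tcp.New("backend-a:7222") // sinks (one-to-none)
  sinkB := tcp.New("backend-b:7222")
  sinkC := tcp.New("backend-c:7222")

  rcv.ConnectTo(dmx)                  // a single downstream connector
  dmx.LinkTo([]flow.Link{sinkA, rtr}) // AND logic: the message is replicated
  rtr.RouteTo(map[string]flow.Link{   // OR logic: at most one destination
      "b": sinkB,
      "c": sinkC,
  })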

Links

Flow core comes with a set of primitive links which might be of use in the majority of basic pipelines. These links can be combined to build extremely complex pipelines.

Core Links:

Receivers:

  • receiver.http: a none-to-one link, an HTTP receiver server.
  • receiver.tcp: a none-to-one link, a TCP receiver server.
  • receiver.udp: a none-to-one link, a UDP receiver server.
  • receiver.unix: a none-to-one link, a UNIX socket server.

Intermediate Links:

  • link.buffer: a one-to-one link, implements an intermediate buffer with lightweight retry logic.
  • link.mux: a many-to-one link (multiplexer), collects messages from N (N >= 1) links and pipes them into a single channel.
  • link.fanout: a one-to-many link, sends each message to exactly 1 link, rotating the destination after every submission.
  • link.meta_parser: a one-to-one link, parses prepended metadata in URL-encoded format. To be more specific: for a message in the format foo=bar&bar=baz <binary payload here>, the meta_parser link will extract the key-value pairs [foo=bar, bar=baz] and trim the payload accordingly. This might be useful in combination with a router: a client provides URL-encoded key-value attributes, and the router performs some routing logic (see the sketch after this list).
  • link.demux: a one-to-many link, demultiplexes copies of messages to N (N >= 0) links and reports the composite status back.
  • link.replicator: a one-to-many link, implements consistent-hash replication logic. Accepts the number of replicas and the hashing key to be used. If no key is provided, it hashes the entire message body.
  • link.router: a one-to-many link, sends messages to at most 1 link based on a message meta attribute (the attribute name is configurable).
  • link.throttler: a one-to-one link, implements rate-limiting functionality.
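
To illustrate the metadata format consumed by link.meta_parser, here is a self-contained sketch using only the standard library. It demonstrates the parsing idea, not the link's actual implementation:

  // Split a message of the form "foo=bar&bar=baz <binary payload>" into
  // URL-encoded metadata and the trimmed payload.
  package main

  import (
      "bytes"
      "fmt"
      "net/url"
  )

  func splitMeta(msg []byte) (url.Values, []byte, error) {
      ix := bytes.IndexByte(msg, ' ')
      if ix < 0 {
          return nil, msg, nil // no meta prefix present
      }
      meta, err := url.ParseQuery(string(msg[:ix]))
      if err != nil {
          return nil, msg, err
      }
      return meta, msg[ix+1:], nil
  }

  func main() {
      meta, payload, _ := splitMeta([]byte("foo=bar&bar=baz hello"))
      fmt.Println(meta.Get("foo"), string(payload)) // prints: bar hello
  }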

Sinks:

  • sink.dumper: a one-to-none link, dumps messages into a file (including STDOUT and STDERR).
  • sink.tcp: a one-to-none link, sends messages to a TCP endpoint.
  • sink.udp: a one-to-none link, sends messages to a UDP endpoint.
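
Links are assembled into a pipeline through a YAML configuration. Below is a minimal sketch wiring a UDP receiver through a buffer into a TCP sink; it mirrors the flowd config format shown in the issues further down, with the link.buffer stanza and the chained connect being assumptions:

  components:
    udp_rcv:
      module: receiver.udp
      params:
        bind_addr: localhost:3101
    buf:
      module: link.buffer
    tcp_sink:
      module: sink.tcp
      params:
        bind_addr: localhost:7222

  pipeline:
    udp_rcv:
      connect: buf
    buf:
      connect: tcp_sink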

Messages

flowd is supposed to pass messages. From the user's perspective, a message is a binary payload with a set of key-value metadata tied to it.

Internally, messages are stateful. The message initiator can subscribe to message updates. Pipeline links pass messages top-down. Every link can stop message propagation immediately and finalize the message. The termination notification bubbles up to the initiator (this mechanism is used for synchronous message submission: senders can report the exact submission status back).

  Message lifecycle
  +-----------------+
  | message created |  < . . . . .
  +-----------------+            .
           |  <-------+          .
           V          |          .
  +----------------+  |          .
  | passed to link |  | N times  .
  +----------------+  |          .
           |          |          .
           +----------+          .
           |                     . Ack
           V                     .
        +------+                 .
        | sink |                 .
        +------+                 .
           |                     .
           V                     .
     +-----------+               .
     | finalized | . . . . . . . .
     +-----------+
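
The synchronous-submission mechanics can be illustrated with plain Go channels: the initiator attaches a one-shot status channel to the message, and whichever link finalizes the message reports through it exactly once. The types below are illustrative, not flow's actual ones:

  package main

  import "fmt"

  type MsgStatus int

  const (
      MsgStatusDone MsgStatus = iota
      MsgStatusFailed
  )

  type Message struct {
      Payload []byte
      ackCh   chan MsgStatus // buffered: the finalizer reports exactly once
  }

  func main() {
      msg := &Message{
          Payload: []byte("hello"),
          ackCh:   make(chan MsgStatus, 1),
      }

      // Stands in for the chain of links ending with a sink.
      go func(m *Message) {
          m.ackCh <- MsgStatusDone // finalize: the Ack bubbles up to the initiator
      }(msg)

      fmt.Println("final status:", <-msg.ackCh) // blocks until finalized
  }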

The intermediate loop of responsibility

Links like the multiplexer (MPX) multiply messages to 0 or more links and report the composite status. In order to send an accurate submission status back, they implement a behavior we call intermediate responsibility: these links behave like implicit message producers and subscribe to notifications from all messages they emit.

Once all multiplexed messages have reported their submission status (or a timeout has fired), the link reports the composite status update back: it might be a timeout, a partial send, a total failure, or a total success. To the upstream links this behavior is completely invisible: they only receive the original message status update.

  The intermediate loop of responsibility

               +----------+
               | Producer | < .
               +----------+   . Composite
                     |        . status 
                     V        . update
                  +-----+ . . .
                  | MPX |
    . . . . . >   +-----+    < . . . . . 
    .               /|\                .
    .             /  |  \              . Individual
    .           /    |    \            . status
    .         /      |      \          . update
    . +-------+  +-------+  +--------+ .
      | Link1 |  | Link2 |  | Link 3 |
      +-------+  +-------+  +--------+
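
A sketch of how such a composite status can be computed: wait for every individual status update or a shared timeout, then reduce the results to a single update. The reduction rules mirror the statuses listed in the next section; the types are illustrative:

  package main

  import (
      "fmt"
      "time"
  )

  type MsgStatus int

  const (
      MsgStatusDone MsgStatus = iota
      MsgStatusPartialSend
      MsgStatusFailed
      MsgStatusTimedOut
  )

  func compositeStatus(acks []chan MsgStatus, timeout time.Duration) MsgStatus {
      deadline := time.After(timeout) // one deadline shared by all waits
      done := 0
      for _, ack := range acks {
          select {
          case st := <-ack:
              if st == MsgStatusDone {
                  done++
              }
          case <-deadline:
              return MsgStatusTimedOut
          }
      }
      switch done {
      case len(acks):
          return MsgStatusDone // total success
      case 0:
          return MsgStatusFailed // total failure
      default:
          return MsgStatusPartialSend // partial send
      }
  }

  func main() {
      acks := make([]chan MsgStatus, 3)
      for i := range acks {
          acks[i] = make(chan MsgStatus, 1)
          acks[i] <- MsgStatusDone // every downstream link succeeds
      }
      fmt.Println(compositeStatus(acks, time.Second)) // 0, i.e. MsgStatusDone
  }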

Message Status Updates

A message reports its status exactly once. Once a message has reported its submission status, it is finalized: nothing more will be done with it.

Message statuses are pre-defined:

  • MsgStatusNew: In-flight status.
  • MsgStatusDone: Full success status.
  • MsgStatusPartialSend: Partial success.
  • MsgStatusInvalid: Message processing terminated due to an external error (a malformed message).
  • MsgStatusFailed: Message processing terminated due to an internal error.
  • MsgStatusTimedOut: Message processing terminated due to a timeout.
  • MsgStatusUnroutable: Message type or destination is unknown.
  • MsgStatusThrottled: Message processing terminated due to internal rate limits.

Pipeline commands

Sometimes there is a need to send control signals to components. A component that is intended to react to these signals overrides the method ExecCmd(*flow.Cmd) error. If a component keeps some internal hierarchy of links, it can use the same API to send custom commands to them.

It is the pipeline that keeps knowledge of the component hierarchy, and it represents it as a tree internally. Commands propagate either top-down or bottom-up. The pipeline implements the method ExecCmd(*flow.Cmd, flow.CmdPropagation).

The second argument indicates the direction in which a command is propagated. Say, the pipeline start command should take effect bottom-up: receivers should be activated last. On the other hand, stopping the pipeline should be applied top-down, as deactivating the receivers first allows in-flight messages to be flushed safely.

flow.Cmd is a structure, not just a constant, for a reason: it allows command instances to be extended with an attached payload.

Flow command constants are named:

  • CmdCodeStart
  • CmdCodeStop
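
A sketch of a custom link reacting to these commands. ExecCmd(*flow.Cmd) error is the documented hook; the Code field on flow.Cmd and the embedded base link are assumptions:

  type throttler struct {
      flow.Link   // hypothetical embedded base link
      active bool
  }

  func (t *throttler) ExecCmd(cmd *flow.Cmd) error {
      switch cmd.Code { // assuming the command exposes its code as a field
      case flow.CmdCodeStart:
          t.active = true // start propagates bottom-up: receivers go last
      case flow.CmdCodeStop:
          t.active = false // stop propagates top-down: receivers go first
      }
      return nil
  }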

Modularity and Plugin Infrastructure

See Flow plugins.

Copyright

This software was created by Oleg Sidorov in 2018–2019. It uses some ideas and code samples written by Ivan Kruglov and Damian Gryski and is partially inspired by their work. The major concept is inspired by the GStreamer pipeline ecosystem.

This software is distributed under the MIT license. See the LICENSE file for the full license text.

flow's People

Contributors: dizzy57, ikkeps, makambi, osdrv, vnechypor

flow's Issues

Race condition in fanout link

WARNING: DATA RACE
Read at 0x00c0000f81c0 by goroutine 19:
  github.com/awesome-flow/flow/pkg/link/fanout.(*Fanout).fanout()
      /Users/olegs/workspace/golang/src/github.com/awesome-flow/flow/pkg/link/fanout/fanout.go:45 +0xaf
  github.com/awesome-flow/flow/pkg/link/fanout.New.func1()
      /Users/olegs/workspace/golang/src/github.com/awesome-flow/flow/pkg/link/fanout/fanout.go:32 +0x42

Previous write at 0x00c0000f81c0 by goroutine 17:
  github.com/awesome-flow/flow/pkg/link/fanout.(*Fanout).fanout()
      /Users/olegs/workspace/golang/src/github.com/awesome-flow/flow/pkg/link/fanout/fanout.go:52 +0x11d
  github.com/awesome-flow/flow/pkg/link/fanout.New.func1()
      /Users/olegs/workspace/golang/src/github.com/awesome-flow/flow/pkg/link/fanout/fanout.go:32 +0x42

Goroutine 19 (running) created at:
  github.com/awesome-flow/flow/pkg/link/fanout.New()
      /Users/olegs/workspace/golang/src/github.com/awesome-flow/flow/pkg/link/fanout/fanout.go:31 +0x1fe
  github.com/awesome-flow/flow/pkg/link/fanout.TestFanout_Send()
      /Users/olegs/workspace/golang/src/github.com/awesome-flow/flow/pkg/link/fanout/fanout_test.go:29 +0xdf
  testing.tRunner()
      /usr/local/Cellar/go/1.12.1/libexec/src/testing/testing.go:865 +0x163

Goroutine 17 (running) created at:
  github.com/awesome-flow/flow/pkg/link/fanout.New()
      /Users/olegs/workspace/golang/src/github.com/awesome-flow/flow/pkg/link/fanout/fanout.go:31 +0x1fe
  github.com/awesome-flow/flow/pkg/link/fanout.TestFanout_Send()
      /Users/olegs/workspace/golang/src/github.com/awesome-flow/flow/pkg/link/fanout/fanout_test.go:29 +0xdf
  testing.tRunner()
      /usr/local/Cellar/go/1.12.1/libexec/src/testing/testing.go:865 +0x163
==================
--- FAIL: TestFanout_Send (0.01s)

Remarks on Go's plugin system

This isn't really an issue, just some stuff I've seen while looking into Go's plugin system.

I've found some hints in the Go mailing lists that back up what's claimed in this Reddit comment. It claims that the plugin system has the following limitations:

  1. A plugin must be compiled with the same Go toolchain as the code it will be used in.
  2. Any common dependencies between the plugin and the code must be the same.

This discussion on Go-nuts and this issue seem to confirm (1).

I guess both (1) and (2) follow from how the plugin system is supposed to work. It just makes plugin symbols available to outside programs, and if those symbols point to other symbols (that is, standard library dependencies or 3rd party dependencies) you get problems (1) and (2), respectively, if you want to be sure that the plugin behaves correctly.

It's probably easier to deal with (1) in an open source project, as plugin users have the sources for everything they want at compile time. (2) might be trickier, as a plugin and the core might both depend on different, incompatible versions of the same 3rd-party package. No amount of vendoring or modules can deal with that. (This is a problem as well with (1), but we can probably count on the standard library being backwards API compatible.)

If this is more or less correct, then a core system that wants to go all-in on plugins should use as few third-party dependencies as possible, and of the ones it uses, carefully choose ones whose maintainers have proven to care deeply about backwards API compatibility. (Or defer all of its third-party dependencies to plugins itself? Want logging? Provide a plugin that implements an interface. Seems hairy.)
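
For reference, loading a symbol from a Go plugin uses the standard library's plugin package, and both limitations above surface at load time. The plugin path and symbol name below are hypothetical:

  package main

  import (
      "fmt"
      "log"
      "plugin"
  )

  func main() {
      // Built with: go build -buildmode=plugin (same toolchain as the host).
      p, err := plugin.Open("connector.so")
      if err != nil {
          log.Fatal(err) // toolchain/dependency mismatches fail here
      }
      sym, err := p.Lookup("NewConnector")
      if err != nil {
          log.Fatal(err)
      }
      ctor, ok := sym.(func() interface{}) // the symbol's type must match exactly
      if !ok {
          log.Fatal("unexpected symbol type")
      }
      fmt.Printf("constructed: %T\n", ctor())
  }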

Data corruption by the regular UDP receiver

The problem was detected by observing the replicator distribution. It was given a single metric, replication factor 1, and 12 distinct hosts. The metric must be routed to the same host every time, but that was not always the case. Some messages were routed to sibling hosts, and a quick experiment with changing the backend (I was lucky enough to get the evio backend working as expected) exposed the problem pretty clearly: data corruption at the receiver level.

I changed the replicator code like this:

diff --git a/pkg/link/replicator/replicator.go b/pkg/link/replicator/replicator.go
index 8038b70..7b0da3a 100644
--- a/pkg/link/replicator/replicator.go
+++ b/pkg/link/replicator/replicator.go
@@ -157,6 +157,10 @@ func (repl *Replicator) linksForKey(key []byte) ([]core.Link, error) {
        h := hObj.Sum64()
        i := len(linksCp)
 
+       if h != 1956225504491267871 {
+               panic(fmt.Sprintf("Wrong hash value: %d, based on the key: [%s]", h, key))
+       }
+
        res := make([]core.Link, repl.replFactor)
        resIx := 0
        for i > 0 {

and got this message right away (newline preserved from the original output):

panic: Wrong hash value: 13409471506710027011, based on the key: [
o.bar.baz]

The UDP receiver code needs careful inspection. Once the root cause is known, the same should be done for the other receivers.

Implement a compressor link

This link should compress the message payload with distinct compression algorithms.
The first iteration should support the standard algorithms from the compress package:
bzip2, flate, gzip, lzw, zlib.
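
For the gzip case, the core of such a link could be as small as the sketch below, using only the standard library. Note that compress/bzip2 implements decompression only, so the compressing side of bzip2 would need a 3rd-party writer:

  package main

  import (
      "bytes"
      "compress/gzip"
      "fmt"
  )

  // gzipPayload compresses a message payload with the standard gzip writer.
  func gzipPayload(payload []byte) ([]byte, error) {
      var buf bytes.Buffer
      zw := gzip.NewWriter(&buf)
      if _, err := zw.Write(payload); err != nil {
          return nil, err
      }
      if err := zw.Close(); err != nil { // Close flushes the gzip footer
          return nil, err
      }
      return buf.Bytes(), nil
  }

  func main() {
      out, err := gzipPayload([]byte("some message payload"))
      fmt.Println(len(out), err)
  }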

Separate setup and connect logic of links

Review core links (including receivers and sinks) and separate dry initialization from the actual connection logic. The latter should only happen as a reaction to the Start command. This will let the pipeline initialize safely without actually connecting to the endpoints.

Segfault on tcp sender instantiation

flowd config file:

components:
  udp_rcv:
    module: receiver.udp
    params:
      bind_addr: localhost:3101
  tcp_sink:
    module: sink.tcp
    params:
      bind_addr: localhost:7222

pipeline:
  udp_rcv:
    connect: tcp_sink
olegs@oleg-ThinkPad:~/workspace/golang/src/github.com/whiteboxio/flow$ make build && ./flowd -config configs/udp2tcp-config.yml
github.com/whiteboxio/flow/pkg/pipeline
github.com/whiteboxio/flow/cmd/flowd
INFO[0000] Starting flowd version 1
INFO[0000] Initializing the pipeline
INFO[0000] Instantiating standard backend for UDP receiver
INFO[0000] Instantited a new TCP
INFO[0000] Connecting udp_rcv to tcp_sink
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x857454]

goroutine 1 [running]:
github.com/whiteboxio/flow/pkg/pipeline.(*Pipeline).applySysCfg(0xc42014c420, 0xc42014c420, 0xc42018c000)
        /home/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/pipeline/pipeline.go:240 +0x74
github.com/whiteboxio/flow/pkg/pipeline.NewPipeline(0xc42009d800, 0xc42009d950, 0x0, 0x0, 0x0)
        /home/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/pipeline/pipeline.go:136 +0x153e
main.main()
        /home/olegs/workspace/golang/src/github.com/whiteboxio/flow/cmd/flowd/main.go:41 +0x17c

Fix import cycle in util/data package

This is being output at the beginning:

package github.com/awesome-flow/flow/pkg/util/data (test)
imports github.com/awesome-flow/flow/pkg/core
imports github.com/awesome-flow/flow/pkg/config
imports github.com/awesome-flow/flow/pkg/util/data
FAIL    github.com/awesome-flow/flow/pkg/util/data [setup failed]

Performance optimisation

Perform a deep analysis of the major bottlenecks in the system. Review the parallelization strategy and shared-memory interactions. Review the Go plugin interface and conduct detailed measurements.

A bug in plugin initialization

If a plugin config is missing the constructor field, flow fails with the error "symbol not found in plugin github.com/awesome-flow/flow-plugins/..."

Concurrent map read and map write in Evio receiver

System: Darwin XXX Darwin Kernel Version 17.7.0: Thu Jun 21 22:53:14 PDT 2018; root:xnu-4570.71.2~1/RELEASE_X86_64 x86_64

Go version: go1.10.3 darwin/amd64

Stack trace:

INFO[0000] Starting msgrelay version 1
INFO[0000] Initializing the pipeline
INFO[0000] Starting Evio receiver. Listeners: [tcp://:3110]
INFO[0000] Connecting tcp_rcv to meta_parser
INFO[0000] Connecting meta_parser to mpx
INFO[0000] Linking mpx with [tcp_sink_7222 tcp_sink_7223 tcp_sink_7224 tcp_sink_7225 tcp_sink_7226 tcp_sink_7227]
INFO[0000] Setting GOMAXPROCS to 0
INFO[0000] Pipeline initalization is complete
INFO[0000] Pipeline GraphViz diagram (plot using https://www.planttext.com):
digraph G {
    tcp_rcv -> meta_parser
    meta_parser -> mpx
    mpx -> tcp_sink_7222
    mpx -> tcp_sink_7223
    mpx -> tcp_sink_7224
    mpx -> tcp_sink_7225
    mpx -> tcp_sink_7226
    mpx -> tcp_sink_7227
}
INFO[0000] Activating the pipeline
INFO[0000] Pipeline successfully activated
fatal error: concurrent map read and map write

goroutine 38 [running]:
runtime.throw(0x43d7b08, 0x21)
	/usr/local/Cellar/go/1.10.3/libexec/src/runtime/panic.go:616 +0x81 fp=0xc420050bc0 sp=0xc420050ba0 pc=0x402b9d1
runtime.mapaccess2_faststr(0x436c040, 0xc4205a0f30, 0x43cd4ae, 0x4, 0x1, 0x1)
	/usr/local/Cellar/go/1.10.3/libexec/src/runtime/hashmap_fast.go:270 +0x461 fp=0xc420050c30 sp=0xc420050bc0 pc=0x400ce81
github.com/whiteboxio/flow/pkg/core.(*Message).IsSync(...)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/core/message.go:83
github.com/whiteboxio/flow/pkg/receiver/evio.New.func3(0x4418120, 0xc42012e120, 0xc420344960, 0x18, 0x20, 0xc420344960, 0x0, 0x20, 0xc420092a00)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/receiver/evio/evio.go:99 +0x5d4 fp=0xc420050df0 sp=0xc420050c30 pc=0x42ea664
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.loopRead(0xc4201d2000, 0xc42015e400, 0xc42012e120, 0xc42012e1f8, 0x0)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:408 +0x2e0 fp=0xc420050eb0 sp=0xc420050df0 pc=0x42e7f70
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.loopRun.func2(0x13, 0x0, 0x0, 0x0, 0xc420050f78)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:232 +0xe6 fp=0xc420050f00 sp=0xc420050eb0 pc=0x42e9256
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/internal.(*Poll).Wait(0xc42015e3c0, 0xc420051fa8, 0xc4201d2000, 0x0)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/internal/internal_bsd.go:72 +0xad fp=0xc420051f88 sp=0xc420050f00 pc=0x42e15bd
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.loopRun(0xc4201d2000, 0xc42015e400)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:217 +0x93 fp=0xc420051fd0 sp=0xc420051f88 pc=0x42e6813
runtime.goexit()
	/usr/local/Cellar/go/1.10.3/libexec/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc420051fd8 sp=0xc420051fd0 pc=0x4058151
created by github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.serve
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:150 +0x5d3

goroutine 1 [chan receive, 166 minutes]:
main.main()
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/cmd/flowd/main.go:59 +0x4ac

goroutine 19 [syscall, 166 minutes]:
os/signal.signal_recv(0x0)
	/usr/local/Cellar/go/1.10.3/libexec/src/runtime/sigqueue.go:139 +0xa7
os/signal.loop()
	/usr/local/Cellar/go/1.10.3/libexec/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
	/usr/local/Cellar/go/1.10.3/libexec/src/os/signal/signal_unix.go:28 +0x41

goroutine 4 [sleep]:
time.Sleep(0x3b9aca00)
	/usr/local/Cellar/go/1.10.3/libexec/src/runtime/time.go:102 +0x166
github.com/whiteboxio/flow/pkg/metrics.sendMetrics(0xc420062070, 0x0)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/metrics/metrics.go:62 +0x3f
github.com/whiteboxio/flow/pkg/metrics.Initialize.func1(0xc420062070)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/metrics/metrics.go:50 +0x59
created by github.com/whiteboxio/flow/pkg/metrics.Initialize
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/metrics/metrics.go:47 +0x1a6

goroutine 5 [select]:
github.com/whiteboxio/flow/pkg/link/mpx.(*MPX).multiplex(0xc4200600c0)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/link/mpx/mpx.go:66 +0x1f2
created by github.com/whiteboxio/flow/pkg/link/mpx.NewMPX
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/link/mpx/mpx.go:24 +0x1cc

goroutine 8 [semacquire, 166 minutes]:
sync.runtime_notifyListWait(0xc42015e1d0, 0x0)
	/usr/local/Cellar/go/1.10.3/libexec/src/runtime/sema.go:510 +0x10b
sync.(*Cond).Wait(0xc42015e1c0)
	/usr/local/Cellar/go/1.10.3/libexec/src/sync/cond.go:56 +0x80
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.(*server).waitForShutdown(0xc4201d2000)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:67 +0x4f
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.serve.func1(0xc4201d2000)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:114 +0x40
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.serve(0xffffffffffffffff, 0x0, 0x0, 0x43e53b8, 0x43e53c0, 0x0, 0x0, 0xc420046090, 0x0, 0xc42015a020, ...)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:152 +0x5fc
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.Serve(0xffffffffffffffff, 0x0, 0x0, 0x43e53b8, 0x43e53c0, 0x0, 0x0, 0xc420046090, 0x0, 0xc4200460a0, ...)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio.go:189 +0x680
github.com/whiteboxio/flow/pkg/receiver/evio.New.func4(0xc4201400a0, 0xc4200460a0, 0x1, 0x1)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/receiver/evio/evio.go:127 +0x86
created by github.com/whiteboxio/flow/pkg/receiver/evio.New
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/receiver/evio/evio.go:126 +0x458

goroutine 13 [chan receive]:
github.com/whiteboxio/flow/pkg/core.(*Connector).ConnectTo.func1(0xc42000c1a0, 0x44189a0, 0xc42000c0c0)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/core/components.go:45 +0x74
created by github.com/whiteboxio/flow/pkg/core.(*Connector).ConnectTo
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/core/components.go:44 +0x53

goroutine 14 [chan receive]:
github.com/whiteboxio/flow/pkg/core.(*Connector).ConnectTo.func1(0xc42000c0a0, 0x4418a00, 0xc4200600c0)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/core/components.go:45 +0x74
created by github.com/whiteboxio/flow/pkg/core.(*Connector).ConnectTo
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/core/components.go:44 +0x53

goroutine 15 [select, 166 minutes, locked to thread]:
runtime.gopark(0x43e5d08, 0x0, 0x43cda2f, 0x6, 0x18, 0x1)
	/usr/local/Cellar/go/1.10.3/libexec/src/runtime/proc.go:291 +0x11a
runtime.selectgo(0xc420158750, 0xc4201920c0)
	/usr/local/Cellar/go/1.10.3/libexec/src/runtime/select.go:392 +0xe50
runtime.ensureSigM.func1()
	/usr/local/Cellar/go/1.10.3/libexec/src/runtime/signal_unix.go:549 +0x1c6
runtime.goexit()
	/usr/local/Cellar/go/1.10.3/libexec/src/runtime/asm_amd64.s:2361 +0x1

goroutine 35 [syscall, 166 minutes]:
syscall.Syscall6(0x16b, 0xc, 0x0, 0x0, 0xc420220f78, 0x80, 0x0, 0x1, 0x0, 0x0)
	/usr/local/Cellar/go/1.10.3/libexec/src/syscall/asm_darwin_amd64.s:41 +0x5
syscall.kevent(0xc, 0x0, 0x0, 0xc420220f78, 0x80, 0x0, 0x1, 0xc4201d2000, 0xc42015e280)
	/usr/local/Cellar/go/1.10.3/libexec/src/syscall/zsyscall_darwin_amd64.go:202 +0x9d
syscall.Kevent(0xc, 0xc420170160, 0x0, 0x1, 0xc420220f78, 0x80, 0x80, 0x0, 0x1, 0x0, ...)
	/usr/local/Cellar/go/1.10.3/libexec/src/syscall/syscall_bsd.go:447 +0x71
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/internal.(*Poll).Wait(0xc42015e240, 0xc420221fa8, 0xc4201d2000, 0x0)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/internal/internal_bsd.go:60 +0x13c
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.loopRun(0xc4201d2000, 0xc42015e280)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:217 +0x93
created by github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.serve
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:150 +0x5d3

goroutine 36 [syscall, 166 minutes]:
syscall.Syscall6(0x16b, 0x9, 0x0, 0x0, 0xc420056f78, 0x80, 0x0, 0x1, 0x0, 0x0)
	/usr/local/Cellar/go/1.10.3/libexec/src/syscall/asm_darwin_amd64.s:41 +0x5
syscall.kevent(0x9, 0x0, 0x0, 0xc420056f78, 0x80, 0x0, 0x1, 0xc4201d2000, 0xc42015e300)
	/usr/local/Cellar/go/1.10.3/libexec/src/syscall/zsyscall_darwin_amd64.go:202 +0x9d
syscall.Kevent(0x9, 0xc420170180, 0x0, 0x1, 0xc420056f78, 0x80, 0x80, 0x0, 0x1, 0x0, ...)
	/usr/local/Cellar/go/1.10.3/libexec/src/syscall/syscall_bsd.go:447 +0x71
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/internal.(*Poll).Wait(0xc42015e2c0, 0xc420057fa8, 0xc4201d2000, 0x0)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/internal/internal_bsd.go:60 +0x13c
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.loopRun(0xc4201d2000, 0xc42015e300)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:217 +0x93
created by github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.serve
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:150 +0x5d3

goroutine 37 [syscall, 166 minutes]:
syscall.Syscall6(0x16b, 0xd, 0x0, 0x0, 0xc42021cf78, 0x80, 0x0, 0x1, 0x0, 0x0)
	/usr/local/Cellar/go/1.10.3/libexec/src/syscall/asm_darwin_amd64.s:41 +0x5
syscall.kevent(0xd, 0x0, 0x0, 0xc42021cf78, 0x80, 0x0, 0x1, 0xc4201d2000, 0xc42015e380)
	/usr/local/Cellar/go/1.10.3/libexec/src/syscall/zsyscall_darwin_amd64.go:202 +0x9d
syscall.Kevent(0xd, 0xc4201701a0, 0x0, 0x1, 0xc42021cf78, 0x80, 0x80, 0x0, 0x1, 0x0, ...)
	/usr/local/Cellar/go/1.10.3/libexec/src/syscall/syscall_bsd.go:447 +0x71
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/internal.(*Poll).Wait(0xc42015e340, 0xc42021dfa8, 0xc4201d2000, 0x0)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/internal/internal_bsd.go:60 +0x13c
github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.loopRun(0xc4201d2000, 0xc42015e380)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:217 +0x93
created by github.com/whiteboxio/flow/vendor/github.com/tidwall/evio.serve
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/vendor/github.com/tidwall/evio/evio_unix.go:150 +0x5d3

goroutine 205895316 [runnable]:
github.com/whiteboxio/flow/pkg/link/mpx.(*MPX).multiplex.func1(0xc42015a000, 0xc4204e4930, 0xc4205cbc69, 0x4418d60, 0xc4200601c0)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/link/mpx/mpx.go:46
created by github.com/whiteboxio/flow/pkg/link/mpx.(*MPX).multiplex
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/link/mpx/mpx.go:46 +0xc9

goroutine 205895314 [runnable]:
net.(*conn).SetDeadline(0xc42009c2e8, 0xbed8c6f0120b39b3, 0x910613b6721, 0x45d1460, 0x0, 0x0)
	/usr/local/Cellar/go/1.10.3/libexec/src/net/net.go:228 +0x20a
github.com/whiteboxio/flow/pkg/sink/tcp.(*TCP).Recv(0xc420060180, 0xc42021ae40, 0x0, 0x0)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/sink/tcp/tcp.go:71 +0xf5
github.com/whiteboxio/flow/pkg/link/mpx.(*MPX).multiplex.func1(0xc42015a000, 0xc4204e4930, 0xc4205cbc69, 0x4418d60, 0xc420060180)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/link/mpx/mpx.go:48 +0xff
created by github.com/whiteboxio/flow/pkg/link/mpx.(*MPX).multiplex
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/link/mpx/mpx.go:46 +0xc9

goroutine 205895315 [runnable]:
github.com/whiteboxio/flow/pkg/link/mpx.(*MPX).multiplex.func1(0xc42015a000, 0xc4204e4930, 0xc4205cbc69, 0x4418d60, 0xc420060140)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/link/mpx/mpx.go:46
created by github.com/whiteboxio/flow/pkg/link/mpx.(*MPX).multiplex
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/link/mpx/mpx.go:46 +0xc9

goroutine 205895265 [runnable]:
github.com/whiteboxio/flow/pkg/link/mpx.(*MPX).multiplex.func1(0xc42015a000, 0xc4204e4930, 0xc4205cbc69, 0x4418d60, 0xc420060100)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/link/mpx/mpx.go:46
created by github.com/whiteboxio/flow/pkg/link/mpx.(*MPX).multiplex
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/link/mpx/mpx.go:46 +0xc9

goroutine 205895264 [runnable]:
github.com/whiteboxio/flow/pkg/link/mpx.(*MPX).multiplex.func1(0xc42015a000, 0xc4204e4930, 0xc4205cbc69, 0x4418d60, 0xc420060240)
	/Users/olegs/workspace/golang/src/github.com/whiteboxio/flow/pkg/link/mpx/mpx.go:46
created by github.com/whiteboxio/flow/pkg/link/mpx.(*MPX).multiplex
