
Peer-to-peer cluster implementation for Swift Distributed Actors

Home Page: https://apple.github.io/swift-distributed-actors/

License: Apache License 2.0


Swift Distributed Actors

A peer-to-peer cluster actor system implementation for Swift.

NOTE: This project provides a cluster runtime implementation for the distributed actors language feature.

Introduction

Beta software

Important: This library is currently released as beta preview software. While we anticipate very few changes, please be mindful that until a stable 1.0 version is announced, this library does not guarantee source compatibility.

We anticipate releasing a number of 1.0.0-beta-n releases during the beta phase of Swift 5.7, followed by a source-stable 1.0 soon after.

Most APIs and the runtime are rather stable and have proven themselves over a long time already. Most of the remaining work is making sure all APIs work nicely with the latest revision of the distributed actor language feature.

Important: Please ignore, and do not use, any functionality prefixed with an _ (such as types and methods), as it is intended to be removed by the time we reach the stable 1.0 release.

What are Distributed Actors?

Distributed actors are an extension of the "local only" actor model offered by Swift with its actor keyword.

Distributed actors are declared using the distributed actor keywords (and importing the Distributed module), and enable declaring distributed func methods inside such actors. These methods may then be invoked remotely, from other peers in a distributed actor system.
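As a minimal sketch of the language feature itself (the Greeter actor and its method are hypothetical names for illustration; the LocalTestingDistributedActorSystem used here ships with the Distributed module, so no clustering library is required):

```swift
import Distributed

// A hypothetical distributed actor for illustration purposes only.
distributed actor Greeter {
    // Pick the actor system implementation this actor runs on; here the
    // standard library's local-testing system, so the example is self-contained.
    typealias ActorSystem = LocalTestingDistributedActorSystem

    // A distributed func may be invoked from other peers in the system.
    // Its parameters and return type must satisfy the system's
    // serialization requirement (Codable for this system).
    distributed func greet(name: String) -> String {
        "Hello, \(name)!"
    }
}

let system = LocalTestingDistributedActorSystem()
// Distributed actors get a synthesized init(actorSystem:) when no
// initializer is declared.
let greeter = Greeter(actorSystem: system)
// Even on a local instance, callers must use `try await`, because the
// call could in general cross the network and fail.
let reply = try await greeter.greet(name: "distributed world")
print(reply)
```

Note how the call site is identical whether the instance turns out to be local or remote; only the actor system decides how the invocation is delivered.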

The distributed actor language feature does not include any specific runtime; it only defines the language and semantic rules surrounding distributed actors. This library provides a feature-rich, server-side focused clustering implementation of such a runtime (i.e. a DistributedActorSystem implementation) for distributed actors.

To learn more about both the language feature and library, please refer to the reference documentation of this project.

The primary purpose of open sourcing this library early is to prove that a feature-complete, compelling clustering solution can be built using the distributed actor language feature, and to co-evolve the two in tandem.

Samples

You can refer to the Samples/ directory for a number of more realistic sample apps that showcase how distributed actors can be used in a cluster.

The most "classical" example of distributed actors is the SampleDiningPhilosophers.

You can run it on a single node (swift run --package-path Samples/ SampleDiningPhilosophers), or as 3 cluster nodes hosted on the same physical machine: swift run --package-path Samples/ SampleDiningPhilosophers distributed. Notice that one does not need to change the implementation of the distributed actors to run them in either "local" or "distributed" mode.

Documentation

Please refer to the rendered docc reference documentation to learn about distributed actors and how to use this library and its various features.

Note: Documentation is still a work in progress; please feel free to submit issues or patches about missing or unclear documentation.

Development

This library requires beta releases of Swift (5.7+) and Xcode to function properly, as the distributed actor feature is part of that Swift release.

When developing on macOS, please also make sure to update to the latest beta of macOS, as some functionality necessary for distributed actors to work is shipped in the Swift runtime library included with the OS.

Distributed actors are not back-deployed and require the latest versions of iOS, macOS, watchOS etc.

When developing on Linux systems, you can download the latest Swift 5.7 toolchain from swift.org/downloads and use it to try out or run the library like this:

$ export TOOLCHAIN=/path/to/toolchain
$ $TOOLCHAIN/usr/bin/swift test
$ $TOOLCHAIN/usr/bin/swift run --package-path Samples/ SampleDiningPhilosophers dist

IDE Support: Xcode

The latest (beta) Xcode releases include complete support for the distributed language syntax (distributed actor, distributed func), so please use the latest beta Xcode available to edit this project and any projects using distributed actors.

IDE Support: Other IDEs

It is possible to open and edit this project in other IDEs; however, most IDEs have not yet caught up with the latest language syntax (i.e. distributed actor) and may therefore have trouble understanding it.

VSCode

You can use the VSCode extension maintained by the Swift Server Work Group to edit this project from VSCode.

You can install the VSCode extension from here.

The extension uses sourcekit-lsp and thus should be able to highlight and edit sources using distributed actors just fine. If it does not, please report issues!

CLion

The project can be opened in CLion as a SwiftPM package project; however, CLion and AppCode do not yet support the new distributed syntax, so they might have issues formatting the code until support is implemented.

See also the following guides by community members about using CLion for Swift development:

Warnings

The project currently emits many warnings about Sendable; this is expected, and we are slowly working towards removing them.

Much of the project's internals use advanced synchronization patterns not recognized by Sendable checks, so many of the warnings are incorrect, but the compiler has no way of knowing this. We will be removing much of these internals as we move them to use the Swift actor runtime instead.

Documentation workflow

Documentation for this project is built using DocC, the documentation compiler, via the SwiftPM DocC plugin.

If you are not familiar with DocC syntax and general style, please refer to its documentation: https://developer.apple.com/documentation/docc

The project includes two helper scripts to build and preview the documentation.

To build documentation:

./scripts/docs/generate_docc.sh

And to preview and browse the documentation as a web-page, run:

./scripts/docs/preview_docc.sh

which will produce output similar to this:

========================================
Starting Local Preview Server
	          http://localhost:8000/documentation/distributecluster

Integration tests

Integration tests include running actual clusters of multiple nodes and, for example, killing nodes to test the recovery mechanisms of the cluster.

Requirements:

  • macOS: brew install coreutils to install stdbuf

Supported Versions

This project requires Swift 5.7+.

swift-distributed-actors's People

Contributors

acche, akbashev, artemredkin, avolokhov, budde, christopherweems, ddunbar, discogestalt, drexin, erik-apple, hassila, jaapwijnen, ktoso, kyle-ye, maxdesiatov, maximbazarov, orobio, orta, peteradams-a, popflamingo, rmorey, ruslanskorb, tomerd, weissi, yim-lee


swift-distributed-actors's Issues

Make ActorTestProbe aware of eventually context

Right now, the expectMessage matchers fail the test immediately the first time they fail, meaning that inside an eventually block the test can fail before the outer timeout expires. This is unexpected and undesired behavior and should be changed so that the test only fails after the outer timeout expires.
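The desired semantics can be sketched with a standalone eventually helper (a simplified, hypothetical stand-in for the ActorTestKit API, not its actual implementation): inner failures are swallowed and retried, and only the last error surfaces once the outer deadline has passed.

```swift
import Foundation

// Error thrown only after the *outer* timeout expires, carrying the last
// inner failure for diagnostics (mirrors the shape of the log above).
struct EventuallyError: Error {
    let message: String
    let lastError: Error?
}

// Simplified sketch: retry `block` until it succeeds or `timeout` elapses.
// An inner failure does NOT fail the test immediately; it is recorded and
// the block is queried again after `interval`.
func eventually<T>(
    within timeout: TimeInterval,
    interval: TimeInterval = 0.15,
    _ block: () throws -> T
) throws -> T {
    let deadline = Date().addingTimeInterval(timeout)
    var lastError: Error?
    var attempts = 0
    repeat {
        attempts += 1
        do {
            return try block()
        } catch {
            lastError = error
            Thread.sleep(forTimeInterval: interval)
        }
    } while Date() < deadline
    throw EventuallyError(
        message: "No result within \(timeout)s. Queried \(attempts) times.",
        lastError: lastError
    )
}

// Usage: a block that only succeeds on the third attempt still passes,
// because earlier failures are swallowed until the deadline.
struct NotYet: Error {}
var calls = 0
let value = try eventually(within: 3.0, interval: 0.01) { () -> Int in
    calls += 1
    if calls < 3 { throw NotYet() }
    return calls
}
print(value)
```

Under these semantics, an expectMessage-style matcher used inside the block would simply throw on mismatch and be retried, rather than terminating the test run on its first failure.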

FAILED: ClusterMembershipGossipTests.test_gossip_down_node_shouldReachAllNodes

Test Case 'ClusterMembershipGossipTests.test_gossip_down_node_shouldReachAllNodes' started at 2019-08-21 13:17:25.860
/tmp/.nio-release-tools_4zc1Ke/sakkana/Tests/SakkanaActorTests/Cluster/ClusterMembershipGossipTests.swift:46: error: ClusterMembershipGossipTests.test_gossip_down_node_shouldReachAllNodes : failed -
try testKit.eventually(within: .seconds(3), interval: .milliseconds(150)) {
^~~~~~~~~~~
error: No result within 3s for block at /tmp/.nio-release-tools_4zc1Ke/sakkana/Tests/SakkanaActorTests/Cluster/ClusterMembershipGossipTests.swift:46. Queried 19 times, within 3s. Last error:
throw testKit.error("Expected [\(foundMember)] on [\(system)] to be seen as: [\(expectedStatus)]")
^~~~~~
error: Expected [Member(sact://third@localhost:9003, status: up, reachability: reachable)] on [ActorSystem(third, sact://third@localhost:9003)] to be seen as: [down]
<EXPR>:0: error: ClusterMembershipGossipTests.test_gossip_down_node_shouldReachAllNodes : threw error "EventuallyError(message: "\n        try testKit.eventually(within: .seconds(3), interval: .milliseconds(150)) {\n                   \u{1B}[0;31m^~~~~~~~~~~\nerror: No result within 3s for block at /tmp/.nio-release-tools_4zc1Ke/sakkana/Tests/SakkanaActorTests/Cluster/ClusterMembershipGossipTests.swift:46. Queried 19 times, within 3s. Last error: \n            throw testKit.error(\"Expected [\\(foundMember)] on [\\(system)] to be seen as: [\\(expectedStatus)]\")\n                          \u{1B}[0;31m^~~~~~\nerror: Expected [Member(sact://third@localhost:9003, status: up, reachability: reachable)] on [ActorSystem(third, sact://third@localhost:9003)] to be seen as: [down]\u{1B}[0;0m\u{1B}[0;0m", lastError: Optional(SakkanaActorTestKit.CallSiteError.error(message: "\n            throw testKit.error(\"Expected [\\(foundMember)] on [\\(system)] to be seen as: [\\(expectedStatus)]\")\n                          \u{1B}[0;31m^~~~~~\nerror: Expected [Member(sact://third@localhost:9003, status: up, reachability: reachable)] on [ActorSystem(third, sact://third@localhost:9003)] to be seen as: [down]\u{1B}[0;0m")))"
------------------------------------- ActorSystem(first, sact://first@localhost:9001) ------------------------------------------------
Captured log [first]: [trace] Started timer [TimerKey(receptionist/sync)] with generation [0]
Captured log [first]: [trace] Started timer [TimerKey(swim/periodic-ping)] with generation [0]
Captured log [first]: [debug] Spawning [SakkanaActor.Behavior<SakkanaActor.DowningStrategyMessage>.setup((Function))], on path: [/system/cluster/downingStrategy]
Captured log [first]: [info] Binding to: [sact://first@localhost:9001]
Captured log [first]: [trace] Successfully subscribed [ActorRef<ClusterEvent>(/system/cluster/downingStrategy/$sub-SakkanaActor.ClusterEvent-y)]
Captured log [first]: [info] Bound to [IPv4]127.0.0.1/127.0.0.1:9001
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [info] Extending handshake offer to sact://second@localhost:9002)
Captured log [first]: [debug] Associated with: sact://second@localhost:9002. Membership: Membership([Member(sact://first@localhost:9001, status: up, reachability: reachable), Member(sact://second@localhost:9002, status: up, reachability: reachable)])
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim)] with incarnation [0] and payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [first]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [first]: [info] Accept association with sact://third@localhost:9003!
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [first]: [warning] Attempted associating with already associated node: [sact://third@localhost:9003], existing association: [associated(AssociatedState(channel: SocketChannel { selectable = BaseSocket { fd=779 }, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9001), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:37588) }, selfNode: sact://first@localhost:9001, remoteNode: sact://third@localhost:9003))]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [info] Cluster membership change: [sact://third@localhost:9003 :: [     up] -> [   down]], membership: Membership([Member(sact://third@localhost:9003, status: down, reachability: reachable), Member(sact://first@localhost:9001, status: up, reachability: reachable), Member(sact://second@localhost:9002, status: up, reachability: reachable)])
Captured log [first]: [info] Marked node [sact://third@localhost:9003] as: DOWN
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 2), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [first]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 2), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [first]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 2)])]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [first]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [none]
Captured log [first]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [first]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [none]
[... the 5-line ping/ack cycle above (ping to sact://third@localhost:9003, payload [none]) repeats identically 9 more times ...]
========================================================================================================================
------------------------------------- ActorSystem(second, sact://second@localhost:9002) ------------------------------------------------
Captured log [second]: [trace] Started timer [TimerKey(receptionist/sync)] with generation [0]
Captured log [second]: [trace] Started timer [TimerKey(swim/periodic-ping)] with generation [0]
Captured log [second]: [debug] Spawning [SakkanaActor.Behavior<SakkanaActor.DowningStrategyMessage>.setup((Function))], on path: [/system/cluster/downingStrategy]
Captured log [second]: [info] Binding to: [sact://second@localhost:9002]
Captured log [second]: [trace] Successfully subscribed [ActorRef<ClusterEvent>(/system/cluster/downingStrategy/$sub-SakkanaActor.ClusterEvent-y)]
Captured log [second]: [info] Bound to [IPv4]127.0.0.1/127.0.0.1:9002
Captured log [second]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [info] Extending handshake offer to sact://third@localhost:9003
Captured log [second]: [warning] Failed await for outbound channel to sact://third@localhost:9003; Error was: ExecutionError(underlying: NIO.NIOConnectionError(host: "localhost", port: 9003, dnsAError: nil, dnsAAAAError: nil, connectionErrors: [NIO.SingleConnectionFailure(target: [IPv6]localhost/::1:9003, error: connect(descriptor:addr:size:) failed: Cannot assign requested address (errno: 99)), NIO.SingleConnectionFailure(target: [IPv4]localhost/127.0.0.1:9003, error: connection reset (error set): Connection refused (errno: 111))]))
Captured log [second]: [info] Schedule handshake retry to: [sact://third@localhost:9003] delay: [TimeAmount(100ms, nanoseconds: 100000000)]
Captured log [second]: [trace] Started timer [TimerKey(handshake-timer-sact://third@localhost:9003)] with generation [0]
Captured log [second]: [info] Accept association with sact://first@localhost:9001!
Captured log [second]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [warning] Attempted associating with already associated node: [sact://first@localhost:9001], existing association: [associated(AssociatedState(channel: SocketChannel { selectable = BaseSocket { fd=775 }, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9002), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:36204) }, selfNode: sact://second@localhost:9002, remoteNode: sact://first@localhost:9001))]
Captured log [second]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Received periodic trigger to ping random member
Captured log [second]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [second]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with incarnation [0] and payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [second]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [second]: [trace] Cancel timer [TimerKey(handshake-timer-sact://third@localhost:9003)] with generation [0]
Captured log [second]: [info] Extending handshake offer to sact://third@localhost:9003
Captured log [second]: [debug] Associated with: sact://third@localhost:9003. Membership: Membership([Member(sact://first@localhost:9001, status: up, reachability: reachable), Member(sact://third@localhost:9003, status: up, reachability: reachable), Member(sact://second@localhost:9002, status: up, reachability: reachable)])
Captured log [second]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [second]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim)] with incarnation [0] and payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [second]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [second]: [trace] Received periodic trigger to ping random member
Captured log [second]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 1), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [second]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [second]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with incarnation [0] and payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [second]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [second]: [trace] Received periodic trigger to ping random member
Captured log [second]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 1), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [second]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with incarnation [0] and payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [second]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [second]: [trace] Received periodic trigger to ping random member
Captured log [second]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 1), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [second]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [second]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with incarnation [0] and payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [second]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [second]: [trace] Received periodic trigger to ping random member
Captured log [second]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 1)])]
Captured log [second]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with incarnation [0] and payload [none]
Captured log [second]: [trace] Received periodic trigger to ping random member
Captured log [second]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with payload [none]
Captured log [second]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [second]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with incarnation [0] and payload [none]
[... the 5-line ping/ack cycle above (ping to sact://first@localhost:9001, payload [none]) repeats identically 26 more times ...]
========================================================================================================================
------------------------------------- ActorSystem(third, sact://third@localhost:9003) ------------------------------------------------
Captured log [third]: [trace] Started timer [TimerKey(receptionist/sync)] with generation [0]
Captured log [third]: [trace] Started timer [TimerKey(swim/periodic-ping)] with generation [0]
Captured log [third]: [debug] Spawning [SakkanaActor.Behavior<SakkanaActor.DowningStrategyMessage>.setup((Function))], on path: [/system/cluster/downingStrategy]
Captured log [third]: [info] Binding to: [sact://third@localhost:9003]
Captured log [third]: [trace] Successfully subscribed [ActorRef<ClusterEvent>(/system/cluster/downingStrategy/$sub-SakkanaActor.ClusterEvent-y)]
Captured log [third]: [info] Bound to [IPv4]127.0.0.1/127.0.0.1:9003
Captured log [third]: [trace] Received periodic trigger to ping random member
Captured log [third]: [info] Accept association with sact://second@localhost:9002!
Captured log [third]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [warning] Attempted associating with already associated node: [sact://second@localhost:9002], existing association: [associated(AssociatedState(channel: SocketChannel { selectable = BaseSocket { fd=777 }, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9003), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:35090) }, selfNode: sact://third@localhost:9003, remoteNode: sact://second@localhost:9002))]
Captured log [third]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [info] Extending handshake offer to sact://first@localhost:9001
Captured log [third]: [debug] Associated with: sact://first@localhost:9001. Membership: Membership([Member(sact://third@localhost:9003, status: up, reachability: reachable), Member(sact://second@localhost:9002, status: up, reachability: reachable), Member(sact://first@localhost:9001, status: up, reachability: reachable)])
Captured log [third]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [trace] Received periodic trigger to ping random member
Captured log [third]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 1), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 1), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [third]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim)] with incarnation [0] and payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [third]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [third]: [trace] Received periodic trigger to ping random member
Captured log [third]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 1), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 1), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [third]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [third]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with incarnation [0] and payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [third]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [third]: [trace] Received periodic trigger to ping random member
Captured log [third]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 1), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 1), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [third]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [debug] Ignoring gossip about member [SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] due to older status, incoming: [alive(incarnation: 0)], current: [alive(incarnation: 0)]
Captured log [third]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim)] with incarnation [0] and payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)])]
Captured log [third]: [trace] Received periodic trigger to ping random member
Captured log [third]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with payload [membership([SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 1), SakkanaActor.SWIMMember(ref: ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim), status: SakkanaActor.SWIM.Status.alive(incarnation: 0), protocolPeriod: 1)])]
Captured log [third]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with incarnation [0] and payload [none]
Captured log [third]: [trace] Received periodic trigger to ping random member
Captured log [third]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim)] with payload [none]
Captured log [third]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://second@localhost:9002/system/swim)] with incarnation [0] and payload [none]
Captured log [third]: [trace] Received periodic trigger to ping random member
Captured log [third]: [trace] Sending ping to [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with payload [none]
Captured log [third]: [trace] Started timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [trace] Cancel timer [TimerKey(ask/timeout)] with generation [0]
Captured log [third]: [trace] Received ack from [ActorRef<SWIM.Message>(sact://first@localhost:9001/system/swim)] with incarnation [0] and payload [none]
[... the periodic cycle above (trigger, ping to first/second, start timer, cancel timer, ack) repeats verbatim for the remainder of the capture ...]
========================================================================================================================
2019-08-21T13:17:29+0000 info: deadLetter=1 [sact://first@localhost:9001][TransportPipelines.swift:557][thread:140672753444608][/dead/system/swim] Dead letter: [remote(SakkanaActor.SWIM.RemoteMessage.ping(lastKnownStatus: SakkanaActor.SWIM.Status.alive(incarnation: 0), replyTo: ActorRef<SWIM.Ack>(sact://second@localhost:9002/user/$ask-db/$adapter-y), payload: SakkanaActor.SWIM.Payload.none))]:SakkanaActor.SWIM.Message was not delivered to ["/dead/system/swim"].
2019-08-21T13:17:29+0000 info: deadLetter=1 [sact://first@localhost:9001][TransportPipelines.swift:557][thread:140672745051904][/dead/system/swim] Dead letter: [remote(SakkanaActor.SWIM.RemoteMessage.ping(lastKnownStatus: SakkanaActor.SWIM.Status.alive(incarnation: 0), replyTo: ActorRef<SWIM.Ack>(sact://third@localhost:9003/user/$ask-bb/$adapter-y), payload: SakkanaActor.SWIM.Payload.none))]:SakkanaActor.SWIM.Message was not delivered to ["/dead/system/swim"].
2019-08-21T13:17:29+0000 info: deadLetter=1 [sact://second@localhost:9002][TransportPipelines.swift:557][thread:140672778622720][/dead/system/swim] Dead letter: [remote(SakkanaActor.SWIM.RemoteMessage.ping(lastKnownStatus: SakkanaActor.SWIM.Status.alive(incarnation: 0), replyTo: ActorRef<SWIM.Ack>(sact://third@localhost:9003/user/$ask-nb/$adapter-y), payload: SakkanaActor.SWIM.Payload.none))]:SakkanaActor.SWIM.Message was not delivered to ["/dead/system/swim"].
Test Case 'ClusterMembershipGossipTests.test_gossip_down_node_shouldReachAllNodes' failed (3.845 seconds)

Pretty backtraces on linux

Now that we no longer capture any signals and crash hard on faults (due to the #51 removal), we would be left blind on some faults; at the very least we should use swift-backtrace to get nice backtraces.
The upside is that Ian's work also symbolicates/demangles nicely, better than our previous PoCs did.

As the Swift Evolution discussions continue, the library will adopt the APIs, and we can hopefully rely on it as-is. If we were to capture faults, we would need more from it, though.

CRDT: "Direct Replication" / at-Consistency writes

This is about implementing direct replication for CRDTs when a write is performed with write(consistency: .all) or similar.

We should:

  • send the write to N random members
  • complete the write once we get N acks back and fail it otherwise

This is enough to make CRDTs "work" even without gossip, provided we always write to all nodes and cluster membership does not change.

We will follow up with gossip in a separate ticket.
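The two steps above can be sketched with Swift structured concurrency. This is a minimal sketch assuming hypothetical per-member send closures that return an ack; the real replicator API will differ:

```swift
import Foundation

enum WriteError: Error { case insufficientAcks(got: Int, needed: Int) }

/// Sends the write to `n` randomly chosen members and completes only once
/// `n` acks came back; fails otherwise. The `members` closures stand in for
/// the (hypothetical) per-member remote write call.
func directWrite<Value: Sendable>(
    _ value: Value,
    members: [@Sendable (Value) async throws -> Bool],
    requiredAcks n: Int
) async throws {
    try await withThrowingTaskGroup(of: Bool.self) { group in
        // Send the write to n random members, concurrently.
        for send in members.shuffled().prefix(n) {
            group.addTask { try await send(value) }
        }
        // Complete once all n acks arrive; fail the write otherwise.
        var acks = 0
        for try await acked in group where acked { acks += 1 }
        guard acks >= n else {
            throw WriteError.insufficientAcks(got: acks, needed: n)
        }
    }
}
```

A write(consistency: .all) call would then pass all current members and requiredAcks equal to the member count; any thrown child error or missing ack fails the whole write.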

Document: Dead letters in depth

Dead letters carry a lot of semantics, and we should explain them in depth.

We have the /dead path, which is useful: we can tell whether a message arrived at an actor that has since terminated (/dead/user/something), or whether it never had a chance to be delivered at all (/dead/letters) :)
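The two cases can be told apart from the path alone; a tiny illustrative helper (the function name is made up, not library API):

```swift
/// /dead/letters: the message never had a recipient to reach.
/// /dead/<original path>: the recipient existed but has since terminated.
func explainDeadLetter(recipientPath path: String) -> String {
    if path.hasPrefix("/dead/letters") {
        return "never had a chance to be delivered"
    } else if path.hasPrefix("/dead/") {
        let originalPath = path.dropFirst("/dead".count) // e.g. /user/something
        return "recipient \(originalPath) terminated"
    } else {
        return "not a dead letter"
    }
}
```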

CRDT: Implement ORSet, for Receptionist

Implement an Observed-Remove Set (OR-Set) datatype.

For each element in the set, a list of add-tags and a list of remove-tags are maintained. An element is inserted into the OR-Set by having a new unique tag generated and added to the add-tag list for the element. Elements are removed from the OR-Set by having all the tags in the element's add-tag list added to the element's remove-tag (tombstone) list. To merge two OR-Sets, for each element, let its add-tag list be the union of the two add-tag lists, and likewise for the two remove-tag lists. An element is a member of the set if and only if the add-tag list less the remove-tag list is nonempty.[2] An optimization that eliminates the need for maintaining a tombstone set is possible; this avoids the potentially unbounded growth of the tombstone set. The optimization is achieved by maintaining a vector of timestamps for each replica.[15]
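The tag scheme described above (without the timestamp-vector optimization) fits in a few lines; a minimal sketch, illustrative only and not the eventual library type:

```swift
import Foundation

struct ORSet<Element: Hashable> {
    private var addTags: [Element: Set<UUID>] = [:]
    private var removeTags: [Element: Set<UUID>] = [:]

    /// Insert by generating a fresh unique tag for the element.
    mutating func insert(_ element: Element) {
        addTags[element, default: []].insert(UUID())
    }

    /// Remove by moving all currently observed add-tags into the tombstone list.
    mutating func remove(_ element: Element) {
        removeTags[element, default: []].formUnion(addTags[element] ?? [])
    }

    /// Membership: the add-tag list less the remove-tag list is nonempty.
    func contains(_ element: Element) -> Bool {
        !(addTags[element] ?? []).subtracting(removeTags[element] ?? []).isEmpty
    }

    /// Merge: element-wise union of both tag lists; commutative and idempotent.
    mutating func merge(_ other: ORSet<Element>) {
        for (element, tags) in other.addTags {
            addTags[element, default: []].formUnion(tags)
        }
        for (element, tags) in other.removeTags {
            removeTags[element, default: []].formUnion(tags)
        }
    }
}
```

Note the add-wins bias: a concurrent re-add generates a tag the remove never observed, so the element survives the merge, which seems like the right behavior for Receptionist registrations.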

Workaround / remove Serialization.registerBoxing

Long story short:

  • if we have messages which contain a type-erased thing, like case write(_ id: Identity, _ data: AnyStateBasedCRDT, replyTo: ActorRef<WriteResult>)
  • we have to deserialize into the right erased "container", even though we can serialize properly based on the actual underlying erased type
  • This is troublesome, as we need to restore the boxing

We currently do this as follows (which is somewhat of a hack...):

    public mutating func registerBoxing<M, Box>(from messageType: M.Type, into boxType: Box.Type, _ boxer: @escaping (M) -> Box) {
        let key = BoxingKey(toBeBoxed: messageType, box: boxType)
        self.boxing[key] = { m in boxer(m as! M) }
    }

    internal func box<Box>(_ value: Any, ofKnownType: Any.Type, as: Box.Type) -> Box? {
        let key = BoxingKey(toBeBoxed: ObjectIdentifier(ofKnownType), box: Box.self)
        if let boxer = self.boxing[key] {
            let box: Box = boxer(value) as! Box
            return box // in 2 lines to avoid warning about as! always being != nil
        } else {
            return nil
        }
    }

used like:

            settings.serialization.registerProtobufRepresentable(for: CRDT.ORSet<MyType>.self, underId: 1001)
            self.registerBoxing(from: CRDT.ORSet<MyType>.self, into: AnyCvRDT.self) { counter in AnyCvRDT(counter) }
            self.registerBoxing(from: CRDT.ORSet<MyType>.self, into: AnyDeltaCRDT.self) { counter in AnyDeltaCRDT(counter) }

This is far more hassle than we want users to have.

The boxing registration must go away... so either:

  • a) we register boxings whenever we detect a CRDT type is registered for serialization?
  • b) we change our protocols to never use an erased type in the messaging?
    • that would also mean making it impossible for users to do such a trick...
  • c) we provide some "registerCRDT" extension on serialization that does this "for" users...
    • this can be easy to get wrong though ๐Ÿค”

Mostly thinking about doing b) but WDYT?

FAILED: ClusterMembershipGossipTests.test_gossip_down_node_shouldReachAllNodes

https://ci.swiftserver.group/job/swift-distributed-actors-swift50-p1rb/4/console

Fatal error: Failed to spawn test probe!: file /code/Sources/DistributedActorsTestKit/ActorTestKit.swift, line 79

Test Case 'ClusterMembershipGossipTests.test_gossip_down_node_shouldReachAllNodes' started at 2019-08-21 07:57:14.361
/code/Tests/DistributedActorsTests/Cluster/ClusterMembershipGossipTests.swift:52 : 

MEMBERSHIPS === -------------------------------------------------------------------------------------
Membership [sact://first@localhost:9001]:
   [sact://first@localhost:9001] STATUS: [     up]
   [sact://second@localhost:9002] STATUS: [     up]
   [sact://third@localhost:9003] STATUS: [   down]

Membership [sact://second@localhost:9002]:
   [sact://first@localhost:9001] STATUS: [     up]
   [sact://second@localhost:9002] STATUS: [     up]
   [sact://third@localhost:9003] STATUS: [   down]

Membership [sact://third@localhost:9003]:
   [sact://first@localhost:9001] STATUS: [     up]
   [sact://second@localhost:9002] STATUS: [     up]
   [sact://third@localhost:9003] STATUS: [   down]
END OF MEMBERSHIPS === ------------------------------------------------------------------------------ 
2019-08-21T07:57:14+0000 info: deadLetter=1 [sact://first@localhost:9001][TransportPipelines.swift:557][thread:140438035658496][/dead/system/swim] Dead letter: [remote(DistributedActors.SWIM.RemoteMessage.ping(lastKnownStatus: DistributedActors.SWIM.Status.alive(incarnation: 0), replyTo: ActorRef<SWIM.Ack>(sact://second@localhost:9002/user/$ask-8/$adapter-y), payload: DistributedActors.SWIM.Payload.membership([DistributedActors.SWIMMember(ref: ActorRef<SWIM.Message>(sact://third@localhost:9003/system/swim), status: DistributedActors.SWIM.Status.dead, protocolPeriod: 0)])))]:DistributedActors.SWIM.Message was not delivered to ["/dead/system/swim"].
Test Case 'ClusterMembershipGossipTests.test_gossip_down_node_shouldReachAllNodes' passed (0.784 seconds)
Test Case 'ClusterMembershipGossipTests.test_join_swimDiscovered_thirdNode' started at 2019-08-21 07:57:15.145
Fatal error: Failed to spawn test probe!: file /code/Sources/DistributedActorsTestKit/ActorTestKit.swift, line 79
Current stack trace:
0    libswiftCore.so                    0x00007fba5472d7a0 _swift_stdlib_reportFatalErrorInFile + 115
1    libswiftCore.so                    0x00007fba546689cc <unavailable> + 3463628
2    libswiftCore.so                    0x00007fba54668abe <unavailable> + 3463870
3    libswiftCore.so                    0x00007fba5446431a <unavailable> + 1348378
4    libswiftCore.so                    0x00007fba5463cab2 <unavailable> + 3283634
5    libswiftCore.so                    0x00007fba544635e9 <unavailable> + 1345001
6    DistributedActorsPackageTests.xctest 0x0000561aa9028643 <unavailable> + 6731331
7    DistributedActorsPackageTests.xctest 0x0000561aa9027d81 <unavailable> + 6729089
8    DistributedActorsPackageTests.xctest 0x0000561aa9005ce4 <unavailable> + 6589668
9    DistributedActorsPackageTests.xctest 0x0000561aa9176b64 <unavailable> + 8100708
10   DistributedActorsPackageTests.xctest 0x0000561aa9173ca8 <unavailable> + 8088744
11   DistributedActorsPackageTests.xctest 0x0000561aa91628b4 <unavailable> + 8018100
12   DistributedActorsPackageTests.xctest 0x0000561aa915f4b4 <unavailable> + 8004788
13   DistributedActorsPackageTests.xctest 0x0000561aa8abda06 <unavailable> + 1051142
14   DistributedActorsPackageTests.xctest 0x0000561aa915f51b <unavailable> + 8004891
15   libXCTest.so                       0x00007fba54a27051 <unavailable> + 192593
16   libXCTest.so                       0x00007fba54a26e7c <unavailable> + 192124
17   libXCTest.so                       0x00007fba54a26de4 <unavailable> + 191972
18   libXCTest.so                       0x00007fba54a270b9 <unavailable> + 192697
19   libXCTest.so                       0x00007fba54a19ac7 <unavailable> + 137927
20   libXCTest.so                       0x00007fba54a258b0 XCTestCase.invokeTest() + 77
21   libXCTest.so                       0x00007fba54a25520 XCTestCase.perform(_:) + 172
22   libXCTest.so                       0x00007fba54a29780 XCTest.run() + 194
23   libXCTest.so                       0x00007fba54a27340 XCTestSuite.perform(_:) + 232
24   libXCTest.so                       0x00007fba54a29780 XCTest.run() + 194
25   libXCTest.so                       0x00007fba54a27340 XCTestSuite.perform(_:) + 232
26   libXCTest.so                       0x00007fba54a29780 XCTest.run() + 194
27   libXCTest.so                       0x00007fba54a27340 XCTestSuite.perform(_:) + 256
28   libXCTest.so                       0x00007fba54a29780 XCTest.run() + 194
29   libXCTest.so                       0x00007fba54a24010 XCTMain(_:) + 1091
30   DistributedActorsPackageTests.xctest 0x0000561aa9004da9 <unavailable> + 6585769
31   libc.so.6                          0x00007fba524adab0 __libc_start_main + 231
32   DistributedActorsPackageTests.xctest 0x0000561aa8aa3aea <unavailable> + 944874
Exited with signal code 4

SwiftProtobuf: Consider swift_access_level file option

How about we introduce a FileOption in the proto files and use it like this:

We want to offer some types as public so users can easily use them in their own types; this is mostly about ActorAddress, really.

We currently achieve this by manually passing the public access level option on the CLI: https://github.com/apple/swift-distributed-actors/pull/58/files

We could alternatively:

for p in $(find . -name "*.proto"); do
    out_dir=$( dirname "$p" )
    base_name=$( basename "$p" )
    out_name="${base_name%.*}.pb.swift"
    dest_dir="../Sources/DistributedActors/${out_dir}/Protobuf"
    dest_file="${dest_dir}/${out_name}"
    mkdir -p "${dest_dir}"

    access_level=""
    if grep -q '(swift_access_level) = PUBLIC' "${p}"; then
        access_level="--swift_opt=Visibility=Public"
    fi

    echo protoc --swift_out=. ${access_level} "${p}"
    protoc --swift_out=. ${access_level} "${p}"
    mv "${out_dir}/${out_name}" "${dest_file}"
done

And in source:

syntax = "proto2"; // custom options require extending descriptor.proto, i.e. proto2 syntax

import "google/protobuf/descriptor.proto";

enum SwiftAccessLevel {
    INTERNAL = 0;
    PUBLIC = 1;
}

extend google.protobuf.FileOptions {
    optional SwiftAccessLevel swift_access_level = 542700;
} // move this to some Options.proto and import it

option optimize_for = SPEED;
option swift_prefix = "Proto";
option (swift_access_level) = PUBLIC; // could be built-in to the protobuf plugin; we'd import SwiftProtobufOptions.proto I guess

To be honest, this should be built into the Swift Protobuf plugin itself... https://github.com/apple/swift-protobuf

I'm checking if/how we can collaborate on that plugin.

Ignoring behavior .tell should not fault

        let ignore: ActorRef<Set<Int>> = try system.spawn("ignore", .setup { _ in .ignore })
        ignore.tell([])

This currently faults:

Fatal error: Illegal attempt to interpret message with .setup behavior! Behaviors MUST be canonicalized before interpreting. This is a bug, please open a ticket.: file /Users/ktoso/code/actors/Sources/DistributedActors/Supervision.swift, line 459

Though this specific one should work and simply ignore the message, I think 🤔 Is the implementation of interpretMessage just bad, and should it not fault here?
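For reference, a minimal sketch of what "canonicalize before interpreting" could mean here — using heavily simplified stand-in types (SimpleBehavior and canonicalize are illustrations, not the library's real Behavior machinery): .setup closures are run until a concrete behavior such as .ignore remains, at which point interpreting a message can simply drop it instead of faulting.

```swift
// Simplified stand-in for the library's Behavior type; the real one is richer.
enum SimpleBehavior<Message> {
    case ignore
    case setup(() -> SimpleBehavior<Message>)
}

// Canonicalization: unwrap nested .setup layers until we reach a
// concrete behavior that can actually interpret a message.
func canonicalize<Message>(_ behavior: SimpleBehavior<Message>) -> SimpleBehavior<Message> {
    var current = behavior
    while case .setup(let make) = current {
        current = make()
    }
    return current
}

// A .setup-wrapped .ignore canonicalizes to .ignore, which would
// drop the incoming message rather than fault.
let canonical = canonicalize(SimpleBehavior<Set<Int>>.setup { .ignore })
if case .ignore = canonical {
    print("canonicalized to ignore")
}
```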

FAILED: SerializationTests.test_deserialize_alreadyDeadActorRef_shouldDeserializeAsDeadLetters_forUserDefinedMessageType

19:23:54 Test Case 'SerializationTests.test_deserialize_alreadyDeadActorRef_shouldDeserializeAsDeadLetters_forUserDefinedMessageType' started at 2019-09-06 02:23:54.527
19:23:54 /code/Tests/DistributedActorsTests/SerializationTests.swift:214: error: SerializationTests.test_deserialize_alreadyDeadActorRef_shouldDeserializeAsDeadLetters_forUserDefinedMessageType : XCTAssertEqual failed: ("/user/dead-on-arrival") is not equal to ("/dead/user/dead-on-arrival") - 
19:23:54         "\(back.containedInterestingRef.address)".shouldEqual("/dead/user/dead-on-arrival")
19:23:54                                                   ^~~~~~~~~~~~
19:23:54 error: [/user/dead-on-arrival] does not equal expected: [/dead/user/dead-on-arrival]
19:23:54 
19:23:54 2019-09-06T02:23:54+0000 info: deadLetter=1 [SerializationTests][Mailbox.swift:261][/user/dead-on-arrival] Dead letter: [InterestingMessage()]:DistributedActorsTests.(unknown context at $55cfbdeca0e8).InterestingMessage was not delivered to ["/user/dead-on-arrival#1920974283"].
19:23:54 2019-09-06T02:23:54+0000 info: deadLetter=1 [SerializationTests][ActorShell+Children.swift:305][/user/dead-on-arrival] Dead letter: [stop]:DistributedActors.SystemMessage was not delivered to ["/user/dead-on-arrival#1920974283"].

FAILED: RemotingTLSTests.test_boundServer_shouldAcceptAssociateWithSSLEnabledOnNoHostnameVerificationWithIP

Test Case '-[DistributedActorsTests.RemotingTLSTests test_boundServer_shouldAcceptAssociateWithSSLEnabledOnNoHostnameVerificationWithIP]' started.
Fatal error: 'try!' expression unexpectedly raised an error: DistributedActors.ActorContextError.alreadyStopping: file /Users/drexin/projects/swift-distributed-actors/Sources/DistributedActors/ActorRef+Ask.swift, line 78
Exited with signal code 4

Need to find common way to spawn various entities

Some entities are not "real" actors, or would have confusing semantics if users were to watch them -- e.g. does it make sense to watch a singleton?

Other times, due to the lack of behavior adapters, we may need to return some other type rather than ActorRef<MyInternalMessage>; that is doable if we own the spawning inside the "thing", but not if we expose its behavior so people can spawn it themselves...

Could we "spawn" ServiceRefs? Not really I guess, we have to obtain them... So what's the best thing for things like WorkerPool, Singleton, VirtualActor / Sharding etc?


Latest thoughts here:

  • we use the *Shell pattern for "the thing that interprets a state machine inside it"

This sounds like it could be a pattern/type:

  • protocol ActorShell<Thing> { var name: ActorNaming; var props: Props?; var behavior: Behavior<Thing> }
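Fleshing the bullet above out a little — a sketch of the proposed pattern, where ActorNaming, Props and Behavior are stubbed stand-ins for the library's real types, and ActorShellDescription is a hypothetical name:

```swift
// Stand-ins for the library's real types, stubbed for illustration only.
struct ActorNaming { let name: String }
struct Props {}
enum Behavior<Message> { case ignore }

// Hypothetical protocol describing "a thing that knows how to be spawned":
// a name, optional props, and the behavior interpreting its state machine.
protocol ActorShellDescription {
    associatedtype Message
    var naming: ActorNaming { get }
    var props: Props? { get }
    var behavior: Behavior<Message> { get }
}

// Example conformance: a worker-pool style shell description.
struct WorkerPoolShell: ActorShellDescription {
    let naming = ActorNaming(name: "workerPool")
    let props: Props? = nil
    let behavior = Behavior<String>.ignore
}

print(WorkerPoolShell().naming.name)
```

A spawner could then accept any ActorShellDescription, keeping the spawning owned by the pattern while still letting users configure it.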

Fork safety for ProcessIsolated

We must close all file descriptors when we posix_spawn.


TODOs:

Docs: Glossary

An overview of all important words, and links to docs which cover them in depth.

Incorrect message number processed per run

There's a bug in the mailbox that causes fewer messages than possible to be processed per run. The check compares against processed_activations, which is the message count shifted left by 2, because the first two bits carry information about system messages. Instead, the comparison should use message_count(processed_activations). While this should not affect correctness, it does have an impact on performance, because we do more context switches than necessary.
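In other words — a simplified model, not the actual mailbox code: the two lowest bits of the status word carry system-message flags, so the raw word must be shifted down before it can be compared to a message count.

```swift
// Simplified model of the mailbox status word: the two lowest bits are
// system-message flags; the user-message count lives in the bits above.
let activationsShift = 2

func messageCount(_ processedActivations: Int) -> Int {
    processedActivations >> activationsShift
}

// 5 user messages, with both system-message bits set:
let processedActivations = (5 << activationsShift) | 0b11

print(processedActivations)               // raw status word: 23
print(messageCount(processedActivations)) // actual message count: 5
```

Comparing the run-length limit against the raw word (23) instead of the decoded count (5) makes runs end early, which is the extra-context-switch symptom described above.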

Remove fault handling code using signal handling

It "works"™ in the sense that actors survive faults such as fatal errors and similar; however, they then leak memory.

We likely do not want to ship with this mode; we may revive it if we ever need to, even though it is not memory-safe.

We hope to get capabilities to handle this properly in the future.

Hardening: Descriptor leaks between tests

Fatal error: 'try!' expression unexpectedly raised an error: kqueue() failed: Too many open files (errno: 24): file /Users/ktoso/code/actors/.build/checkouts/swift-nio/Sources/NIO/EventLoop.swift, line 1028
0   swift-distributed-actorsPackageTests 0x000000010fe44ff4 sact_get_backtrace + 52
1   swift-distributed-actorsPackageTests 0x000000010fe44f61 sact_dump_backtrace + 17
2   swift-distributed-actorsPackageTests 0x000000010fe4542c sact_sighandler + 44
3   libsystem_platform.dylib            0x00007fff67e5ab5d _sigtramp + 29
4   libswiftCore.dylib                  0x0000000111bc4f00 libswiftCore.dylib + 3840
5   libswiftCore.dylib                  0x0000000111c2f469 swift_unexpectedError + 649
6   swift-distributed-actorsPackageTests 0x0000000110780a82 $s3NIO27MultiThreadedEventLoopGroupC014setupThreadAnddE033_D5D78C61B22284700B9BD1ACFBC25157LL4name11initializerAA010SelectabledE0CSS_yAA9NIOThreadCctFZyAKcfU_ + 1298
7   swift-distributed-actorsPackageTests 0x000000011078798a $s3NIO27MultiThreadedEventLoopGroupC014setupThreadAnddE033_D5D78C61B22284700B9BD1ACFBC25157LL4name11initializerAA010SelectabledE0CSS_yAA9NIOThreadCctFZyAKcfU_TA + 42
8   swift-distributed-actorsPackageTests 0x0000000110780e3f $s3NIO9NIOThreadCIegg_ACytIegnr_TR + 15
9   swift-distributed-actorsPackageTests 0x0000000110831df1 $s3NIO9NIOThreadCIegg_ACytIegnr_TRTA + 17
10  swift-distributed-actorsPackageTests 0x00000001108321df $s3NIO9NIOThreadC11spawnAndRun4name12detachThread4bodyySSSg_SbyACctFZSvSgSvcfU_ + 991
11  swift-distributed-actorsPackageTests 0x0000000110832249 $s3NIO9NIOThreadC11spawnAndRun4name12detachThread4bodyySSSg_SbyACctFZSvSgSvcfU_To + 9
12  libsystem_pthread.dylib             0x00007fff67e632eb _pthread_body + 126
13  libsystem_pthread.dylib             0x00007fff67e66249 _pthread_start + 66
14  libsystem_pthread.dylib             0x00007fff67e6240d thread_start + 13
Exited with signal code 4

We should change self.cluster._shell.tell(.command(.unbind(receptacle))) // FIXME: should be shutdown to actually perform a shutdown, and it should complete the receptacle once all connections have been closed, etc.

I had a patch for this before and I think it was not enough to fix the issue, though it did delay it more.

Weirdly enough, it shows up on my macbook with:

$ launchctl limit maxfiles
	maxfiles    256            unlimited

flaky tests: ClusterAssociationTests.test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing

Test Case 'ClusterAssociationTests.test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing' started at 2019-08-27 11:52:09.716
Assertion failed: Channel should always be present after the initial initialization.: file /tmp/.nio-release-tools_naNW4a/sakkana/Sources/SakkanaActor/Cluster/ClusterShellState.swift, line 181
Current stack trace:
0    libswiftCore.so                    0x00007fdceb91baf0 _swift_stdlib_reportFatalErrorInFile + 115
1    libswiftCore.so                    0x00007fdceb8463d2 <unavailable> + 4719570
2    libswiftCore.so                    0x00007fdceb846568 <unavailable> + 4719976
3    libswiftCore.so                    0x00007fdceb5ed0e0 <unavailable> + 2257120
4    libswiftCore.so                    0x00007fdceb807b8b <unavailable> + 4463499
5    libswiftCore.so                    0x00007fdceb5ec159 <unavailable> + 2253145
6    sakkanaPackageTests.xctest         0x0000563ee92696a8 <unavailable> + 6506152
7    sakkanaPackageTests.xctest         0x0000563ee926a8a1 <unavailable> + 6510753
8    sakkanaPackageTests.xctest         0x0000563ee923e43a <unavailable> + 6329402
9    sakkanaPackageTests.xctest         0x0000563ee923cc31 <unavailable> + 6323249
10   sakkanaPackageTests.xctest         0x0000563ee9241b4f <unavailable> + 6343503
11   sakkanaPackageTests.xctest         0x0000563ee925b9eb <unavailable> + 6449643
12   sakkanaPackageTests.xctest         0x0000563ee91cab7e <unavailable> + 5856126
13   sakkanaPackageTests.xctest         0x0000563ee944f08f <unavailable> + 8495247
14   sakkanaPackageTests.xctest         0x0000563ee944dbde <unavailable> + 8489950
15   sakkanaPackageTests.xctest         0x0000563ee944d816 <unavailable> + 8488982
16   sakkanaPackageTests.xctest         0x0000563ee91922a6 <unavailable> + 5624486
17   sakkanaPackageTests.xctest         0x0000563ee935b5c8 <unavailable> + 7497160
18   sakkanaPackageTests.xctest         0x0000563ee935b9c6 <unavailable> + 7498182
19   sakkanaPackageTests.xctest         0x0000563ee9359c61 <unavailable> + 7490657
20   sakkanaPackageTests.xctest         0x0000563ee93598c1 <unavailable> + 7489729
21   sakkanaPackageTests.xctest         0x0000563ee9359f49 <unavailable> + 7491401
22   sakkanaPackageTests.xctest         0x0000563ee8ee4353 <unavailable> + 2814803
23   sakkanaPackageTests.xctest         0x0000563ee936071e <unavailable> + 7517982
24   sakkanaPackageTests.xctest         0x0000563ee9369f89 <unavailable> + 7557001
25   sakkanaPackageTests.xctest         0x0000563ee8feae0c <unavailable> + 3890700
26   sakkanaPackageTests.xctest         0x0000563ee934df21 <unavailable> + 7442209
27   sakkanaPackageTests.xctest         0x0000563ee8feae2c <unavailable> + 3890732
28   sakkanaPackageTests.xctest         0x0000563ee934e071 <unavailable> + 7442545
29   sakkanaPackageTests.xctest         0x0000563ee934d971 <unavailable> + 7440753
30   sakkanaPackageTests.xctest         0x0000563ee934dae1 <unavailable> + 7441121
31   sakkanaPackageTests.xctest         0x0000563ee94639f7 <unavailable> + 8579575
32   sakkanaPackageTests.xctest         0x0000563ee9463a89 <unavailable> + 8579721
33   sakkanaPackageTests.xctest         0x0000563ee9464915 <unavailable> + 8583445
34   sakkanaPackageTests.xctest         0x0000563ee9464969 <unavailable> + 8583529
35   libpthread.so.0                    0x00007fdceb1b0184 <unavailable> + 33156
36   libc.so.6                          0x00007fdce96bafd0 clone + 109
/tmp/.nio-release-tools_naNW4a/sakkana/Tests/SakkanaActorTests/Cluster/AssociationClusteredTests.swift:135: error: ClusterAssociationTests.test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing : failed - 
        _ = try p7337.expectMessage()
                     ^~~~~~~~~~~~~~
error: Did not receive message of type [HandshakeResult] within [3s], error: noMessagesInQueue
<EXPR>:0: error: ClusterAssociationTests.test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing : threw error "error(message: "\n        _ = try p7337.expectMessage()\n                     \u{1B}[0;31m^~~~~~~~~~~~~~\nerror: Did not receive message of type [HandshakeResult] within [3s], error: noMessagesInQueue\u{1B}[0;0m")"
------------------------------------- ActorSystem(first, sact://first@localhost:9001) ------------------------------------------------
Captured log [first]: [trace] Started timer [TimerKey(swim/periodic-ping)] with generation [0]
Captured log [first]: [debug] Spawning [SakkanaActor.Behavior<SakkanaActor.DowningStrategyMessage>.setup((Function))], on path: [/system/cluster/downingStrategy]
Captured log [first]: [info] Binding to: [sact://first@localhost:9001]
Captured log [first]: [trace] Started timer [TimerKey(receptionist/sync)] with generation [0]
Captured log [first]: [trace] Successfully subscribed [ActorRef<ClusterEvent>(/system/cluster/downingStrategy/$sub-SakkanaActor.ClusterEvent-y)]
Captured log [first]: [info] Bound to [IPv4]127.0.0.1/127.0.0.1:9001
Captured log [first]: [info] Extending handshake offer to sact://second@localhost:9002)
Captured log [first]: [warning] Failed await for outbound channel to sact://second@localhost:9002; Error was: ExecutionError(underlying: NIO.NIOConnectionError(host: "localhost", port: 9002, dnsAError: nil, dnsAAAAError: nil, connectionErrors: [NIO.SingleConnectionFailure(target: [IPv6]localhost/::1:9002, error: connect(descriptor:addr:size:) failed: Cannot assign requested address (errno: 99)), NIO.SingleConnectionFailure(target: [IPv4]localhost/127.0.0.1:9002, error: connection reset (error set): Connection refused (errno: 111))]))
Captured log [first]: [info] Schedule handshake retry to: [sact://second@localhost:9002] delay: [TimeAmount(100ms, nanoseconds: 100000000)]
Captured log [first]: [trace] Started timer [TimerKey(handshake-timer-sact://second@localhost:9002)] with generation [0]
Captured log [first]: [warning] Concurrently initiated handshakes from nodes [sact://first@localhost:9001](local) and [sact://second@localhost:9002](remote) detected! Resolving race by address ordering; This node WON (will negotiate and reply) tie-break. 
Captured log [first]: [warning] Supervision: Actor has FAULTED while processingUserMessages, handling with SakkanaActor.StoppingSupervisor<SakkanaActor.ClusterShell.Message>; Failure details: fault(Actor faulted while processing message '[inbound(SakkanaActor.ClusterShell.InboundMessage.handshakeOffer(SakkanaActor.Wire.HandshakeOffer(version: Version(0.0.1, reserved:0), from: sact://second:3977941148@localhost:9002, to: sact://first@localhost:9001), channel: SocketChannel { selectable = BaseSocket { fd=702 }ย , localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9001), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43618) }, replyTo: NIO.EventLoopPromise<SakkanaActor.Wire.HandshakeResponse>(futureResult: NIO.EventLoopFuture<SakkanaActor.Wire.HandshakeResponse>)))]:Message':
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x2afd43) [0x563ee8ee4d43]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x2b0231) [0x563ee8ee5231]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10330) [0x7fdceb1b8330]
/app/.ssp/lib/swift/linux/libswiftCore.so(+0x441b97) [0x7fdceb807b97]
/app/.ssp/lib/swift/linux/libswiftCore.so(+0x226159) [0x7fdceb5ec159]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x6346a8) [0x563ee92696a8]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x6358a1) [0x563ee926a8a1]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x60943a) [0x563ee923e43a]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x607c31) [0x563ee923cc31]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x60cb4f) [0x563ee9241b4f]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x6269eb) [0x563ee925b9eb]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x595b7e) [0x563ee91cab7e]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x81a08f) [0x563ee944f08f]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x818bde) [0x563ee944dbde]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x818816) [0x563ee944d816]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x55d2a6) [0x563ee91922a6]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x7265c8) [0x563ee935b5c8]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x7269c6) [0x563ee935b9c6]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x724c61) [0x563ee9359c61]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x7248c1) [0x563ee93598c1]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x724f49) [0x563ee9359f49]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x2af353) [0x563ee8ee4353]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x72b71e) [0x563ee936071e]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x734f89) [0x563ee9369f89]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x3b5e0c) [0x563ee8feae0c]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x718f21) [0x563ee934df21]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x3b5e2c) [0x563ee8feae2c]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x719071) [0x563ee934e071]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x718971) [0x563ee934d971]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x718ae1) [0x563ee934dae1]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x82e9f7) [0x563ee94639f7]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x82ea89) [0x563ee9463a89]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x82f915) [0x563ee9464915]
/tmp/.nio-release-tools_naNW4a/sakkana/.build/x86_64-unknown-linux/debug/sakkanaPackageTests.xctest(+0x82f969) [0x563ee9464969]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8184) [0x7fdceb1b0184]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fdce96bb03d]
Captured log [first]: [error] Actor crashing, reason: [Actor faulted while processing message '[inbound(SakkanaActor.ClusterShell.InboundMessage.handshakeOffer(SakkanaActor.Wire.HandshakeOffer(version: Version(0.0.1, reserved:0), from: sact://second:3977941148@localhost:9002, to: sact://first@localhost:9001), channel: SocketChannel { selectable = BaseSocket { fd=702 }ย , localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9001), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43618) }, replyTo: NIO.EventLoopPromise<SakkanaActor.Wire.HandshakeResponse>(futureResult: NIO.EventLoopFuture<SakkanaActor.Wire.HandshakeResponse>)))]:Message', with backtrace]:MessageProcessingFailure. Terminating actor, process and thread remain alive.
Captured log [first]: [trace] Cancel timer [TimerKey(handshake-timer-sact://second@localhost:9002)] with generation [0]
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Received periodic trigger to ping random member
Captured log [first]: [trace] Received periodic trigger to ping random member
========================================================================================================================
------------------------------------- ActorSystem(second, sact://second@localhost:9002) ------------------------------------------------
Captured log [second]: [trace] Started timer [TimerKey(receptionist/sync)] with generation [0]
Captured log [second]: [trace] Started timer [TimerKey(swim/periodic-ping)] with generation [0]
Captured log [second]: [debug] Spawning [SakkanaActor.Behavior<SakkanaActor.DowningStrategyMessage>.setup((Function))], on path: [/system/cluster/downingStrategy]
Captured log [second]: [info] Binding to: [sact://second@localhost:9002]
Captured log [second]: [trace] Successfully subscribed [ActorRef<ClusterEvent>(/system/cluster/downingStrategy/$sub-SakkanaActor.ClusterEvent-y)]
Captured log [second]: [info] Bound to [IPv4]127.0.0.1/127.0.0.1:9002
Captured log [second]: [info] Extending handshake offer to sact://first@localhost:9001)
Captured log [second]: [trace] Received periodic trigger to ping random member
Captured log [second]: [trace] Received periodic trigger to ping random member
Captured log [second]: [trace] Received periodic trigger to ping random member
========================================================================================================================
Test Case 'ClusterAssociationTests.test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing' failed (3.35 seconds)

log_run_in_docker_v59HSr.log

Build with more strict build settings

In Simulator we build with a large number of various warning flags, including specifically -Wstrict-prototypes and -Werror. Maybe there are some others we should use, like -Wunused-functions?

Warnings-as-errors is nice and we want to enable it; we have some warnings to fix first, though.
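For the Swift targets, one way to flip this on would be via the package manifest — a sketch only (the swiftc flag -warnings-as-errors exists, but whether we want unsafeFlags in the manifest, and the exact target layout here, are assumptions):

```swift
// swift-tools-version:5.2
import PackageDescription

let package = Package(
    name: "swift-distributed-actors",
    targets: [
        .target(
            name: "DistributedActors",
            swiftSettings: [
                // Treat all Swift warnings as errors.
                .unsafeFlags(["-warnings-as-errors"]),
            ]
        ),
    ]
)
```

Note that unsafeFlags prevents the package from being consumed as a dependency, so this would likely be gated to CI builds only.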

FAILED: CRDTActorOwnedTests.test_actorOwned_theLastWrittenOnUpdateCallbackWins

Failed in validation

https://ci.swiftserver.group/job/swift-distributed-actors-swift50-p1rb/6/consoleFull

19:36:55 Test Case 'CRDTActorOwnedTests.test_actorOwned_theLastWrittenOnUpdateCallbackWins' started at 2019-08-21 10:36:55.100
19:36:58 /code/Tests/DistributedActorsTests/CRDT/CRDTActorOwnedTests.swift:125: error: CRDTActorOwnedTests.test_actorOwned_theLastWrittenOnUpdateCallbackWins : failed - 
19:36:58         try ownerEventPB.expectMessage(.ownerDefinedOnUpdate)
19:36:58                           ^~~~~~~~~~~~~~
19:36:58 error: Did not receive expected [ownerDefinedOnUpdate]:OwnerEventProbeProtocol within [3s], error: noMessagesInQueue
19:36:58 <EXPR>:0: error: CRDTActorOwnedTests.test_actorOwned_theLastWrittenOnUpdateCallbackWins : threw error "error(message: "\n        try ownerEventPB.expectMessage(.ownerDefinedOnUpdate)\n                          \u{1B}[0;31m^~~~~~~~~~~~~~\nerror: Did not receive expected [ownerDefinedOnUpdate]:OwnerEventProbeProtocol within [3s], error: noMessagesInQueue\u{1B}[0;0m")"
19:36:58 2019-08-21T10:36:58+0000 info: deadLetter=1 [CRDTActorOwnedTests][ActorSystem.swift:215][/dead/letters] Dead letter: [command(DistributedActors.ClusterShell.CommandMessage.unbind(DistributedActors.BlockingReceptacle<()>))]:DistributedActors.ClusterShell.Message was not delivered to ["/dead/letters"].
19:36:58 Test Case 'CRDTActorOwnedTests.test_actorOwned_theLastWrittenOnUpdateCallbackWins' failed (3.307 seconds)

FAILED: WorkerPoolTests.test_workerPool_ask

6502:Test Case 'WorkerPoolTests.test_workerPool_ask' failed (0.221 seconds)
Test Case 'WorkerPoolTests.test_workerPool_ask' started at 2019-08-21 13:18:51.283
/tmp/.nio-release-tools_4zc1Ke/sakkana/Tests/SakkanaActorTests/Pattern/WorkerPoolTests.swift:180: error: WorkerPoolTests.test_workerPool_ask : XCTAssertEqual failed: ("work:BBB at worker-a") is not equal to ("work:AAA at worker-a") -
        try pA.expectMessage("work:AAA at \(workerA.path.name)")
                 ^~~~~~~~~~~~
error: [work:BBB at worker-a] does not equal expected: [work:AAA at worker-a]

/tmp/.nio-release-tools_4zc1Ke/sakkana/Tests/SakkanaActorTests/Pattern/WorkerPoolTests.swift:181: error: WorkerPoolTests.test_workerPool_ask : XCTAssertEqual failed: ("work:AAA at worker-b") is not equal to ("work:BBB at worker-b") -
        try pB.expectMessage("work:BBB at \(workerB.path.name)")
                 ^~~~~~~~~~~~
error: [work:AAA at worker-b] does not equal expected: [work:BBB at worker-b]

2019-08-21T13:18:51+0000 info: deadLetter=1 [WorkerPoolTests][ActorSystem.swift:214][/dead/letters] Dead letter: [command(SakkanaActor.ClusterShell.CommandMessage.unbind(SakkanaActor.BlockingReceptacle<()>))]:SakkanaActor.ClusterShell.Message was not delivered to ["/dead/letters"].
2019-08-21T13:18:51+0000 info: deadLetter=1 [WorkerPoolTests][DeathWatch.swift:154][/user/questioningTheWorkers] Dead letter: [terminated(ref: AddressableActorRef(/user/worker-a), existenceConfirmed: true, addressTerminated: false)]:SakkanaActor.SystemMessage was not delivered to ["/user/questioningTheWorkers#3534705797"].
2019-08-21T13:18:51+0000 info: deadLetter=1 [WorkerPoolTests][DeathWatch.swift:154][/user/questioningTheWorkers] Dead letter: [terminated(ref: AddressableActorRef(/user/worker-b), existenceConfirmed: true, addressTerminated: false)]:SakkanaActor.SystemMessage was not delivered to ["/user/questioningTheWorkers#3534705797"].

Likely a slight race.

FAILED: ClusterAssociationTests.test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing

We may need a lock there (there's likely even a TODO noting this?)...

Test Case 'ClusterAssociationTests.test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing' started at 2019-08-21 07:57:02.793
Assertion failed: Channel should always be present after the initial initialization.: file /code/Sources/DistributedActors/Cluster/ClusterShellState.swift, line 181
Current stack trace:
0    libswiftCore.so                    0x00007fba5472d7a0 _swift_stdlib_reportFatalErrorInFile + 115
1    libswiftCore.so                    0x00007fba546689cc <unavailable> + 3463628
2    libswiftCore.so                    0x00007fba54668abe <unavailable> + 3463870
3    libswiftCore.so                    0x00007fba5446431a <unavailable> + 1348378
4    libswiftCore.so                    0x00007fba5463cab2 <unavailable> + 3283634
5    libswiftCore.so                    0x00007fba544635e9 <unavailable> + 1345001
6    DistributedActorsPackageTests.xctest 0x0000561aa8d8a958 <unavailable> + 3987800
7    DistributedActorsPackageTests.xctest 0x0000561aa8d8bb51 <unavailable> + 3992401
8    DistributedActorsPackageTests.xctest 0x0000561aa8d5f6ea <unavailable> + 3811050
9    DistributedActorsPackageTests.xctest 0x0000561aa8d5dee1 <unavailable> + 3804897
10   DistributedActorsPackageTests.xctest 0x0000561aa8d62dff <unavailable> + 3825151
11   DistributedActorsPackageTests.xctest 0x0000561aa8d7cc9b <unavailable> + 3931291
12   DistributedActorsPackageTests.xctest 0x0000561aa8ce933e <unavailable> + 3326782
13   DistributedActorsPackageTests.xctest 0x0000561aa8f6f25f <unavailable> + 5972575
14   DistributedActorsPackageTests.xctest 0x0000561aa8f6ddae <unavailable> + 5967278
15   DistributedActorsPackageTests.xctest 0x0000561aa8f6d9e6 <unavailable> + 5966310
16   DistributedActorsPackageTests.xctest 0x0000561aa8cb0f66 <unavailable> + 3096422
17   DistributedActorsPackageTests.xctest 0x0000561aa8e7c538 <unavailable> + 4977976
18   DistributedActorsPackageTests.xctest 0x0000561aa8e7c936 <unavailable> + 4978998
19   DistributedActorsPackageTests.xctest 0x0000561aa8e7abd1 <unavailable> + 4971473
20   DistributedActorsPackageTests.xctest 0x0000561aa8e7a831 <unavailable> + 4970545
21   DistributedActorsPackageTests.xctest 0x0000561aa8e7aeb9 <unavailable> + 4972217
22   DistributedActorsPackageTests.xctest 0x0000561aa8abc554 <unavailable> + 1045844
23   DistributedActorsPackageTests.xctest 0x0000561aa8e8168e <unavailable> + 4998798
24   DistributedActorsPackageTests.xctest 0x0000561aa8e8aef9 <unavailable> + 5037817
25   DistributedActorsPackageTests.xctest 0x0000561aa8ce2cac <unavailable> + 3300524
26   DistributedActorsPackageTests.xctest 0x0000561aa8e6ef71 <unavailable> + 4923249
27   DistributedActorsPackageTests.xctest 0x0000561aa8ce30dc <unavailable> + 3301596
28   DistributedActorsPackageTests.xctest 0x0000561aa8e6f0c1 <unavailable> + 4923585
29   DistributedActorsPackageTests.xctest 0x0000561aa8e6e9c1 <unavailable> + 4921793
30   DistributedActorsPackageTests.xctest 0x0000561aa8e6eb31 <unavailable> + 4922161
31   DistributedActorsPackageTests.xctest 0x0000561aa8f837b7 <unavailable> + 6055863
32   DistributedActorsPackageTests.xctest 0x0000561aa8f83849 <unavailable> + 6056009
33   DistributedActorsPackageTests.xctest 0x0000561aa8f846d5 <unavailable> + 6059733
34   DistributedActorsPackageTests.xctest 0x0000561aa8f84729 <unavailable> + 6059817
35   libpthread.so.0                    0x00007fba541036db <unavailable> + 30427
36   libc.so.6                          0x00007fba525ad850 clone + 63
/code/Tests/DistributedActorsTests/Cluster/AssociationClusteredTests.swift:137: error: ClusterAssociationTests.test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing : failed - 
        _ = try p7337.expectMessage()
                     ^~~~~~~~~~~~~~
error: Did not receive message of type [HandshakeResult] within [3s], error: noMessagesInQueue

Split up actors.md into "concepts" and "guide"/"get started"

We currently start our actor docs with a wall of text about the rules and core concepts.

I think this is fine and good; however, it should be separated out so people can jump right into the "here's an actor, boom" part. People reading for the first time should indeed be exposed to some of the rules and rationale, though.

I propose we do:

  • actor concepts
  • actor guide
  • failure handling
  • clustering
  • serialization
  • observability
  • examples
  • internals

?

Throughput calculation in ActorPingPongBenchmark is wrong

It only accounts for half of the messages that are actually being sent.

let totalNumMessages = numPairs * numMessagesPerActorPair / 2

The additional / 2 is the problem and should be removed as we are already counting pairs, not total actors.
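A sketch of the corrected accounting — variable names are taken from the snippet above, the concrete values are made up for illustration:

```swift
let numPairs = 8
let numMessagesPerActorPair = 1_000

// Each pair exchanges numMessagesPerActorPair messages in total, so the
// total is simply pairs * messagesPerPair. Dividing by 2 would only be
// correct if we had counted individual actors rather than pairs.
let totalNumMessages = numPairs * numMessagesPerActorPair

print(totalNumMessages)
```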

Ask: Create special minimal actor type for ask

The current version creates an actual actor per ask.

We should change that to a minimal ref, similar to adapters etc.


More than that, this matters for message ordering: if we spawn a real actor for the ask, we lose the ordering guarantee that tell provides, because the ask-actor currently issues the tell from its setup. This is what caused #14.

I'll mark this as a bug, since it can lead to such unexpected issues.

FAILED: test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing

Heh, a lot of FAILED tickets on the new CI; let's make sure to stabilize quickly.

10:38:19 Test Case 'ClusterAssociationTests.test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing' started at 2019-08-22 01:38:19.462
10:38:19 Assertion failed: Channel should always be present after the initial initialization.: file /code/Sources/DistributedActors/Cluster/ClusterShellState.swift, line 181
10:38:19 Current stack trace:
10:38:19 0    libswiftCore.so                    0x00007fe6b26687a0 _swift_stdlib_reportFatalErrorInFile + 115
10:38:19 1    libswiftCore.so                    0x00007fe6b25a39cc <unavailable> + 3463628
10:38:19 2    libswiftCore.so                    0x00007fe6b25a3abe <unavailable> + 3463870
10:38:19 3    libswiftCore.so                    0x00007fe6b239f31a <unavailable> + 1348378
10:38:19 4    libswiftCore.so                    0x00007fe6b2577ab2 <unavailable> + 3283634
10:38:19 5    libswiftCore.so                    0x00007fe6b239e5e9 <unavailable> + 1345001
10:38:19 6    DistributedActorsPackageTests.xctest 0x000055c85feb2958 <unavailable> + 3987800
10:38:19 7    DistributedActorsPackageTests.xctest 0x000055c85feb3b51 <unavailable> + 3992401
10:38:19 8    DistributedActorsPackageTests.xctest 0x000055c85fe876ea <unavailable> + 3811050
10:38:19 9    DistributedActorsPackageTests.xctest 0x000055c85fe85ee1 <unavailable> + 3804897
10:38:19 10   DistributedActorsPackageTests.xctest 0x000055c85fe8adff <unavailable> + 3825151
10:38:19 11   DistributedActorsPackageTests.xctest 0x000055c85fea4c9b <unavailable> + 3931291
10:38:19 12   DistributedActorsPackageTests.xctest 0x000055c85fe1133e <unavailable> + 3326782
10:38:19 13   DistributedActorsPackageTests.xctest 0x000055c86009725f <unavailable> + 5972575
10:38:19 14   DistributedActorsPackageTests.xctest 0x000055c860095dae <unavailable> + 5967278
10:38:19 15   DistributedActorsPackageTests.xctest 0x000055c8600959e6 <unavailable> + 5966310
10:38:19 16   DistributedActorsPackageTests.xctest 0x000055c85fdd8f66 <unavailable> + 3096422
10:38:19 17   DistributedActorsPackageTests.xctest 0x000055c85ffa4538 <unavailable> + 4977976
10:38:19 18   DistributedActorsPackageTests.xctest 0x000055c85ffa4936 <unavailable> + 4978998
10:38:19 19   DistributedActorsPackageTests.xctest 0x000055c85ffa2bd1 <unavailable> + 4971473
10:38:19 20   DistributedActorsPackageTests.xctest 0x000055c85ffa2831 <unavailable> + 4970545
10:38:19 21   DistributedActorsPackageTests.xctest 0x000055c85ffa2eb9 <unavailable> + 4972217
10:38:19 22   DistributedActorsPackageTests.xctest 0x000055c85fbe4554 <unavailable> + 1045844
10:38:19 23   DistributedActorsPackageTests.xctest 0x000055c85ffa968e <unavailable> + 4998798
10:38:19 24   DistributedActorsPackageTests.xctest 0x000055c85ffb2ef9 <unavailable> + 5037817
10:38:19 25   DistributedActorsPackageTests.xctest 0x000055c85fe0acac <unavailable> + 3300524
10:38:19 26   DistributedActorsPackageTests.xctest 0x000055c85ff96f71 <unavailable> + 4923249
10:38:19 27   DistributedActorsPackageTests.xctest 0x000055c85fe0b0dc <unavailable> + 3301596
10:38:19 28   DistributedActorsPackageTests.xctest 0x000055c85ff970c1 <unavailable> + 4923585
10:38:19 29   DistributedActorsPackageTests.xctest 0x000055c85ff969c1 <unavailable> + 4921793
10:38:19 30   DistributedActorsPackageTests.xctest 0x000055c85ff96b31 <unavailable> + 4922161
10:38:19 31   DistributedActorsPackageTests.xctest 0x000055c8600ab7b7 <unavailable> + 6055863
10:38:19 32   DistributedActorsPackageTests.xctest 0x000055c8600ab849 <unavailable> + 6056009
10:38:19 33   DistributedActorsPackageTests.xctest 0x000055c8600ac6d5 <unavailable> + 6059733
10:38:19 34   DistributedActorsPackageTests.xctest 0x000055c8600ac729 <unavailable> + 6059817
10:38:19 35   libpthread.so.0                    0x00007fe6b203e6db <unavailable> + 30427
10:38:19 36   libc.so.6                          0x00007fe6b04e8850 clone + 63
10:38:22 /code/Tests/DistributedActorsTests/Cluster/AssociationClusteredTests.swift:137: error: ClusterAssociationTests.test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing : failed - 
10:38:22         _ = try p7337.expectMessage()
10:38:22                      ^~~~~~~~~~~~~~
10:38:22 error: Did not receive message of type [HandshakeResult] within [3s], error: noMessagesInQueue
10:38:22 <EXPR>:0: error: ClusterAssociationTests.test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing : threw error "error(message: "\n        _ = try p7337.expectMessage()\n                     \u{1B}[0;31m^~~~~~~~~~~~~~\nerror: Did not receive message of type [HandshakeResult] within [3s], error: noMessagesInQueue\u{1B}[0;0m")"
10:38:22 ------------------------------------- ActorSystem(first, sact://first@localhost:9001) ------------------------------------------------
10:38:22 Captured log [first]: [trace] Started timer [TimerKey(receptionist/sync)] with generation [0]
10:38:22 Captured log [first]: [debug] Spawning [DistributedActors.Behavior<DistributedActors.DowningStrategyMessage>.setup((Function))], on path: [/system/cluster/downingStrategy]
10:38:22 Captured log [first]: [info] Binding to: [sact://first@localhost:9001]
10:38:22 Captured log [first]: [trace] Started timer [TimerKey(swim/periodic-ping)] with generation [0]
10:38:22 Captured log [first]: [trace] Successfully subscribed [ActorRef<ClusterEvent>(/system/cluster/downingStrategy/$sub-DistributedActors.ClusterEvent-y)]
10:38:22 Captured log [first]: [info] Bound to [IPv4]127.0.0.1/127.0.0.1:9001
10:38:22 Captured log [first]: [info] Extending handshake offer to sact://second@localhost:9002)
10:38:22 Captured log [first]: [warning] Failed await for outbound channel to sact://second@localhost:9002; Error was: ExecutionError(underlying: NIO.NIOConnectionError(host: "localhost", port: 9002, dnsAError: nil, dnsAAAAError: nil, connectionErrors: [NIO.SingleConnectionFailure(target: [IPv6]localhost/::1:9002, error: connect(descriptor:addr:size:) failed: Network is unreachable (errno: 101)), NIO.SingleConnectionFailure(target: [IPv4]localhost/127.0.0.1:9002, error: connection reset (error set): Connection refused (errno: 111))]))
10:38:22 Captured log [first]: [info] Schedule handshake retry to: [sact://second@localhost:9002] delay: [TimeAmount(100ms, nanoseconds: 100000000)]
10:38:22 Captured log [first]: [trace] Started timer [TimerKey(handshake-timer-sact://second@localhost:9002)] with generation [0]
10:38:22 Captured log [first]: [warning] Concurrently initiated handshakes from nodes [sact://first@localhost:9001](local) and [sact://second@localhost:9002](remote) detected! Resolving race by address ordering; This node WON (will negotiate and reply) tie-break. 
10:38:22 Captured log [first]: [warning] Supervision: Actor has FAULTED while processingUserMessages, handling with DistributedActors.StoppingSupervisor<DistributedActors.ClusterShell.Message>; Failure details: fault(Actor faulted while processing message '[inbound(DistributedActors.ClusterShell.InboundMessage.handshakeOffer(DistributedActors.Wire.HandshakeOffer(version: Version(0.0.1, reserved:0), from: sact://second:3987008798@localhost:9002, to: sact://first@localhost:9001), channel: SocketChannel { selectable = BaseSocket { fd=377 } , localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9001), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:34464) }, replyTo: NIO.EventLoopPromise<DistributedActors.Wire.HandshakeResponse>(futureResult: NIO.EventLoopFuture<DistributedActors.Wire.HandshakeResponse>)))]:Message':
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0xfff43) [0x55c85fbe4f43]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x100421) [0x55c85fbe5421]
10:38:22 /lib/x86_64-linux-gnu/libpthread.so.0(+0x12890) [0x7fe6b2049890]
10:38:22 /root/.swift/usr/lib/swift/linux/libswiftCore.so(+0x321aba) [0x7fe6b2577aba]
10:38:22 /root/.swift/usr/lib/swift/linux/libswiftCore.so(+0x1485e9) [0x7fe6b239e5e9]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x3cd958) [0x55c85feb2958]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x3ceb51) [0x55c85feb3b51]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x3a26ea) [0x55c85fe876ea]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x3a0ee1) [0x55c85fe85ee1]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x3a5dff) [0x55c85fe8adff]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x3bfc9b) [0x55c85fea4c9b]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x32c33e) [0x55c85fe1133e]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x5b225f) [0x55c86009725f]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x5b0dae) [0x55c860095dae]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x5b09e6) [0x55c8600959e6]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x2f3f66) [0x55c85fdd8f66]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x4bf538) [0x55c85ffa4538]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x4bf936) [0x55c85ffa4936]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x4bdbd1) [0x55c85ffa2bd1]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x4bd831) [0x55c85ffa2831]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x4bdeb9) [0x55c85ffa2eb9]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0xff554) [0x55c85fbe4554]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x4c468e) [0x55c85ffa968e]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x4cdef9) [0x55c85ffb2ef9]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x325cac) [0x55c85fe0acac]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x4b1f71) [0x55c85ff96f71]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x3260dc) [0x55c85fe0b0dc]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x4b20c1) [0x55c85ff970c1]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x4b19c1) [0x55c85ff969c1]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x4b1b31) [0x55c85ff96b31]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x5c67b7) [0x55c8600ab7b7]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x5c6849) [0x55c8600ab849]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x5c76d5) [0x55c8600ac6d5]
10:38:22 /code/.build/x86_64-unknown-linux/debug/DistributedActorsPackageTests.xctest(+0x5c7729) [0x55c8600ac729]
10:38:22 /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7fe6b203e6db]
10:38:22 /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7fe6b04e888f]
10:38:22 Captured log [first]: [error] Actor crashing, reason: [Actor faulted while processing message '[inbound(DistributedActors.ClusterShell.InboundMessage.handshakeOffer(DistributedActors.Wire.HandshakeOffer(version: Version(0.0.1, reserved:0), from: sact://second:3987008798@localhost:9002, to: sact://first@localhost:9001), channel: SocketChannel { selectable = BaseSocket { fd=377 } , localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9001), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:34464) }, replyTo: NIO.EventLoopPromise<DistributedActors.Wire.HandshakeResponse>(futureResult: NIO.EventLoopFuture<DistributedActors.Wire.HandshakeResponse>)))]:Message', with backtrace]:MessageProcessingFailure. Terminating actor, process and thread remain alive.
10:38:22 Captured log [first]: [trace] Cancel timer [TimerKey(handshake-timer-sact://second@localhost:9002)] with generation [0]
10:38:22 Captured log [first]: [trace] Received periodic trigger to ping random member
10:38:22 Captured log [first]: [trace] Received periodic trigger to ping random member
10:38:22 ========================================================================================================================
10:38:22 ------------------------------------- ActorSystem(second, sact://second@localhost:9002) ------------------------------------------------
10:38:22 Captured log [second]: [trace] Started timer [TimerKey(receptionist/sync)] with generation [0]
10:38:22 Captured log [second]: [debug] Spawning [DistributedActors.Behavior<DistributedActors.DowningStrategyMessage>.setup((Function))], on path: [/system/cluster/downingStrategy]
10:38:22 Captured log [second]: [info] Binding to: [sact://second@localhost:9002]
10:38:22 Captured log [second]: [trace] Started timer [TimerKey(swim/periodic-ping)] with generation [0]
10:38:22 Captured log [second]: [trace] Successfully subscribed [ActorRef<ClusterEvent>(/system/cluster/downingStrategy/$sub-DistributedActors.ClusterEvent-y)]
10:38:22 Captured log [second]: [info] Bound to [IPv4]127.0.0.1/127.0.0.1:9002
10:38:22 Captured log [second]: [info] Extending handshake offer to sact://first@localhost:9001)
10:38:22 Captured log [second]: [trace] Received periodic trigger to ping random member
10:38:22 Captured log [second]: [trace] Received periodic trigger to ping random member
10:38:22 ========================================================================================================================
10:38:22 Test Case 'ClusterAssociationTests.test_association_shouldEstablishSingleAssociationForConcurrentlyInitiatedHandshakes_incoming_outgoing' failed (3.23 seconds)

Move nodes in membership to removed when ready to do so

We need to decide when it is safe to move a node to removed, and do so; otherwise we'll continue gossiping dead nodes like we do now:

ping(
    lastKnownStatus: DistributedActors.SWIM.Status.alive(incarnation: 0),
    replyTo: ActorRef<SWIM.Ack>(/user / $ask - e),
    payload: DistributedActors.SWIM.Payload.membership([
        DistributedActors.SWIMMember(ref: ActorRef<SWIM.Message>(sact://[email protected]:7343/system/swim), status: DistributedActors.SWIM.Status.alive(incarnation: 0), protocolPeriod: 2),
        DistributedActors.SWIMMember(ref: ActorRef<SWIM.Message>(sact://[email protected]:7342/system/swim), status: DistributedActors.SWIM.Status.alive(incarnation: 0), protocolPeriod: 2),
        DistributedActors.SWIMMember(ref: ActorRef<SWIM.Message>(sact://[email protected]:7340/system/swim), status: DistributedActors.SWIM.Status.dead, protocolPeriod: 2),
        DistributedActors.SWIMMember(ref: ActorRef<SWIM.Message>(sact://[email protected]:7341/system/swim), status: DistributedActors.SWIM.Status.dead, protocolPeriod: 2),
        DistributedActors.SWIMMember(ref: ActorRef<SWIM.Message>(sact://[email protected]:7338/system/swim), status: DistributedActors.SWIM.Status.dead, protocolPeriod: 1),
        DistributedActors.SWIMMember(ref: ActorRef<SWIM.Message>(sact://[email protected]:7339/system/swim), status: DistributedActors.SWIM.Status.dead, protocolPeriod: 1)
    ]))

FAILED: WorkerPoolTests.test_workerPool_dynamic_removeDeadActors

That one is inherently racy AFAIR, so perhaps the timings are too aggressive; we'll see.

TBH enjoying the CI since it catches more such issues than the beefy box we used before.

15:47:39 - Test Case 'WorkerPoolTests.test_workerPool_dynamic_removeDeadActors' started at 2019-08-22 06:47:39.564
15:47:39 - /code/Tests/SakkanaActorTests/Pattern/WorkerPoolTests.swift:117: error: WorkerPoolTests.test_workerPool_dynamic_removeDeadActors : failed - 
15:47:39 -             try pB.expectMessage("work:b at worker-b", within: .milliseconds(200))
15:47:39 -                   ^~~~~~~~~~~~~~
15:47:39 - error: Did not receive expected [work:b at worker-b]:String within [200ms], error: noMessagesInQueue
15:47:40 - /code/Tests/SakkanaActorTests/Pattern/WorkerPoolTests.swift:145: error: WorkerPoolTests.test_workerPool_dynamic_removeDeadActors : failed - 
15:47:40 -         try pA.expectNoMessage(for: .milliseconds(50))
15:47:40 -               ^~~~~~~~~~~~~~~~
15:47:40 - error: Received unexpected message [work:b at worker-a]:String. Did not expect to receive any messages for [50ms].
15:47:40 - <EXPR>:0: error: WorkerPoolTests.test_workerPool_dynamic_removeDeadActors : threw error "error(message: "\n        try pA.expectNoMessage(for: .milliseconds(50))\n              \u{1B}[0;31m^~~~~~~~~~~~~~~~\nerror: Received unexpected message [work:b at worker-a]:String. Did not expect to receive any messages for [50ms].\u{1B}[0;0m")"
15:47:40 - 2019-08-22T06:47:40+0000 info: deadLetter=1 [WorkerPoolTests][ActorSystem.swift:214][/dead/letters] Dead letter: [command(SakkanaActor.ClusterShell.CommandMessage.unbind(SakkanaActor.BlockingReceptacle<()>))]:SakkanaActor.ClusterShell.Message was not delivered to ["/dead/letters"].
15:47:40 - 2019-08-22T06:47:40+0000 info: deadLetter=1 [WorkerPoolTests][DeathWatch.swift:154][/user/pB] Dead letter: [terminated(ref: AddressableActorRef(/user/worker-b), existenceConfirmed: true, addressTerminated: false)]:SakkanaActor.SystemMessage was not delivered to ["/user/pB#1688073797"].
15:47:40 - 2019-08-22T06:47:40+0000 info: deadLetter=1 [WorkerPoolTests][DeathWatch.swift:154][/user/pC] Dead letter: [terminated(ref: AddressableActorRef(/user/worker-c), existenceConfirmed: true, addressTerminated: false)]:SakkanaActor.SystemMessage was not delivered to ["/user/pC#1650329986"].
15:47:40 - 2019-08-22T06:47:40+0000 info: deadLetter=1 [WorkerPoolTests][Mailbox.swift:252][/user/workersMayDie] Dead letter: [terminated(ref: AddressableActorRef(/user/worker-c), existenceConfirmed: true, addressTerminated: false)]:SakkanaActor.SystemMessage was not delivered to ["/user/workersMayDie#2350647450"].
15:47:40 - 2019-08-22T06:47:40+0000 info: deadLetter=1 [WorkerPoolTests][Mailbox.swift:252][/user/workersMayDie] Dead letter: [terminated(ref: AddressableActorRef(/user/worker-b), existenceConfirmed: true, addressTerminated: false)]:SakkanaActor.SystemMessage was not delivered to ["/user/workersMayDie#2350647450"].
15:47:40 - 2019-08-22T06:47:40+0000 info: deadLetter=1 [WorkerPoolTests][Mailbox.swift:243][/user/workersMayDie] Dead letter: [listing(Listing<String>([/user/worker-b#3356275494]))]:SakkanaActor.WorkerPoolMessage<Swift.String> was not delivered to ["/user/workersMayDie#2350647450"].
15:47:40 - 2019-08-22T06:47:40+0000 info: deadLetter=1 [WorkerPoolTests][Mailbox.swift:243][/user/workersMayDie] Dead letter: [listing(Listing<String>([]))]:SakkanaActor.WorkerPoolMessage<Swift.String> was not delivered to ["/user/workersMayDie#2350647450"].
15:47:40 - Test Case 'WorkerPoolTests.test_workerPool_dynamic_removeDeadActors' failed (1.16 seconds)

https://jenkins11.pie.apple.com/jenkins/job/kmalawski-sakkana-sakkana-swift-5.0-ci/250/console

Underlying changes subReceive/adapters

They would be built on the same foundation: in both cases the functions are stored on the ActorShell itself, and the refs create special messages, similar to .closure, which the ActorShell then recognizes and applies to the stored function. That way we can keep the references valid even when the underlying function changes.
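
The mechanism can be sketched roughly like this (none of these types or names match the real internals; this only illustrates why refs stay valid across function swaps):

```swift
// Sketch only: a shell that stores adapter functions, keyed by id, so that
// refs need only carry the id rather than capture the function itself.
final class MiniShell<Message> {
    private var adapters: [String: (Any) -> Message?] = [:]

    // Re-registering under the same id replaces the stored function;
    // refs handed out earlier keep working since they only hold the id.
    func setAdapter(id: String, _ f: @escaping (Any) -> Message?) {
        adapters[id] = f
    }

    // A ref sends a special carrier message (similar to `.closure`), which
    // the shell resolves against its current storage before delivery.
    func deliver(adapterID: String, raw: Any, to receive: (Message) -> Void) {
        if let adapted = adapters[adapterID].flatMap({ $0(raw) }) {
            receive(adapted)
        }
    }
}
```

The key property: swapping the stored function between deliveries does not invalidate previously handed-out refs.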

SWIM logging to refer to nodes, not SWIM.Member

Membership deciding a node DEAD is currently logged as: 2019-08-28T19:04:37+0900 warning: [sact://ActorSystem@localhost:7337][SWIMShell.swift:301][thread:43407713][/system/swim] Marked [SWIMMember(ref: ActorRef<SWIM.Message>(/system/swim), status: DistributedActors.SWIM.Status.alive(incarnation: 0), protocolPeriod: 0)] as DEAD. Node was previously alive, and now forced DEAD. Current period [2]. This actually means that the self node 7337 was decided DEAD, but that is not obvious to an inexperienced reader. Membership information should be logged in terms of the node it is about, which is more understandable to readers of the logs.

This could follow through to keying the internal storage by Node as well, though it does not have to, as long as the logging is improved :)

ActorSystem.shutdown() should return a thing to await on

Currently it is blocking, with no option to just let it do its thing asynchronously.
We should perhaps return something backed by the blocking receptacle, as we're also shutting down the event loops etc. 🤔

    // FIXME: make termination async and add an awaitable that signals completion of the termination

    /// Forcefully stops this actor system and all actors that live within.
    ///
    /// - Warning: Blocks current thread until the system has terminated.
    ///            Do not call from within actors or you may deadlock shutting down the system.
    public func shutdown() {
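
One possible shape, sketched here with a semaphore-backed handle (all names are made up; the real implementation would likely reuse the existing receptacle type):

```swift
import Dispatch

// Hypothetical awaitable handle; the real API may expose the receptacle directly.
public final class ShutdownHandle {
    private let semaphore = DispatchSemaphore(value: 0)

    /// Called by the system once termination has completed.
    func markCompleted() { semaphore.signal() }

    /// Blocks the caller until the system has fully terminated.
    public func wait() { semaphore.wait() }
}

/// Sketch: kick off termination asynchronously and hand back an awaitable,
/// instead of blocking the calling thread inside shutdown() itself.
func shutdown(terminate: @escaping () -> Void) -> ShutdownHandle {
    let handle = ShutdownHandle()
    DispatchQueue.global().async {
        terminate() // stop actors, event loops, etc.
        handle.markCompleted()
    }
    return handle
}
```

Callers that want the old blocking behavior simply call wait() on the returned handle; callers inside actors can avoid the deadlock by not waiting.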

Tune SWIM probing timeouts

Best to put it onto some Amazon instances, poke around a bit, and select a realistic / good "default"; something like a few seconds is likely fine.

We currently probe every 300ms:

2019-08-28T22:22:57+0900 warning: [sact://ActorSystem@localhost:7337][SWIMShell.swift:219][thread:44036178][/system/swim] Did not receive ack from [ActorRef<SWIM.Message>(sact://[email protected]:7339/system/swim)] within [300ms]. Sending ping requests to other members.
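
If the probe timings become configurable, the knob could look roughly like this (all names here are hypothetical, not the real SWIM settings API):

```swift
// Hypothetical settings shape for illustration; real names may differ.
struct SWIMProbeSettings {
    /// How often a random member is probed, in seconds.
    var probeInterval: Double = 0.3
    /// How long to wait for an ack before pinging via other members, in seconds.
    var pingTimeout: Double = 0.3
}

// A more forgiving default for cloud deployments, per the issue:
var tuned = SWIMProbeSettings()
tuned.probeInterval = 1.0
tuned.pingTimeout = 3.0 // "something like a few seconds is likely fine"
```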

Don't swift-format pb.swift files, causes unnecessary code churn

We format all .swift files, but that's a bit annoying with GeneratedFromProto.pb.swift files which are generated sources.

One has to reformat after every proto generation...

--

Alternative: always trigger formatting on proto generated files as part of the generate_protos script.
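
Excluding the generated sources from the format run could be sketched like this (the Sources/Tests paths and swift-format's in-place flag are assumptions about the local setup):

```shell
#!/usr/bin/env bash
# Sketch: format all Swift sources except generated *.pb.swift files.

# List formattable sources under a directory, skipping generated protos.
list_formattable() {
  find "$1" -name '*.swift' ! -name '*.pb.swift'
}

# Feed the listing to swift-format in place (only if the tool is available).
if command -v swift-format >/dev/null 2>&1; then
  { list_formattable Sources; list_formattable Tests; } | xargs swift-format -i
fi
```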

TestKit cleanups and making consistent

E.g. how to use expectMessage within eventually blocks?

Check if we are missing any overloads.

Decide what to do about ShouldMatchers -- internally we want them, but should they be part of the public API, or are we ok exposing them? Do we guarantee binary/source compat for the test utils?

FAILED: VersionVectorTests.test_Dot_sort_shouldBeByReplicaThenByVersion

03:24:59 Test Case 'VersionVectorTests.test_Dot_sort_shouldBeByReplicaThenByVersion' started at 2019-09-05 10:24:59.586
03:24:59 /code/Tests/DistributedActorsTests/Clocks/VersionVectorTests.swift:194: error: VersionVectorTests.test_Dot_sort_shouldBeByReplicaThenByVersion : XCTAssertEqual failed: ("[dot:(actor:/user/B,1), dot:(actor:/user/B,2), dot:(actor:/user/A,3), dot:(actor:/user/C,5)]") is not equal to ("[dot:(actor:/user/A,3), dot:(actor:/user/B,1), dot:(actor:/user/B,2), dot:(actor:/user/C,5)]") - 
03:24:59         sortedDots.shouldEqual([dot2, dot3, dot1, dot4])
03:24:59                    ^~~~~~~~~~~~
03:24:59 error: [[dot:(actor:/user/B,1), dot:(actor:/user/B,2), dot:(actor:/user/A,3), dot:(actor:/user/C,5)]] does not equal expected: [[dot:(actor:/user/A,3), dot:(actor:/user/B,1), dot:(actor:/user/B,2), dot:(actor:/user/C,5)]]
03:24:59 
03:24:59 Test Case 'VersionVectorTests.test_Dot_sort_shouldBeByReplicaThenByVersion' failed (0.004 seconds)
03:24:59 Test Suite 'VersionVectorTests' failed at 2019-09-05 10:24:59.589
03:24:59 	 Executed 8 tests, with 1 failure (0 unexpected) in 0.194 (0.194) seconds
03:24:59 Test Suite 'WorkerPoolTests' started at 2019-09-05 10:24:59.589
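
The ordering the test name promises (by replica first, then by version) can be sketched as follows; the real Dot type lives in the version-vector implementation and will differ:

```swift
// Sketch of the intended comparator: replica ID first, then version,
// so that (/user/A, 3) sorts before (/user/B, 1).
struct Dot: Comparable, Equatable {
    let replicaID: String
    let version: Int

    static func < (lhs: Dot, rhs: Dot) -> Bool {
        if lhs.replicaID != rhs.replicaID {
            return lhs.replicaID < rhs.replicaID
        }
        return lhs.version < rhs.version
    }
}
```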
