FAILED: test_cleanUpAssociationTombstones message timeout #1128

Open
ktoso opened this issue Jul 3, 2023 · 1 comment
Labels
failed 💥 Failed tickets are CI or benchmarking failures, should be investigated as soon as possible

ktoso commented Jul 3, 2023

16:42:24 Test Case 'ClusterSystemTests.test_cleanUpAssociationTombstones' started at 2023-07-03 07:42:20.269
16:42:24 /code/Tests/DistributedClusterTests/ClusterSystemTests.swift:168: error: ClusterSystemTests.test_cleanUpAssociationTombstones : failed - No result within 3s for block at /code/Tests/DistributedClusterTests/ClusterSystemTests.swift:168. Queried 30 times, within 3s. Last error: Boom(message: "Expected tombstones to get cleared")
16:42:24 <EXPR>:0: error: ClusterSystemTests.test_cleanUpAssociationTombstones : threw error "No result within 3s for block at /code/Tests/DistributedClusterTests/ClusterSystemTests.swift:168. Queried 30 times, within 3s. Last error: Boom(message: "Expected tombstones to get cleared")"
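For context, the failure comes from an eventually-style polling assertion: the test kit re-runs a block until it succeeds or the deadline passes, then reports the last error (hence "Queried 30 times, within 3s"). A rough, illustrative sketch of that pattern follows; the names and signatures here are hypothetical, not the actual DistributedActorsTestKit API:

```swift
import Foundation

// Illustrative error type standing in for the test kit's timeout error.
struct PollTimeoutError: Error {
    let message: String
}

// Repeatedly run `block` until it succeeds or `timeout` elapses,
// sleeping `interval` between attempts; on timeout, surface the last error.
func eventually<T>(
    within timeout: TimeInterval,
    interval: TimeInterval = 0.1,
    _ block: () throws -> T
) throws -> T {
    let deadline = Date().addingTimeInterval(timeout)
    var attempts = 0
    var lastError: Error?
    repeat {
        attempts += 1
        do {
            return try block()
        } catch {
            lastError = error
            Thread.sleep(forTimeInterval: interval)
        }
    } while Date() < deadline
    throw PollTimeoutError(
        message: "No result within \(timeout)s. Queried \(attempts) times. "
            + "Last error: \(String(describing: lastError))"
    )
}

// Usage: the block fails twice, then succeeds on the third attempt.
var calls = 0
let value = try! eventually(within: 3.0, interval: 0.01) { () -> Int in
    calls += 1
    if calls < 3 { throw PollTimeoutError(message: "not yet") }
    return 42
}
assert(value == 42)
```

In the failing test, the polled block keeps throwing `Boom(message: "Expected tombstones to get cleared")` until the 3s deadline is exhausted, which is what the two error lines above report.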
16:42:24 ------------------------------------- ClusterSystem(ClusterSystemTests) ------------------------------------------------
16:42:24 [captured] [ClusterSystemTests] 7:42:20.2720 trace [/system/clusterEventStream] [ClusterSystem.swift:998] Assign identity
16:42:24 [captured] [ClusterSystemTests] // "actor/type": ClusterEventStreamActor
16:42:24 [captured] [ClusterSystemTests] 7:42:20.2730 trace [/system/clusterEventStream] [ClusterSystem.swift:1011] Actor ready
16:42:24 [captured] [ClusterSystemTests] // "actor/type": ClusterEventStreamActor
16:42:24 [captured] [ClusterSystemTests] 7:42:20.2730 trace [/system/receptionist] [ClusterSystem.swift:998] Assign identity
16:42:24 [captured] [ClusterSystemTests] // "actor/type": OpLogDistributedReceptionist
16:42:24 [captured] [ClusterSystemTests] 7:42:20.2730 trace [/system/receptionist] [ClusterSystem.swift:1011] Actor ready
16:42:24 [captured] [ClusterSystemTests] // "actor/type": OpLogDistributedReceptionist
16:42:24 [captured] [ClusterSystemTests] 7:42:20.2730 trace [[$wellKnown: receptionist]] [ClusterSystem.swift:1047] Actor ready, well-known as: receptionist
16:42:24 [captured] [ClusterSystemTests] // "actor/type": OpLogDistributedReceptionist
16:42:24 [captured] [ClusterSystemTests] 7:42:20.2730 debug [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:276] Initialized receptionist
16:42:24 [captured] [ClusterSystemTests] 7:42:20.2730 info  [ClusterSystem.swift:393] ClusterSystem [ClusterSystemTests] initialized; Cluster disabled, not listening for connections.
16:42:24 [captured] [ClusterSystemTests] 7:42:20.2730 debug [/system/receptionist-ref] [_OperationLogClusterReceptionistBehavior.swift:95] Initialized receptionist
16:42:24 [captured] [ClusterSystemTests] 7:42:20.2730 trace [/system/clusterEventStream] [ClusterEventStream.swift:172] Successfully added async subscriber [ObjectIdentifier(0x00007f3f5457d7e0)], offering membership snapshot
16:42:24 [captured] [ClusterSystemTests] 7:42:20.2740 trace [/system/clusterEventStream] [ClusterEventStream.swift:158] Successfully subscribed [_ActorRef<Cluster.Event>(/system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y)], offering membership snapshot
16:42:24 [captured] [ClusterSystemTests] 7:42:21.4750 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:759] Periodic ack tick
16:42:24 [captured] [ClusterSystemTests] 7:42:22.6740 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:759] Periodic ack tick
16:42:24 [captured] [ClusterSystemTests] 7:42:23.8740 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:759] Periodic ack tick
16:42:24 ========================================================================================================================
16:42:24 ------------------------------------- ClusterSystem(local, sact://local@127.0.0.1:9002) ------------------------------------------------
16:42:24 [captured] [local] 7:42:20.2820 trace [/system/clusterEventStream] [ClusterSystem.swift:998] Assign identity
16:42:24 [captured] [local] // "actor/type": ClusterEventStreamActor
16:42:24 [captured] [local] 7:42:20.2820 trace [/system/clusterEventStream] [ClusterSystem.swift:1011] Actor ready
16:42:24 [captured] [local] // "actor/type": ClusterEventStreamActor
16:42:24 [captured] [local] 7:42:20.2830 trace [/user/swim] [ClusterSystem.swift:998] Assign identity
16:42:24 [captured] [local] // "actor/type": SWIMActor
16:42:24 [captured] [local] 7:42:20.2830 trace [/user/swim] [ClusterSystem.swift:1011] Actor ready
16:42:24 [captured] [local] // "actor/type": SWIMActor
16:42:24 [captured] [local] 7:42:20.2840 trace [/system/receptionist] [ClusterSystem.swift:998] Assign identity
16:42:24 [captured] [local] // "actor/type": OpLogDistributedReceptionist
16:42:24 [captured] [local] 7:42:20.2840 trace [/system/receptionist] [ClusterSystem.swift:1011] Actor ready
16:42:24 [captured] [local] // "actor/type": OpLogDistributedReceptionist
16:42:24 [captured] [local] 7:42:20.2840 trace [[$wellKnown: receptionist]] [ClusterSystem.swift:1047] Actor ready, well-known as: receptionist
16:42:24 [captured] [local] // "actor/type": OpLogDistributedReceptionist
16:42:24 [captured] [local] 7:42:20.2840 debug [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:276] Initialized receptionist
16:42:24 [captured] [local] 7:42:20.2840 trace [/system/downingStrategy] [ClusterSystem.swift:998] Assign identity
16:42:24 [captured] [local] // "actor/type": DowningStrategyShell
16:42:24 [captured] [local] 7:42:20.2840 trace [/system/clusterEventStream] [ClusterEventStream.swift:172] Successfully added async subscriber [ObjectIdentifier(0x00007f3f6c498ad0)], offering membership snapshot
16:42:24 [captured] [local] 7:42:20.2840 trace [/system/downingStrategy] [ClusterSystem.swift:1011] Actor ready
16:42:24 [captured] [local] // "actor/type": DowningStrategyShell
16:42:24 [captured] [local] 7:42:20.2850 info  [ClusterSystem.swift:387] ClusterSystem [local] initialized, listening on: sact://local@127.0.0.1:9002: _ActorRef<ClusterShell.Message>(/system/cluster)
16:42:24 [captured] [local] 7:42:20.2850 debug [/system/receptionist-ref] [_OperationLogClusterReceptionistBehavior.swift:95] Initialized receptionist
16:42:24 [captured] [local] 7:42:20.2850 info  [ClusterSystem.swift:389] Setting in effect: .autoLeaderElection: LeadershipSelectionSettings(underlying: DistributedCluster.ClusterSystemSettings.LeadershipSelectionSettings.(unknown context at $56290681d140)._LeadershipSelectionSettings.lowestReachable(minNumberOfMembers: 2))
16:42:24 [captured] [local] 7:42:20.2850 trace [/system/clusterEventStream] [ClusterEventStream.swift:172] Successfully added async subscriber [ObjectIdentifier(0x00007f3fc0256570)], offering membership snapshot
16:42:24 [captured] [local] 7:42:20.2850 info  [ClusterSystem.swift:390] Setting in effect: .downingStrategy: DowningStrategySettings(underlying: DistributedCluster.DowningStrategySettings.(unknown context at $56290681c4a0)._DowningStrategySettings.timeout(DistributedCluster.TimeoutBasedDowningStrategySettings(downUnreachableMembersAfter: 1.0 seconds)))
16:42:24 [captured] [local] 7:42:20.2850 info  [ClusterSystem.swift:391] Setting in effect: .onDownAction: OnDownActionStrategySettings(underlying: DistributedCluster.OnDownActionStrategySettings.(unknown context at $56290681c598)._OnDownActionStrategySettings.gracefulShutdown(delay: 3.0 seconds))
16:42:24 [captured] [local] 7:42:20.2850 trace [/system/clusterEventStream] [ClusterEventStream.swift:158] Successfully subscribed [_ActorRef<Cluster.Event>(/system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y)], offering membership snapshot
16:42:24 [captured] [local] 7:42:20.2850 trace [/system/clusterEventStream] [ClusterEventStream.swift:158] Successfully subscribed [_ActorRef<Cluster.Event>(/system/nodeDeathWatcher/$sub-DistributedCluster.Cluster.Event-y)], offering membership snapshot
16:42:24 [captured] [local] 7:42:20.2850 info [/system/cluster] [ClusterShell.swift:396] Binding to: [sact://local@127.0.0.1:9002]
16:42:24 [captured] [local] 7:42:20.2850 trace [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:199] Membership snapshot: Membership(count: 0, leader: .none, members: [])
16:42:24 [captured] [local] 7:42:20.2850 trace [/system/cluster/leadership] [Leadership.swift:114] Configured with LowestReachableMember(minimumNumberOfMembersToDecide: 2, loseLeadershipIfBelowMinNrOfMembers: false)
16:42:24 [captured] [local] 7:42:20.2860 info [/system/cluster/leadership] [Leadership.swift:246] Not enough members [1/2] to run election, members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)]
16:42:24 [captured] [local] // "leadership/election": DistributedCluster.Leadership.LowestReachableMember
16:42:24 [captured] [local] 7:42:20.2860 trace [/system/clusterEventStream] [ClusterEventStream.swift:158] Successfully subscribed [_ActorRef<Cluster.Event>(/system/cluster/leadership)], offering membership snapshot
16:42:24 [captured] [local] 7:42:20.2870 info [/system/cluster] [ClusterShell.swift:407] Bound to [IPv4]127.0.0.1/127.0.0.1:9002
16:42:24 [captured] [local] 7:42:20.2880 trace [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:211] Node change: sact://local:14716080274836230658@127.0.0.1:9002 :: [unknown] -> [joining]!
16:42:24 [captured] [local] // "node": sact://local:14716080274836230658@127.0.0.1:9002
16:42:24 [captured] [local] 7:42:20.2880 info [/system/cluster/leadership] [Leadership.swift:246] Not enough members [1/2] to run election, members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)]
16:42:24 [captured] [local] // "leadership/election": DistributedCluster.Leadership.LowestReachableMember
16:42:24 [captured] [local] 7:42:20.2880 trace [/system/clusterEventStream] [ClusterEventStream.swift:195] Published event membershipChange(sact://local:14716080274836230658@127.0.0.1:9002 :: [unknown] -> [joining]) to 3 subscribers and 2 async subscribers
16:42:24 [captured] [local] // "eventStream/asyncSubscribers": 
16:42:24 [captured] [local] //   ObjectIdentifier(0x00007f3fc0256570)
16:42:24 [captured] [local] //   ObjectIdentifier(0x00007f3f6c498ad0)
16:42:24 [captured] [local] // "eventStream/event": DistributedCluster.Cluster.Event.membershipChange(sact://local:14716080274836230658@127.0.0.1:9002 :: [unknown] -> [joining])
16:42:24 [captured] [local] // "eventStream/subscribers": 
16:42:24 [captured] [local] //   /system/nodeDeathWatcher/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] //   /system/cluster/leadership
16:42:24 [captured] [local] //   /system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] 7:42:20.2880 trace [/system/clusterEventStream] [ClusterEventStream.swift:158] Successfully subscribed [_ActorRef<Cluster.Event>(/system/cluster/gossip/$sub-DistributedCluster.Cluster.Event-y)], offering membership snapshot
16:42:24 [captured] [local] 7:42:20.2880 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [local] // "gossip/identifier": membership
16:42:24 [captured] [local] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [local] //   owner: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [local] //     sact://local@127.0.0.1:9002 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 1
16:42:24 [captured] [local] // ),
16:42:24 [captured] [local] //   membership: Membership(
16:42:24 [captured] [local] //     _members: [
16:42:24 [captured] [local] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [local] //     ],
16:42:24 [captured] [local] //     _leaderNode: nil,
16:42:24 [captured] [local] //   ),
16:42:24 [captured] [local] // )
16:42:24 [captured] [local] 7:42:20.2920 debug [/system/cluster] [ClusterShell.swift:707] Association already allocated for remote: sact://remote@127.0.0.1:9003, existing association: [AssociatedState(associating(queue: DistributedCluster.MPSCLinkedQueue<DistributedCluster.TransportEnvelope>), selfNode: sact://local:14716080274836230658@127.0.0.1:9002, remoteNode: sact://remote:850464202261074644@127.0.0.1:9003)]
16:42:24 [captured] [local] 7:42:20.2920 debug [/system/cluster] [ClusterShell.swift:727] Initiated handshake: InitiatedState(remoteNode: sact://remote@127.0.0.1:9003, localNode: sact://local@127.0.0.1:9002, channel: nil)
16:42:24 [captured] [local] // "cluster/associatedNodes": [sact://remote:850464202261074644@127.0.0.1:9003]
16:42:24 [captured] [local] 7:42:20.2920 debug [/system/cluster] [ClusterShell.swift:751] Extending handshake offer
16:42:24 [captured] [local] // "handshake/remoteNode": sact://remote@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:20.2940 trace [/system/transport.client] [TransportPipelines.swift:58] Offering handshake [DistributedCluster._ProtoHandshakeOffer:
16:42:24 version {
16:42:24   major: 1
16:42:24 }
16:42:24 originNode {
16:42:24   endpoint {
16:42:24     protocol: "sact"
16:42:24     system: "local"
16:42:24     hostname: "127.0.0.1"
16:42:24     port: 9002
16:42:24   }
16:42:24   nid: 14716080274836230658
16:42:24 }
16:42:24 targetEndpoint {
16:42:24   protocol: "sact"
16:42:24   system: "remote"
16:42:24   hostname: "127.0.0.1"
16:42:24   port: 9003
16:42:24 }
16:42:24 ]
16:42:24 [captured] [local] 7:42:20.2940 debug [/system/cluster] [ClusterShell.swift:707] Association already allocated for remote: sact://remote@127.0.0.1:9003, existing association: [AssociatedState(associating(queue: DistributedCluster.MPSCLinkedQueue<DistributedCluster.TransportEnvelope>), selfNode: sact://local:14716080274836230658@127.0.0.1:9002, remoteNode: sact://remote:850464202261074644@127.0.0.1:9003)]
16:42:24 [captured] [local] 7:42:20.2950 debug [/system/cluster] [ClusterShell.swift:733] Handshake in other state: inFlight(DistributedCluster.HandshakeStateMachine.InFlightState(state: DistributedCluster.ClusterShellState(log: Logging.Logger(handler: DistributedActorsTestKit.LogCaptureLogHandler(label: "local", capture: DistributedActorsTestKit.LogCapture, metadata: ["cluster/node": sact://local@127.0.0.1:9002, "actor/path": /system/cluster]), label: "LogCapture(local)"), settings: DistributedCluster.ClusterSystemSettings(actor: DistributedCluster.ClusterSystemSettings.ActorSettings(maxBehaviorNestingDepth: 128), actorMetadata: DistributedCluster.ActorIDMetadataSettings(autoIncludedMetadata: [], propagateMetadata: Set(["$type", "$path", "$wellKnown"]), encodeCustomMetadata: (Function), decodeCustomMetadata: (Function)), plugins: DistributedCluster.PluginsSettings(plugins: []), receptionist: DistributedCluster.ReceptionistSettings(traceLogLevel: nil, listingFlushDelay: 0.25 seconds, ackPullReplicationIntervalSlow: 1.2 seconds, syncBatchSize: 50), serialization: DistributedCluster.Serialization.Settings(insecureSerializeNotRegisteredMessages: true, defaultSerializerID: serializerID:jsonCodable(3), localNode: sact://local:14716080274836230658@127.0.0.1:9002, inboundSerializerManifestMappings: [:], specializedSerializerMakers: [:], typeToManifestRegistry: [:], manifest2TypeRegistry: [:], allocator: NIOCore.ByteBufferAllocator(malloc: (Function), realloc: (Function), free: (Function), memcpy: (Function))), enabled: true, discovery: nil, endpoint: sact://local@127.0.0.1:9002, nid: 14716080274836230658, bindTimeout: 3.0 seconds, unbindTimeout: 3.0 seconds, connectTimeout: 0.5 seconds, handshakeReconnectBackoff: DistributedCluster.ExponentialBackoffStrategy(initialInterval: 0.3 seconds, multiplier: 1.5, capInterval: 3.0 seconds, randomFactor: 0.25, limitedRemainingAttempts: Optional(32), currentBaseInterval: 0.3 seconds), associationTombstoneTTL: 0.0 seconds, associationTombstoneCleanupInterval: 600.0 seconds, _protocolVersion: Version(1.0.0, reserved:0), membershipGossipInterval: 1.0 seconds, membershipGossipAcknowledgementTimeout: 1.0 seconds, membershipGossipIntervalRandomFactor: 0.2, autoLeaderElection: DistributedCluster.ClusterSystemSettings.LeadershipSelectionSettings(underlying: DistributedCluster.ClusterSystemSettings.LeadershipSelectionSettings.(unknown context at $56290681d140)._LeadershipSelectionSettings.lowestReachable(minNumberOfMembers: 2)), remoteCall: DistributedCluster.ClusterSystemSettings.RemoteCallSettings(defaultTimeout: 5.0 seconds, codableErrorAllowance: DistributedCluster.ClusterSystemSettings.RemoteCallSettings.CodableErrorAllowanceSettings(underlying: DistributedCluster.ClusterSystemSettings.RemoteCallSettings.CodableErrorAllowanceSettings.CodableErrorAllowance.all)), tls: nil, tlsPassphraseCallback: nil, eventLoopGroup: Optional(MultiThreadedEventLoopGroup { threadPattern = NIO-ELT-215-#* }), allocator: NIOCore.ByteBufferAllocator(malloc: (Function), realloc: (Function), free: (Function), memcpy: (Function)), downingStrategy: DistributedCluster.DowningStrategySettings(underlying: DistributedCluster.DowningStrategySettings.(unknown context at $56290681c4a0)._DowningStrategySettings.timeout(DistributedCluster.TimeoutBasedDowningStrategySettings(downUnreachableMembersAfter: 1.0 seconds))), onDownAction: DistributedCluster.OnDownActionStrategySettings(underlying: DistributedCluster.OnDownActionStrategySettings.(unknown context at $56290681c598)._OnDownActionStrategySettings.gracefulShutdown(delay: 3.0 seconds)), swim: SWIM.SWIM.Settings(logger: Logging.Logger(handler: DistributedActorsTestKit.LogCaptureLogHandler(label: "local", capture: DistributedActorsTestKit.LogCapture, metadata: ["cluster/node": sact://local@127.0.0.1:9002]), label: "LogCapture(local)"), gossip: SWIM.SWIMGossipSettings(maxNumberOfMessagesPerGossip: 12, gossipedEnoughTimesBaseMultiplier: 3.0), lifeguard: SWIM.SWIMLifeguardSettings(maxLocalHealthMultiplier: 2, suspicionTimeoutMax: 1.0 seconds, indirectPingTimeoutMultiplier: 0.8, suspicionTimeoutMin: 0.5 seconds, maxIndependentSuspicions: 4), metrics: SWIM.SWIMMetricsSettings(segmentSeparator: ".", systemName: Optional("local"), labelPrefix: Optional("cluster.swim")), node: nil, indirectProbeCount: 3, tombstoneTimeToLiveInTicks: 14400, tombstoneCleanupIntervalInTicks: 300, initialContactPoints: Set([]), probeInterval: 1.0 seconds, pingTimeout: 0.3 seconds, unreachability: SWIM.SWIM.Settings.UnreachabilitySettings.enabled, timeSourceNow: (Function), traceLogLevel: nil), logMembershipChanges: Optional(Logging.Logger.Level.debug), traceLogLevel: nil, logging: DistributedCluster.LoggingSettings(customizedLogger: true, _logger: Logging.Logger(handler: DistributedActorsTestKit.LogCaptureLogHandler(label: "local", capture: DistributedActorsTestKit.LogCapture, metadata: ["cluster/node": sact://local@127.0.0.1:9002]), label: "LogCapture(local)"), useBuiltInFormatter: true, verboseTimers: false, verboseSpawning: false, verboseResolve: false), metrics: DistributedCluster.MetricsSettings(segmentSeparator: ".", _systemName: Optional("local"), systemMetricsPrefix: nil, clusterSWIMMetricsPrefix: Optional("cluster.swim")), instrumentation: DistributedCluster.ClusterSystemSettings.InstrumentationSettings(makeInternalActorTransportInstrumentation: (Function), makeReceptionistInstrumentation: (Function)), installSwiftBacktrace: false, threadPoolSize: 8), events: DistributedCluster.ClusterEventStream(actor: Optional(DistributedCluster.ClusterEventStreamActor)), channel: ServerSocketChannel { BaseSocket { fd=216 }, active = true, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9002), remoteAddress = nil }, selfNode: sact://local:14716080274836230658@127.0.0.1:9002, eventLoopGroup: MultiThreadedEventLoopGroup { threadPattern = NIO-ELT-215-#* }, allocator: NIOCore.ByteBufferAllocator(malloc: (Function), realloc: (Function), free: (Function), memcpy: (Function)), _handshakes: [sact://remote@127.0.0.1:9003: DistributedCluster.HandshakeStateMachine.State.initiated(InitiatedState(remoteNode: sact://remote@127.0.0.1:9003, localNode: sact://local@127.0.0.1:9002, channel: SocketChannel { BaseSocket { fd=239 }, active = true, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43934), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9003) }))], gossiperControl: DistributedCluster.GossiperControl<DistributedCluster.Cluster.MembershipGossip, DistributedCluster.Cluster.MembershipGossip>(ref: _ActorRef<GossipShell<DistributedCluster.Cluster.MembershipGossip, DistributedCluster.Cluster.MembershipGossip>.Message>(/system/cluster/gossip)), _latestGossip: DistributedCluster.Cluster.MembershipGossip(owner: sact://local:14716080274836230658@127.0.0.1:9002, seen: Cluster.MembershipGossip.SeenTable([sact://local:14716080274836230658@127.0.0.1:9002: [node:sact://local@127.0.0.1:9002: 1]]), membership: Membership(count: 1, leader: .none, members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)])))))
16:42:24 [captured] [local] 7:42:20.2960 debug [/system/transport.client] [TransportPipelines.swift:83] Received handshake accept from: [sact://remote:850464202261074644@127.0.0.1:9003]
16:42:24 [captured] [local] // "handshake/channel": SocketChannel { BaseSocket { fd=239 }, active = true, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43934), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9003) }
16:42:24 [captured] [local] 7:42:20.2960 debug [/system/cluster] [ClusterShell.swift:989] Accept association with sact://remote:850464202261074644@127.0.0.1:9003!
16:42:24 [captured] [local] // "handshake/channel": SocketChannel { BaseSocket { fd=239 }, active = true, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43934), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9003) }
16:42:24 [captured] [local] // "handshake/localNode": sact://local@127.0.0.1:9002
16:42:24 [captured] [local] // "handshake/remoteNode": sact://remote@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:20.2970 trace [/system/cluster] [ClusterShell.swift:1018] Associated with: sact://remote:850464202261074644@127.0.0.1:9003
16:42:24 [captured] [local] // "membership": Membership(count: 2, leader: .none, members: [Member(sact://remote:850464202261074644@127.0.0.1:9003, status: joining, reachability: reachable), Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)])
16:42:24 [captured] [local] // "membership/change": sact://remote:850464202261074644@127.0.0.1:9003 :: [unknown] -> [joining]
16:42:24 [captured] [local] 7:42:20.2970 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [local] // "gossip/identifier": membership
16:42:24 [captured] [local] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [local] //   owner: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [local] //     sact://local@127.0.0.1:9002 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 2
16:42:24 [captured] [local] // ),
16:42:24 [captured] [local] //   membership: Membership(
16:42:24 [captured] [local] //     _members: [
16:42:24 [captured] [local] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [local] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: joining, reachability: reachable),
16:42:24 [captured] [local] //     ],
16:42:24 [captured] [local] //     _leaderNode: nil,
16:42:24 [captured] [local] //   ),
16:42:24 [captured] [local] // )
16:42:24 [captured] [local] 7:42:20.2970 debug [/user/swim] [SWIMInstance.swift:899] Received ack from [SWIMActor(sact://remote@127.0.0.1:9003/user/swim)] with incarnation [0] and payload [membership([SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 0)])]
16:42:24 [captured] [local] // "swim/incarnation": 0
16:42:24 [captured] [local] // "swim/members/all": 
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [local] // "swim/members/count": 2
16:42:24 [captured] [local] // "swim/protocolPeriod": 1
16:42:24 [captured] [local] // "swim/suspects/count": 0
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [local] 7:42:20.2970 trace [/user/swim] [SWIMInstance.swift:190] Adjusted LHM multiplier
16:42:24 [captured] [local] // "swim/lhm": 0
16:42:24 [captured] [local] // "swim/lhm/event": successfulProbe
16:42:24 [captured] [local] 7:42:20.2980 trace [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:211] Node change: sact://remote:850464202261074644@127.0.0.1:9003 :: [unknown] -> [joining]!
16:42:24 [captured] [local] // "node": sact://remote:850464202261074644@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:20.2980 debug [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:970] New member, contacting its receptionist: sact://remote@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:20.2980 trace [/system/clusterEventStream] [ClusterEventStream.swift:195] Published event membershipChange(sact://remote:850464202261074644@127.0.0.1:9003 :: [unknown] -> [joining]) to 4 subscribers and 2 async subscribers
16:42:24 [captured] [local] // "eventStream/asyncSubscribers": 
16:42:24 [captured] [local] //   ObjectIdentifier(0x00007f3fc0256570)
16:42:24 [captured] [local] //   ObjectIdentifier(0x00007f3f6c498ad0)
16:42:24 [captured] [local] // "eventStream/event": DistributedCluster.Cluster.Event.membershipChange(sact://remote:850464202261074644@127.0.0.1:9003 :: [unknown] -> [joining])
16:42:24 [captured] [local] // "eventStream/subscribers": 
16:42:24 [captured] [local] //   /system/nodeDeathWatcher/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] //   /system/cluster/leadership
16:42:24 [captured] [local] //   /system/cluster/gossip/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] //   /system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] 7:42:20.2980 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:812] Replicate ops to: [$wellKnown: receptionist]
16:42:24 [captured] [local] 7:42:20.2980 debug [/system/receptionist-ref] [_OperationLogClusterReceptionistBehavior.swift:626] New member, contacting its receptionist: sact://remote@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:20.2980 debug [/system/cluster/leadership] [Leadership.swift:303] Selected new leader: [nil -> Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable)]
16:42:24 [captured] [local] // "leadership/election": DistributedCluster.Leadership.LowestReachableMember
16:42:24 [captured] [local] // "membership": Membership(count: 2, leader: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable), members: [Member(sact://remote:850464202261074644@127.0.0.1:9003, status: joining, reachability: reachable), Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)])
16:42:24 [captured] [local] 7:42:20.2980 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:827] No ops to replay
16:42:24 [captured] [local] // "receptionist/ops/replay/atSeqNr": 0
16:42:24 [captured] [local] // "receptionist/peer": [$wellKnown: receptionist]
16:42:24 [captured] [local] 7:42:20.2980 trace [/system/cluster/gossip] [Gossiper+Shell.swift:359] Got introduced to peer [_ActorRef<GossipShell<DistributedCluster.Cluster.MembershipGossip, DistributedCluster.Cluster.MembershipGossip>.Message>(sact://remote@127.0.0.1:9003/system/cluster/gossip)]
16:42:24 [captured] [local] // "gossip/peerCount": 1
16:42:24 [captured] [local] // "gossip/peers": [sact://remote@127.0.0.1:9003/system/cluster/gossip]
16:42:24 [captured] [local] 7:42:20.2980 trace [/system/cluster/gossip] [Gossiper+Shell.swift:272] Schedule next gossip round in 987ms 619μs (1s ± 20.0%)
16:42:24 [captured] [local] 7:42:20.2980 debug [/user/swim] [SWIMActor.swift:135] Sending ping
16:42:24 [captured] [local] // "swim/gossip/payload": membership([SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0), SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 1)])
16:42:24 [captured] [local] // "swim/incarnation": 0
16:42:24 [captured] [local] // "swim/members/all": 
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [local] // "swim/members/count": 2
16:42:24 [captured] [local] // "swim/protocolPeriod": 1
16:42:24 [captured] [local] // "swim/suspects/count": 0
16:42:24 [captured] [local] // "swim/target": SWIMActor(sact://remote@127.0.0.1:9003/user/swim)
16:42:24 [captured] [local] // "swim/timeout": 1.0 seconds
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [local] 7:42:20.2980 debug [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:228] Received: remoteActorWatched(watcher: _AddressableActorRef(/system/cluster/gossip), remoteNode: sact://remote:850464202261074644@127.0.0.1:9003)
16:42:24 [captured] [local] 7:42:20.2990 debug [/system/cluster] [ClusterShellState.swift:428] Leader change: LeadershipChange(oldLeader: nil, newLeader: Optional(Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)), file: "/code/Sources/DistributedCluster/Cluster/Cluster+Membership.swift", line: 396)
16:42:24 [captured] [local] // "membership/count": 2
16:42:24 [captured] [local] 7:42:20.2990 trace [/system/cluster] [ClusterShellState.swift:468] Membership updated on [sact://local@127.0.0.1:9002] by leadershipChange(DistributedCluster.Cluster.LeadershipChange(oldLeader: nil, newLeader: Optional(Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)), file: "/code/Sources/DistributedCluster/Cluster/Cluster+Membership.swift", line: 396)): leader: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable)
16:42:24   sact://local:14716080274836230658@127.0.0.1:9002 status [joining]
16:42:24   sact://remote:850464202261074644@127.0.0.1:9003 status [joining]
16:42:24 [captured] [local] 7:42:20.3000 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [local] // "gossip/identifier": membership
16:42:24 [captured] [local] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [local] //   owner: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [local] //     sact://local@127.0.0.1:9002 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 3
16:42:24 [captured] [local] // ),
16:42:24 [captured] [local] //   membership: Membership(
16:42:24 [captured] [local] //     _members: [
16:42:24 [captured] [local] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [local] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: joining, reachability: reachable),
16:42:24 [captured] [local] //     ],
16:42:24 [captured] [local] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   ),
16:42:24 [captured] [local] // )
16:42:24 [captured] [local] 7:42:20.3000 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [local] // "gossip/identifier": membership
16:42:24 [captured] [local] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [local] //   owner: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [local] //     sact://local@127.0.0.1:9002 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 4
16:42:24 [captured] [local] // ),
16:42:24 [captured] [local] //   membership: Membership(
16:42:24 [captured] [local] //     _members: [
16:42:24 [captured] [local] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [local] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: joining, reachability: reachable),
16:42:24 [captured] [local] //     ],
16:42:24 [captured] [local] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   ),
16:42:24 [captured] [local] // )
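The `seen:` table printed in the gossip payloads above is effectively a map of per-node vector clocks: each row records which version of each node a given observer has seen, and gossip is converged once every observer has caught up to the globally latest version of every node. A minimal sketch of that idea, with simplified illustrative types (`Node`, `SeenTable` here are not the actual `DistributedCluster` types):

```swift
// Simplified sketch of a gossip "seen table": per-observer vector clocks.
struct Node: Hashable { let name: String }

struct SeenTable {
    // For each observer node: the versions it has observed, per node.
    var observed: [Node: [Node: Int]] = [:]

    // Merge an incoming observation row, keeping per-entry maxima.
    mutating func merge(from observer: Node, versions: [Node: Int]) {
        var row = observed[observer] ?? [:]
        for (node, version) in versions {
            row[node] = max(row[node] ?? 0, version)
        }
        observed[observer] = row
    }

    // Converged when every observer has seen the latest known version of every node.
    func converged() -> Bool {
        var latest: [Node: Int] = [:]
        for row in observed.values {
            for (node, version) in row {
                latest[node] = max(latest[node] ?? 0, version)
            }
        }
        return observed.values.allSatisfy { row in
            latest.allSatisfy { node, version in (row[node] ?? -1) >= version }
        }
    }
}
```

In the captured log, once both the `local` and `remote` rows show `local @ 4` and `remote @ 5`, the shell logs `"gossip/converged": true` and leader actions become possible.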
16:42:24 [captured] [local] 7:42:20.3000 trace [/system/clusterEventStream] [ClusterEventStream.swift:195] Published event leadershipChange(DistributedCluster.Cluster.LeadershipChange(oldLeader: nil, newLeader: Optional(Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)), file: "/code/Sources/DistributedCluster/Cluster/Cluster+Membership.swift", line: 396)) to 4 subscribers and 2 async subscribers
16:42:24 [captured] [local] // "eventStream/asyncSubscribers": 
16:42:24 [captured] [local] //   ObjectIdentifier(0x00007f3fc0256570)
16:42:24 [captured] [local] //   ObjectIdentifier(0x00007f3f6c498ad0)
16:42:24 [captured] [local] // "eventStream/event": DistributedCluster.Cluster.Event.leadershipChange(DistributedCluster.Cluster.LeadershipChange(oldLeader: nil, newLeader: Optional(Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)), file: "/code/Sources/DistributedCluster/Cluster/Cluster+Membership.swift", line: 396))
16:42:24 [captured] [local] // "eventStream/subscribers": 
16:42:24 [captured] [local] //   /system/nodeDeathWatcher/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] //   /system/cluster/leadership
16:42:24 [captured] [local] //   /system/cluster/gossip/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] //   /system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] 7:42:20.3010 trace  [ClusterSystem.swift:1405] Receive invocation: InvocationMessage(callID: 8DB32FCF-3E8D-4658-ABAE-6A9B62ACAD53, target: DistributedCluster.SWIMActor.ping(origin:payload:sequenceNumber:), genericSubstitutions: [], arguments: 3) to: sact://local:14716080274836230658@127.0.0.1:9002/user/swim["$path": /user/swim]
16:42:24 [captured] [local] // "invocation": InvocationMessage(callID: 8DB32FCF-3E8D-4658-ABAE-6A9B62ACAD53, target: DistributedCluster.SWIMActor.ping(origin:payload:sequenceNumber:), genericSubstitutions: [], arguments: 3)
16:42:24 [captured] [local] // "recipient/id": sact://local:14716080274836230658@127.0.0.1:9002/user/swim["$path": /user/swim]
16:42:24 [captured] [local] 7:42:20.3010 trace [sact://local@127.0.0.1:9002/user/swim] [ClusterSystem.swift:948] Resolved as local instance
16:42:24 [captured] [local] // "actor": SWIMActor(/user/swim)
16:42:24 [captured] [local] 7:42:20.3020 trace [/user/swim] [SWIMActor.swift:427] Received ping@1
16:42:24 [captured] [local] // "swim/incarnation": 0
16:42:24 [captured] [local] // "swim/members/all": 
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [local] // "swim/members/count": 2
16:42:24 [captured] [local] // "swim/ping/origin": sact://remote@127.0.0.1:9003/user/swim
16:42:24 [captured] [local] // "swim/ping/payload": membership([SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 0), SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 1)])
16:42:24 [captured] [local] // "swim/ping/seqNr": 1
16:42:24 [captured] [local] // "swim/protocolPeriod": 1
16:42:24 [captured] [local] // "swim/suspects/count": 0
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [local] 7:42:20.3020 trace [/user/swim] [SWIMInstance.swift:1401] Gossip about member sact://127.0.0.1:9003#850464202261074644, incoming: [alive(incarnation: 0)] does not supersede current: [alive(incarnation: 0)]
16:42:24 [captured] [local] // "swim/incarnation": 0
16:42:24 [captured] [local] // "swim/members/all": 
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [local] // "swim/members/count": 2
16:42:24 [captured] [local] // "swim/protocolPeriod": 1
16:42:24 [captured] [local] // "swim/suspects/count": 0
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMin": 1
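The repeated `incoming: [alive(incarnation: 0)] does not supersede current: [alive(incarnation: 0)]` lines reflect SWIM's gossip ordering: at equal incarnation numbers a status only wins if it is "stronger" (suspect over alive, dead over everything), otherwise the incoming gossip is ignored. A sketch of that ordering with a simplified enum (not the library's actual `SWIM.Status` type):

```swift
// Sketch of SWIM gossip supersession, per the SWIM paper's ordering:
// higher incarnation wins; at equal incarnation, suspect overrides alive,
// and dead overrides everything.
enum Status: Equatable {
    case alive(incarnation: Int)
    case suspect(incarnation: Int)
    case dead

    func supersedes(_ current: Status) -> Bool {
        switch (self, current) {
        case (.dead, .dead): return false
        case (.dead, _): return true       // dead is terminal, always wins
        case (_, .dead): return false      // nothing overrides dead
        case let (.alive(i), .alive(j)): return i > j
        case let (.alive(i), .suspect(j)): return i > j
        case let (.suspect(i), .alive(j)): return i >= j
        case let (.suspect(i), .suspect(j)): return i > j
        }
    }
}
```

With both sides at `alive(incarnation: 0)`, `supersedes` is false, which is exactly why the log reports the gossip as not superseding.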
16:42:24 [captured] [local] 7:42:20.3020 trace  [ClusterSystem.swift:1555] Result handler, onReturn
16:42:24 [captured] [local] // "call/id": 8DB32FCF-3E8D-4658-ABAE-6A9B62ACAD53
16:42:24 [captured] [local] // "type": PingResponse<SWIMActor, SWIMActor>
16:42:24 [captured] [local] 7:42:20.3050 trace [sact://local@127.0.0.1:9002/user/swim] [ClusterSystem.swift:948] Resolved as local instance
16:42:24 [captured] [local] // "actor": SWIMActor(/user/swim)
16:42:24 [captured] [local] 7:42:20.3050 trace [/user/swim] [SWIMInstance.swift:1401] Gossip about member sact://127.0.0.1:9003#850464202261074644, incoming: [alive(incarnation: 0)] does not supersede current: [alive(incarnation: 0)]
16:42:24 [captured] [local] // "swim/incarnation": 0
16:42:24 [captured] [local] // "swim/members/all": 
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [local] // "swim/members/count": 2
16:42:24 [captured] [local] // "swim/protocolPeriod": 1
16:42:24 [captured] [local] // "swim/suspects/count": 0
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [local] 7:42:20.3050 debug [/user/swim] [SWIMInstance.swift:899] Received ack from [SWIMActor(sact://remote@127.0.0.1:9003/user/swim)] with incarnation [0] and payload [membership([SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 1), SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 0)])]
16:42:24 [captured] [local] // "swim/incarnation": 0
16:42:24 [captured] [local] // "swim/members/all": 
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [local] // "swim/members/count": 2
16:42:24 [captured] [local] // "swim/protocolPeriod": 1
16:42:24 [captured] [local] // "swim/suspects/count": 0
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [local] 7:42:20.3050 trace [/user/swim] [SWIMInstance.swift:190] Adjusted LHM multiplier
16:42:24 [captured] [local] // "swim/lhm": 0
16:42:24 [captured] [local] // "swim/lhm/event": successfulProbe
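The `Adjusted LHM multiplier` lines refer to SWIM's Local Health Multiplier from the Lifeguard extension: failed probes raise the multiplier, successful probes lower it, and probe timeouts are scaled up while the node considers itself unhealthy. A rough sketch of the mechanism, with illustrative names and a guessed cap (not the actual `SWIMInstance` implementation):

```swift
// Sketch of SWIM's Local Health Multiplier (Lifeguard extension).
// Event names mirror the log; maxLHM is an assumed illustrative cap.
struct LocalHealth {
    private(set) var lhm = 0
    let maxLHM = 8

    enum Event {
        case successfulProbe
        case failedProbe
        case refutingSuspectMessageAboutSelf
        case probeWithMissedNack
    }

    mutating func adjust(_ event: Event) {
        switch event {
        case .successfulProbe:
            lhm = max(0, lhm - 1)       // healthy feedback lowers the multiplier
        default:
            lhm = min(maxLHM, lhm + 1)  // signs of local trouble raise it
        }
    }

    // Probe timeouts are stretched by (1 + lhm); at lhm == 0 the base applies.
    func scaledTimeout(base: Double) -> Double {
        base * Double(1 + lhm)
    }
}
```

In the capture, `"swim/lhm": 0` after a `successfulProbe` event means the base `"swim/timeout": 0.3 seconds` is used unscaled.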
16:42:24 [captured] [local] 7:42:21.2860 trace [/system/cluster/gossip] [Gossiper+Shell.swift:192] New gossip round, selected [1] peers, from [1] peers
16:42:24 [captured] [local] // "gossip/id": membership
16:42:24 [captured] [local] // "gossip/peers/selected": 
16:42:24 [captured] [local] //   _AddressableActorRef(sact://remote@127.0.0.1:9003/system/cluster/gossip)
16:42:24 [captured] [local] 7:42:21.2860 trace [/system/cluster/gossip] [Gossiper+Shell.swift:233] Sending gossip to sact://remote@127.0.0.1:9003/system/cluster/gossip
16:42:24 [captured] [local] // "actor/message": MembershipGossip(owner: sact://local:14716080274836230658@127.0.0.1:9002, seen: Cluster.MembershipGossip.SeenTable([sact://local:14716080274836230658@127.0.0.1:9002: [node:sact://local@127.0.0.1:9002: 4]]), membership: Membership(count: 2, leader: .none, members: [Member(sact://remote:850464202261074644@127.0.0.1:9003, status: joining, reachability: reachable), Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)]))
16:42:24 [captured] [local] // "gossip/peers/count": 1
16:42:24 [captured] [local] // "gossip/target": sact://remote@127.0.0.1:9003/system/cluster/gossip
16:42:24 [captured] [local] 7:42:21.2860 trace [/user/swim] [SWIMActor.swift:99] Periodic ping random member, among: 1
16:42:24 [captured] [local] // "swim/incarnation": 0
16:42:24 [captured] [local] // "swim/members/all": 
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [local] // "swim/members/count": 2
16:42:24 [captured] [local] // "swim/protocolPeriod": 2
16:42:24 [captured] [local] // "swim/suspects/count": 0
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [local] 7:42:21.2870 trace [/system/cluster/gossip] [Gossiper+Shell.swift:272] Schedule next gossip round in 1s 87ms (1s ± 20.0%)
16:42:24 [captured] [local] 7:42:21.2870 debug [/user/swim] [SWIMActor.swift:135] Sending ping
16:42:24 [captured] [local] // "swim/gossip/payload": membership([SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0), SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 1)])
16:42:24 [captured] [local] // "swim/incarnation": 0
16:42:24 [captured] [local] // "swim/members/all": 
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [local] // "swim/members/count": 2
16:42:24 [captured] [local] // "swim/protocolPeriod": 2
16:42:24 [captured] [local] // "swim/suspects/count": 0
16:42:24 [captured] [local] // "swim/target": SWIMActor(sact://remote@127.0.0.1:9003/user/swim)
16:42:24 [captured] [local] // "swim/timeout": 0.3 seconds
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [local] 7:42:21.2910 trace [/system/cluster/gossip] [Gossiper+Shell.swift:250] Gossip ACKed
16:42:24 [captured] [local] // "gossip/ack": MembershipGossip(owner: sact://remote:850464202261074644@127.0.0.1:9003, seen: Cluster.MembershipGossip.SeenTable([sact://remote:850464202261074644@127.0.0.1:9003: [node:sact://local@127.0.0.1:9002: 4, node:sact://remote@127.0.0.1:9003: 5], sact://local:14716080274836230658@127.0.0.1:9002: [node:sact://local@127.0.0.1:9002: 4]]), membership: Membership(count: 2, leader: .none, members: [Member(sact://remote:850464202261074644@127.0.0.1:9003, status: down, reachability: reachable), Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)]))
16:42:24 [captured] [local] 7:42:21.2920 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
@ktoso (Member, Author) commented Jul 3, 2023
16:42:24 [captured] [local] // "gossip/identifier": membership
16:42:24 [captured] [local] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [local] //   owner: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [local] //     sact://local@127.0.0.1:9002 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 4
16:42:24 [captured] [local] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [local] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 4
16:42:24 [captured] [local] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [local] // ),
16:42:24 [captured] [local] //   membership: Membership(
16:42:24 [captured] [local] //     _members: [
16:42:24 [captured] [local] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [local] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable),
16:42:24 [captured] [local] //     ],
16:42:24 [captured] [local] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   ),
16:42:24 [captured] [local] // )
16:42:24 [captured] [local] 7:42:21.2920 trace [/system/cluster] [ClusterShell.swift:606] Local membership version is [.concurrent] to incoming gossip; Merge resulted in 1 changes.
16:42:24 [captured] [local] // "gossip/before": MembershipGossip(
16:42:24 [captured] [local] //   owner: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [local] //     sact://local@127.0.0.1:9002 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 4
16:42:24 [captured] [local] // ),
16:42:24 [captured] [local] //   membership: Membership(
16:42:24 [captured] [local] //     _members: [
16:42:24 [captured] [local] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [local] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: joining, reachability: reachable),
16:42:24 [captured] [local] //     ],
16:42:24 [captured] [local] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   ),
16:42:24 [captured] [local] // )
16:42:24 [captured] [local] // "gossip/incoming": MembershipGossip(
16:42:24 [captured] [local] //   owner: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [local] //     sact://local@127.0.0.1:9002 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 4
16:42:24 [captured] [local] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [local] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 4
16:42:24 [captured] [local] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [local] // ),
16:42:24 [captured] [local] //   membership: Membership(
16:42:24 [captured] [local] //     _members: [
16:42:24 [captured] [local] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [local] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable),
16:42:24 [captured] [local] //     ],
16:42:24 [captured] [local] //     _leaderNode: nil,
16:42:24 [captured] [local] //   ),
16:42:24 [captured] [local] // )
16:42:24 [captured] [local] // "gossip/now": MembershipGossip(
16:42:24 [captured] [local] //   owner: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [local] //     sact://local@127.0.0.1:9002 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 4
16:42:24 [captured] [local] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [local] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 4
16:42:24 [captured] [local] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [local] // ),
16:42:24 [captured] [local] //   membership: Membership(
16:42:24 [captured] [local] //     _members: [
16:42:24 [captured] [local] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [local] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable),
16:42:24 [captured] [local] //     ],
16:42:24 [captured] [local] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   ),
16:42:24 [captured] [local] // )
16:42:24 [captured] [local] // "membership/changes": 
16:42:24 [captured] [local] //   sact://remote:850464202261074644@127.0.0.1:9003 :: [joining] -> [   down]
16:42:24 [captured] [local] // "tag": membership
16:42:24 [captured] [local] 7:42:21.2930 trace [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:206] Node down: sact://remote:850464202261074644@127.0.0.1:9003 :: [joining] -> [   down]!
16:42:24 [captured] [local] // "node": sact://remote:850464202261074644@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:21.2930 trace [sact://local@127.0.0.1:9002/user/swim] [ClusterSystem.swift:948] Resolved as local instance
16:42:24 [captured] [local] // "actor": SWIMActor(/user/swim)
16:42:24 [captured] [local] 7:42:21.2930 info [/system/cluster/leadership] [Leadership.swift:246] Not enough members [1/2] to run election, members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)]
16:42:24 [captured] [local] // "leadership/election": DistributedCluster.Leadership.LowestReachableMember
16:42:24 [captured] [local] 7:42:21.2930 trace [/system/cluster] [ClusterShell+LeaderActions.swift:77] Performing leader actions: [DistributedCluster.ClusterShellState.LeaderAction.moveMember(sact://local:14716080274836230658@127.0.0.1:9002 :: [joining] -> [     up]), DistributedCluster.ClusterShellState.LeaderAction.removeMember(alreadyDownMember: Member(sact://remote:850464202261074644@127.0.0.1:9003, status: down, reachability: reachable))]
16:42:24 [captured] [local] // "gossip/converged": true
16:42:24 [captured] [local] // "gossip/current": MembershipGossip(owner: sact://local:14716080274836230658@127.0.0.1:9002, seen: Cluster.MembershipGossip.SeenTable([sact://local:14716080274836230658@127.0.0.1:9002: [node:sact://local@127.0.0.1:9002: 4, node:sact://remote@127.0.0.1:9003: 5], sact://remote:850464202261074644@127.0.0.1:9003: [node:sact://remote@127.0.0.1:9003: 5, node:sact://local@127.0.0.1:9002: 4]]), membership: Membership(count: 2, leader: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable), members: [Member(sact://remote:850464202261074644@127.0.0.1:9003, status: down, reachability: reachable), Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)]))
16:42:24 [captured] [local] // "leader/actions": [DistributedCluster.ClusterShellState.LeaderAction.moveMember(sact://local:14716080274836230658@127.0.0.1:9002 :: [joining] -> [     up]), DistributedCluster.ClusterShellState.LeaderAction.removeMember(alreadyDownMember: Member(sact://remote:850464202261074644@127.0.0.1:9003, status: down, reachability: reachable))]
16:42:24 [captured] [local] // "tag": leader-action
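The `Performing leader actions` entry shows the two action kinds taken once gossip has converged: promote `.joining` members to `.up`, and remove members already marked `.down` that all nodes are certain to have seen as down. A simplified model of that decision (illustrative types, not the actual `ClusterShellState.LeaderAction` logic):

```swift
// Sketch of leader actions on a converged membership view:
// joining members get moved up, down members get removed.
enum MemberStatus { case joining, up, down }

struct Member {
    let name: String
    var status: MemberStatus
}

enum LeaderAction: Equatable {
    case moveMemberUp(String)
    case removeMember(String)
}

func leaderActions(converged: Bool, members: [Member]) -> [LeaderAction] {
    // A leader only acts when gossip has converged, so every node is
    // guaranteed to have observed the states being acted upon.
    guard converged else { return [] }
    return members.compactMap { member -> LeaderAction? in
        switch member.status {
        case .joining: return .moveMemberUp(member.name)
        case .down: return .removeMember(member.name)
        case .up: return nil
        }
    }
}
```

This matches the capture: with `"gossip/converged": true`, the leader emits `moveMember(local :: [joining] -> [up])` and `removeMember(alreadyDownMember: remote)`.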
16:42:24 [captured] [local] 7:42:21.2930 warning [/user/swim] [SWIMActor.swift:527] Confirmed node .dead: MemberStatusChangedEvent(SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), dead, protocolPeriod: 2), previousStatus: alive(incarnation: 0))
16:42:24 [captured] [local] // "swim/change": MemberStatusChangedEvent(SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), dead, protocolPeriod: 2), previousStatus: alive(incarnation: 0))
16:42:24 [captured] [local] // "swim/incarnation": 0
16:42:24 [captured] [local] // "swim/members/all": 
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [local] // "swim/members/count": 1
16:42:24 [captured] [local] // "swim/protocolPeriod": 2
16:42:24 [captured] [local] // "swim/suspects/count": 0
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [local] 7:42:21.2930 trace [/system/downingStrategy] [DowningStrategy.swift:135] Cancel timer for member: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable)
16:42:24 [captured] [local] 7:42:21.2930 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:987] Pruning cluster member: sact://remote@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:21.2930 trace [/system/clusterEventStream] [ClusterEventStream.swift:195] Published event membershipChange(sact://remote:850464202261074644@127.0.0.1:9003 :: [joining] -> [   down]) to 4 subscribers and 2 async subscribers
16:42:24 [captured] [local] // "eventStream/asyncSubscribers": 
16:42:24 [captured] [local] //   ObjectIdentifier(0x00007f3fc0256570)
16:42:24 [captured] [local] //   ObjectIdentifier(0x00007f3f6c498ad0)
16:42:24 [captured] [local] // "eventStream/event": DistributedCluster.Cluster.Event.membershipChange(sact://remote:850464202261074644@127.0.0.1:9003 :: [joining] -> [   down])
16:42:24 [captured] [local] // "eventStream/subscribers": 
16:42:24 [captured] [local] //   /system/nodeDeathWatcher/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] //   /system/cluster/leadership
16:42:24 [captured] [local] //   /system/cluster/gossip/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] //   /system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] 7:42:21.2930 trace [/system/cluster/gossip] [Gossiper+Shell.swift:76] Peer terminated: sact://remote@127.0.0.1:9003/system/cluster/gossip, will not gossip to it anymore
16:42:24 [captured] [local] 7:42:21.2930 trace [/system/receptionist-ref] [_OperationLogClusterReceptionistBehavior.swift:643] Pruning cluster member: sact://remote@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:21.2930 trace [/system/cluster/gossip] [Gossiper+Shell.swift:81] No peers available, cancelling periodic gossip timer
16:42:24 [captured] [local] 7:42:21.2930 debug [/system/cluster] [ClusterShell+LeaderActions.swift:132] Leader moved member: sact://local:14716080274836230658@127.0.0.1:9002 :: [joining] -> [     up]
16:42:24 [captured] [local] // "tag": leader-action
16:42:24 [captured] [local] 7:42:21.2930 debug [/user/swim] [SWIMInstance.swift:237] Attempt to re-add already confirmed dead peer SWIMActor(sact://remote@127.0.0.1:9003/user/swim), ignoring it.
16:42:24 [captured] [local] 7:42:21.2930 warning  [ClusterShell.swift:151] Terminate existing association [sact://remote:850464202261074644@127.0.0.1:9003].
16:42:24 [captured] [local] 7:42:21.2930 warning  [ClusterShell.swift:156] Confirm .dead to underlying SWIM, node: sact://remote:850464202261074644@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:21.2930 debug [/user/swim] [SWIMInstance.swift:899] Received ack from [SWIMActor(sact://remote@127.0.0.1:9003/user/swim)] with incarnation [0] and payload [membership([SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 1), SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), alive(incarnation: 0), protocolPeriod: 0)])]
16:42:24 [captured] [local] // "swim/incarnation": 0
16:42:24 [captured] [local] // "swim/members/all": 
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [local] // "swim/members/count": 1
16:42:24 [captured] [local] // "swim/protocolPeriod": 2
16:42:24 [captured] [local] // "swim/suspects/count": 0
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [local] 7:42:21.2930 trace [/user/swim] [SWIMInstance.swift:190] Adjusted LHM multiplier
16:42:24 [captured] [local] // "swim/lhm": 0
16:42:24 [captured] [local] // "swim/lhm/event": successfulProbe
16:42:24 [captured] [local] 7:42:21.2930 debug [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:228] Received: remoteActorWatched(watcher: _AddressableActorRef(/system/cluster/gossip), remoteNode: sact://remote:850464202261074644@127.0.0.1:9003)
16:42:24 [captured] [local] 7:42:21.2930 debug [/system/cluster/gossip] [Gossiper+Shell.swift:308] Automatically discovered peer
16:42:24 [captured] [local] // "gossip/peer": _ActorRef<GossipShell<DistributedCluster.Cluster.MembershipGossip, DistributedCluster.Cluster.MembershipGossip>.Message>(sact://remote@127.0.0.1:9003/system/cluster/gossip)
16:42:24 [captured] [local] // "gossip/peerCount": 1
16:42:24 [captured] [local] // "gossip/peers": [sact://remote@127.0.0.1:9003/system/cluster/gossip]
16:42:24 [captured] [local] 7:42:21.2940 info [/system/cluster] [ClusterShell+LeaderActions.swift:152] Leader removed member: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable), all nodes are certain to have seen it as [.down] before
16:42:24 [captured] [local] // "gossip/before": MembershipGossip(owner: sact://local:14716080274836230658@127.0.0.1:9002, seen: Cluster.MembershipGossip.SeenTable([sact://local:14716080274836230658@127.0.0.1:9002: [node:sact://local@127.0.0.1:9002: 5, node:sact://remote@127.0.0.1:9003: 5], sact://remote:850464202261074644@127.0.0.1:9003: [node:sact://remote@127.0.0.1:9003: 5, node:sact://local@127.0.0.1:9002: 4]]), membership: Membership(count: 2, leader: Member(sact://local@127.0.0.1:9002, status: up, reachability: reachable), members: [Member(sact://remote:850464202261074644@127.0.0.1:9003, status: down, reachability: reachable), Member(sact://local:14716080274836230658@127.0.0.1:9002, status: up, reachability: reachable, _upNumber: 1)]))
16:42:24 [captured] [local] // "gossip/current": MembershipGossip(owner: sact://local:14716080274836230658@127.0.0.1:9002, seen: Cluster.MembershipGossip.SeenTable([sact://local:14716080274836230658@127.0.0.1:9002: [node:sact://local@127.0.0.1:9002: 6]]), membership: Membership(count: 1, leader: Member(sact://local@127.0.0.1:9002, status: up, reachability: reachable), members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: up, reachability: reachable, _upNumber: 1)]))
16:42:24 [captured] [local] // "tag": leader-action
16:42:24 [captured] [local] 7:42:21.2940 trace [/system/cluster/gossip] [Gossiper+Shell.swift:76] Peer terminated: sact://remote@127.0.0.1:9003/system/cluster/gossip, will not gossip to it anymore
16:42:24 [captured] [local] 7:42:21.2940 trace [/system/cluster/gossip] [Gossiper+Shell.swift:81] No peers available, cancelling periodic gossip timer
16:42:24 [captured] [local] 7:42:21.2940 warning [/user/swim] [SWIMActor.swift:527] Confirmed node .dead: MemberStatusChangedEvent(SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), dead, protocolPeriod: 2), previousStatus: alive(incarnation: 0))
16:42:24 [captured] [local] // "swim/change": MemberStatusChangedEvent(SWIM.Member(SWIMActor(sact://remote@127.0.0.1:9003/user/swim), dead, protocolPeriod: 2), previousStatus: alive(incarnation: 0))
16:42:24 [captured] [local] // "swim/incarnation": 0
16:42:24 [captured] [local] // "swim/members/all": 
16:42:24 [captured] [local] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [local] // "swim/members/count": 1
16:42:24 [captured] [local] // "swim/protocolPeriod": 2
16:42:24 [captured] [local] // "swim/suspects/count": 0
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [local] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [local] 7:42:21.2940 trace [/system/cluster] [ClusterShell+LeaderActions.swift:109] Membership state after leader actions: Membership(count: 1, leader: Member(sact://local@127.0.0.1:9002, status: up, reachability: reachable), members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: up, reachability: reachable, _upNumber: 1)])
16:42:24 [captured] [local] // "gossip/before": MembershipGossip(owner: sact://local:14716080274836230658@127.0.0.1:9002, seen: Cluster.MembershipGossip.SeenTable([sact://local:14716080274836230658@127.0.0.1:9002: [node:sact://local@127.0.0.1:9002: 4, node:sact://remote@127.0.0.1:9003: 5], sact://remote:850464202261074644@127.0.0.1:9003: [node:sact://remote@127.0.0.1:9003: 5, node:sact://local@127.0.0.1:9002: 4]]), membership: Membership(count: 2, leader: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable), members: [Member(sact://remote:850464202261074644@127.0.0.1:9003, status: down, reachability: reachable), Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)]))
16:42:24 [captured] [local] // "gossip/current": MembershipGossip(owner: sact://local:14716080274836230658@127.0.0.1:9002, seen: Cluster.MembershipGossip.SeenTable([sact://local:14716080274836230658@127.0.0.1:9002: [node:sact://local@127.0.0.1:9002: 6]]), membership: Membership(count: 1, leader: Member(sact://local@127.0.0.1:9002, status: up, reachability: reachable), members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: up, reachability: reachable, _upNumber: 1)]))
16:42:24 [captured] [local] // "tag": leader-action
16:42:24 [captured] [local] 7:42:21.2940 trace [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:211] Node change: sact://local:14716080274836230658@127.0.0.1:9002 :: [joining] -> [     up]!
16:42:24 [captured] [local] // "node": sact://local:14716080274836230658@127.0.0.1:9002
16:42:24 [captured] [local] 7:42:21.2940 trace [/system/clusterEventStream] [ClusterEventStream.swift:195] Published event membershipChange(sact://local:14716080274836230658@127.0.0.1:9002 :: [joining] -> [     up]) to 4 subscribers and 2 async subscribers
16:42:24 [captured] [local] // "eventStream/asyncSubscribers": 
16:42:24 [captured] [local] //   ObjectIdentifier(0x00007f3fc0256570)
16:42:24 [captured] [local] //   ObjectIdentifier(0x00007f3f6c498ad0)
16:42:24 [captured] [local] // "eventStream/event": DistributedCluster.Cluster.Event.membershipChange(sact://local:14716080274836230658@127.0.0.1:9002 :: [joining] -> [     up])
16:42:24 [captured] [local] // "eventStream/subscribers": 
16:42:24 [captured] [local] //   /system/nodeDeathWatcher/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] //   /system/cluster/leadership
16:42:24 [captured] [local] //   /system/cluster/gossip/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] //   /system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] 7:42:21.2940 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [local] // "gossip/identifier": membership
16:42:24 [captured] [local] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [local] //   owner: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [local] //     sact://local@127.0.0.1:9002 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 5
16:42:24 [captured] [local] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [local] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 4
16:42:24 [captured] [local] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [local] // ),
16:42:24 [captured] [local] //   membership: Membership(
16:42:24 [captured] [local] //     _members: [
16:42:24 [captured] [local] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: up, reachability: reachable),
16:42:24 [captured] [local] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable),
16:42:24 [captured] [local] //     ],
16:42:24 [captured] [local] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   ),
16:42:24 [captured] [local] // )
16:42:24 [captured] [local] 7:42:21.2940 trace [/system/receptionist-ref] [_OperationLogClusterReceptionistBehavior.swift:643] Pruning cluster member: sact://remote@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:21.2940 info [/system/cluster/leadership] [Leadership.swift:246] Not enough members [1/2] to run election, members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: up, reachability: reachable, _upNumber: 1)]
16:42:24 [captured] [local] // "leadership/election": DistributedCluster.Leadership.LowestReachableMember
16:42:24 [captured] [local] 7:42:21.2940 trace [/system/downingStrategy] [DowningStrategy.swift:135] Cancel timer for member: Member(sact://remote@127.0.0.1:9003, status: removed, reachability: reachable)
16:42:24 [captured] [local] 7:42:21.2940 trace [/system/clusterEventStream] [ClusterEventStream.swift:195] Published event membershipChange(sact://remote:850464202261074644@127.0.0.1:9003 :: [   down] -> [removed]) to 4 subscribers and 2 async subscribers
16:42:24 [captured] [local] // "eventStream/asyncSubscribers": 
16:42:24 [captured] [local] //   ObjectIdentifier(0x00007f3fc0256570)
16:42:24 [captured] [local] //   ObjectIdentifier(0x00007f3f6c498ad0)
16:42:24 [captured] [local] // "eventStream/event": DistributedCluster.Cluster.Event.membershipChange(sact://remote:850464202261074644@127.0.0.1:9003 :: [   down] -> [removed])
16:42:24 [captured] [local] // "eventStream/subscribers": 
16:42:24 [captured] [local] //   /system/nodeDeathWatcher/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] //   /system/cluster/leadership
16:42:24 [captured] [local] //   /system/cluster/gossip/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] //   /system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [local] 7:42:21.2940 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:987] Pruning cluster member: sact://remote@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:21.2940 trace [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:206] Node down: sact://remote:850464202261074644@127.0.0.1:9003 :: [   down] -> [removed]!
16:42:24 [captured] [local] // "node": sact://remote:850464202261074644@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:21.2940 trace  [ClusterShell.swift:208] Closed connection with sact://remote@127.0.0.1:9003: success(DistributedCluster.Association.Tombstone(remoteNode: sact://remote:850464202261074644@127.0.0.1:9003, removalDeadline: Swift.ContinuousClock.Instant(_value: 99152366.44533896 seconds)))
16:42:24 [captured] [local] 7:42:21.2940 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [local] // "gossip/identifier": membership
16:42:24 [captured] [local] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [local] //   owner: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [local] //     sact://local@127.0.0.1:9002 observed versions:
16:42:24 [captured] [local] //         node:sact://local@127.0.0.1:9002 @ 6
16:42:24 [captured] [local] // ),
16:42:24 [captured] [local] //   membership: Membership(
16:42:24 [captured] [local] //     _members: [
16:42:24 [captured] [local] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: up, reachability: reachable),
16:42:24 [captured] [local] //     ],
16:42:24 [captured] [local] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [local] //   ),
16:42:24 [captured] [local] // )
16:42:24 [captured] [local] 7:42:21.2940 info [/system/cluster/leadership] [Leadership.swift:246] Not enough members [1/2] to run election, members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: up, reachability: reachable, _upNumber: 1)]
16:42:24 [captured] [local] // "leadership/election": DistributedCluster.Leadership.LowestReachableMember
16:42:24 [captured] [local] 7:42:21.2950 debug [/system/cluster/gossip] [Gossiper+Shell.swift:308] Automatically discovered peer
16:42:24 [captured] [local] // "gossip/peer": _ActorRef<GossipShell<DistributedCluster.Cluster.MembershipGossip, DistributedCluster.Cluster.MembershipGossip>.Message>(sact://remote@127.0.0.1:9003/system/cluster/gossip)
16:42:24 [captured] [local] // "gossip/peerCount": 1
16:42:24 [captured] [local] // "gossip/peers": [sact://remote@127.0.0.1:9003/system/cluster/gossip]
16:42:24 [captured] [local] 7:42:21.2950 debug [/system/cluster] [ClusterShell.swift:707] Association already allocated for remote: sact://remote@127.0.0.1:9003, existing association: [AssociatedState(associating(queue: DistributedCluster.MPSCLinkedQueue<DistributedCluster.TransportEnvelope>), selfNode: sact://local:14716080274836230658@127.0.0.1:9002, remoteNode: sact://remote:850464202261074644@127.0.0.1:9003)]
16:42:24 [captured] [local] 7:42:21.2950 debug [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:228] Received: remoteActorWatched(watcher: _AddressableActorRef(/system/cluster/gossip), remoteNode: sact://remote:850464202261074644@127.0.0.1:9003)
16:42:24 [captured] [local] 7:42:21.2950 debug [/system/cluster] [ClusterShell.swift:727] Initiated handshake: InitiatedState(remoteNode: sact://remote@127.0.0.1:9003, localNode: sact://local@127.0.0.1:9002, channel: nil)
16:42:24 [captured] [local] // "cluster/associatedNodes": [sact://remote:850464202261074644@127.0.0.1:9003]
16:42:24 [captured] [local] 7:42:21.2950 debug [/system/cluster] [ClusterShell.swift:751] Extending handshake offer
16:42:24 [captured] [local] // "handshake/remoteNode": sact://remote@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:21.2950 trace [/system/cluster/gossip] [Gossiper+Shell.swift:76] Peer terminated: sact://remote@127.0.0.1:9003/system/cluster/gossip, will not gossip to it anymore
16:42:24 [captured] [local] 7:42:21.2950 trace [/system/cluster/gossip] [Gossiper+Shell.swift:81] No peers available, cancelling periodic gossip timer
16:42:24 [captured] [local] 7:42:21.2970 trace [/system/transport.client] [TransportPipelines.swift:58] Offering handshake [DistributedCluster._ProtoHandshakeOffer:
16:42:24 version {
16:42:24   major: 1
16:42:24 }
16:42:24 originNode {
16:42:24   endpoint {
16:42:24     protocol: "sact"
16:42:24     system: "local"
16:42:24     hostname: "127.0.0.1"
16:42:24     port: 9002
16:42:24   }
16:42:24   nid: 14716080274836230658
16:42:24 }
16:42:24 targetEndpoint {
16:42:24   protocol: "sact"
16:42:24   system: "remote"
16:42:24   hostname: "127.0.0.1"
16:42:24   port: 9003
16:42:24 }
16:42:24 ]
16:42:24 [captured] [local] 7:42:21.2980 debug [/system/transport.client] [TransportPipelines.swift:90] Received handshake reject from: [sact://remote:850464202261074644@127.0.0.1:9003] reason: [Node already leaving cluster.], closing channel.
16:42:24 [captured] [local] // "handshake/channel": SocketChannel { BaseSocket { fd=239 }, active = true, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43936), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9003) }
16:42:24 [captured] [local] 7:42:21.2980 warning [/system/cluster] [ClusterShell.swift:1090] Handshake rejected by [sact://remote@127.0.0.1:9003], it was associating and is now tombstoned
16:42:24 [captured] [local] // "handshake/peer": sact://remote@127.0.0.1:9003
16:42:24 [captured] [local] // "handshakes": [DistributedCluster.HandshakeStateMachine.State.initiated(InitiatedState(remoteNode: sact://remote@127.0.0.1:9003, localNode: sact://local@127.0.0.1:9002, channel: SocketChannel { BaseSocket { fd=239 }, active = false, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43936), remoteAddress = nil }))]
16:42:24 [captured] [local] 7:42:21.2990 warning  [ClusterShell.swift:151] Terminate existing association [sact://remote:850464202261074644@127.0.0.1:9003].
16:42:24 [captured] [local] 7:42:21.2990 warning  [ClusterShell.swift:156] Confirm .dead to underlying SWIM, node: sact://remote:850464202261074644@127.0.0.1:9003
16:42:24 [captured] [local] 7:42:21.4850 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:759] Periodic ack tick
16:42:24 [captured] [local] 7:42:22.6850 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:759] Periodic ack tick
16:42:24 [captured] [local] 7:42:23.8860 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:759] Periodic ack tick
16:42:24 ========================================================================================================================
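The failure above is about association tombstones not being cleared before the test's 3s polling deadline: the local node records a `DistributedCluster.Association.Tombstone` with a `removalDeadline` (see the `Closed connection` line at `ClusterShell.swift:208`), and the test repeatedly checks that expired tombstones get pruned. A minimal sketch of that deadline-based pruning idea, with hypothetical names (`Tombstone`, `cleanUpTombstones` here are illustrative, not the actual DistributedCluster API):

```swift
// Sketch of deadline-based tombstone pruning, assuming a simplified model.
// The real implementation keys tombstones by node and uses the cluster's
// clock; this only illustrates the expiry check the test polls for.
struct Tombstone {
    let remoteNode: String
    let removalDeadline: ContinuousClock.Instant
}

/// Keep only tombstones whose removal deadline has not yet passed;
/// expired entries are dropped, which is the condition the test awaits.
func cleanUpTombstones(
    _ tombstones: [Tombstone],
    now: ContinuousClock.Instant
) -> [Tombstone] {
    tombstones.filter { $0.removalDeadline > now }
}

let clock = ContinuousClock()
let now = clock.now
let expired = Tombstone(
    remoteNode: "sact://remote@127.0.0.1:9003",
    removalDeadline: now - .seconds(1))
let live = Tombstone(
    remoteNode: "sact://other@127.0.0.1:9004",
    removalDeadline: now + .seconds(60))

let remaining = cleanUpTombstones([expired, live], now: now)
assert(remaining.count == 1)
assert(remaining[0].remoteNode == live.remoteNode)
```

Under this model, a timeout like the one above means the prune pass either never ran within the polling window or the deadline had not yet elapsed.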
16:42:24 ------------------------------------- ClusterSystem(remote, sact://remote@127.0.0.1:9003) ------------------------------------------------
16:42:24 [captured] [remote] 7:42:20.2890 trace [/system/clusterEventStream] [ClusterSystem.swift:998] Assign identity
16:42:24 [captured] [remote] // "actor/type": ClusterEventStreamActor
16:42:24 [captured] [remote] 7:42:20.2900 trace [/system/clusterEventStream] [ClusterSystem.swift:1011] Actor ready
16:42:24 [captured] [remote] // "actor/type": ClusterEventStreamActor
16:42:24 [captured] [remote] 7:42:20.2900 trace [/user/swim] [ClusterSystem.swift:998] Assign identity
16:42:24 [captured] [remote] // "actor/type": SWIMActor
16:42:24 [captured] [remote] 7:42:20.2900 trace [/user/swim] [ClusterSystem.swift:1011] Actor ready
16:42:24 [captured] [remote] // "actor/type": SWIMActor
16:42:24 [captured] [remote] 7:42:20.2910 trace [/system/receptionist] [ClusterSystem.swift:998] Assign identity
16:42:24 [captured] [remote] // "actor/type": OpLogDistributedReceptionist
16:42:24 [captured] [remote] 7:42:20.2910 trace [/system/receptionist] [ClusterSystem.swift:1011] Actor ready
16:42:24 [captured] [remote] // "actor/type": OpLogDistributedReceptionist
16:42:24 [captured] [remote] 7:42:20.2910 trace [[$wellKnown: receptionist]] [ClusterSystem.swift:1047] Actor ready, well-known as: receptionist
16:42:24 [captured] [remote] // "actor/type": OpLogDistributedReceptionist
16:42:24 [captured] [remote] 7:42:20.2910 debug [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:276] Initialized receptionist
16:42:24 [captured] [remote] 7:42:20.2910 trace [/system/clusterEventStream] [ClusterEventStream.swift:172] Successfully added async subscriber [ObjectIdentifier(0x00007f3f783702b0)], offering membership snapshot
16:42:24 [captured] [remote] 7:42:20.2910 trace [/system/downingStrategy] [ClusterSystem.swift:998] Assign identity
16:42:24 [captured] [remote] // "actor/type": DowningStrategyShell
16:42:24 [captured] [remote] 7:42:20.2910 trace [/system/downingStrategy] [ClusterSystem.swift:1011] Actor ready
16:42:24 [captured] [remote] // "actor/type": DowningStrategyShell
16:42:24 [captured] [remote] 7:42:20.2920 info  [ClusterSystem.swift:387] ClusterSystem [remote] initialized, listening on: sact://remote@127.0.0.1:9003: _ActorRef<ClusterShell.Message>(/system/cluster)
16:42:24 [captured] [remote] 7:42:20.2920 debug [/system/receptionist-ref] [_OperationLogClusterReceptionistBehavior.swift:95] Initialized receptionist
16:42:24 [captured] [remote] 7:42:20.2920 info  [ClusterSystem.swift:389] Setting in effect: .autoLeaderElection: LeadershipSelectionSettings(underlying: DistributedCluster.ClusterSystemSettings.LeadershipSelectionSettings.(unknown context at $56290681d140)._LeadershipSelectionSettings.lowestReachable(minNumberOfMembers: 2))
16:42:24 [captured] [remote] 7:42:20.2920 info  [ClusterSystem.swift:390] Setting in effect: .downingStrategy: DowningStrategySettings(underlying: DistributedCluster.DowningStrategySettings.(unknown context at $56290681c4a0)._DowningStrategySettings.timeout(DistributedCluster.TimeoutBasedDowningStrategySettings(downUnreachableMembersAfter: 1.0 seconds)))
16:42:24 [captured] [remote] 7:42:20.2920 trace [/system/clusterEventStream] [ClusterEventStream.swift:172] Successfully added async subscriber [ObjectIdentifier(0x00007f3fb8245a90)], offering membership snapshot
16:42:24 [captured] [remote] 7:42:20.2920 info  [ClusterSystem.swift:391] Setting in effect: .onDownAction: OnDownActionStrategySettings(underlying: DistributedCluster.OnDownActionStrategySettings.(unknown context at $56290681c598)._OnDownActionStrategySettings.gracefulShutdown(delay: 3.0 seconds))
16:42:24 [captured] [remote] 7:42:20.2920 trace [/system/clusterEventStream] [ClusterEventStream.swift:158] Successfully subscribed [_ActorRef<Cluster.Event>(/system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y)], offering membership snapshot
16:42:24 [captured] [remote] 7:42:20.2920 trace [/system/clusterEventStream] [ClusterEventStream.swift:158] Successfully subscribed [_ActorRef<Cluster.Event>(/system/nodeDeathWatcher/$sub-DistributedCluster.Cluster.Event-y)], offering membership snapshot
16:42:24 [captured] [remote] 7:42:20.2930 info [/system/cluster] [ClusterShell.swift:396] Binding to: [sact://remote@127.0.0.1:9003]
16:42:24 [captured] [remote] 7:42:20.2930 trace [/system/cluster/leadership] [Leadership.swift:114] Configured with LowestReachableMember(minimumNumberOfMembersToDecide: 2, loseLeadershipIfBelowMinNrOfMembers: false)
16:42:24 [captured] [remote] 7:42:20.2930 trace [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:199] Membership snapshot: Membership(count: 0, leader: .none, members: [])
16:42:24 [captured] [remote] 7:42:20.2930 info [/system/cluster/leadership] [Leadership.swift:246] Not enough members [1/2] to run election, members: [Member(sact://remote:850464202261074644@127.0.0.1:9003, status: joining, reachability: reachable)]
16:42:24 [captured] [remote] // "leadership/election": DistributedCluster.Leadership.LowestReachableMember
16:42:24 [captured] [remote] 7:42:20.2930 trace [/system/clusterEventStream] [ClusterEventStream.swift:158] Successfully subscribed [_ActorRef<Cluster.Event>(/system/cluster/leadership)], offering membership snapshot
16:42:24 [captured] [remote] 7:42:20.2940 info [/system/cluster] [ClusterShell.swift:407] Bound to [IPv4]127.0.0.1/127.0.0.1:9003
16:42:24 [captured] [remote] 7:42:20.2940 trace [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:211] Node change: sact://remote:850464202261074644@127.0.0.1:9003 :: [unknown] -> [joining]!
16:42:24 [captured] [remote] // "node": sact://remote:850464202261074644@127.0.0.1:9003
16:42:24 [captured] [remote] 7:42:20.2940 info [/system/cluster/leadership] [Leadership.swift:246] Not enough members [1/2] to run election, members: [Member(sact://remote:850464202261074644@127.0.0.1:9003, status: joining, reachability: reachable)]
16:42:24 [captured] [remote] // "leadership/election": DistributedCluster.Leadership.LowestReachableMember
16:42:24 [captured] [remote] 7:42:20.2940 trace [/system/clusterEventStream] [ClusterEventStream.swift:195] Published event membershipChange(sact://remote:850464202261074644@127.0.0.1:9003 :: [unknown] -> [joining]) to 3 subscribers and 2 async subscribers
16:42:24 [captured] [remote] // "eventStream/asyncSubscribers": 
16:42:24 [captured] [remote] //   ObjectIdentifier(0x00007f3fb8245a90)
16:42:24 [captured] [remote] //   ObjectIdentifier(0x00007f3f783702b0)
16:42:24 [captured] [remote] // "eventStream/event": DistributedCluster.Cluster.Event.membershipChange(sact://remote:850464202261074644@127.0.0.1:9003 :: [unknown] -> [joining])
16:42:24 [captured] [remote] // "eventStream/subscribers": 
16:42:24 [captured] [remote] //   /system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [remote] //   /system/nodeDeathWatcher/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [remote] //   /system/cluster/leadership
16:42:24 [captured] [remote] 7:42:20.2940 trace [/system/clusterEventStream] [ClusterEventStream.swift:158] Successfully subscribed [_ActorRef<Cluster.Event>(/system/cluster/gossip/$sub-DistributedCluster.Cluster.Event-y)], offering membership snapshot
16:42:24 [captured] [remote] 7:42:20.2940 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [remote] // "gossip/identifier": membership
16:42:24 [captured] [remote] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [remote] //   owner: sact://remote:850464202261074644@127.0.0.1:9003,
16:42:24 [captured] [remote] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [remote] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [remote] //         node:sact://remote@127.0.0.1:9003 @ 1
16:42:24 [captured] [remote] // ),
16:42:24 [captured] [remote] //   membership: Membership(
16:42:24 [captured] [remote] //     _members: [
16:42:24 [captured] [remote] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //     ],
16:42:24 [captured] [remote] //     _leaderNode: nil,
16:42:24 [captured] [remote] //   ),
16:42:24 [captured] [remote] // )
16:42:24 [captured] [remote] 7:42:20.2950 debug [/system/transport.server] [TransportPipelines.swift:134] Received handshake offer from: [sact://local:14716080274836230658@127.0.0.1:9002] with protocol version: [Version(1.0.0, reserved:0)]
16:42:24 [captured] [remote] // "handshake/channel": SocketChannel { BaseSocket { fd=240 }, active = true, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9003), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43934) }
16:42:24 [captured] [remote] 7:42:20.2950 trace [/system/cluster] [ClusterShell.swift:837] Accept handshake with sact://local:14716080274836230658@127.0.0.1:9002!
16:42:24 [captured] [remote] // "handshake/channel": SocketChannel { BaseSocket { fd=240 }, active = true, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9003), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43934) }
16:42:24 [captured] [remote] 7:42:20.2960 debug [/system/transport.server] [TransportPipelines.swift:143] Write accept handshake to: [sact://local@127.0.0.1:9002]
16:42:24 [captured] [remote] // "handshake/channel": SocketChannel { BaseSocket { fd=240 }, active = true, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9003), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43934) }
16:42:24 [captured] [remote] 7:42:20.2960 trace [/system/cluster] [ClusterShell.swift:863] Associated with: sact://local:14716080274836230658@127.0.0.1:9002
16:42:24 [captured] [remote] // "membership": Membership(count: 2, leader: .none, members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable), Member(sact://remote:850464202261074644@127.0.0.1:9003, status: joining, reachability: reachable)])
16:42:24 [captured] [remote] // "membership/change": sact://local:14716080274836230658@127.0.0.1:9002 :: [unknown] -> [joining]
16:42:24 [captured] [remote] 7:42:20.2960 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [remote] // "gossip/identifier": membership
16:42:24 [captured] [remote] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [remote] //   owner: sact://remote:850464202261074644@127.0.0.1:9003,
16:42:24 [captured] [remote] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [remote] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [remote] //         node:sact://remote@127.0.0.1:9003 @ 2
16:42:24 [captured] [remote] // ),
16:42:24 [captured] [remote] //   membership: Membership(
16:42:24 [captured] [remote] //     _members: [
16:42:24 [captured] [remote] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //     ],
16:42:24 [captured] [remote] //     _leaderNode: nil,
16:42:24 [captured] [remote] //   ),
16:42:24 [captured] [remote] // )
16:42:24 [captured] [remote] 7:42:20.2960 trace [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:211] Node change: sact://local:14716080274836230658@127.0.0.1:9002 :: [unknown] -> [joining]!
16:42:24 [captured] [remote] // "node": sact://local:14716080274836230658@127.0.0.1:9002
16:42:24 [captured] [remote] 7:42:20.2960 trace [/system/clusterEventStream] [ClusterEventStream.swift:195] Published event membershipChange(sact://local:14716080274836230658@127.0.0.1:9002 :: [unknown] -> [joining]) to 4 subscribers and 2 async subscribers
16:42:24 [captured] [remote] // "eventStream/asyncSubscribers": 
16:42:24 [captured] [remote] //   ObjectIdentifier(0x00007f3fb8245a90)
16:42:24 [captured] [remote] //   ObjectIdentifier(0x00007f3f783702b0)
16:42:24 [captured] [remote] // "eventStream/event": DistributedCluster.Cluster.Event.membershipChange(sact://local:14716080274836230658@127.0.0.1:9002 :: [unknown] -> [joining])
16:42:24 [captured] [remote] // "eventStream/subscribers": 
16:42:24 [captured] [remote] //   /system/nodeDeathWatcher/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [remote] //   /system/cluster/gossip/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [remote] //   /system/cluster/leadership
16:42:24 [captured] [remote] //   /system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [remote] 7:42:20.2960 debug [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:970] New member, contacting its receptionist: sact://local@127.0.0.1:9002
16:42:24 [captured] [remote] 7:42:20.2960 debug [/user/swim] [SWIMInstance.swift:899] Received ack from [SWIMActor(sact://local@127.0.0.1:9002/user/swim)] with incarnation [0] and payload [membership([SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 0)])]
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [remote] // "swim/members/count": 2
16:42:24 [captured] [remote] // "swim/protocolPeriod": 1
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:20.2960 debug [/system/cluster/leadership] [Leadership.swift:303] Selected new leader: [nil -> Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable)]
16:42:24 [captured] [remote] // "leadership/election": DistributedCluster.Leadership.LowestReachableMember
16:42:24 [captured] [remote] // "membership": Membership(count: 2, leader: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable), members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable), Member(sact://remote:850464202261074644@127.0.0.1:9003, status: joining, reachability: reachable)])
16:42:24 [captured] [remote] 7:42:20.2960 trace [/user/swim] [SWIMInstance.swift:190] Adjusted LHM multiplier
16:42:24 [captured] [remote] // "swim/lhm": 0
16:42:24 [captured] [remote] // "swim/lhm/event": successfulProbe
16:42:24 [captured] [remote] 7:42:20.2960 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:812] Replicate ops to: [$wellKnown: receptionist]
16:42:24 [captured] [remote] 7:42:20.2960 debug [/system/cluster] [ClusterShell.swift:707] Association already allocated for remote: sact://local@127.0.0.1:9002, existing association: [AssociatedState(associated(channel: SocketChannel { BaseSocket { fd=240 }, active = true, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9003), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43934) }), selfNode: sact://remote:850464202261074644@127.0.0.1:9003, remoteNode: sact://local:14716080274836230658@127.0.0.1:9002)]
16:42:24 [captured] [remote] 7:42:20.2960 debug [/system/receptionist-ref] [_OperationLogClusterReceptionistBehavior.swift:626] New member, contacting its receptionist: sact://local@127.0.0.1:9002
16:42:24 [captured] [remote] 7:42:20.2960 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:827] No ops to replay
16:42:24 [captured] [remote] // "receptionist/ops/replay/atSeqNr": 0
16:42:24 [captured] [remote] // "receptionist/peer": [$wellKnown: receptionist]
16:42:24 [captured] [remote] 7:42:20.2970 debug [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:228] Received: remoteActorWatched(watcher: _AddressableActorRef(/system/cluster/gossip), remoteNode: sact://local:14716080274836230658@127.0.0.1:9002)
16:42:24 [captured] [remote] 7:42:20.2970 trace [/system/cluster/gossip] [Gossiper+Shell.swift:359] Got introduced to peer [_ActorRef<GossipShell<DistributedCluster.Cluster.MembershipGossip, DistributedCluster.Cluster.MembershipGossip>.Message>(sact://local@127.0.0.1:9002/system/cluster/gossip)]
16:42:24 [captured] [remote] // "gossip/peerCount": 1
16:42:24 [captured] [remote] // "gossip/peers": [sact://local@127.0.0.1:9002/system/cluster/gossip]
16:42:24 [captured] [remote] 7:42:20.2970 trace [/system/cluster/gossip] [Gossiper+Shell.swift:272] Schedule next gossip round in 1s 125ms (1s ± 20.0%)
16:42:24 [captured] [remote] 7:42:20.2970 debug [/user/swim] [SWIMActor.swift:135] Sending ping
16:42:24 [captured] [remote] // "swim/gossip/payload": membership([SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0), SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)])
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [remote] // "swim/members/count": 2
16:42:24 [captured] [remote] // "swim/protocolPeriod": 1
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/target": SWIMActor(sact://local@127.0.0.1:9002/user/swim)
16:42:24 [captured] [remote] // "swim/timeout": 1.0 seconds
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:20.2980 debug [/system/cluster] [ClusterShellState.swift:428] Leader change: LeadershipChange(oldLeader: nil, newLeader: Optional(Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)), file: "/code/Sources/DistributedCluster/Cluster/Cluster+Membership.swift", line: 396)
16:42:24 [captured] [remote] // "membership/count": 2
16:42:24 [captured] [remote] 7:42:20.2980 trace [/system/cluster] [ClusterShellState.swift:468] Membership updated on [sact://remote@127.0.0.1:9003] by leadershipChange(DistributedCluster.Cluster.LeadershipChange(oldLeader: nil, newLeader: Optional(Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)), file: "/code/Sources/DistributedCluster/Cluster/Cluster+Membership.swift", line: 396)): leader: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable)
16:42:24   sact://local:14716080274836230658@127.0.0.1:9002 status [joining]
16:42:24   sact://remote:850464202261074644@127.0.0.1:9003 status [joining]
16:42:24 [captured] [remote] 7:42:20.2980 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [remote] // "gossip/identifier": membership
16:42:24 [captured] [remote] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [remote] //   owner: sact://remote:850464202261074644@127.0.0.1:9003,
16:42:24 [captured] [remote] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [remote] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [remote] //         node:sact://remote@127.0.0.1:9003 @ 3
16:42:24 [captured] [remote] // ),
16:42:24 [captured] [remote] //   membership: Membership(
16:42:24 [captured] [remote] //     _members: [
16:42:24 [captured] [remote] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //     ],
16:42:24 [captured] [remote] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [remote] //   ),
16:42:24 [captured] [remote] // )
16:42:24 [captured] [remote] 7:42:20.2990 warning [/system/cluster] [ClusterShell.swift:1169] Received .restInPeace from sact://local@127.0.0.1:9002, meaning this node is known to be .down or worse, and should terminate. Initiating self .down-ing.
16:42:24 [captured] [remote] // "sender/node": sact://local:14716080274836230658@127.0.0.1:9002
16:42:24 [captured] [remote] 7:42:20.2990 debug [/system/cluster] [ClusterShell.swift:1267] Cluster membership change: sact://remote:850464202261074644@127.0.0.1:9003 :: [joining] -> [   down]
16:42:24 [captured] [remote] // "cluster/membership": 
16:42:24 [captured] [remote] //   Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable)
16:42:24 [captured] [remote] //   Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable)
16:42:24 [captured] [remote] // "cluster/membership/change": sact://remote:850464202261074644@127.0.0.1:9003 :: [joining] -> [   down]
16:42:24 [captured] [remote] 7:42:20.2990 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [remote] // "gossip/identifier": membership
16:42:24 [captured] [remote] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [remote] //   owner: sact://remote:850464202261074644@127.0.0.1:9003,
16:42:24 [captured] [remote] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [remote] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [remote] //         node:sact://remote@127.0.0.1:9003 @ 4
16:42:24 [captured] [remote] // ),
16:42:24 [captured] [remote] //   membership: Membership(
16:42:24 [captured] [remote] //     _members: [
16:42:24 [captured] [remote] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //     ],
16:42:24 [captured] [remote] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [remote] //   ),
16:42:24 [captured] [remote] // )
16:42:24 [captured] [remote] 7:42:20.2990 warning [/system/cluster] [ClusterShell.swift:1279] Self node was marked [.down]!
16:42:24 [captured] [remote] // "cluster/membership": Membership(count: 2, leader: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable), members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable), Member(sact://remote:850464202261074644@127.0.0.1:9003, status: down, reachability: reachable)])
16:42:24 [captured] [remote] 7:42:20.2990 trace [sact://remote@127.0.0.1:9003/user/swim] [ClusterSystem.swift:948] Resolved as local instance
16:42:24 [captured] [remote] // "actor": SWIMActor(/user/swim)
16:42:24 [captured] [remote] 7:42:20.2990 trace [/system/clusterEventStream] [ClusterEventStream.swift:195] Published event leadershipChange(DistributedCluster.Cluster.LeadershipChange(oldLeader: nil, newLeader: Optional(Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)), file: "/code/Sources/DistributedCluster/Cluster/Cluster+Membership.swift", line: 396)) to 4 subscribers and 2 async subscribers
16:42:24 [captured] [remote] // "eventStream/asyncSubscribers": 
16:42:24 [captured] [remote] //   ObjectIdentifier(0x00007f3fb8245a90)
16:42:24 [captured] [remote] //   ObjectIdentifier(0x00007f3f783702b0)
16:42:24 [captured] [remote] // "eventStream/event": DistributedCluster.Cluster.Event.leadershipChange(DistributedCluster.Cluster.LeadershipChange(oldLeader: nil, newLeader: Optional(Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)), file: "/code/Sources/DistributedCluster/Cluster/Cluster+Membership.swift", line: 396))
16:42:24 [captured] [remote] // "eventStream/subscribers": 
16:42:24 [captured] [remote] //   /system/nodeDeathWatcher/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [remote] //   /system/cluster/gossip/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [remote] //   /system/cluster/leadership
16:42:24 [captured] [remote] //   /system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [remote] 7:42:20.2990 warning [/user/swim] [SWIMActor.swift:527] Confirmed node .dead: MemberStatusChangedEvent(SWIM.Member(SWIMActor(/user/swim), dead, protocolPeriod: 1), previousStatus: alive(incarnation: 0))
16:42:24 [captured] [remote] // "swim/change": MemberStatusChangedEvent(SWIM.Member(SWIMActor(/user/swim), dead, protocolPeriod: 1), previousStatus: alive(incarnation: 0))
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/protocolPeriod": 1
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:20.2990 trace [/system/downingStrategy] [DowningStrategy.swift:135] Cancel timer for member: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable)
16:42:24 [captured] [remote] 7:42:20.2990 trace [/system/clusterEventStream] [ClusterEventStream.swift:195] Published event membershipChange(sact://remote:850464202261074644@127.0.0.1:9003 :: [joining] -> [   down]) to 4 subscribers and 2 async subscribers
16:42:24 [captured] [remote] // "eventStream/asyncSubscribers": 
16:42:24 [captured] [remote] //   ObjectIdentifier(0x00007f3fb8245a90)
16:42:24 [captured] [remote] //   ObjectIdentifier(0x00007f3f783702b0)
16:42:24 [captured] [remote] // "eventStream/event": DistributedCluster.Cluster.Event.membershipChange(sact://remote:850464202261074644@127.0.0.1:9003 :: [joining] -> [   down])
16:42:24 [captured] [remote] // "eventStream/subscribers": 
16:42:24 [captured] [remote] //   /system/nodeDeathWatcher/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [remote] //   /system/cluster/gossip/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [remote] //   /system/cluster/leadership
16:42:24 [captured] [remote] //   /system/receptionist-ref/$sub-DistributedCluster.Cluster.Event-y
16:42:24 [captured] [remote] 7:42:20.2990 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:987] Pruning cluster member: sact://remote@127.0.0.1:9003
16:42:24 [captured] [remote] 7:42:20.2990 trace [[$wellKnown: receptionist]] [ClusterSystem.swift:918] Resolved as local well-known instance: 'receptionist
16:42:24 [captured] [remote] 7:42:20.2990 trace [sact://remote@127.0.0.1:9003/user/swim] [ClusterSystem.swift:948] Resolved as local instance
16:42:24 [captured] [remote] // "actor": SWIMActor(/user/swim)
16:42:24 [captured] [remote] 7:42:20.3000 trace [/system/nodeDeathWatcher] [NodeDeathWatcher.swift:206] Node down: sact://remote:850464202261074644@127.0.0.1:9003 :: [joining] -> [   down]!
16:42:24 [captured] [remote] // "node": sact://remote:850464202261074644@127.0.0.1:9003
16:42:24 [captured] [remote] 7:42:20.3000 trace [/system/receptionist-ref] [_OperationLogClusterReceptionistBehavior.swift:643] Pruning cluster member: sact://remote@127.0.0.1:9003
16:42:24 [captured] [remote] 7:42:20.3000 info [/system/cluster/leadership] [Leadership.swift:246] Not enough members [1/2] to run election, members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable)]
16:42:24 [captured] [remote] // "leadership/election": DistributedCluster.Leadership.LowestReachableMember
16:42:24 [captured] [remote] 7:42:20.3000 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [remote] // "gossip/identifier": membership
16:42:24 [captured] [remote] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [remote] //   owner: sact://remote:850464202261074644@127.0.0.1:9003,
16:42:24 [captured] [remote] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [remote] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [remote] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [remote] // ),
16:42:24 [captured] [remote] //   membership: Membership(
16:42:24 [captured] [remote] //     _members: [
16:42:24 [captured] [remote] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable),
16:42:24 [captured] [remote] //     ],
16:42:24 [captured] [remote] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [remote] //   ),
16:42:24 [captured] [remote] // )
16:42:24 [captured] [remote] 7:42:20.3000 warning  [DowningSettings.swift:83] This node was marked as [.down], performing OnDownAction as configured: shutting down the system, in 3.0 seconds
16:42:24 [captured] [remote] 7:42:20.3010 trace  [ClusterSystem.swift:1405] Receive invocation: InvocationMessage(callID: B752E559-1F22-4F65-A774-6E8AA9EFBD40, target: DistributedCluster.SWIMActor.ping(origin:payload:sequenceNumber:), genericSubstitutions: [], arguments: 3) to: sact://remote:850464202261074644@127.0.0.1:9003/user/swim["$path": /user/swim]
16:42:24 [captured] [remote] // "invocation": InvocationMessage(callID: B752E559-1F22-4F65-A774-6E8AA9EFBD40, target: DistributedCluster.SWIMActor.ping(origin:payload:sequenceNumber:), genericSubstitutions: [], arguments: 3)
16:42:24 [captured] [remote] // "recipient/id": sact://remote:850464202261074644@127.0.0.1:9003/user/swim["$path": /user/swim]
16:42:24 [captured] [remote] 7:42:20.3020 trace [sact://remote@127.0.0.1:9003/user/swim] [ClusterSystem.swift:948] Resolved as local instance
16:42:24 [captured] [remote] // "actor": SWIMActor(/user/swim)
16:42:24 [captured] [remote] 7:42:20.3020 trace [/user/swim] [SWIMActor.swift:427] Received ping@1
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/ping/origin": sact://local@127.0.0.1:9002/user/swim
16:42:24 [captured] [remote] // "swim/ping/payload": membership([SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 0), SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 1)])
16:42:24 [captured] [remote] // "swim/ping/seqNr": 1
16:42:24 [captured] [remote] // "swim/protocolPeriod": 1
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:20.3020 trace [/user/swim] [SWIMInstance.swift:1401] Gossip about member sact://127.0.0.1:9002#14716080274836230658, incoming: [alive(incarnation: 0)] does not supersede current: [alive(incarnation: 0)]
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/protocolPeriod": 1
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:20.3020 trace  [ClusterSystem.swift:1555] Result handler, onReturn
16:42:24 [captured] [remote] // "call/id": B752E559-1F22-4F65-A774-6E8AA9EFBD40
16:42:24 [captured] [remote] // "type": PingResponse<SWIMActor, SWIMActor>
16:42:24 [captured] [remote] 7:42:20.3050 trace [sact://remote@127.0.0.1:9003/user/swim] [ClusterSystem.swift:948] Resolved as local instance
16:42:24 [captured] [remote] // "actor": SWIMActor(/user/swim)
16:42:24 [captured] [remote] 7:42:20.3050 trace [/user/swim] [SWIMInstance.swift:1401] Gossip about member sact://127.0.0.1:9002#14716080274836230658, incoming: [alive(incarnation: 0)] does not supersede current: [alive(incarnation: 0)]
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/protocolPeriod": 1
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:20.3050 debug [/user/swim] [SWIMInstance.swift:899] Received ack from [SWIMActor(sact://local@127.0.0.1:9002/user/swim)] with incarnation [0] and payload [membership([SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 0), SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 1)])]
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/protocolPeriod": 1
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:20.3050 trace [/user/swim] [SWIMInstance.swift:190] Adjusted LHM multiplier
16:42:24 [captured] [remote] // "swim/lhm": 0
16:42:24 [captured] [remote] // "swim/lhm/event": successfulProbe
16:42:24 [captured] [remote] 7:42:21.2890 trace [/system/cluster/gossip] [Gossiper+Shell.swift:96] Received gossip [membership]
16:42:24 [captured] [remote] // "gossip/identity": membership
16:42:24 [captured] [remote] // "gossip/incoming": MembershipGossip(owner: sact://local:14716080274836230658@127.0.0.1:9002, seen: Cluster.MembershipGossip.SeenTable([sact://local:14716080274836230658@127.0.0.1:9002: [node:sact://local@127.0.0.1:9002: 4]]), membership: Membership(count: 2, leader: .none, members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable), Member(sact://remote:850464202261074644@127.0.0.1:9003, status: joining, reachability: reachable)]))
16:42:24 [captured] [remote] // "gossip/origin": sact://local@127.0.0.1:9002/system/cluster/gossip
16:42:24 [captured] [remote] 7:42:21.2890 trace  [ClusterSystem.swift:1405] Receive invocation: InvocationMessage(callID: 68270814-CE50-4DC0-B052-E1A13D1EEC91, target: DistributedCluster.SWIMActor.ping(origin:payload:sequenceNumber:), genericSubstitutions: [], arguments: 3) to: sact://remote:850464202261074644@127.0.0.1:9003/user/swim["$path": /user/swim]
16:42:24 [captured] [remote] // "invocation": InvocationMessage(callID: 68270814-CE50-4DC0-B052-E1A13D1EEC91, target: DistributedCluster.SWIMActor.ping(origin:payload:sequenceNumber:), genericSubstitutions: [], arguments: 3)
16:42:24 [captured] [remote] // "recipient/id": sact://remote:850464202261074644@127.0.0.1:9003/user/swim["$path": /user/swim]
16:42:24 [captured] [remote] 7:42:21.2900 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [remote] // "gossip/identifier": membership
16:42:24 [captured] [remote] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [remote] //   owner: sact://remote:850464202261074644@127.0.0.1:9003,
16:42:24 [captured] [remote] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [remote] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [remote] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [remote] // ),
16:42:24 [captured] [remote] //   membership: Membership(
16:42:24 [captured] [remote] //     _members: [
16:42:24 [captured] [remote] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable),
16:42:24 [captured] [remote] //     ],
16:42:24 [captured] [remote] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [remote] //   ),
16:42:24 [captured] [remote] // )
16:42:24 [captured] [remote] 7:42:21.2900 trace [/system/cluster] [ClusterShell.swift:606] Local membership version is [.concurrent] to incoming gossip; Merge resulted in 0 changes.
16:42:24 [captured] [remote] // "gossip/before": MembershipGossip(
16:42:24 [captured] [remote] //   owner: sact://remote:850464202261074644@127.0.0.1:9003,
16:42:24 [captured] [remote] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [remote] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [remote] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [remote] // ),
16:42:24 [captured] [remote] //   membership: Membership(
16:42:24 [captured] [remote] //     _members: [
16:42:24 [captured] [remote] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable),
16:42:24 [captured] [remote] //     ],
16:42:24 [captured] [remote] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [remote] //   ),
16:42:24 [captured] [remote] // )
16:42:24 [captured] [remote] // "gossip/incoming": MembershipGossip(
16:42:24 [captured] [remote] //   owner: sact://remote:850464202261074644@127.0.0.1:9003,
16:42:24 [captured] [remote] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [remote] //     sact://local@127.0.0.1:9002 observed versions:
16:42:24 [captured] [remote] //         node:sact://local@127.0.0.1:9002 @ 4
16:42:24 [captured] [remote] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [remote] //         node:sact://local@127.0.0.1:9002 @ 4
16:42:24 [captured] [remote] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [remote] // ),
16:42:24 [captured] [remote] //   membership: Membership(
16:42:24 [captured] [remote] //     _members: [
16:42:24 [captured] [remote] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable),
16:42:24 [captured] [remote] //     ],
16:42:24 [captured] [remote] //     _leaderNode: nil,
16:42:24 [captured] [remote] //   ),
16:42:24 [captured] [remote] // )
16:42:24 [captured] [remote] // "gossip/now": MembershipGossip(
16:42:24 [captured] [remote] //   owner: sact://remote:850464202261074644@127.0.0.1:9003,
16:42:24 [captured] [remote] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [remote] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [remote] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [remote] // ),
16:42:24 [captured] [remote] //   membership: Membership(
16:42:24 [captured] [remote] //     _members: [
16:42:24 [captured] [remote] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable),
16:42:24 [captured] [remote] //     ],
16:42:24 [captured] [remote] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [remote] //   ),
16:42:24 [captured] [remote] // )
16:42:24 [captured] [remote] // "membership/changes": 
16:42:24 [captured] [remote] //   
16:42:24 [captured] [remote] // "tag": membership
16:42:24 [captured] [remote] 7:42:21.2900 trace [sact://remote@127.0.0.1:9003/user/swim] [ClusterSystem.swift:948] Resolved as local instance
16:42:24 [captured] [remote] // "actor": SWIMActor(/user/swim)
16:42:24 [captured] [remote] 7:42:21.2910 trace [/user/swim] [SWIMActor.swift:427] Received ping@2
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/ping/origin": sact://local@127.0.0.1:9002/user/swim
16:42:24 [captured] [remote] // "swim/ping/payload": membership([SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 0), SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 1)])
16:42:24 [captured] [remote] // "swim/ping/seqNr": 2
16:42:24 [captured] [remote] // "swim/protocolPeriod": 1
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:21.2910 trace [/user/swim] [SWIMInstance.swift:1401] Gossip about member sact://127.0.0.1:9002#14716080274836230658, incoming: [alive(incarnation: 0)] does not supersede current: [alive(incarnation: 0)]
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/protocolPeriod": 1
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:21.2910 trace  [ClusterSystem.swift:1555] Result handler, onReturn
16:42:24 [captured] [remote] // "call/id": 68270814-CE50-4DC0-B052-E1A13D1EEC91
16:42:24 [captured] [remote] // "type": PingResponse<SWIMActor, SWIMActor>
16:42:24 [captured] [remote] 7:42:21.2950 warning [/system/cluster] [ClusterShell.swift:1169] Received .restInPeace from sact://local@127.0.0.1:9002, meaning this node is known to be .down or worse, and should terminate. Initiating self .down-ing.
16:42:24 [captured] [remote] // "sender/node": sact://local:14716080274836230658@127.0.0.1:9002
16:42:24 [captured] [remote] 7:42:21.2950 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [remote] // "gossip/identifier": membership
16:42:24 [captured] [remote] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [remote] //   owner: sact://remote:850464202261074644@127.0.0.1:9003,
16:42:24 [captured] [remote] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [remote] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [remote] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [remote] // ),
16:42:24 [captured] [remote] //   membership: Membership(
16:42:24 [captured] [remote] //     _members: [
16:42:24 [captured] [remote] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable),
16:42:24 [captured] [remote] //     ],
16:42:24 [captured] [remote] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [remote] //   ),
16:42:24 [captured] [remote] // )
16:42:24 [captured] [remote] 7:42:21.2970 debug [/system/transport.server] [TransportPipelines.swift:134] Received handshake offer from: [sact://local:14716080274836230658@127.0.0.1:9002] with protocol version: [Version(1.0.0, reserved:0)]
16:42:24 [captured] [remote] // "handshake/channel": SocketChannel { BaseSocket { fd=240 }, active = true, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9003), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43936) }
16:42:24 [captured] [remote] 7:42:21.2970 notice [/system/cluster] [ClusterShell.swift:792] Received handshake while already [down]
16:42:24 [captured] [remote] 7:42:21.2970 debug [/system/transport.server] [TransportPipelines.swift:152] Write reject handshake offer to: [sact://local@127.0.0.1:9002] reason: [Node already leaving cluster.]
16:42:24 [captured] [remote] // "handshake/channel": SocketChannel { BaseSocket { fd=240 }, active = true, localAddress = Optional([IPv4]127.0.0.1/127.0.0.1:9003), remoteAddress = Optional([IPv4]127.0.0.1/127.0.0.1:43936) }
16:42:24 [captured] [remote] 7:42:21.2990 trace [/user/swim] [SWIMActor.swift:99] Periodic ping random member, among: 0
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/protocolPeriod": 2
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:21.2990 debug [/user/swim] [SWIMActor.swift:135] Sending ping
16:42:24 [captured] [remote] // "swim/gossip/payload": membership([SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1), SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)])
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/protocolPeriod": 2
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/target": SWIMActor(sact://local@127.0.0.1:9002/user/swim)
16:42:24 [captured] [remote] // "swim/timeout": 0.3 seconds
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:21.4230 trace [/system/cluster/gossip] [Gossiper+Shell.swift:192] New gossip round, selected [1] peers, from [1] peers
16:42:24 [captured] [remote] // "gossip/id": membership
16:42:24 [captured] [remote] // "gossip/peers/selected": 
16:42:24 [captured] [remote] //   _AddressableActorRef(sact://local@127.0.0.1:9002/system/cluster/gossip)
16:42:24 [captured] [remote] 7:42:21.4230 trace [/system/cluster/gossip] [Gossiper+Shell.swift:233] Sending gossip to sact://local@127.0.0.1:9002/system/cluster/gossip
16:42:24 [captured] [remote] // "actor/message": MembershipGossip(owner: sact://remote:850464202261074644@127.0.0.1:9003, seen: Cluster.MembershipGossip.SeenTable([sact://remote:850464202261074644@127.0.0.1:9003: [node:sact://local@127.0.0.1:9002: 4, node:sact://remote@127.0.0.1:9003: 5], sact://local:14716080274836230658@127.0.0.1:9002: [node:sact://local@127.0.0.1:9002: 4]]), membership: Membership(count: 2, leader: .none, members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable), Member(sact://remote:850464202261074644@127.0.0.1:9003, status: down, reachability: reachable)]))
16:42:24 [captured] [remote] // "gossip/peers/count": 1
16:42:24 [captured] [remote] // "gossip/target": sact://local@127.0.0.1:9002/system/cluster/gossip
16:42:24 [captured] [remote] 7:42:21.4230 trace [/system/cluster/gossip] [Gossiper+Shell.swift:272] Schedule next gossip round in 1s 85ms (1s ± 20.0%)
16:42:24 [captured] [remote] 7:42:21.4920 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:759] Periodic ack tick
16:42:24 [captured] [remote] 7:42:21.6000 debug [/user/swim] [SWIMActor.swift:153] .ping resulted in error
16:42:24 [captured] [remote] // "error": RemoteCallError(timedOut(2B7E5B42-5D9A-4F24-B78B-18C3BD35B011, DistributedCluster.TimeoutError(message: "Remote call [2B7E5B42-5D9A-4F24-B78B-18C3BD35B011] to [DistributedCluster.SWIMActor.ping(origin:payload:sequenceNumber:)](sact://local@127.0.0.1:9002/user/swim) timed out", timeout: 0.3 seconds)), at: DistributedCluster/ClusterSystem.swift:1272)
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), alive(incarnation: 0), protocolPeriod: 1)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/ping/sequenceNumber": 2
16:42:24 [captured] [remote] // "swim/ping/target": SWIMActor(sact://local@127.0.0.1:9002/user/swim)
16:42:24 [captured] [remote] // "swim/protocolPeriod": 2
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 2
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:21.6000 trace [/user/swim] [SWIMInstance.swift:190] Adjusted LHM multiplier
16:42:24 [captured] [remote] // "swim/lhm": 1
16:42:24 [captured] [remote] // "swim/lhm/event": failedProbe
16:42:24 [captured] [remote] 7:42:22.3030 trace [/user/swim] [SWIMActor.swift:99] Periodic ping random member, among: 0
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), suspect(incarnation: 0, suspectedBy: Set([sact://127.0.0.1:9003#850464202261074644])), protocolPeriod: 2, suspicionStartedAt: DispatchTime(rawValue: 99152366751359262))
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/protocolPeriod": 3
16:42:24 [captured] [remote] // "swim/suspects/count": 1
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 1
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:22.3040 debug [/user/swim] [SWIMActor.swift:135] Sending ping
16:42:24 [captured] [remote] // "swim/gossip/payload": membership([SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), suspect(incarnation: 0, suspectedBy: Set([sact://127.0.0.1:9003#850464202261074644])), protocolPeriod: 2, suspicionStartedAt: DispatchTime(rawValue: 99152366751359262)), SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)])
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), suspect(incarnation: 0, suspectedBy: Set([sact://127.0.0.1:9003#850464202261074644])), protocolPeriod: 2, suspicionStartedAt: DispatchTime(rawValue: 99152366751359262))
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/protocolPeriod": 3
16:42:24 [captured] [remote] // "swim/suspects/count": 1
16:42:24 [captured] [remote] // "swim/target": SWIMActor(sact://local@127.0.0.1:9002/user/swim)
16:42:24 [captured] [remote] // "swim/timeout": 0.6 seconds
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 1
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:22.4260 debug [/system/cluster/gossip] [Gossiper+Shell.swift:255] Did not receive ACK for of [membership] gossip
16:42:24 [captured] [remote] // "error": RemoteCallError(timedOut(FE52FF85-B432-4A55-BEF1-BF3AF0F12BE2, DistributedCluster.TimeoutError(message: "No response received for ask to [sact://local@127.0.0.1:9002/system/cluster/gossip] within timeout [1s]. Ask was initiated from function [sendGossip(_:identifier:_:to:onGossipAck:)] in [/code/Sources/DistributedCluster/Gossip/Gossiper+Shell.swift:243] and expected response of type [DistributedCluster.Cluster.MembershipGossip].", timeout: 1.0 seconds)), at: DistributedCluster/ActorRef+Ask.swift:267)
16:42:24 [captured] [remote] // "payload": MembershipGossip(owner: sact://remote:850464202261074644@127.0.0.1:9003, seen: Cluster.MembershipGossip.SeenTable([sact://remote:850464202261074644@127.0.0.1:9003: [node:sact://local@127.0.0.1:9002: 4, node:sact://remote@127.0.0.1:9003: 5], sact://local:14716080274836230658@127.0.0.1:9002: [node:sact://local@127.0.0.1:9002: 4]]), membership: Membership(count: 2, leader: .none, members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable), Member(sact://remote:850464202261074644@127.0.0.1:9003, status: down, reachability: reachable)]))
16:42:24 [captured] [remote] // "target": _ActorRef<GossipShell<DistributedCluster.Cluster.MembershipGossip, DistributedCluster.Cluster.MembershipGossip>.Message>(sact://local@127.0.0.1:9002/system/cluster/gossip)
16:42:24 [captured] [remote] 7:42:22.5090 trace [/system/cluster/gossip] [Gossiper+Shell.swift:192] New gossip round, selected [1] peers, from [1] peers
16:42:24 [captured] [remote] // "gossip/id": membership
16:42:24 [captured] [remote] // "gossip/peers/selected": 
16:42:24 [captured] [remote] //   _AddressableActorRef(sact://local@127.0.0.1:9002/system/cluster/gossip)
16:42:24 [captured] [remote] 7:42:22.5090 trace [/system/cluster/gossip] [Gossiper+Shell.swift:233] Sending gossip to sact://local@127.0.0.1:9002/system/cluster/gossip
16:42:24 [captured] [remote] // "actor/message": MembershipGossip(owner: sact://remote:850464202261074644@127.0.0.1:9003, seen: Cluster.MembershipGossip.SeenTable([sact://remote:850464202261074644@127.0.0.1:9003: [node:sact://local@127.0.0.1:9002: 4, node:sact://remote@127.0.0.1:9003: 5], sact://local:14716080274836230658@127.0.0.1:9002: [node:sact://local@127.0.0.1:9002: 4]]), membership: Membership(count: 2, leader: .none, members: [Member(sact://local:14716080274836230658@127.0.0.1:9002, status: joining, reachability: reachable), Member(sact://remote:850464202261074644@127.0.0.1:9003, status: down, reachability: reachable)]))
16:42:24 [captured] [remote] // "gossip/peers/count": 1
16:42:24 [captured] [remote] // "gossip/target": sact://local@127.0.0.1:9002/system/cluster/gossip
16:42:24 [captured] [remote] 7:42:22.5100 trace [/system/cluster/gossip] [Gossiper+Shell.swift:272] Schedule next gossip round in 1s 107ms (1s ± 20.0%)
16:42:24 [captured] [remote] 7:42:22.6920 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:759] Periodic ack tick
16:42:24 [captured] [remote] 7:42:22.9050 debug [/user/swim] [SWIMActor.swift:153] .ping resulted in error
16:42:24 [captured] [remote] // "error": RemoteCallError(timedOut(4C0C0D5D-8105-43F4-B112-E1F84ABD4B87, DistributedCluster.TimeoutError(message: "Remote call [4C0C0D5D-8105-43F4-B112-E1F84ABD4B87] to [DistributedCluster.SWIMActor.ping(origin:payload:sequenceNumber:)](sact://local@127.0.0.1:9002/user/swim) timed out", timeout: 0.6 seconds)), at: DistributedCluster/ClusterSystem.swift:1272)
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), suspect(incarnation: 0, suspectedBy: Set([sact://127.0.0.1:9003#850464202261074644])), protocolPeriod: 2, suspicionStartedAt: DispatchTime(rawValue: 99152366751359262))
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/ping/sequenceNumber": 3
16:42:24 [captured] [remote] // "swim/ping/target": SWIMActor(sact://local@127.0.0.1:9002/user/swim)
16:42:24 [captured] [remote] // "swim/protocolPeriod": 3
16:42:24 [captured] [remote] // "swim/suspects/count": 1
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 1
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:22.9050 trace [/user/swim] [SWIMInstance.swift:190] Adjusted LHM multiplier
16:42:24 [captured] [remote] // "swim/lhm": 2
16:42:24 [captured] [remote] // "swim/lhm/event": failedProbe
16:42:24 [captured] [remote] 7:42:23.3010 warning  [DowningSettings.swift:86] Shutting down...
16:42:24 [captured] [remote] 7:42:23.3010 debug  [ClusterSystem.swift:512] Shutting down actor system [remote]. All actors will be stopped.
16:42:24 [captured] [remote] 7:42:23.3010 trace [/system/cluster/gossip] [Gossiper+Shell.swift:147] Update (locally) gossip payload [membership]
16:42:24 [captured] [remote] // "gossip/identifier": membership
16:42:24 [captured] [remote] // "gossip/payload": MembershipGossip(
16:42:24 [captured] [remote] //   owner: sact://remote:850464202261074644@127.0.0.1:9003,
16:42:24 [captured] [remote] //   seen: Cluster.Gossip.SeenTable(
16:42:24 [captured] [remote] //     sact://remote@127.0.0.1:9003 observed versions:
16:42:24 [captured] [remote] //         node:sact://remote@127.0.0.1:9003 @ 5
16:42:24 [captured] [remote] // ),
16:42:24 [captured] [remote] //   membership: Membership(
16:42:24 [captured] [remote] //     _members: [
16:42:24 [captured] [remote] //       sact://local@127.0.0.1:9002: Member(sact://local@127.0.0.1:9002, status: joining, reachability: reachable),
16:42:24 [captured] [remote] //       sact://remote@127.0.0.1:9003: Member(sact://remote@127.0.0.1:9003, status: down, reachability: reachable),
16:42:24 [captured] [remote] //     ],
16:42:24 [captured] [remote] //     _leaderNode: sact://local:14716080274836230658@127.0.0.1:9002,
16:42:24 [captured] [remote] //   ),
16:42:24 [captured] [remote] // )
16:42:24 [captured] [remote] 7:42:23.3020 info [/system/cluster] [ClusterShell.swift:1213] Unbound server socket [127.0.0.1:9003], node: sact://remote:850464202261074644@127.0.0.1:9003
16:42:24 [captured] [remote] 7:42:23.8920 trace [[$wellKnown: receptionist]] [OperationLogDistributedReceptionist.swift:759] Periodic ack tick
16:42:24 [captured] [remote] 7:42:24.3080 info [/user/swim] [SWIMActor.swift:341] Node sact://127.0.0.1:9002#14716080274836230658 determined [.unreachable]! The node is not yet marked [.down], a downing strategy or other Cluster.Event subscriber may act upon this information.
16:42:24 [captured] [remote] // "swim/member": SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), unreachable(incarnation: 0), protocolPeriod: 3)
16:42:24 [captured] [remote] 7:42:24.3090 trace [/user/swim] [SWIMActor.swift:99] Periodic ping random member, among: 0
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), unreachable(incarnation: 0), protocolPeriod: 3)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/protocolPeriod": 4
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 1
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:24.3090 debug [/user/swim] [SWIMActor.swift:135] Sending ping
16:42:24 [captured] [remote] // "swim/gossip/payload": membership([SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), unreachable(incarnation: 0), protocolPeriod: 3), SWIM.Member(SWIMActor(/user/swim), alive(incarnation: 0), protocolPeriod: 0)])
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), unreachable(incarnation: 0), protocolPeriod: 3)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/protocolPeriod": 4
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/target": SWIMActor(sact://local@127.0.0.1:9002/user/swim)
16:42:24 [captured] [remote] // "swim/timeout": 0.9 seconds
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 1
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:24.3100 debug [/user/swim] [SWIMActor.swift:153] .ping resulted in error
16:42:24 [captured] [remote] // "error": RemoteCallError(clusterAlreadyShutDown, at: DistributedCluster/ClusterSystem.swift:1154)
16:42:24 [captured] [remote] // "swim/incarnation": 0
16:42:24 [captured] [remote] // "swim/members/all": 
16:42:24 [captured] [remote] //   SWIM.Member(SWIMActor(sact://local@127.0.0.1:9002/user/swim), unreachable(incarnation: 0), protocolPeriod: 3)
16:42:24 [captured] [remote] // "swim/members/count": 1
16:42:24 [captured] [remote] // "swim/ping/sequenceNumber": 4
16:42:24 [captured] [remote] // "swim/ping/target": SWIMActor(sact://local@127.0.0.1:9002/user/swim)
16:42:24 [captured] [remote] // "swim/protocolPeriod": 4
16:42:24 [captured] [remote] // "swim/suspects/count": 0
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMax": 1
16:42:24 [captured] [remote] // "swim/timeoutSuspectsBeforePeriodMin": 1
16:42:24 [captured] [remote] 7:42:24.3100 trace [/user/swim] [SWIMInstance.swift:190] Adjusted LHM multiplier
16:42:24 [captured] [remote] // "swim/lhm": 2
16:42:24 [captured] [remote] // "swim/lhm/event": failedProbe
16:42:24 ========================================================================================================================
16:42:24 Test Case 'ClusterSystemTests.test_cleanUpAssociationTombstones' failed (4.277 seconds)
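For context, the failure message ("No result within 3s ... Queried 30 times ... Last error: Boom(...)") comes from a polling-style assertion that retries a check until a deadline expires. A minimal sketch of such an `eventually`-style helper is below; the names here are hypothetical and this is not the project's actual `ActorTestKit` API, just an illustration of the retry-until-timeout pattern that produced this error:

```swift
import Foundation

// Hypothetical polling helper, similar in spirit to the test-kit block
// that reported "No result within 3s ... Queried 30 times".
func eventually<T>(
    within timeout: TimeInterval,
    interval: TimeInterval = 0.1,
    _ block: () throws -> T
) throws -> T {
    let deadline = Date().addingTimeInterval(timeout)
    var lastError: Error?
    var attempts = 0
    while Date() < deadline {
        attempts += 1
        do {
            // Success on any attempt ends the polling loop immediately.
            return try block()
        } catch {
            // Remember the most recent failure and retry after a short pause.
            lastError = error
            Thread.sleep(forTimeInterval: interval)
        }
    }
    struct EventuallyTimeoutError: Error { let message: String }
    throw EventuallyTimeoutError(message:
        "No result within \(timeout)s. Queried \(attempts) times. " +
        "Last error: \(String(describing: lastError))")
}
```

In the failing test, the block checks that association tombstones have been cleared and throws `Boom("Expected tombstones to get cleared")` while they remain, so after ~30 attempts in 3 seconds the helper surfaces that last error, as seen in the log above.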

@ktoso ktoso added the failed 💥 Failed tickets are CI or benchmarking failures, should be investigated as soon as possible label Jul 3, 2023