This repository has been archived by the owner on Feb 15, 2024. It is now read-only.

Nodejoin Spam and Temp Queue size attack #26

Open
EggPool opened this issue Aug 16, 2020 · 11 comments

Comments

@EggPool

EggPool commented Aug 16, 2020

While mitigating the memory and resource use of incoming nodejoin messages, v572 opens the door to a new kind of attack:
To enter the cycle, you have to enter the queue (the "nodes" file).
Since v572, to enter that queue you first have to pass through a size-limited temporary queue.
The size of this temporary queue is static, currently 1000 entries.
The scarce resource needed to enter the real queue is no longer an IPv4 address; it's a slot in that temporary queue.

Several things gave me hints:

  • several new users complaining that their newly queued verifier was not showing on nyzo.co, even after days.
  • heavy nodejoin spam, either continuous or in bursts, sometimes targeting specific classes of verifiers.
  • such targeted verifiers (those running a xxx001 version, for instance) hardly showing any new verifiers

The extended logging from my NCFP-10 PR was aimed at collecting data, but also allows running an optional extra script to auto-mitigate these attacks by rate-limiting them.

When a verifier is subject to nodejoin spam, its log looks like:

[1596735532.002 (2020-08-06 17:38:52.002 UTC)]: added new out-of-cycle node to queue: 6c31...076d
[1596735532.003 (2020-08-06 17:38:52.003 UTC)]: removed node from new out-of-cycle queue due to size
[1596735532.004 (2020-08-06 17:38:52.004 UTC)]: nodejoin_from 3.248.137.142 6c31...076d Launch9
[1596735532.023 (2020-08-06 17:38:52.023 UTC)]: added new out-of-cycle node to queue: 3c94...c24b
[1596735532.023 (2020-08-06 17:38:52.023 UTC)]: removed node from new out-of-cycle queue due to size
[1596735532.026 (2020-08-06 17:38:52.026 UTC)]: nodejoin_from 44.226.163.107 3c94...c24b WWW_BUY_COM 16
[1596735532.033 (2020-08-06 17:38:52.033 UTC)]: nodejoin_from 44.227.41.15 4cc5...7e91 New12 here
[1596735532.034 (2020-08-06 17:38:52.034 UTC)]: added new out-of-cycle node to queue: ecbf...c341
[1596735532.035 (2020-08-06 17:38:52.035 UTC)]: removed node from new out-of-cycle queue due to size
[1596735532.038 (2020-08-06 17:38:52.038 UTC)]: added new out-of-cycle node to queue: 91fc...b7ec
[1596735532.038 (2020-08-06 17:38:52.038 UTC)]: removed node from new out-of-cycle queue due to size
[1596735532.038 (2020-08-06 17:38:52.038 UTC)]: nodejoin_from 44.226.13.107 91fc...b7ec Foxwie10
[1596735532.049 (2020-08-06 17:38:52.049 UTC)]: nodejoin_from 35.183.163.76 ecbf...c341 ZBank16
[1596735532.061 (2020-08-06 17:38:52.061 UTC)]: added new out-of-cycle node to queue: db0e...f9bc
[1596735532.061 (2020-08-06 17:38:52.061 UTC)]: removed node from new out-of-cycle queue due to size

This goes on and on, at around 100 nodejoins per second.
The logs show the process is effective: a great many temp nodes drop off the temp queue.
When the temp queue is full - and it clearly is on these occasions - every time a new nodejoin comes in, a random node from the temp queue is dropped.
Send enough nodejoins and you wipe out almost everyone else.
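The eviction math is what makes this cheap: if the full temp queue holds Q = 1000 candidates and each spam join evicts one entry at random, a given legitimate candidate survives S spam joins with probability roughly (1 - 1/Q)^S ≈ e^(-S/Q). A minimal simulation of this behavior (an assumed simplification: each spam join replaces one random slot; in the real implementation the randomly evicted node could also be the newcomer itself):

```python
import random

random.seed(0)     # for reproducibility of this sketch
QUEUE_SIZE = 1000  # static temp queue size mentioned above

def flood(queue, spam_count):
    """Simulate spam nodejoins against a full temp queue.

    Each incoming spam join evicts a random entry and takes its slot.
    """
    for i in range(spam_count):
        victim = random.randrange(len(queue))
        queue[victim] = ("spam", i)
    return queue

# Start with a full queue of legitimate candidates.
queue = [("legit", i) for i in range(QUEUE_SIZE)]
flood(queue, 5000)  # 5000 spam joins: under a minute at ~100 joins/s

survivors = sum(1 for kind, _ in queue if kind == "legit")
# Expected survival: 1000 * (1 - 1/1000)**5000 ≈ 1000 * e**-5 ≈ 7 nodes.
print(f"legitimate candidates left: {survivors} / {QUEUE_SIZE}")
```

At the ~100 joins per second seen in the log above, 5000 joins is under a minute of spam, and only a handful of the original 1000 candidates remain.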

If you can borrow more than 1000 IPs, even for a short period of time (Amazon IPs, Alibaba, SOCKS proxies, botnets, some blockchains renting out their nodes: renting 10k IPs is easy and cheap),
then you can flood the temp queue of the in-cycle nodes, push all the other new temp candidates out, and improve the chances that only your nodes remain.
Spam with temporary IPs, then spam with your new nodes' IPs: you enter the real queue, others don't.
Regularly spam nodejoins with borrowed IPs and you make it statistically harder for others to join.

This is not a very efficient attack; it would need to be done at scale and continuously to be useful.

However, the logs show this does happen in the real world, and continuously (at least on some verifiers).

Another sample, from a node that has already banned a lot of spamming IPs:
[1597137583.389 (2020-08-11 09:19:43.389 UTC)]: removed node from new out-of-cycle queue due to size
[1597137583.390 (2020-08-11 09:19:43.390 UTC)]: nodejoin_from 44.227.158.100 6ed0...ca7a NowFuture14
[1597137583.395 (2020-08-11 09:19:43.395 UTC)]: added new out-of-cycle node to queue: e37e...e3e4
[1597137583.396 (2020-08-11 09:19:43.396 UTC)]: nodejoin_from 99.79.142.3 e37e...e3e4 DoYouW0
[1597137583.411 (2020-08-11 09:19:43.411 UTC)]: nodejoin_from 50.116.12.77 8a1d...de82 feiya200_u
[1597137583.417 (2020-08-11 09:19:43.417 UTC)]: added new out-of-cycle node to queue: 8adb...5069
[1597137583.417 (2020-08-11 09:19:43.417 UTC)]: removed node from new out-of-cycle queue due to size
[1597137583.420 (2020-08-11 09:19:43.420 UTC)]: added new out-of-cycle node to queue: f968...6fc3
[1597137583.420 (2020-08-11 09:19:43.420 UTC)]: removed node from new out-of-cycle queue due to size
[1597137583.420 (2020-08-11 09:19:43.420 UTC)]: nodejoin_from 99.79.129.23 8adb...5069 PACITES4
[1597137583.422 (2020-08-11 09:19:43.422 UTC)]: added new out-of-cycle node to queue: 0a40...6494
[1597137583.422 (2020-08-11 09:19:43.422 UTC)]: nodejoin_from 99.79.175.115 f968...6fc3 noNation19
[1597137583.422 (2020-08-11 09:19:43.422 UTC)]: nodejoin_from 52.39.118.45 0a40...6494 Detail POD 17
[1597137583.424 (2020-08-11 09:19:43.424 UTC)]: added new out-of-cycle node to queue: 0533...5c9f
[1597137583.424 (2020-08-11 09:19:43.424 UTC)]: removed node from new out-of-cycle queue due to size

Temp queue full, nodes dropping. This has been happening for weeks, if not longer. This node can hardly see real new candidates.
The bottleneck is the temp queue size.

In addition to the temp queue size "attack", the heavy nodejoin spam itself is an effective DoS attack.
It now seems to be targeted at verifiers with a low mesh count, and/or verifiers running a xxx001 version.
In-cycle verifiers under attack experience high CPU load and RAM usage, can start swapping, and end up in a state where they can't follow the cycle anymore.
Some technical users had to modify the code or run a xxx001 version to get an extended IP log of the attacks, so as to identify the sources and ban them via iptables (mostly Amazon EC2 IPs).

@m-burner

This is a real problem.
I'm collecting information about these addresses and banning them via iptables if there are more than 100 requests per hour.
For example:

501  nodejoin_from 35.182.9.139 8e49...e1e3 DoYouW8
507  nodejoin_from 99.79.156.196 bc33...c613 DoYouW9
510  nodejoin_from 52.4.13.207 49be...d93d JJSIN7
543  nodejoin_from 52.211.129.250 a242...d6b2 Launch8
564  nodejoin_from 44.230.117.57 b181...48e4 ZIHXIN19
592  nodejoin_from 15.222.41.241 ea51...695a Nescafe9
614  nodejoin_from 15.223.84.159 727e...d6c6 hardNW23
627  nodejoin_from 3.224.22.144 01a5...7d02 JJSIN65
630  nodejoin_from 35.183.180.32 28d1...c557 hardNW2
643  nodejoin_from 3.230.240.64 0323...e475 Gift53_

But using external utilities is not the solution.
It would be better to apply a temporary ban to spamming IP addresses in the verifier code itself.
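As a stopgap on the external side, the per-hour counting described above can be scripted. A sketch, assuming the extended `nodejoin_from` log format quoted in this thread; the threshold and sample data are illustrative, and a real script would also bucket counts by timestamp to get a true per-hour rate:

```python
from collections import Counter

BAN_THRESHOLD = 100  # joins per hour, as suggested above (illustrative)

def spamming_ips(log_lines, threshold=BAN_THRESHOLD):
    """Count 'nodejoin_from <ip> ...' lines per IP and return offenders.

    Assumes the IP is the first field after 'nodejoin_from', as in the
    extended logs shown in this thread.
    """
    counts = Counter()
    for line in log_lines:
        if "nodejoin_from" in line:
            ip = line.split("nodejoin_from", 1)[1].split()[0]
            counts[ip] += 1
    return sorted(ip for ip, n in counts.items() if n > threshold)

# Demo on a tiny synthetic sample; in practice, feed one hour of log lines.
sample = ["nodejoin_from 35.182.9.139 8e49...e1e3 DoYouW8"] * 150 \
       + ["nodejoin_from 99.79.156.196 bc33...c613 DoYouW9"] * 3
for ip in spamming_ips(sample):
    # Print the ban rule rather than executing it, so it can be reviewed.
    print(f"iptables -A INPUT -s {ip} -j DROP")
```

The printed `iptables -A INPUT -s <ip> -j DROP` rules can then be reviewed and applied, or piped to a shell.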

@reb0rn21

reb0rn21 commented Aug 17, 2020

I can't find the Nyzo log I saved; I only have a whole tcp dump, but it shows some IPs are spamming.
The log was captured while a nodejoin DDoS was ongoing: GMT+2, 18:26 to 18:28 on 8/11/2020.

Some source IPs that sent more than a few dozen packets in 2 minutes:
109 3.132.170.59
20 52.211.186.53
80+ 44.226.62.83

These are only the first few IPs I checked; as it's a raw tcp dump, only some of them are spamming, but it can clearly be seen that the DDoS came mostly from Amazon IPs.

tcp.log

@this-is-z0rn

Some of my verifier stats

Nodes dropped due to the temp queue being full:
cat /var/log/nyzo-verifier-stdout.log|grep "due to size"|wc -l

from Verifier https://nyzo.co/status?id=18c6.0af8
31794

from Verifier https://nyzo.co/status?id=1397.8327
16428

IPs sending many more nodejoin messages than usual:
tail -n10000 /var/log/nyzo-verifier-stdout.log | grep "nodejoin_from" | cut -d " " -f 2 | LC_ALL=C sort | uniq -c | sort -n | sed "s/^[ ]*//"

from Verifier https://nyzo.co/status?id=18c6.0af8
20 50.17.100.13
20 63.35.70.42
21 3.134.236.25
21 3.17.209.117
22 3.134.58.24
22 34.245.246.109
27 44.226.25.146
31 155.94.140.214
31 34.241.94.43
58 52.209.162.55
64 108.128.211.49
69 52.60.114.234
69 52.60.60.29
70 35.182.62.170
71 54.77.88.193
71 99.79.158.81
72 54.246.131.90
74 99.80.28.4
76 44.227.223.95
76 52.60.208.184
80 99.81.1.30
83 52.60.236.144
89 52.60.82.192
95 52.208.175.62
96 3.132.78.99
98 34.250.89.63
98 44.230.47.77
99 44.224.202.198
100 44.227.112.115
100 54.69.209.98
101 44.225.10.203
102 15.222.75.161
102 44.228.55.159
103 15.222.50.168
103 15.222.67.135
103 44.227.15.160
104 44.227.102.153
105 15.222.75.37
105 18.220.128.89
105 52.60.179.45
105 54.72.219.71
106 3.17.30.225
106 35.182.48.249
106 52.215.54.19
107 18.188.76.23
107 3.232.0.84
107 35.183.146.70
107 63.33.170.71
107 63.34.123.168
108 18.217.71.76
108 35.182.95.236
108 44.225.119.183
109 44.225.225.71
109 54.154.40.210
110 52.60.147.199
110 52.60.171.69
111 44.229.68.201
111 99.79.87.50
112 3.20.217.31
114 44.230.117.57

from Verifier https://nyzo.co/status?id=1397.8327
19 99.79.182.68
20 34.245.246.109
20 34.250.68.170
20 34.255.134.92
20 99.79.164.250
21 18.206.132.88
21 50.17.100.13
21 63.35.70.42
22 3.134.236.25
22 3.134.58.24
22 3.135.30.40
22 3.17.209.117
22 63.35.28.177
24 34.241.94.43
34 155.94.140.214
50 52.209.162.55
57 108.128.211.49
61 52.60.60.29
62 52.60.114.234
63 35.182.62.170
63 54.77.88.193
64 54.246.131.90
65 99.79.158.81
68 99.80.28.4
69 52.60.208.184
70 44.227.223.95
72 99.81.1.30
77 52.60.236.144
84 52.60.82.192
97 44.228.55.159
102 52.208.175.62
104 3.132.78.99
105 34.250.89.63
106 44.224.202.198
106 44.230.47.77
107 44.227.112.115
108 35.183.146.70
108 54.69.209.98
110 15.222.50.168
110 35.182.95.236
110 44.225.10.203
110 44.227.15.160
110 52.60.179.45
111 15.222.75.161
112 15.222.67.135
112 44.227.102.153
112 54.72.219.71
113 18.188.76.23
113 35.182.48.249
113 52.215.54.19
114 15.222.75.37
115 18.220.128.89
115 3.17.30.225
116 63.34.123.168
116 99.79.87.50
117 18.217.71.76
117 44.225.119.183
117 44.225.225.71
117 63.33.170.71
118 44.229.68.201
118 52.60.171.69
119 3.232.0.84
119 44.230.117.57
119 54.154.40.210
120 52.60.147.199
121 3.20.217.31

@EggPool
Author

EggPool commented Aug 26, 2020

Latest victim of nodejoin spam:

  • Radiofoot left at block 8807968, cycle length: 2201
  • Radiofoot joined at block 8805766, cycle length: 2202

The owner sent me the logs and feedback.
That was an official v595 version.
The verifier was the target of nodejoin spam. Extract of the logs once it dropped:

added new out-of-cycle node to queue: d931...a383
removed node from new out-of-cycle queue due to size
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: 1ed8...d893
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: cff7...ae8c
removed node from new out-of-cycle queue due to size
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: 61de...e857
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: e179...5f4d
removed node from new out-of-cycle queue due to size
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: e20d...c941
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: b7e5...e774
removed node from new out-of-cycle queue due to size
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: 8d77...aea1
removed node from new out-of-cycle queue due to size
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: c447...8195
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: ef52...d52f
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 4ac3...6fc1
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: 64b3...45a2
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 1b30...0489
removed node from new out-of-cycle queue due to size
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: 698f...8453
removed node from new out-of-cycle queue due to size
removed node 9c45...127e from mesh on Radiofoot
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: 4437...3941
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: a618...1638
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 14cf...5da7
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 5494...20cb
removed node from new out-of-cycle queue due to size
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: c41e...9542
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 4adf...bf49
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 6b67...1702
removed node from new out-of-cycle queue due to size
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: 3444...c6f0
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: 2383...4b14
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 6aa8...4679
removed node from new out-of-cycle queue due to size
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: 6b9b...a461
added new out-of-cycle node to queue: 846e...2293
removed node from new out-of-cycle queue due to size
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: 4b80...c95b
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: fe90...9aed
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: b314...d113
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: ab8a...7b80
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 41e1...6907
removed node from new out-of-cycle queue due to size
Unable to determine top verifier. This is normal; out-of-cycle verifiers are not typically notified about new-verifier votes.
added new out-of-cycle node to queue: fefa...6aca
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 6d01...6994
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: bae7...4d26
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 1997...3d43
removed node from new out-of-cycle queue due to size

This goes on and on.
Running the official version, that user is unable to log the spamming IPs and block them.

The verifier was protected by a v595 sentinel. That sentinel was found stuck in an "uncertain" state, no longer updating, maybe as a result of its managed verifiers being spammed and lagging.
The sentinel had in-cycle nodes as block sources.

@reb0rn21

reb0rn21 commented Aug 26, 2020

I think, for nodejoins or any other message that allows a possible DDoS, we should define what is normal and what is DDoS, and push that code into the main repo. If a normal node sends 1-2 join messages per some interval, then we need to set limits and ban everything that crosses the line.
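The "define a limit and ban the rest" idea could be sketched in-code as a per-IP sliding window. All class names, limits, and durations below are illustrative assumptions, not Nyzo code:

```python
import time
from collections import defaultdict, deque

class NodeJoinRateLimiter:
    """Hypothetical per-IP rate limiter for incoming nodejoin messages.

    Allows a small number of joins per window; temporarily bans IPs that
    cross the line. Numbers are illustrative, not from any Nyzo release.
    """

    def __init__(self, max_joins=2, window_s=600.0, ban_s=3600.0):
        self.max_joins = max_joins
        self.window_s = window_s
        self.ban_s = ban_s
        self.history = defaultdict(deque)   # ip -> recent join timestamps
        self.banned_until = {}              # ip -> ban expiry time

    def allow(self, ip, now=None):
        """Return True if this nodejoin should be processed."""
        now = time.time() if now is None else now
        if self.banned_until.get(ip, 0.0) > now:
            return False                    # still serving a temporary ban
        joins = self.history[ip]
        while joins and now - joins[0] > self.window_s:
            joins.popleft()                 # drop timestamps outside window
        if len(joins) >= self.max_joins:
            self.banned_until[ip] = now + self.ban_s
            return False                    # crossed the line: ban
        joins.append(now)
        return True

limiter = NodeJoinRateLimiter()
print(limiter.allow("203.0.113.7", now=0.0))  # True: first join in window
print(limiter.allow("203.0.113.7", now=1.0))  # True: second join
print(limiter.allow("203.0.113.7", now=2.0))  # False: limit crossed, banned
```

The window and ban durations could be tuned from data such as the per-IP counts posted in this thread, and the whole thing gated behind a default-off preference so normal verifiers are unaffected.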

Nodes that did the most join spam:
103 35.183.213.247
119 44.226.117.123
119 99.79.156.196
120 34.251.240.60
121 99.79.28.207
123 44.226.35.146
124 13.52.234.69
124 34.246.157.134
124 52.52.126.199
124 52.52.76.148
125 100.21.241.20
125 15.222.68.247
125 44.226.174.209
126 44.224.210.129
127 15.222.77.205
127 3.132.184.209
127 44.225.93.240
127 44.226.193.198
128 52.17.234.107
128 52.8.174.194
130 44.224.62.79
131 15.222.48.163
131 3.219.88.173
131 52.60.66.88
131 54.77.220.93
132 18.200.104.40
133 15.222.70.5
133 34.241.66.95
133 34.243.225.48
133 52.60.227.162
135 13.52.233.228
135 13.57.162.158
135 34.252.154.131
135 52.16.87.107
135 54.215.193.59
136 52.53.104.213
136 52.53.61.80
136 52.8.71.190
137 13.52.228.31
137 3.208.188.129
137 99.80.173.222
138 50.18.216.130
139 52.8.69.84
143 44.230.46.61

@EggPool
Author

EggPool commented Aug 28, 2020

More logs from the Radiofoot verifier are attached.
They show the verifier in the queue, getting voted in, providing a block, getting spammed to death, freezing, then an unsuccessful restart once dropped, then a second restart.

Number of nodes removed from the 1000-slot temp queue during that period:
cat Radiofoot_drop.log|grep "due to size"|wc -l
11943

Number of new out-of-cycle adds, counted per node (temp queue re-adds, not raw nodejoins):
cat Radiofoot_drop.log |grep "added new"|sort| uniq -c| sort -rn|more
1839 added new out-of-cycle node to queue: eab2...cc41
1270 added new out-of-cycle node to queue: 538e...56df
1211 added new out-of-cycle node to queue: a3ea...e02d
1107 added new out-of-cycle node to queue: bd63...1cf0
863 added new out-of-cycle node to queue: 4e58...2346
851 added new out-of-cycle node to queue: 8120...c7f8
834 added new out-of-cycle node to queue: a26a...4f81
827 added new out-of-cycle node to queue: 82aa...83f9
809 added new out-of-cycle node to queue: 6da6...6d4a
723 added new out-of-cycle node to queue: 8211...1e2f
698 added new out-of-cycle node to queue: 0e8f...1031
677 added new out-of-cycle node to queue: 429e...6a0e
403 added new out-of-cycle node to queue: af3e...7e16
356 added new out-of-cycle node to queue: 9712...145a
258 added new out-of-cycle node to queue: 28a7...28d0
225 added new out-of-cycle node to queue: 444c...1cb0
123 added new out-of-cycle node to queue: bf99...6b73
119 added new out-of-cycle node to queue: 6ab6...aba9
119 added new out-of-cycle node to queue: 34f3...5470
94 added new out-of-cycle node to queue: 8927...3fea
92 added new out-of-cycle node to queue: 842c...9b5a
40 added new out-of-cycle node to queue: 3caf...234b
11 added new out-of-cycle node to queue: b0ec...8ff2
10 added new out-of-cycle node to queue: f737...0001
10 added new out-of-cycle node to queue: b4d6...36f1
8 added new out-of-cycle node to queue: f7ff...23fe
7 added new out-of-cycle node to queue: 7f7b...fa9f
6 added new out-of-cycle node to queue: e1a6...1b2b
6 added new out-of-cycle node to queue: d69c...0b82
6 added new out-of-cycle node to queue: 88f6...4b96
6 added new out-of-cycle node to queue: 838a...207b
6 added new out-of-cycle node to queue: 40e2...5f55
6 added new out-of-cycle node to queue: 339e...f30f
6 added new out-of-cycle node to queue: 2ada...1fc0
6 added new out-of-cycle node to queue: 06e8...f9c5
5 added new out-of-cycle node to queue: 5805...ef98
5 added new out-of-cycle node to queue: 463a...397c
5 added new out-of-cycle node to queue: 3bab...73f0
4 added new out-of-cycle node to queue: f64a...d001
4 added new out-of-cycle node to queue: f01c...1520
4 added new out-of-cycle node to queue: ee6c...a50d

IPs and nodejoins were not logged on that instance.
More verbose logging at that level is just one line to add, and it is invaluable for seeing, analysing, and mitigating this issue.
This should be part of the default logging behaviour.

Radiofoot_drop.log

@cobra-lab

cobra-lab commented Aug 30, 2020

How are things going with the fight against nodejoin spam?
Right now, new nodes cannot get into the waiting queue because of the spam.
There is no diversification now: the queue is captured by the old nodes, and the new ones are not allowed in.
This is the first thing to do to improve the project: fix nodejoin spam so new users can come in.

@reb0rn21

As far as I know, you have to wait 24h for your queue node to become visible...

@EggPool
Author

EggPool commented Aug 31, 2020

@cobra-lab
New "regular" nodes can still enter the queue. It can take longer, however (24h for nyzo.co, a few days for targeted in-cycle verifiers).

Not all in-cycle verifiers are heavily targeted at the moment.
What is needed is more users reporting, here, evidence of nodejoin spam and the volume of that spam, so the Nyzo devs can get a more complete understanding of the issue and its importance.
I can help collect logs if you are targeted and don't know what to do.

@awshrb

awshrb commented Sep 9, 2020

My verifier got this "queue attack" too. These new nodes are all Amazon IPs, version 595, with verifier nicknames FastQ8, Techgo20, Sleeping 4, inFACT19, ...

cleaning up because block 8970372 was frozen
sending verifier-removal vote to node at IP: 95.217.7.129
added new out-of-cycle node to queue: c6ff...6267
added new out-of-cycle node to queue: e16a...d2a2
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: f230...da02
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 28a7...28d0
added new out-of-cycle node to queue: bfa8...92d4
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: c6ff...6267
added new out-of-cycle node to queue: 8332...b08a
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 0aff...0745
removed node from new out-of-cycle queue due to size
maximum cycle transaction amount=∩0.000000, balance=∩74409295.118960
fetching block 8970373 (3126...0b46) from mesh on han
trying to fetch MissingBlockRequest25 from jiangxin022
top verifier af0f...35a5 has 1483 votes with a cycle length of 2221 (66.8%)
added new out-of-cycle node to queue: 13e1...d857
removed node from new out-of-cycle queue due to size
fetching block 8970373 (3126...0b46) from mesh on han
trying to fetch MissingBlockRequest25 from z0rn-de-fs-2
got missing block: MissingBlockResponse[block=[Block: v=2, height=8970373, hash=3126...0b46, id=8c21...24f2]]
maximum cycle transaction amount=∩0.000000, balance=∩74409295.118960
waiting for message queue to clear from thread [Verifier-mainLoop], size is 2
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 8970373
broadcasting message: BlockVote19 to 2279
got missing block: MissingBlockResponse[block=[Block: v=2, height=8970373, hash=3126...0b46, id=8c21...24f2]]
added new out-of-cycle node to queue: e16a...d2a2
removed node from new out-of-cycle queue due to size
storing new vote, height=8970373, hash=3126...0b46
added new out-of-cycle node to queue: f230...da02
removed node from new out-of-cycle queue due to size
freezing block [Block: v=2, height=8970373, hash=3126...0b46, id=8c21...24f2] with standard mechanism
after registering block [Block: v=2, height=8970373, hash=3126...0b46, id=8c21...24f2] in BlockchainMetricsManager(), cycleTransactionSum=∩5,515.139934, cycleLength=2221
BlockVoteManager: removing vote map of size 2218
cleaning up because block 8970373 was frozen
sending verifier-removal vote to node at IP: 138.201.246.8
added new out-of-cycle node to queue: c6ff...6267
added new out-of-cycle node to queue: 8332...b08a
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: bfa8...92d4
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: e16a...d2a2
removed node from new out-of-cycle queue due to size
maximum cycle transaction amount=∩0.000000, balance=∩74409295.118960
added new out-of-cycle node to queue: b15c...3481
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: c6ff...6267
added new out-of-cycle node to queue: f230...da02
removed node from new out-of-cycle queue due to size
maximum cycle transaction amount=∩0.000000, balance=∩74409295.118960
fetching block 8970374 (7582...e4fd) from mesh on han
trying to fetch MissingBlockRequest25 from jing
top verifier af0f...35a5 has 1483 votes with a cycle length of 2221 (66.8%)
added new out-of-cycle node to queue: 8332...b08a
removed node from new out-of-cycle queue due to size
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 8970374
broadcasting message: BlockVote19 to 2279
got missing block: MissingBlockResponse[block=[Block: v=2, height=8970374, hash=7582...e4fd, id=22c7...43e3]]
added new out-of-cycle node to queue: bfa8...92d4
removed node from new out-of-cycle queue due to size
storing new vote, height=8970374, hash=7582...e4fd
added new out-of-cycle node to queue: 4866...47c1
added new out-of-cycle node to queue: 24cb...2427
freezing block [Block: v=2, height=8970374, hash=7582...e4fd, id=22c7...43e3] with standard mechanism
after registering block [Block: v=2, height=8970374, hash=7582...e4fd, id=22c7...43e3] in BlockchainMetricsManager(), cycleTransactionSum=∩5,515.139934, cycleLength=2221
BlockVoteManager: removing vote map of size 2060
cleaning up because block 8970374 was frozen
sending verifier-removal vote to node at IP: 195.201.217.213
added new out-of-cycle node to queue: c6ff...6267
added new out-of-cycle node to queue: e16a...d2a2
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: f230...da02
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: 8332...b08a
removed node from new out-of-cycle queue due to size
added new out-of-cycle node to queue: c6ff...6267
maximum cycle transaction amount=∩0.000000, balance=∩74409295.118960
added new out-of-cycle node to queue: bfa8...92d4
removed node from new out-of-cycle queue due to size
fetching block 8970375 (8135...cabc) from mesh on han
trying to fetch MissingBlockRequest25 from VR01b
fetching block 8970375 (8135...cabc) from mesh on han
trying to fetch MissingBlockRequest25 from bolo7
top verifier af0f...35a5 has 1483 votes with a cycle length of 2221 (66.8%)
fetching block 8970375 (8135...cabc) from mesh on han
trying to fetch MissingBlockRequest25 from CPRCP14
added new out-of-cycle node to queue: e16a...d2a2
removed node from new out-of-cycle queue due to size
got missing block: MissingBlockResponse[block=[Block: v=2, height=8970375, hash=8135...cabc, id=17dc...bc90]]
maximum cycle transaction amount=∩0.000000, balance=∩74409295.118960
maximum cycle transaction amount=∩0.000000, balance=∩74409295.118960
waiting for message queue to clear from thread [Verifier-mainLoop], size is 2
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 8970375
broadcasting message: BlockVote19 to 2279
got missing block: MissingBlockResponse[block=[Block: v=2, height=8970375, hash=8135...cabc, id=17dc...bc90]]
added new out-of-cycle node to queue: f230...da02

@EggPool
Author

EggPool commented Sep 10, 2020

@dystophia proposed this patch - handling auto-ban of spamming IPs - to open-nyzo

Open-Nyzo#1
