This repository has been archived by the owner on Oct 4, 2019. It is now read-only.

ECIP-1010 Delay Difficulty Bomb Explosion #4

Closed

elaineo opened this issue Sep 13, 2016 · 45 comments

@elaineo
Member

elaineo commented Sep 13, 2016

This issue is dedicated to discussion of ECIP-1010 Delay Difficulty Bomb Explosion by @splix.
Located here: https://github.com/ethereumproject/ECIPs/blob/master/EIPs/ECIP-1010.md

@elaineo elaineo changed the title Jump in Difficulty ECIP-1010 Delay Difficulty Bomb Explosion Sep 13, 2016
@elaineo
Member Author

elaineo commented Sep 13, 2016

(I'm moving my earlier comment here because the PR was merged)

When you "freeze", there is a huge jump in difficulty:

  • Block 5,000,000 == 2**48 == 281 TH
  • Block 5,200,000 == 2**50 == 1126 TH

With "delay the bomb", by contrast, the difficulty increases more gradually. Because of this, I think "delay the bomb" is the safer approach, unless we can be absolutely certain that we will do another HF before block 5,000,000.

@splix
Contributor

splix commented Sep 13, 2016

Sorry, that was a mistake in my formula; I missed that part yesterday. It should be 28 instead of 48.

The current ECIP has the correct values:

  • Block 4,000,000 == 2**28 == 268,435,456
  • Block 5,000,000 == 2**28 == 268,435,456
  • Block 5,200,000 == 2**30 == 1,073,741,824 (~1 GH)

@splix
Contributor

splix commented Sep 13, 2016

Current formula in ECIP is:

pause_block = 3000000 //15 Jan 2017
cont_block = 5000000 //15 Dec 2017
delay = (cont_block - pause_block) / 100000 //20
fixed_diff = (pause_block / 100000) - 2 //28

n = block.number < pause_block ? 2 : (block.number / 100000) - fixed_diff
n = block.number >= cont_block ? delay + 2 : n

block_diff = parent_diff 
      + parent_diff / 2048 * max(1 - (block_timestamp - parent_timestamp) / 10, -99) 
      + int(2**((block.number / 100000) - n))
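
(For reference, here is a minimal Python sketch of the bomb term implied by the pseudocode above; the constants mirror the ECIP values, and the sample block numbers are arbitrary examples.)

PAUSE_BLOCK = 3_000_000   # ~15 Jan 2017
CONT_BLOCK = 5_000_000    # ~15 Dec 2017
DELAY = (CONT_BLOCK - PAUSE_BLOCK) // 100_000    # 20
FIXED_DIFF = (PAUSE_BLOCK // 100_000) - 2        # 28

def bomb_exponent(block_number):
    """Exponent of the 2**x bomb term for a given block number."""
    if block_number < PAUSE_BLOCK:
        n = 2                                         # original schedule
    elif block_number < CONT_BLOCK:
        n = (block_number // 100_000) - FIXED_DIFF    # bomb frozen at 2**28
    else:
        n = DELAY + 2                                 # schedule resumes, shifted by the delay
    return (block_number // 100_000) - n

for b in (2_900_000, 3_000_000, 4_000_000, 5_000_000, 5_200_000):
    print(b, 2 ** bomb_exponent(b))   # 2**27, 2**28, 2**28, 2**28, 2**30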

@realcodywburns
Contributor

You are correct, Elaine. In the spreadsheet I also had to change the formula after the freeze to INT(2^((QUOTIENT(B52,100000)-22))) in order to prevent an instant "explosion". We would need to do this in the actual code as well.

@splix
Contributor

splix commented Sep 13, 2016

I'll rework the formula and try to simplify it; right now I'm not sure about the correct value :)

UPD: the value seems to be correct, but it's hard to read and easy to get lost here.

@elaineo
Member Author

elaineo commented Sep 13, 2016

Right, so we will have a long period of near-constant difficulty, and then a sudden "explosion".

Freeze:

  • Block 4,000,000 == 2**28 == 268,435,456
  • Block 5,000,000 == 2**28 == 268,435,456
  • Block 5,200,000 == 2**30 == ~1 GH (boom!)

Delay (assuming int(2**(block.number / 300000))):

  • Block 4,000,000 == 2**13.3
  • Block 5,000,000 == 2**16.6
  • Block 5,200,000 == 2**17.3

So it's more of a gradual increase. If we HF again well before block 5,200,000 it's no problem, but if we cut it close then it could be bad.
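
(To put rough numbers on the comparison, a small sketch; the 2**(block / 300000) term is the "delay" variant assumed above, not the final ECIP formula.)

def freeze_bomb(block):
    # held at 2**28 until block 5,000,000, then resuming 2,000,000 blocks behind schedule
    return 2 ** 28 if block < 5_000_000 else 2 ** ((block // 100_000) - 22)

def delay_bomb(block):
    # the gentler variant assumed above
    return 2 ** (block / 300_000)

for b in (4_000_000, 5_000_000, 5_200_000):
    print(f"{b}: freeze {freeze_bomb(b):,.0f}  delay {delay_bomb(b):,.0f}")
    # freeze: 268,435,456 / 268,435,456 / 1,073,741,824
    # delay:  ~10,321 / ~104,032 / ~165,140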

@splix
Contributor

splix commented Sep 13, 2016

Current difficulty is 8 TH already (and that's with just 10% of the ETH hashrate), so a bomb term of ~1 GH is very far from BOOM.

@splix
Contributor

splix commented Sep 13, 2016

And yes, we want the BOOM some time after block 5,000,000. Just not at the current 3,000,000.

@elaineo
Member Author

elaineo commented Sep 13, 2016

Okay. It's easy enough to write/test multiple patches. I can create some stress tests today for different scenarios so we can decide on the optimal block number. Do you guys use geth and ethtest, or do you prefer something else?

@splix
Contributor

splix commented Sep 13, 2016

If I change the format to the following, will it be easier to read?

if (block.number < pause_block) {
    explosion = (block.number / 100000) - 2    
} else if (block.number < cont_block) {
    explosion = fixed_diff 
} else { // block.number >= cont_block    
    explosion = (block.number / 100000) - delay - 2
}

block_diff = parent_diff 
      + parent_diff / 2048 * max(1 - (block_timestamp - parent_timestamp) / 10, -99) 
      + int(2**explosion)

@mkimid

mkimid commented Sep 14, 2016

So we need a hard fork to get 12~14 more months (2.5M blocks), and then hard fork again? I feel that is not a good way.

@arvicco
Contributor

arvicco commented Sep 14, 2016

@mkimid The reasoning here is that we absolutely need to hardfork SOON, in order to fix the difficulty bomb that will explode early next year. But by the time of this necessary hardfork, most likely we won't be able to come to a consensus on monetary policy and the long-term consensus algo. Regarding the consensus algo (PoW/PoS/hybrid), we may first need to see how ETH fares when it moves to Casper PoS (expected by mid-2017). So, by the end of 2017 we will have a more cohesive view on the most important aspects of monetary policy and the viability of PoS vs other consensus algo options, and we'll be in a much stronger position to fix these issues then; the diff bomb delay will help us come to a conclusion rather than procrastinate.

So, the strategy is: fix the most critical (and non-contentious) stuff like diff bomb (and possibly replay attacks) first, then provide a long-term solution to other issues based on solid data and rational discussion after a year.

@splix
Contributor

splix commented Sep 14, 2016

I don't think this is a hardfork, as it keeps backward compatibility and nothing changes in the protocol or blockchain state.

It just sets the minimal difficulty requirement to a fixed value. Old clients will be able to participate with new clients, and they will be cut off only at the moment of their own difficulty explosion. So it doesn't change anything for them either.

@elaineo
Member Author

elaineo commented Sep 14, 2016

@splix Old clients will be validating mined blocks based on the calcDifficulty function, so they will reject the new blocks

@splix
Contributor

splix commented Sep 14, 2016

@elaineo oh, you're right actually, I forgot about other parameters

@mkimid

mkimid commented Sep 14, 2016

@arvicco I fully understand your viewpoint, and thank you for your detailed explanation. IMO, the diff bomb effect will start noticeably from block #4M, which comes about 9 months from now. Put another way, we have 9 months to get a 'consensus'. Maybe that is enough time to get a 'consensus', and it is much better than a double hard fork. Also, ETH has just x8~x9 times bigger hashrate & diff than ETC now, and has just one more month than ETC; I just worry about and am concerned by the risk of a hard fork.

DAYS      BLOCK #    BLOCK TIME (s)  DIFFICULTY
0         2,200,000  14.70           7,580,000,000,000.00   <-- based on etcstat.net's numbers
34.02     2,400,000  14.70           7,578,264,643,522.75
68.03     2,600,000  14.69           7,576,543,513,515.87
102.04    2,800,000  14.69           7,574,874,344,678.13
136.04    3,000,000  14.69           7,573,408,082,222.16
170.03    3,200,000  14.69           7,572,748,513,447.29
204.04    3,400,000  14.69           7,575,310,794,066.63
238.11    3,600,000  14.72           7,590,755,552,310.80
272.49    3,800,000  14.85           7,657,725,278,946.29
308.09    4,000,000  15.38           7,930,789,436,562.56
348.62    4,200,000  17.51           9,028,217,570,494.97
408.88    4,400,000  26.03           13,422,954,081,097.10
548.06    4,600,000  60.13           31,004,632,913,278.70
1,002.80  4,800,000  196.45          101,297,488,834,056.00

(1) For day 0, I took the numbers from etcstat.net
(2) I calculated the diff based on the current block time
(3) I recalculated the diff based on the calculated diff from (2), assuming the same hashrate
(4) I repeated (2)~(3)
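
(A sketch of how I read the recipe above: hold the hashrate constant at the day-0 value and apply the Homestead difficulty formula once per 200,000-block row. Run as-is, it closely reproduces the figures in the table.)

START_BLOCK = 2_200_000
START_DIFF = 7.58e12               # etcstat.net figure from the first row
HASHRATE = START_DIFF / 14.70      # assumed constant

diff, t, days = START_DIFF, 14.70, 0.0
for block in range(START_BLOCK + 200_000, 4_800_001, 200_000):
    bomb = 2 ** ((block // 100_000) - 2)              # original, non-delayed bomb
    diff = diff + diff / 2048 * (1 - t / 10) + bomb   # one adjustment step per row
    t = diff / HASHRATE                               # new expected block time
    days += 200_000 * t / 86_400
    print(f"{days:8.2f}  {block:,}  {t:7.2f}  {diff:,.2f}")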

@realcodywburns
Contributor

I would call it an "upgrade" rather than a 'hard fork'. Unless there are ethereum purists dedicated to running the code as written into the ice age, all nodes should upgrade. By the time block 3,000,000 arrives the bomb value will be at 268,435,456 and growing. The split won't be noticeable until block 3,100,000, when non-upgraded nodes will move to 536,870,912 and, for lack of sufficient hash rate, freeze.

The irony of my thinking mirroring the thought of the EF pre-DAO-fork is not lost on me. The difference between the two situations is the reason for splitting the code then vs now. Those choosing not to upgrade are more than welcome to continue operating until it is unfeasible to do so. The other difference is that at block 3,100,000 the difficulty bomb will be 4096x larger than it was at block 1,920,000.

@mikeyb
Contributor

mikeyb commented Sep 15, 2016

Are we also considering a DAG file size slowdown? If the DAG keeps growing at its current pace, PoW mining will become obsolete long before the frozen bomb goes off.

I suggest we either extend epoch times or slow down DAG growth.

Unless I am missing something? If we plan to keep PoW in any form, this will also have to be considered.

https://github.com/Genoil/dagSimCL
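
(For context, a rough sketch of the Ethash DAG size schedule, using the standard Ethash constants: ~1 GiB initial dataset and ~8 MiB of growth per 30,000-block epoch; the spec's rounding of the size down to a prime number of pages is ignored here.)

EPOCH_LENGTH = 30_000
DATASET_BYTES_INIT = 2 ** 30      # ~1 GiB at epoch 0
DATASET_BYTES_GROWTH = 2 ** 23    # ~8 MiB per epoch

def approx_dag_gib(block_number):
    epoch = block_number // EPOCH_LENGTH
    return (DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * epoch) / 2 ** 30

for b in (2_200_000, 3_000_000, 4_000_000, 5_000_000):
    print(b, f"{approx_dag_gib(b):.2f} GiB")   # ~1.57, ~1.78, ~2.04, ~2.30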

@arvicco
Contributor

arvicco commented Sep 15, 2016

@mikeyb I have seen this idea floated on Slack, but there was no formal proposal. Anyone is welcome to write up another ECIP, you know... ;)

@elaineo
Member Author

elaineo commented Sep 15, 2016

@mikeyb "If DAG size generation keeps up at the constant pace PoW mining will become obsolete long before the frozen bomb goes off." <-- I haven't run any numbers on this. Is it actually true that there will be such a huge drop in hash rate that PoW won't work by 2017?

@realcodywburns
Contributor

realcodywburns commented Sep 15, 2016

I'll see if I can work out a DAG size model and add it into the Google sheet.

@mikeyb
Contributor

mikeyb commented Sep 15, 2016

@elaineo

@realcodywburns has generated some data in his spreadsheet with DAG sizes and dates by proposal action, with the current epoch cycle of 30,000 blocks. Going to see if he can work his wizard-sheet magic to calculate what happens if we also change the epoch frequency to resize the DAG less often.

EDIT: TIL GitHub doesn't show new messages automatically in here.

@elaineo
Member Author

elaineo commented Sep 15, 2016

Proposed timeline:

  • 9/19 Decide on implementation
  • Identify potential points of failure
  • Identify critical dates and conditions for fork
  • 9/22 Complete suite of test vectors
  • ?? All clients patched, with tests passing on local builds
  • Run tests on test network with modified network conditions
  • Deploy testnet for global testing
  • 12/31 Hard Fork!

If you have the ability to edit my comment, feel free to edit directly and fill in dates.

@elaineo
Member Author

elaineo commented Sep 15, 2016

Let's define the conditions for deploying a hard fork.

Do we need to make the activation conditional? Since we don't know how quickly clients will adopt the HF update, we could delay activation until a supermajority of hash power is running the new client. For example, we could say that activation is achieved when 9,000 of 10,000 consecutive blocks have a version number that matches the updated client.

Or, if we want to keep it simple, just choose an activation block number (like how EF did the fork).
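
(A sketch of what that rolling-window check could look like; the 9,000-of-10,000 threshold is the example above, and block_versions is a hypothetical list of version markers taken from recent block headers.)

from collections import deque

WINDOW = 10_000
THRESHOLD = 9_000

def is_activated(block_versions, upgraded_version):
    """Return True once any WINDOW consecutive blocks contain at least
    THRESHOLD blocks signalling the upgraded client version."""
    window = deque(maxlen=WINDOW)
    count = 0
    for v in block_versions:               # oldest block first
        if len(window) == WINDOW and window[0] == upgraded_version:
            count -= 1                     # the oldest block is about to drop out
        window.append(v)
        if v == upgraded_version:
            count += 1
        if len(window) == WINDOW and count >= THRESHOLD:
            return True
    return False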

@splix
Contributor

splix commented Sep 15, 2016

I'm for a block number; it's much simpler to implement the delay for blocks 3,000,000-5,000,000.

@elaineo
Member Author

elaineo commented Sep 15, 2016

Are we decided on those numbers? @realcodywburns , @mikeyb what are your thoughts on the DAG size?

@mikeyb
Contributor

mikeyb commented Sep 15, 2016

@elaineo @realcodywburns is still crunching some figures with different variables. Should have that data soon.

RE: 12/31 hard fork. Please let's not do that on NYE :) January 2nd would be best if we wanted to do it around that time frame. It should give us all a chance to recover from the massive partying on New Year's.

EDIT: Or, if we go by block number, let's try to plan for a block not on a holiday.

@splix
Contributor

splix commented Sep 15, 2016

Block 3,000,000 is expected for 15 Jan 2017

@arvicco
Contributor

arvicco commented Sep 15, 2016

Yes, the New Year celebration is a big thing in some countries (including Russia, where it normally continues until Jan 7th). ;) So, I would be in favor of a specific block number (preferably a round one) that's expected to occur after Jan 7th.

@arvicco
Contributor

arvicco commented Sep 15, 2016

Looks like block 3,000,000 is a perfect milestone to effectuate the hardfork then.

@realcodywburns
Contributor

realcodywburns commented Sep 16, 2016

It looks like the DAG is more or less a secondary bomb. It was designed to keep ASICs out of mining, but its growth was never limited because the thought was that PoS would come. If it is allowed to keep growing, it will essentially make mining available only to those who can afford high-dollar cards with lots of RAM, which has the same effect as ASICs: centralizing the mining power to a few. @mikeyb is working on an ECIP and doing further testing. My recommendation would be to stop the DAG growth and keep it at a number high enough to deter ASICs but low enough to keep everyone equal. Also, aim to have a 'DAG steering committee' or something that reviews the GPU market and makes recommendations every time a system upgrade is announced.

@elaineo
Member Author

elaineo commented Sep 16, 2016

@realcodywburns should this be changed with the current HF, or wait til the next one?

@realcodywburns
Contributor

@elaineo it depends on the purpose of the DAG in the grand scheme of Ethereum Classic. At its current rate it will reach 2 GB in February 2017 and 2.5 GB around August 2017. The limiting factor in mining cards seems to be the bus speed, and cards with 3 GB or less become useless around a 2.2 GB DAG size. If the intent is to stop ASICs from coming, the current DAG would work for a few years. If we want 'normal users' to be able to mine with current rigs, then 2 GB should work just fine. The serious miners do not buy off-the-shelf components; ASICs or not, they can dump the cash on custom gear that will centralize the hash power.

@splix
Contributor

splix commented Sep 16, 2016

DAG growth doesn't look like part of the difficulty bomb problem. Can we split these two tasks into separate issues?

@realcodywburns
Contributor

realcodywburns commented Sep 16, 2016

Yes, it is a separate issue. It will have its own ECIP soon.

@splix
Contributor

splix commented Sep 18, 2016

Got some stats about the node types currently used on the ETC network:

[chart: client types]

With versions:

[chart: client versions]

@kimisan
Member

kimisan commented Sep 18, 2016

Wow, cool! Does it have an API?

@splix
Contributor

splix commented Sep 18, 2016

Unfortunately not; I gathered these stats semi-manually for now, just to see which clients we need to update for this ECIP.

@kimisan
Member

kimisan commented Sep 21, 2016

The bomb chart update:
ethereumclassic/explorer#50

http://imgur.com/ijOwcD0

@trustfarm-dev
Contributor

@elaineo @splix @realcodywburns,
I'm sorry that my comments are not exact.

I think that when we remove the difficulty bomb, we must also consider the DAG size.

To describe it simply with a real example,
AMD 280X real hash rate:
2015.08, DAG 1 GB, hash 26 Mh/s
2016.09, DAG 1.6 GB, hash 15 Mh/s

So, a simple rough suggestion:

Method 1.
  1. Reinitialize the DAG size and DAG initial vector if the DAG size goes over 2 GB, i.e. reset it to 1 GB.

Method 2.
  2. When the DAG size goes over 2 GB, after another 30,000 blocks the DAG size will decrease to 1 GB and then increase again, swinging within the 1 GB ~ 2 GB boundary.
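
(A sketch of how "method 2" could be expressed, assuming the usual Ethash growth rate of ~8 MiB per 30,000-block epoch; the wrap-around constant here is hypothetical, not from any spec.)

EPOCH_LENGTH = 30_000
INIT_GIB = 1.0
GROWTH_GIB = 8 / 1024        # ~8 MiB per epoch
EPOCHS_PER_CYCLE = 128       # 128 epochs * 8 MiB = 1 GiB of growth before wrapping

def oscillating_dag_gib(block_number):
    epoch = block_number // EPOCH_LENGTH
    effective_epoch = epoch % EPOCHS_PER_CYCLE   # wrap instead of growing forever
    return INIT_GIB + GROWTH_GIB * effective_epoch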

@elaineo
Member Author

elaineo commented Sep 28, 2016

@trustfarm-dev this should be a separate ECIP. @mikeyb was looking into this (I think?)

@arvicco
Contributor

arvicco commented Sep 28, 2016

@trustfarm-dev Hmmm, cyclic DAG size oscillations instead of constant bloat... interesting, I like this. Since the only function of the DAG was to make mining memory-hard and hamper ASIC development, this approach serves that purpose just as well.

@arvicco
Contributor

arvicco commented Sep 28, 2016

@elaineo
Maybe we can "pre-allocate" some ECIP number to DAG issue (say, 1011) and start discussing it before the formal proposal is made?

@trustfarm-dev
Contributor

@elaineo, @arvicco Will you create a discussion thread for 1011?

@arvicco
Contributor

arvicco commented Sep 28, 2016

Please move all DAG discussion to: #6
