
Should base throughput be higher? Mempool backlog woes #246

Open

ghost opened this issue Oct 24, 2021 · 8 comments

@ghost

ghost commented Oct 24, 2021

I observe frequent backlogs of 90-input transactions in the mempool, each about 13 kB; there is one right now as I type. Including just two of these transactions (~26 kB) puts miners in the penalty zone, so they require Aeon users to pay fees 10x the standard fee to compensate for the miners' penalty.

Here are some such backlogs:

| Height | Transaction count | Time to clear mempool |
| ------- | ----------------- | --------------------- |
| 1403464 | 5 tx | 15 min |
| 1403252 | 7 tx | 24 min |
| 1402901 | 6 tx | 23 min |
| 1402629 | 13 tx | 48 min |

(There are likely more; I didn't look further.)

So during these times of low usage, when the median limit is unlikely to increase, shouldn't Aeon users be able to submit small bursts of txs and have them confirmed in a timely manner? Yes, they can pay a higher fee; but given that Aeon users are already paying for tail emission, I believe the network should support small bursts of transactions at no additional cost to the users. Imagine an exchange or mining pool paying out withdrawals that take hours to confirm at the default fee level.

To solve this, I propose increasing the penalty-free zone to at least 250,000 bytes. At that point, Aeon has a throughput of about 10 * max_tx_size per block, which works out to somewhere around 10–100 txs per block in the default low-fee, penalty-free zone.
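As a rough check on that 10–100 range, here is a minimal sketch (Python; the transaction sizes are illustrative assumptions, with 13 kB taken from the backlog transactions described above):

```python
# How many transactions fit in the proposed 250,000-byte penalty-free zone,
# for a few assumed transaction sizes (illustrative, not protocol constants).
PENALTY_FREE_ZONE = 250_000  # proposed zone, in bytes

for tx_size in (2_500, 13_000, 25_000):  # assumed bytes per tx
    print(f"{tx_size:>6} B/tx -> {PENALTY_FREE_ZONE // tx_size} txs per block")
# 2500 B/tx -> 100, 13000 B/tx -> 19, 25000 B/tx -> 10
```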

Thank you for reading and please comment if you agree/disagree with my proposal, or if this topic deserves additional research.

@ghost
Author

ghost commented Oct 24, 2021

Happening now, with txes spending 1+ hours in the mempool:
[Screenshot from 2021-10-24 16:47 showing the mempool backlog]

@iamsmooth

Not in favor of increasing the penalty-free zone much, as doing so likely leads directly to a big increase in the long-term costs of running a node even without much actual demand. But I could imagine, in vague terms without proposing anything specific, some modifications to the algorithm that facilitate faster clearance of bursts if enough fees are paid.

@ghost
Author

ghost commented Nov 3, 2021

I think there could be more discussion of this penalty formula. Lifting the limit would help to patch its flaws, and I want to detail some thoughts on the subject. While in its current state there is no reason for security concern, the formula doesn't really match the idea of preventing bloat and long-term costs. It has also hardly been reviewed or critiqued, even though it is a critical component of the network economics. If it could be thoroughly examined and justified, this issue could be dropped.

One issue I have is its derivation. In the formula, the penalty proportion depends on the median limit itself. That is what we are observing here: when the median limit is a low value like 20,000, the penalty increases rapidly per additional byte over the limit.

Here is a simple example. If the median limit is 20,000 bytes and you want to include 10,000 bytes over the limit, you will need to pay:

penalty = base_subsidy * (block_size / median_size_of_last_100_blocks - 1)^2
penalty = 1.2 * (30,000/20,000 - 1)^2
penalty = 0.3 coins (0.03072 fee/kB)

Now, compare to when the limit is 200,000 bytes

penalty = 1.2 * (210,000/200,000 - 1)^2 = 0.003 coins (0.0003072 fee/kB)

Now the cost to increase the block by the same number of bytes is one hundred times less, a much lower fee/kB, below even the min fee. That makes adding bloat cheaper as the median size increases, which is counterintuitive for reducing long-term costs. Is there any justification for why the penalty is a function of the median? In my opinion, that is not an appropriate function and doesn't represent a solution (although it works, kind of).
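A minimal sketch (Python; variable names are mine) reproducing both cases under the current formula:

```python
BASE_SUBSIDY = 1.2  # coins, per the example above

def penalty(block_size: int, median: int) -> float:
    # Current-style penalty: quadratic in the overshoot *relative to the median*.
    return BASE_SUBSIDY * (block_size / median - 1) ** 2

for median in (20_000, 200_000):
    extra = 10_000  # bytes over the median limit
    p = penalty(median + extra, median)
    print(f"median={median:>7}: penalty={p:.4f} coins ({p / extra * 1024:.7f} fee/kB)")
# median=  20000: penalty=0.3000 coins (0.0307200 fee/kB)
# median= 200000: penalty=0.0030 coins (0.0003072 fee/kB)
```

The same 10,000 extra bytes cost 100x less once the median is 10x larger, since the penalty scales with (extra/median)^2.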

The penalty should be set so that adding one additional transaction over the limit costs the base fee, with the fee/kB rising quadratically from there. My main point in this comment is that the penalty should be strictly a function of the bytes over the limit, so that adding more bytes costs the same regardless of the median limit. Take the reference transaction to be 5,000 bytes, and let x be the bytes over the median limit:

penalty = c * 1.2 * (x/5,000)^2
calibrate c so that the fee/kB at x = 5,000 equals the base fee of 0.001:
(c * 1.2 * (x/5,000)^2 / x) * 1024 = 0.001 at x = 5,000  ->  c = 0.0040690104
penalty = 0.0040690104 * 1.2 * (x/5,000)^2

So now let's re-examine that 10,000 byte transaction:

penalty = 0.0040690104 * 1.2 * (10,000/5,000)^2 = 0.01953124992 coins (0.002 fee/kB)

And this can be tuned by increasing the exponent: for example, if we agree that 10,000 bytes over the limit should cost 0.004 fee/kB or 0.008 fee/kB, make the exponent 3 or 4 respectively.
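Here is a sketch of the proposed formula (Python; c is derived from the calibration above, and the exponent p is the knob just mentioned):

```python
REF_TX = 5_000           # reference transaction size, bytes
BASE_FEE_PER_KB = 0.001  # target fee/kB at x = REF_TX
BASE_SUBSIDY = 1.2

def proposed_penalty(x: int, p: int = 2) -> float:
    # Calibrate c so the effective fee/kB at x = REF_TX equals the base fee.
    c = BASE_FEE_PER_KB * REF_TX / (1024 * BASE_SUBSIDY)  # = 0.0040690104...
    return c * BASE_SUBSIDY * (x / REF_TX) ** p

for p in (2, 3, 4):
    fee_per_kb = proposed_penalty(10_000, p) / 10_000 * 1024
    print(f"p={p}: {fee_per_kb:.4f} fee/kB at 10,000 bytes over")
# p=2: 0.0020, p=3: 0.0040, p=4: 0.0080
```

Unlike the current formula, the cost of the same overshoot here is independent of the median limit.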

@iamsmooth

I'm definitely favorably inclined toward reexamining the penalty formulas and blockchain growth control methods. There is a delicate balance between constraining surge usage and preserving long-term node feasibility. For example, I believe we altered the Monero formula to try to avoid long-term exponential growth and have only linear growth, while not changing the short-term behavior much (I don't recall the details at the moment, so I can't comment on how successful this might have been). It's unlikely to be the ideal tradeoff in its present form. So please continue the discussion.

@iamsmooth

iamsmooth commented Nov 5, 2021

I'd be open to considering something more like Ethereum's EIP-1559 where some portion of the fees are always burned (adjusted by the protocol along with block size), rather than having a "penalty free zone" and arbitrary minimum fee (also would replace the current oversize block fee burn), both of which are kind of fishy.
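For reference, a simplified sketch of the EIP-1559-style mechanism mentioned (Python; the 1/8 adjustment denominator is Ethereum's published constant, but the byte figures are placeholders — this is an illustration, not a concrete design for Aeon):

```python
# EIP-1559-style base fee: a protocol-set fee that is burned, rising when
# blocks run above the size target and falling when they run below it.
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # Ethereum's value: max +/-12.5% per block

def next_base_fee(parent_fee: float, parent_used: int, target: int) -> float:
    delta = parent_fee * (parent_used - target) / target
    return max(0.0, parent_fee + delta / BASE_FEE_MAX_CHANGE_DENOMINATOR)

fee = 0.001  # arbitrary starting fee/kB, for illustration
for used in (40_000, 40_000, 10_000):  # bytes used vs a 20,000-byte target
    fee = next_base_fee(fee, used, 20_000)
    print(f"{fee:.6f}")  # rises 12.5% twice, then falls 6.25%
```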

@ghost
Author

ghost commented Nov 18, 2021

Thanks for explaining the goals of this feature. I can see how fee burning would be an effective tool in limiting blockchain growth and it would also work nicely when paired with tail emission. That deserves more exploration.

@ghost
Author

ghost commented Dec 18, 2021

Burned fees limit the total number of added bytes by the supply. Suppose we cap total blockchain growth at 1 TB and take the min fee to be (total supply)/(1 TB). That gives a min fee of 0.01676380634, which seems to be in the right ballpark. With linear tail emission, the blockchain limit grows linearly with the supply.
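Checking that arithmetic (Python): the quoted figure works out exactly if the supply is taken as 18M coins, 1 TB as 2^40 bytes, and the fee expressed per 1024-byte kB — all three values are my inference from the number itself:

```python
SUPPLY = 18_000_000   # coins (inferred from the quoted min fee)
GROWTH_CAP = 2 ** 40  # 1 TB of total blockchain growth, in bytes

min_fee_per_kb = SUPPLY / GROWTH_CAP * 1024
print(min_fee_per_kb)  # 0.016763806343078613
```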

Another thing I would consider is an adjusting min fee to encourage usage: set the min fee inversely proportional to the total supply. Then, as the supply continues to increase, the min fee decreases until it reaches a market equilibrium point where users find it affordable to transact. As fees are burned, the supply decreases, increasing the min fee and discouraging usage. This gives an automatic correction mechanism, so the fee price can always approach a market equilibrium. The process could be accelerated with an exponential tail emission.
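A toy sketch of that feedback loop (Python; the constant K and the per-period flows are placeholders, purely illustrative):

```python
K = 18_000.0  # placeholder constant setting the fee scale

def min_fee(supply: float) -> float:
    # Min fee inversely proportional to supply: burning fees shrinks supply
    # and raises the fee; tail emission grows supply and lowers it.
    return K / supply

supply = 18_000_000.0
for _ in range(3):
    supply += 50_000  # illustrative tail emission per period
    supply -= 20_000  # illustrative fees burned per period
    print(f"supply={supply:,.0f} -> min fee {min_fee(supply):.8f}/kB")
```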

@iamsmooth

> Burned fees limit the total amount of added bytes by the supply

I think that depends on the formula and its relationship to the tail reward. For example, in Monero and Aeon currently, blockchain growth depends on paying a penalty by forfeiting reward; the rate of growth is somewhat limited, but there is no limit on how much the chain can grow in absolute terms per se.
