
Storage obligations are not removed cleanly from host.db #2901

Open
nielscastien opened this issue Mar 23, 2018 · 1 comment
@nielscastien (Contributor) commented Mar 23, 2018

New storage obligations (so's) are created in managedAddStorageObligation in storageobligations.go whenever a new contract is created or an existing contract is renewed. This is a five-step process:

  1. Add SectorRoots to the host (for new contracts this is nil)
  2. Add storage obligation to the database
  3. Update FinancialMetrics for the host
  4. Submit transaction to the transaction pool
  5. Queue the ActionItem in bucketActionItems

Only if the so makes it into the host's bucketActionItems is it tracked in the future. Without an entry in bucketActionItems, the so will never be updated or removed from the database.

An error between any of these steps leads to an early return from the function. This leaves host.db in an inconsistent state: the database has been updated and the sectors have been added, but the so never makes it into the bucketActionItems list. The sketch below illustrates the failure mode.
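A minimal, self-contained Go sketch of that shape (the types and the submitToTpool helper are hypothetical stand-ins for illustration, not Sia's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical simplified types standing in for the host's real state;
// the actual definitions live in modules/host.
type storageObligation struct {
	id string
}

type host struct {
	sectors     map[string]bool              // stands in for the sector store
	db          map[string]storageObligation // stands in for host.db
	actionItems map[string]bool              // stands in for bucketActionItems
}

// addStorageObligation mirrors the 5-step shape described above. An
// early return after a mutation leaves the host in a partial state:
// sectors and db entries exist, but no action item tracks them.
func (h *host) addStorageObligation(so storageObligation) error {
	// Step 1: add sector roots.
	h.sectors[so.id] = true

	// Step 2: add the obligation to the database.
	h.db[so.id] = so

	// Step 3: update financial metrics (elided in this sketch).

	// Step 4: submit the transaction to the tpool. If this fails...
	if err := submitToTpool(so); err != nil {
		// ...we return here, and steps 1-3 are never rolled back.
		return err
	}

	// Step 5: queue the action item. Only obligations that reach this
	// point will ever be revisited and resolved.
	h.actionItems[so.id] = true
	return nil
}

func submitToTpool(so storageObligation) error {
	return errors.New("tpool: transaction rejected") // simulated failure
}

func main() {
	h := &host{
		sectors:     make(map[string]bool),
		db:          make(map[string]storageObligation),
		actionItems: make(map[string]bool),
	}
	_ = h.addStorageObligation(storageObligation{id: "so1"})
	// The obligation is now in the db but has no action item: it will
	// never be updated or removed, exactly the inconsistency described.
	fmt.Println("in db:", len(h.db), "action items:", len(h.actionItems))
}
```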

The big increase in contract count (and locked collateral) that some hosts experience is because managedAddStorageObligation is called by managedFinalizeContract in a loop: whenever there is an error, the financial metrics of the host are updated 6 times.

My own investigation of host.db (for a host that has been running for approx. 10 months) reveals that > 20% of the so's with status obligationUnresolved in the database never made it into bucketActionItems, i.e. for these items so.proofDeadline() is smaller than the current block height (a sketch of such a scan follows the list below). These so's still count as active contracts, locked/risked collateral and potential revenue, leading to several other issues:

  1. It is almost impossible to do a clean shutdown of the host because the contract count and locked collateral will never reach zero.
  2. Locked collateral is used in other functions to decide whether the host can accept new contracts, leading to missed opportunities for the host if this locked collateral is based on 'rejected' so's.
  3. The values for locked collateral, risked collateral and potential revenues are way off, making it impossible for the host to tune for the best settings.
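
As a hedged illustration of that check, here is a self-contained Go sketch that scans obligations for unresolved ones whose proof deadline has passed. The types are simplified stand-ins: the real host stores obligations in host.db and proofDeadline is derived from the contract.

```go
package main

import "fmt"

// Hypothetical stand-ins; the real host tracks obligations in host.db
// with a status field and a proof deadline derived from the contract.
type obligationStatus int

const (
	obligationUnresolved obligationStatus = iota
	obligationSucceeded
	obligationFailed
)

type storageObligation struct {
	id            string
	status        obligationStatus
	proofDeadline uint64 // block height by which a storage proof is due
}

// staleObligations returns every unresolved obligation whose proof
// deadline has already passed: these can no longer be resolved by an
// action item and match the > 20% observed above.
func staleObligations(obligations []storageObligation, blockHeight uint64) []storageObligation {
	var stale []storageObligation
	for _, so := range obligations {
		if so.status == obligationUnresolved && so.proofDeadline < blockHeight {
			stale = append(stale, so)
		}
	}
	return stale
}

func main() {
	obligations := []storageObligation{
		{id: "a", status: obligationUnresolved, proofDeadline: 100},
		{id: "b", status: obligationUnresolved, proofDeadline: 9000},
		{id: "c", status: obligationSucceeded, proofDeadline: 100},
	}
	for _, so := range staleObligations(obligations, 5000) {
		fmt.Println("stale obligation:", so.id)
	}
}
```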

Because these inconsistencies are persisted to disk, there is currently no way to correct them, e.g. via a restart of the daemon.

This issue is linked to PR #2884.
Possible root cause for issues #2416, #2262, #2144

@ChrisSchinnerl (Contributor) commented

@DavidVorick @lukechampine what do you guys think? Seems like we don't roll the database back correctly if managedAddStorageObligation fails after adding the storage obligation to the bucket or if there is a crash after adding the storage obligation.
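
One possible direction, sketched under the assumption that an error return (rather than a crash) is the common case: register a compensating undo action for each step and run the undos in reverse order on failure. Surviving a crash between steps would additionally require the steps to share a single database transaction. The types here are hypothetical stand-ins, not Sia's code:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the host's state; Sia's real types differ.
type storageObligation struct{ id string }

type host struct {
	sectors     map[string]bool
	db          map[string]storageObligation
	actionItems map[string]bool
}

func submitToTpool(so storageObligation) error {
	return errors.New("tpool: transaction rejected") // simulated failure
}

// addStorageObligation with compensating rollback: each mutation
// registers an undo closure, and a deferred handler runs the undos in
// reverse order if a later step fails. This only covers error returns;
// crash safety would need the steps inside one database transaction.
func (h *host) addStorageObligation(so storageObligation) (err error) {
	var undo []func()
	defer func() {
		if err != nil {
			for i := len(undo) - 1; i >= 0; i-- {
				undo[i]()
			}
		}
	}()

	// Step 1: add sector roots, remembering how to remove them again.
	h.sectors[so.id] = true
	undo = append(undo, func() { delete(h.sectors, so.id) })

	// Step 2: add the obligation to the database.
	h.db[so.id] = so
	undo = append(undo, func() { delete(h.db, so.id) })

	// Steps 3-4: update financial metrics (elided) and submit to the
	// tpool; the simulated failure triggers the rollback.
	if err = submitToTpool(so); err != nil {
		return err
	}

	// Step 5: queue the action item so the obligation stays tracked.
	h.actionItems[so.id] = true
	return nil
}

func main() {
	h := &host{
		sectors:     make(map[string]bool),
		db:          make(map[string]storageObligation),
		actionItems: make(map[string]bool),
	}
	err := h.addStorageObligation(storageObligation{id: "so1"})
	// After the rollback, no partial state remains.
	fmt.Println("err:", err, "| db entries:", len(h.db), "| sectors:", len(h.sectors))
}
```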
