Following an internal discussion on memory usage of cardano-db-sync, the conversation led to interesting insights into possible future solutions regarding the UTxO.
In theory we can reuse Postgres as an alternative storage, but there are some big challenges:
The current UTxO-HD work in consensus allows mocking parts of the ledger state. Without that, we would have to rewrite most of the consensus logic that we use to apply blocks. This is far from trivial, given that consensus even adjusts the UTxO in some era transitions.
We would also have to port utility functions from the ledger, which, for example, return for a specific tx the inputs that need to be resolved. The plan is that ledger code doesn't need to be adjusted for UTxO-HD, but we would have to validate that we can reuse the current ledger code as is.
Querying the db instead of resolving outputs from memory is time consuming, so we would have to evaluate how syncing speed is affected.
Eventually, when UTxO-HD is done, we would have to reintegrate the ledger-consensus API.
On a previous PI we made a minimal integration of the UTxO-HD feature branches, as a proof of concept that it won't cause issues later.
Full conversation:
Sean Gillespie:
I'm almost certainly wrong, but my thinking was that the RAM bottleneck is tied to the ledger state. That basically means the longer the chain, the more memory is required.
Kostas Dermentzis:
That's correct. UTxO-HD is not a priority currently, so we're not left with many options to improve memory: https://github.com/orgs/IntersectMBO/projects/8/views/3?filterQuery=performance%3A%22Memory%22
Kostas Dermentzis:
8 GB should indeed be fine for testnets.
It seems that sometimes it works just fine with ~3 GB, but sometimes requirements seem to grow to something like 6 GB.
Probably something to investigate, if there are any logs, or when/how often it happens.
Gytis Ivaškevičius:
With lower resources it sometimes starts getting OOM-killed; we are setting all testnet resources to 6 GB and it should be good.
samuel.leathers:
UTxO-HD likely won't help db-sync, because db-sync will need to read all that information from the node, which it will then load into memory.
Kostas Dermentzis:
DBSync could maintain its own instance of the UTxO db, completely separate from the node's instance.
Alternatively, db-sync could reuse its existing Postgres db, in particular the tx_out table, to resolve outputs by inputs.
Both solutions mean the ledger state instance that db-sync maintains doesn't have to include the UTxO, so it will be much smaller.
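The tx_out-based resolution described above can be sketched with a small self-contained example. This is an illustration, not db-sync code: SQLite stands in for db-sync's Postgres instance, and the simplified table and column names (tx_out with tx_id/idx/value, tx_in with tx_in_id/tx_out_id/tx_out_index) are an assumption loosely modelled on the db-sync schema.

```python
import sqlite3

# In-memory SQLite stands in for db-sync's Postgres instance.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Simplified db-sync-style schema (names are an assumption).
    CREATE TABLE tx_out (
        tx_id INTEGER,        -- id of the tx that created this output
        idx   INTEGER,        -- output index within that tx
        value INTEGER         -- lovelace amount
    );
    CREATE TABLE tx_in (
        tx_in_id     INTEGER, -- id of the spending tx
        tx_out_id    INTEGER, -- id of the tx whose output is spent
        tx_out_index INTEGER  -- index of the spent output
    );
""")
conn.executemany("INSERT INTO tx_out VALUES (?, ?, ?)",
                 [(1, 0, 5_000_000), (1, 1, 2_000_000), (2, 0, 750_000)])
conn.executemany("INSERT INTO tx_in VALUES (?, ?, ?)",
                 [(3, 1, 1), (3, 2, 0)])

def resolve_inputs(spending_tx_id):
    """Resolve a tx's inputs to the outputs they spend via a join on tx_out."""
    return conn.execute("""
        SELECT o.tx_id, o.idx, o.value
        FROM tx_in i
        JOIN tx_out o ON o.tx_id = i.tx_out_id AND o.idx = i.tx_out_index
        WHERE i.tx_in_id = ?
        ORDER BY o.tx_id
    """, (spending_tx_id,)).fetchall()

print(resolve_inputs(3))  # [(1, 1, 2000000), (2, 0, 750000)]
```

The point of the sketch is that input resolution becomes a join against tx_out rather than a lookup in an in-memory map, so nothing beyond the database handle needs to live in RAM.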
samuel.leathers:
Yeah, I'm not suggesting db-sync queries the UTxO directly from the node. I was just saying I don't understand how UTxO-HD is going to help with current memory usage (other than the differential in node memory usage). Since db-sync queries the blocks and builds its own state, will db-sync itself get any benefit from UTxO-HD? I think the answer is no.
So you might drop a number of GB from the node not having to store the UTxO in memory, but we don't anticipate it will reduce db-sync's memory consumption.
Kostas Dermentzis:
I think it can directly reduce db-sync memory if we decide to integrate it. DBSync basically maintains the same in-memory ledger state as the node. Since the node can reduce its memory by transferring the UTxO to disk, so can DBSync.
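The trade-off being discussed here can be illustrated with a toy disk-backed UTxO store. This is a hypothetical sketch, not how UTxO-HD or db-sync is implemented: the UTxO map lives in a SQLite table (on disk, in a real deployment) and only the connection handle stays in memory.

```python
import sqlite3

class DiskBackedUtxo:
    """Toy UTxO map backed by SQLite: entries live in the database
    (pass a file path instead of :memory: to keep them on disk),
    so only a handle stays in RAM."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS utxo (tx_id TEXT, idx INTEGER, "
            "value INTEGER, PRIMARY KEY (tx_id, idx))")

    def add(self, tx_id, idx, value):
        """Record a newly created output."""
        self.db.execute("INSERT INTO utxo VALUES (?, ?, ?)",
                        (tx_id, idx, value))

    def spend(self, tx_id, idx):
        """Look up and delete an entry, returning its value (None if absent)."""
        row = self.db.execute(
            "SELECT value FROM utxo WHERE tx_id = ? AND idx = ?",
            (tx_id, idx)).fetchone()
        if row is not None:
            self.db.execute("DELETE FROM utxo WHERE tx_id = ? AND idx = ?",
                            (tx_id, idx))
        return row[0] if row else None

utxo = DiskBackedUtxo()
utxo.add("tx1", 0, 5_000_000)
print(utxo.spend("tx1", 0))  # 5000000
print(utxo.spend("tx1", 0))  # None -- already spent
```

Whether db-sync keeps such a store itself or delegates it to Postgres, the memory saving comes from the same shape of change: the ledger state it holds in RAM no longer contains the UTxO map.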
samuel.leathers:
Couldn't db-sync do that currently, since it has a copy of the UTxO state in a database?
I don't see how that's particularly dependent on the node implementing UTxO-HD.
Kostas Dermentzis:
In theory we can reuse Postgres as an alternative storage, but there are some big challenges:
The current UTxO-HD work in consensus allows mocking parts of the ledger state. Without that, we would have to rewrite most of the consensus logic that we use to apply blocks. This is far from trivial, given that consensus even adjusts the UTxO in some era transitions.
We would also have to port utility functions from the ledger, which, for example, return for a specific tx the inputs that need to be resolved. The plan is that ledger code doesn't need to be adjusted for UTxO-HD, but we would have to validate that we can reuse the current ledger code as is.
Querying the db instead of resolving outputs from memory is time consuming, so we would have to evaluate how syncing speed is affected.
Eventually, when UTxO-HD is done, we would have to reintegrate the ledger-consensus API.
On a previous PI we made a minimal integration of the UTxO-HD feature branches, as a proof of concept that it won't cause issues later. samuel.leathers, would you support implementing a proof of concept for the above during this PI and evaluating its performance? Javier Sagredo, do you have any opinion, or have there been any changes since we last discussed this possibility (a long time ago)?
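The syncing-speed concern raised above can be probed with a micro-benchmark along these lines. This is only a hypothetical sketch: an in-memory dict stands in for the current ledger-state UTxO map, and a local SQLite query stands in for a round trip to Postgres (real Postgres latency over a socket would be higher still).

```python
import sqlite3
import timeit

N = 10_000  # size of the toy UTxO set

# In-memory map: how the ledger state resolves inputs today.
mem_utxo = {("tx%d" % i, 0): i for i in range(N)}

# Database-backed store: how a Postgres-backed resolution would look.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE utxo (tx_id TEXT, idx INTEGER, value INTEGER, "
           "PRIMARY KEY (tx_id, idx))")
db.executemany("INSERT INTO utxo VALUES (?, ?, ?)",
               [("tx%d" % i, 0, i) for i in range(N)])

def resolve_mem():
    return mem_utxo[("tx5000", 0)]

def resolve_db():
    return db.execute("SELECT value FROM utxo WHERE tx_id = ? AND idx = ?",
                      ("tx5000", 0)).fetchone()[0]

# Both paths must agree before timing them.
assert resolve_mem() == resolve_db() == 5000

t_mem = timeit.timeit(resolve_mem, number=10_000)
t_db = timeit.timeit(resolve_db, number=10_000)
print(f"dict: {t_mem:.4f}s  sqlite: {t_db:.4f}s  ratio: {t_db / t_mem:.1f}x")
```

A measurement like this (scaled to realistic UTxO sizes and batched per block) is what "evaluate how syncing speed is affected" would amount to in practice.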
samuel.leathers:
I'd absolutely support this, but as a secondary goal to a 9.0-compatible, mainnet-ready db-sync 🙂 We should aim to have a mainnet db-sync that can cross the fork shortly after the 9.0 node is released, so downstream exchanges/dapps/builders can start integration.
Kostas Dermentzis:
That's definitely a priority