
status-go integration: workflow questions #1882

Open
vitvly opened this issue Nov 3, 2023 · 6 comments

@vitvly
Member

vitvly commented Nov 3, 2023

While testing the proxy integration via make test-verif-proxy-wrapper in PR status-im/status-go#4254, I've encountered some hurdles that I think should be solved before providing this functionality in Status Desktop:

  1. It's not really clear when exactly we begin receiving Nimbus headers; maybe there's some way to speed this up. Initial headers arrive quite inconsistently, sometimes immediately, sometimes only after several minutes.
  2. Once the first headers are received, status-go installs proxy handlers to re-route supported JSON-RPC calls to the Nimbus proxy. However, it seems that some extra delay is necessary before the Nimbus proxy is up; it sometimes raises exceptions when requests are fired too early. How do we know that the proxy is up and operational?
  3. In some cases the values returned from the proxy differ from the ones fetched directly from Infura. Not sure why this is the case.
  4. For now I use a hardcoded value for trusted-block-root. I saw in the proxy README that it's possible to fetch it programmatically from a trusted beacon node (see the sketch after this list). Can I predefine some node URL in the status-go config?
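For reference, the kind of programmatic fetch I mean, as a minimal sketch using the standard Beacon API (the node URL is just a placeholder; it would have to be a node one actually trusts):

```sh
# Hypothetical sketch: fetch the current finalized block root from a trusted
# beacon node via the standard Beacon API, for use as trusted-block-root.
TRUSTED_NODE=http://localhost:5052  # placeholder; use a node you trust

# /eth/v1/beacon/headers/finalized returns the latest finalized block header;
# .data.root is its block root.
curl -s "$TRUSTED_NODE/eth/v1/beacon/headers/finalized" | jq -r '.data.root'
```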

cc @Ivansete-status

@zah
Member

zah commented Nov 3, 2023

Since we are not very familiar with the integration, can you provide more detailed reproduction steps for the issues?

@kdeme
Contributor

kdeme commented Nov 6, 2023

I'm not very familiar with the integration either, but I'll try to answer your questions:

  1. Do you mean that the time it takes before the verified proxy can start delivering data is very inconsistent? Yes, this is because the verified proxy needs to complete the beacon light client sync before it can function. The time to sync depends on how old your trusted-block-root is, but more importantly, it depends heavily on how quickly or slowly it finds peers that serve beacon light client data.
  2. Which kind of exception exactly? The JSON-RPC proxy is up rather quickly, but it will respond with an exception to some calls if it is still doing the beacon light client sync, or if it has just finished syncing but is still awaiting the first newly produced beacon block: https://github.com/status-im/nimbus-eth1/blob/master/nimbus_verified_proxy/rpc/rpc_eth_api.nim#L86
    Perhaps this is the error you are seeing? Or are they actually JSON-RPC time-outs? (A simple readiness probe is sketched after this list.)
  3. Do you mean that you do a side-by-side comparison of the latest head from the verified proxy versus what you get from Infura? This is possible/likely, as the values here are based on the beacon light client data, which has a delay of 4/3 slot (15 seconds). Full nodes have 0 delay.
  4. Yes, you can fetch the latest one from a trusted full node. However, shipping some node as the default in a config would cause most (all?) users to use that one, making it a centralized weak link. Thus a user should define their own trusted node to fetch it from, or simply provide the trusted-block-root directly. A hardcoded value is not safe due to weak subjectivity: https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.2/specs/phase0/weak-subjectivity.md
    While not currently developed, I do think there should be more UX-friendly options, where you provide a trusted block root once, which initiates the sync, and on restart the sync can continue from there. However, the node should not be down for too long in that case, or it might need a trusted block root again (weak subjectivity).
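Until a proper readiness signal is exposed, one pragmatic option is indeed to poll until a call succeeds. A minimal sketch, assuming the verified proxy serves JSON-RPC on localhost:8545 (endpoint and interval are placeholders; adjust to your setup):

```sh
# Poll eth_blockNumber until the verified proxy answers with a result instead
# of the "syncing" exception described above.
until curl -s -X POST http://localhost:8545 \
    -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}' \
    | grep -q '"result"'; do
  echo "proxy not ready yet, retrying in 5s..."
  sleep 5
done
echo "verified proxy is ready"
```

Once this returns, the proxy has completed the light client sync and seen a new beacon block, so subsequent calls like eth_getBalance should no longer hit that exception.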

@arnetheduck
Member

cc @etan-status for light client best practices

@etan-status
Contributor

There is research suggesting that the weak subjectivity period for light client data can be longer than for a full node: https://github.com/metacraft-labs/DendrETH/tree/main/docs/long-range-syncing

In nimbus_light_client, the trusted block root is tracked across restarts. The same logic works well for https://eth-light.xyz to persist sync progress across restarts.

The startup latency is expected while clients beyond Nimbus and Lodestar are still working on implementing light client data serving. Latency will improve once more peers make the data available; there is no need to optimize this on our end.

@vitvly
Member Author

vitvly commented Nov 6, 2023

> I'm not very familiar with the integration either, but I'll try to answer your questions: […]

Thanks for the comments!

  1. This is clear. The age of the trusted-block-root turned out to be less important than the actual time required to find peers, at least during my testing.
  2. On my end I just receive a rather uninformative "eth_getBalance raised an exception". Is there a way for the user to know when the light client sync is finished? Or should one just invoke API methods periodically until they return a successful response?
  3. Aha, that could be the case here. Most likely not an issue then.
  4. Do I understand correctly that nimbus_light_client tracks the trusted-block-root, and thus one can store it in the database?

@vitvly
Member Author

vitvly commented Nov 6, 2023

> Since we are not very familiar with the integration, can you provide more detailed reproduction steps for the issues?

Sorry, sure. A detailed list of the required steps (consolidated into a script after the list):

  1. Fetch the status-go code from status-go#4254 (Nimbus integration).
  2. Fetch the nimbus-eth1 code from #1572 (nimbus-verified-proxy integration with status-go).
  3. cd nimbus-eth1, then make libverifproxy.
  4. cd status-go. If the nimbus-eth1 folder is in a different directory than status-go, set the NIMBUS_ETH1_PATH environment variable (by default the Makefile assumes they are in the same parent directory).
  5. make vendor, then make test-verif-proxy-wrapper.
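The same steps as one script, assuming both repos are checked out side by side on the branches above:

```sh
# Build the verified-proxy library, then run the status-go wrapper test.
cd nimbus-eth1
make libverifproxy

cd ../status-go
# Only needed if nimbus-eth1 is not a sibling directory of status-go:
# export NIMBUS_ETH1_PATH=/path/to/nimbus-eth1
make vendor
make test-verif-proxy-wrapper
```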
