
Set up a fluffy history bridge #182

Open · kdeme opened this issue Mar 28, 2024 · 12 comments

kdeme commented Mar 28, 2024

In order to gossip Ethereum chain history data into the Portal network we need to run the
new fluffy portal_bridge on our infra.

Brief documentation of the portal_bridge is available here: https://fluffy.guide/history-content-bridging.html#seeding-history-data-with-the-portal_bridge

We basically want to set up:

  • Fluffy node with storage capacity 0, e.g.: ./build/fluffy --metrics --rpc --storage-capacity:0
  • Portal bridge injecting latest + audit + backfill from era1 files, e.g.: ./build/portal_bridge history --latest:true --backfill:true --audit:true --era1-dir:/somedir/era1/ --web3-url:${WEB3_URL} (a combined launch sketch follows this list)
  • Access to a running Ethereum full node / web3 provider. This can be the same node as required for the glados setup, see issue: Add Glados instance as part of Fluffy deployment #158
  • The portal_bridge also needs file access to all era1 files. These can be found here: https://era1.ethportal.net/
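
Putting the list above together, a minimal launch sketch running both processes on one host, using only the flags quoted above (the era1 path and WEB3_URL are placeholders; process supervision is left out):

#!/usr/bin/env bash
# Minimal sketch: a storage-less fluffy node plus the history bridge, side by side.
set -euo pipefail

WEB3_URL="${WEB3_URL:?set to your EL node / web3 provider endpoint}"
ERA1_DIR=/somedir/era1/

# Fluffy node that gossips content but stores practically nothing itself.
./build/fluffy --metrics --rpc --storage-capacity:0 &

# Bridge: follow the chain head, audit, and backfill from era1 files.
./build/portal_bridge history \
  --latest:true --backfill:true --audit:true \
  --era1-dir:"${ERA1_DIR}" --web3-url:"${WEB3_URL}"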

The era1 files take about 428 GB of space.
The portal_bridge itself does not require any additional storage space.
The fluffy node will store practically nothing in its database with storage capacity set to 0.

It would be good to have access to the metrics of the Fluffy node to see the gossip stats.
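
With --metrics enabled, the gossip stats can be checked in the usual Prometheus fashion; a quick sketch, in which both the port (8008, the nimbus-style default) and the metric names are assumptions, so inspect the raw /metrics output first:

# Fetch the node's Prometheus metrics and look for gossip-related counters.
curl -s http://127.0.0.1:8008/metrics | grep -i gossip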


kdeme commented Mar 28, 2024

cc @jakubgs

jakubgs self-assigned this May 22, 2024

jakubgs commented May 22, 2024

What network is this intended for? Mainnet?


jakubgs commented May 22, 2024

Our current nodes run on something called testnet0:

# Separate variable to not change node names.
nimbus_fluffy_network_nice_name: 'mainnet'
nimbus_fluffy_network: 'testnet0'

It appears this was changed by you.

So it's a testnet I guess.


jakubgs commented May 23, 2024

Some notes after discussing this with Kim:

  • portal_bridge is a long-running service that will talk to two services and use era1 files (see the unit file sketch after this list):
    • Fully synced EL node used by the portal bridge via the --web3-url flag.
    • Fluffy node with --storage-capacity:0, which the portal bridge will talk to via --rpc-address.
    • The era1 files from era1.ethportal.net, which are an archive of EL blocks from before the merge.
  • It's okay to re-use a Geth node from the nimbus.mainnet fleet. Multi-EL might be supported later.
  • The Fluffy node will communicate with Portal Network and inject new and old block data.
  • Re-using existing nimbus.fluffy hosts is fine but we might have to migrate later.
    • Much higher bandwidth usage can be expected.
  • The era1 files are static and are not expected to change.
  • This setup does not need to be highly available yet.
  • Fluffy node metrics should be collected.
  • Future plans include beacon network and state network bridges.
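
Since portal_bridge is a long-running service, running it under systemd is one natural option; a sketch, in which the unit name, paths, user, and EL endpoint are all assumptions rather than anything confirmed in this thread:

# Write a hypothetical unit file (every path and endpoint here is an assumption).
sudo tee /etc/systemd/system/portal-bridge.service > /dev/null <<'EOF'
[Unit]
Description=Fluffy portal_bridge (history network)
After=network.target fluffy.service
Requires=fluffy.service

[Service]
User=portal-bridge
ExecStart=/opt/fluffy/build/portal_bridge history \
    --latest:true --backfill:true --audit:true \
    --era1-dir:/era --web3-url:http://geth.example.org:8545
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Enable and start the service.
sudo systemctl daemon-reload
sudo systemctl enable --now portal-bridge.service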

Looks like I will need an extra SSD for the era1 files.


jakubgs commented May 23, 2024

Created a ticket to get an extra 500 GB SSD:
https://client.innovahosting.net/viewticket.php?tid=428306&c=tkxOauXp


jakubgs commented May 24, 2024

Indeed, it is about 458 GB in total (458,497,023 kB ≈ 458.5 GB, i.e. ~427 GiB, which roughly matches the ~428 GB figure from the issue description):

 > curl -s https://era1.ethportal.net/ | awk -F'[<> ]' '/kB<\/td>/{count = count + $7}END{print count}' 
458497023


jakubgs commented May 24, 2024

Support responded:

I have sent an invoice for an 800GB SAS SSD, as we didn't have 500 GB.

Ah well.


jakubgs commented May 24, 2024

Created a separate repo for the portal bridge.


jakubgs commented May 24, 2024

Got the drive:

jakubgs@metal-01.ih-eu-mda1.nimbus.fluffy:~ % sudo ssacli ctrl slot=0 physicaldrive all show 

Smart Array P440ar in Slot 0 (Embedded)

   Array A

      physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS SSD, 400 GB, OK)

   Array B

      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS SSD, 1.6 TB, OK)

   Unassigned

      physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS SSD, 800 GB, OK)

Created a logical volume for it:

jakubgs@metal-01.ih-eu-mda1.nimbus.fluffy:~ % sudo ssacli ctrl slot=0 create type=ld drives=2I:1:6
jakubgs@metal-01.ih-eu-mda1.nimbus.fluffy:~ % sudo ssacli ctrl slot=0 logicaldrive all show

Smart Array P440ar in Slot 0 (Embedded)

   Array A

      logicaldrive 1 (372.58 GB, RAID 0, OK)

   Array B

      logicaldrive 2 (1.46 TB, RAID 0, OK)

   Array C

      logicaldrive 3 (745.19 GB, RAID 0, OK)
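
The new logical drive still needs a filesystem and a mount point before it can hold the era1 files; a minimal sketch, assuming the drive shows up as /dev/sdc (which the df output in the next comment confirms) and ext4 as the filesystem:

# Create an ext4 filesystem on the new logical drive (device name per the df output below).
sudo mkfs.ext4 -L era /dev/sdc
# Mount it at /era and persist the mount across reboots.
sudo mkdir -p /era
echo '/dev/sdc /era ext4 defaults,noatime 0 2' | sudo tee -a /etc/fstab
sudo mount /era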

jakubgs added a commit that referenced this issue May 24, 2024

jakubgs commented May 24, 2024

Mounted:

jakubgs@metal-01.ih-eu-mda1.nimbus.fluffy:~ % df -h /data /era
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        1.5T  1.2T  229G  84% /data
/dev/sdc        733G   28K  696G   1% /era


jakubgs commented May 24, 2024

Started downloading the era1 files in a tmux session:

jakubgs@metal-01.ih-eu-mda1.nimbus.fluffy:/era % ERA1_URL=https://era1.ethportal.net/
jakubgs@metal-01.ih-eu-mda1.nimbus.fluffy:/era % FILES=$(curl -s "${ERA1_URL}" | awk -F'[<>]' '/<td><p /{print $5}')
jakubgs@metal-01.ih-eu-mda1.nimbus.fluffy:/era % for FILE in ${FILES}; do wget "${ERA1_URL}${FILE}"; done
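
Once the loop finishes, it is worth checking that the download is complete; a quick sketch comparing against the server's own index (the use of wget -c to resume partial files is an assumption, not something from this thread):

# Count files listed in the index vs. files on disk — the two numbers should match.
curl -s "${ERA1_URL}" | awk -F'[<>]' '/<td><p /{print $5}' | wc -l
ls /era/*.era1 | wc -l
# Total size should come out around 458 GB (~427 GiB).
du -sh /era
# Resume any interrupted downloads without re-fetching completed files.
for FILE in ${FILES}; do wget -c "${ERA1_URL}${FILE}"; done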
