
casperholders/casperparser

CasperParser - A parser for the Casper Blockchain

Built with Go, Asynq, and pgx.

Prerequisites

This software uses the Asynq and pgx libraries.
You need a Redis instance (standalone, cluster, or sentinel HA) to enqueue tasks.

Example local Docker instance:

docker run --name redis -p 127.0.0.1:6379:6379 -d redis

You need a PostgreSQL instance to store the data generated by the software.

Example local Docker instance:

docker run --name some-postgres -p 127.0.0.1:5432:5432 -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_DB=casperparser -d postgres

Usage documentation

See Documentation

You need a single client to push the existing blocks and listen to the blockchain for new blocks.

You need at least one worker to process the events pushed by the client.

The software is highly scalable: if you have enough resources, spawn more workers to process the events faster.

Tested on a Kubernetes cluster with a single PostgreSQL instance and a single Redis instance, CasperParser can parse the full testnet (8M+ events) in under an hour with 35 workers.
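
For example, a minimal local setup could look like the following. The binary and command names are assumptions based on the build output and the worker command referenced in the configuration section; check the CLI's --help output for the exact commands.

./casperparser client
./casperparser worker

Every worker process consumes tasks from the same Redis queue, so scaling out simply means starting more worker processes.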

Optional add-ons

The software applies the database migrations directly, but if you want, you can use the migrate CLI to apply them manually:

make run_migrate

There's a web UI for the Asynq library called Asynqmon:

docker run --rm --name asynqmon -p 127.0.0.1:8080:8080 -d hibiken/asynqmon

Alternatively, you can install the asynq CLI locally; it provides a dashboard to monitor your queues:

go install github.com/hibiken/asynq/tools/asynq@latest

And then run the asynq dashboard:

asynq dash

Finally, if you want to set up a REST API on top of the data generated by this parser, you can use PostgREST:

docker run --rm --net=host -p 3000:3000 \
  -e PGRST_DB_URI="postgres://postgres@localhost/postgres" \
  -e PGRST_DB_SCHEMAS="casperparser" \
  -e PGRST_DB_ANON_ROLE="postgres" \
  postgrest/postgrest
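
Once PostgREST is running, every table and view in the casperparser schema is exposed as a REST endpoint. As a sketch, assuming the blocks table listed below keeps that name in the database, you could fetch ten rows from it with:

curl "http://localhost:3000/blocks?limit=10"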

Database Schema

Here's the database schema:

[Database schema diagram]

Tables

  • Accounts: Holds the public key / account hash and main purse of all accounts
  • Bids: Holds the bids of all validators
  • Delegators: Holds all delegators
  • Blocks: Holds all the blocks; a block is tied to a raw block
  • Raw blocks: Hash of the block and the data retrieved from the RPC
  • Deploys: Holds all deploys, tied to a raw deploy and a block
  • Raw deploys: Hash of the deploy and the data retrieved from the RPC
  • Rewards: Rewards of an era, tied to a block
  • Contract packages: Holds all contract packages, tied to a deploy (null for system contracts)
  • Contracts: Holds all contracts, tied to a deploy and a package
  • Contract Named Keys: Ties a contract to a named key
  • Named keys: Holds all named keys with their initial value (or updated value if reparsed after the first parse)
  • Purses: Holds all purses and their balances

Views

  • Mouvements: Stats for the different types of CSPR movements (transfers/staking)
  • Full stats: Stats for the different types of deploys
  • Allowance: All allowances for ERC-20 token holders
  • Simple stats: Number of deploys per day
  • Total staking: Staking balance per public key
  • Total rewards: Rewards per public key
  • Stakers: Number of stakers
  • Rich list: List of the richest accounts
  • Contract list: Simplified list of contracts
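
You can also query these tables and views directly with psql. A minimal sketch, assuming the local Docker PostgreSQL instance from the prerequisites and that the blocks table lives in a schema named casperparser under the name listed above:

psql "postgres://postgres:mysecretpassword@localhost:5432/casperparser" -c "SELECT count(*) FROM casperparser.blocks;"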

How to build

Local dev

make build

This will produce a single binary in the project root folder.

Local tests

!! Important !!

To run the tests correctly locally, you need a .env file and a .casperParser.yaml file.
You can use the provided examples and replace the values as needed.

CASPER_PARSER_REDIS=127.0.0.1:6379 //Redis single instance connection string
CASPER_PARSER_RPC=http://127.0.0.1:7777/rpc //Node RPC endpoint
CASPER_PARSER_DATABASE=postgres://postgres:myPassword@localhost:5432/gotest //DB connection string
CASPER_PARSER_EVENT=http://127.0.0.1:9999/events/main //Node events endpoint

Run tests

make coverage

This will run the tests and open a webpage with the coverage results.

Documentation

The code is documented, as are all the CLI commands. Simply run the CLI with the --help flag to display the user documentation.

For the project documentation, if you want a web page with standard godoc documentation, install godoc:

go install golang.org/x/tools/cmd/godoc@latest

And then run godoc:

godoc -http=:6060

You can now access the documentation at http://localhost:6060/

You can also auto-generate the documentation for the CLI commands with:

go run doc/doc.go

API endpoints

No API endpoints are exposed by this software.

Configuration

You can configure the CLI through the config file, environment variables, or flags, or any combination of the three. CLI flags override environment variables, and environment variables override config file keys.

Export your .env file to pass the values to the CLI:

export $(grep -v '^#' .env | xargs)
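
As a sketch of that precedence, if .casperParser.yaml sets redis: "127.0.0.1:6379", exporting CASPER_PARSER_REDIS for a single run overrides it. The redis.example.com host and the casperparser binary / worker command names below are placeholders; adjust them to your setup:

CASPER_PARSER_REDIS=redis.example.com:6379 ./casperparser worker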

Config File explanation

rpc: "node_url_rpc_endpoint" (Provide your node RPC endpoint)
database: "postgres://postgres:myPassword@localhost:5432/casperparser" (Provide your database connexion string, prefer the env variable for that)
redis: "127.0.0.1:6379" (Redis single instance address)
cluster: ["REDIS_INSTANCE_URL1", "REDIS_INSTANCE_URL2"] (Will override the redis key and use the cluster key instead for an HA Redis)
sentinel:  ["REDIS_SENTINEL_URL1", "REDIS_SENTINEL_URL2"] (Will override the redis and cluster key and use the sentinel key instead for a HA/Failover Redis) 
master: "master_redis_sentinel_name" (Required when the sentinel key is used, default value : mymaster)
concurrency: 100 //Number of concurrent workers that will be spwaned with the worker command
config:
  contractTypes:
    erc20: # Name of the contract type
      entrypoints:
        - name: "entrypoint_name" # Entrypoint name
          args: [ "arg1", "arg2", "arg3" ] # Args of the entrypoint
      namedkeys: [ "nameKey1", "nameKey2", "nameKey3" ] #Named keys of the contract
  moduleBytes:
    [deployName]: (Will be used as the metadata_type in the db)
        strictArgs: [true/false - Will check the exact number of args]
        args: ["arg1", "arg2"] (Array of string containing the args name to find)

Env File explanation

CASPER_PARSER_REDIS=127.0.0.1:6379 //(Redis single instance address)
CASPER_PARSER_RPC=http://localhost:8080/rpc //(Provide your node RPC endpoint)
CASPER_PARSER_DATABASE=postgres://postgres:myPassword@localhost:5432/casperparser //(Provide your database connection string)
CASPER_PARSER_CLUSTER=REDIS_INSTANCE_URL1,REDIS_INSTANCE_URL2 //(Will override the redis key and use the cluster key instead for an HA Redis)
CASPER_PARSER_SENTINEL=REDIS_SENTINEL_URL1,REDIS_SENTINEL_URL2 //(Will override the redis and cluster keys and use the sentinel key instead for an HA/failover Redis)
CASPER_PARSER_MASTER=mymaster //(Required when the sentinel key is used, default value: mymaster)
CASPER_PARSER_CONCURRENCY=100 //(Number of concurrent workers that will be spawned with the worker command)

Docker build

docker build . 
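
If you want to reference the image afterwards (for docker run or a registry push), give it a tag; the tag name below is just a placeholder:

docker build -t casperparser:local .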

Kubernetes deployment

Warning: The current Kubernetes files are specific to my Kubernetes architecture. They're basically an example of how to deploy CasperParser on Kubernetes.

kubectl apply -f kubernetes/(testnet|mainnet)/
