Timing info in external test JSON reports #15023
Conversation
So, with this PR merged, this is the hacky way to quickly get a timing table from external test benchmarks:

```bash
jq '[
    ["brink", ."brink"."ir-optimize-evm+yul".compilation_time.user],
    ["colony", ."colony"."ir-optimize-evm+yul".compilation_time.user],
    ["elementfi", ."elementfi"."ir-optimize-evm+yul".compilation_time.user],
    ["ens", ."ens"."ir-optimize-evm+yul".compilation_time.user],
    ["euler", ."euler"."ir-optimize-evm+yul".compilation_time.user],
    ["gnosis", ."gnosis"."ir-optimize-evm+yul".compilation_time.user],
    ["gp2", ."gp2"."ir-optimize-evm+yul".compilation_time.user],
    ["perpetual-pools", ."perpetual-pools"."ir-optimize-evm+yul".compilation_time.user],
    ["pool-together", ."pool-together"."ir-optimize-evm+yul".compilation_time.user],
    ["uniswap", ."uniswap"."ir-optimize-evm+yul".compilation_time.user],
    ["yield_liquidator", ."yield_liquidator"."ir-optimize-evm+yul".compilation_time.user],
    ["zeppelin", ."zeppelin"."ir-optimize-evm+yul".compilation_time.user]
]' all-benchmarks.json | python -c "$(cat <<EOF
import tabulate, json, sys
print(tabulate.tabulate(json.load(sys.stdin), tablefmt="github", headers=["Test", "Time"]))
EOF
)"
```

Could be improved to avoid having to list the keys by hand, and to round the numbers, but for now it's good enough.

EDIT: Version without hard-coded keys:

```bash
jq '[to_entries[] | [.key, .value."ir-optimize-evm+yul".compilation_time.user]]' all-benchmarks.json | python -c "$(cat <<EOF
import tabulate, json, sys
print(tabulate.tabulate(json.load(sys.stdin), tablefmt="github", headers=["Test", "Time"]))
EOF
)"
```
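Both variants assume `jq` is installed and that the `tabulate` package is available to whatever `python` is on your `PATH`. If the table step fails with an import error, this should be enough:

```bash
pip install tabulate
```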
Here are some more streamlined scripts, with automatic iteration over keys, rounding and downloading of benchmark results. To use them, just fill in the branch name and paste the snippet into a shell.

**Timing of a single branch**

```bash
branch="<BRANCH NAME HERE>"
preset=ir-optimize-evm+yul

function timing-table-script {
    cat <<EOF
import tabulate, json, sys

def as_seconds(value):
    return (str(value) + " s") if value is not None else None

table = json.load(sys.stdin)
table = [[row[0], as_seconds(row[1])] + row[2:] for row in table]
headers = ["Project", "Time"]
alignment = ("left", "right")
print(tabulate.tabulate(table, tablefmt="pipe", headers=headers, colalign=alignment))
EOF
}

function tabulate-ext-timing {
    python -c "$(timing-table-script)" "$@"
}

function ext-timing-list {
    local preset="$1"
    jq "[to_entries[] | [.key, (.value.\"${preset}\".compilation_time.user | if . != null then round else . end)]]" "${@:2}"
}

scripts/externalTests/download_benchmarks.py --branch "$branch"
cat "all-benchmarks-${branch}"*.json | ext-timing-list "$preset" | tabulate-ext-timing
```
**Timing comparison between two branches**

```bash
before_branch=develop
after_branch="<BRANCH NAME HERE>"
preset=ir-optimize-evm+yul

function diff-table-script {
    cat <<EOF
import tabulate, json, sys

def time_diff(before, after):
    return (after - before) if after is not None and before is not None else None

def as_seconds(value):
    return (str(value) + " s") if value is not None else None

data = json.load(sys.stdin)
table = [[
    project,
    as_seconds(data[0][project]),
    as_seconds(data[1][project]),
    as_seconds(time_diff(data[0][project], data[1][project])),
] for project in data[0].keys()]
headers = ["Project", "Before", "After", "Diff"]
alignment = ("left", "right", "right", "right")
print(tabulate.tabulate(table, tablefmt="pipe", headers=headers, colalign=alignment))
EOF
}

function tabulate-ext-timing-diff {
    python -c "$(diff-table-script)" "$@"
}

function ext-timing-dict {
    local preset="$1"
    jq "[to_entries[] | {(.key): (.value.\"${preset}\".compilation_time.user | if . != null then round else . end)}] | add" "${@:2}"
}

scripts/externalTests/download_benchmarks.py --branch "$before_branch"
scripts/externalTests/download_benchmarks.py --branch "$after_branch"
{
    cat "all-benchmarks-${before_branch}"*.json | ext-timing-dict "$preset"
    cat "all-benchmarks-${after_branch}"*.json | ext-timing-dict "$preset"
} | jq --slurp . | tabulate-ext-timing-diff
```
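If the absolute diff is not informative enough, `diff-table-script` could be extended with a relative column. This is a hypothetical variation, not something shipped with this PR; it consumes the same slurped array of two `{project: seconds}` objects:

```python
import tabulate, json, sys

def time_diff(before, after):
    return (after - before) if after is not None and before is not None else None

def as_seconds(value):
    return (str(value) + " s") if value is not None else None

def as_percentage(before, after):
    # Relative change; undefined when either measurement is missing or "before" is 0.
    diff = time_diff(before, after)
    if diff is None or not before:
        return None
    return f"{100 * diff / before:+.1f} %"

data = json.load(sys.stdin)
table = [[
    project,
    as_seconds(data[0][project]),
    as_seconds(data[1][project]),
    as_seconds(time_diff(data[0][project], data[1][project])),
    as_percentage(data[0][project], data[1][project]),
] for project in data[0].keys()]
headers = ["Project", "Before", "After", "Diff", "Diff %"]
alignment = ("left", "right", "right", "right", "right")
print(tabulate.tabulate(table, tablefmt="pipe", headers=headers, colalign=alignment))
```

If you inline this into the heredoc above, note that the unquoted `EOF` means the shell would expand any `$` in the Python code (there are none here).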
Gathering info about compilation time in external tests to create a comparison like #14909 (comment) is incredibly time-consuming. I have to navigate to 10-15 CI pages, locate the `time` output in the long CI log for each, copy it and manually format it into something human-readable, like a table. This PR is the first step toward automating this somewhat.

Now the info will be present in the combined JSON report from all external tests and I'll be able to pull it out with a simple script. This is just the bare minimum to make it less annoying for me. The extra data is not yet processed by the scripts that format gas tables or diff them. For now I'm planning to create a quick, hacky script to do further processing, but if it's reusable enough I might submit it in a follow-up PR.
The new info can be found in the `reports/externalTests/all-benchmarks.json` artifact of the `c_ext_benchmarks` job, under the `<project>.<preset>.compilation_time` keys. It's also present in the same form in the individual reports attached as artifacts to each parallel run of each external test job.
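For illustration, the jq expressions in this thread assume a nesting along these lines (project and preset names taken from the examples above; the numeric value is made up):

```json
{
    "zeppelin": {
        "ir-optimize-evm+yul": {
            "compilation_time": {
                "user": 123.45
            }
        }
    }
}
```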