The latest state of libcfgraph (last updated on Dec 4th 2023) contains:
- 1,602,023 artifacts
- 18,390,176 unique paths
- 618,908,726 path-to-artifact relationships
We can naively put all the JSONs in an `artifact -> JSON blob` table. Using the new JSONB type, this takes 46 GB uncompressed (2.5 GB zstd). However, finding files in this table is very slow because it needs to scan the JSON blob of each artifact. Adding a virtual JSON index doesn't help much, and it increases storage significantly. On the upside, this stores ALL the metadata in a very simple way and takes <10 min to populate.
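To make the scan cost concrete, here is a minimal sketch of that layout with the stdlib `sqlite3` module. The schema, table name, and `$.files` key are hypothetical stand-ins for the real metadata shape, and plain TEXT JSON is used instead of JSONB so it runs on older SQLite builds:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
# Hypothetical schema: one row per artifact, all metadata as a JSON blob.
db.execute("CREATE TABLE artifacts (name TEXT PRIMARY KEY, blob TEXT)")
db.execute(
    "INSERT INTO artifacts VALUES (?, ?)",
    (
        "conda-forge/noarch/pkg-1.0-0.conda",
        json.dumps({"files": ["lib/pkg/__init__.py", "bin/pkg"]}),
    ),
)

# Finding which artifacts ship a given file means parsing every blob:
# a full table scan with json_each() unpacking each row's file list.
rows = db.execute(
    """
    SELECT name FROM artifacts
    WHERE EXISTS (
        SELECT 1 FROM json_each(artifacts.blob, '$.files')
        WHERE json_each.value = ?
    )
    """,
    ("bin/pkg",),
).fetchall()
```

There is no index that can serve this query, which is why it stays slow no matter how the blobs are stored.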
To optimize for file queries, I also created a database with a `file_path -> conda_artifacts` table, indexed by `file_path`. The `conda_artifacts` field is a text field where each line is a conda artifact "route" (`channel/subdir/filename`). This has a lot of duplication, but exact queries are blazingly fast. It takes 37 GB uncompressed, but compresses nicely to 850 MB zstd! We can also add an FTS5 index for the paths, which allows fast partial searches at a relatively small cost.
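A minimal sketch of that layout, again with hypothetical table and column names (a standalone FTS5 table is used here for simplicity; the real index may be laid out differently):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# One row per file path; each artifact route that ships it on its own line.
db.execute(
    "CREATE TABLE path_to_artifacts (path TEXT PRIMARY KEY, conda_artifacts TEXT)"
)
db.execute(
    "INSERT INTO path_to_artifacts VALUES (?, ?)",
    (
        "lib/libzstd.so.1",
        "conda-forge/linux-64/zstd-1.5.5-h0.conda\n"
        "conda-forge/linux-64/zstd-1.5.2-h1.conda",
    ),
)

# Exact lookups hit the primary-key index directly:
exact = db.execute(
    "SELECT conda_artifacts FROM path_to_artifacts WHERE path = ?",
    ("lib/libzstd.so.1",),
).fetchone()

# An FTS5 virtual table enables fast partial (prefix) searches:
db.execute("CREATE VIRTUAL TABLE path_fts USING fts5(path)")
db.execute("INSERT INTO path_fts (path) VALUES (?)", ("lib/libzstd.so.1",))
partial = db.execute(
    "SELECT path FROM path_fts WHERE path_fts MATCH ?", ("libzstd*",)
).fetchall()
```

The FTS5 tokenizer splits paths on non-alphanumeric characters, so `libzstd*` matches the `libzstd` token inside `lib/libzstd.so.1` without scanning the whole table.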
I also experimented with some forms of string interning to avoid the artifact duplication, but it's VERY slow to populate (estimated 30-60 h, compared to the <20 min mark of the non-interned version) and would also make retrievals slower, so I think the duplicated version is a good compromise.
The code is available in this repository: https://github.com/jaimergp/conda-forge-paths. I added a GHA workflow, but the runner dies trying to clone libcfgraph 🚀 😂 My plan is to upload a couple of `database.zst` files to GH Releases and have those as a starting point.
Hm, I learnt about `RETURNING` and realized we can intern the artifact routes on the fly at no extra cost and store their integer IDs instead, which should add little overhead at query time. I also added full-text search to enable partial searches, and it didn't change the size significantly. All this means that with the new approach the uncompressed database is only 8.8 GB! The compressed size doesn't change much: 634 MB.
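The `RETURNING` trick can be sketched like this; the schema and the `intern_route` helper are hypothetical, not the repo's actual API, and the example falls back to a plain `SELECT` on SQLite builds older than 3.35 (where `RETURNING` is unavailable):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript(
    """
    CREATE TABLE artifacts (
        id INTEGER PRIMARY KEY,
        route TEXT UNIQUE        -- channel/subdir/filename
    );
    CREATE TABLE path_to_artifacts (
        path TEXT PRIMARY KEY,
        artifact_ids TEXT        -- integer IDs instead of full routes
    );
    """
)


def intern_route(db, route):
    """Insert a route (or find the existing row) and return its ID."""
    if sqlite3.sqlite_version_info >= (3, 35, 0):
        # RETURNING hands back the ID in the same statement; the no-op
        # ON CONFLICT update makes it fire for already-known routes too.
        return db.execute(
            "INSERT INTO artifacts (route) VALUES (?) "
            "ON CONFLICT (route) DO UPDATE SET route = excluded.route "
            "RETURNING id",
            (route,),
        ).fetchone()[0]
    # Fallback for older SQLite: insert, then look the ID up.
    db.execute("INSERT OR IGNORE INTO artifacts (route) VALUES (?)", (route,))
    return db.execute(
        "SELECT id FROM artifacts WHERE route = ?", (route,)
    ).fetchone()[0]


a = intern_route(db, "conda-forge/noarch/pkg-1.0-0.conda")
b = intern_route(db, "conda-forge/noarch/pkg-1.0-0.conda")  # same ID back
```

Because the long routes are stored once and the path table only holds small integers, the duplication that bloated the previous layout disappears.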
We also get a new table for free with all the artifacts, and I stored their timestamps too, which will be useful at update time.
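One way those timestamps could help at update time is as a high-water mark: only artifacts newer than the latest stored timestamp need to be fetched and indexed. A sketch under that assumption (the real update logic may differ):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Hypothetical artifacts table with the stored timestamps.
db.execute(
    "CREATE TABLE artifacts "
    "(id INTEGER PRIMARY KEY, route TEXT UNIQUE, timestamp INTEGER)"
)
db.executemany(
    "INSERT INTO artifacts (route, timestamp) VALUES (?, ?)",
    [
        ("conda-forge/noarch/a-1.0-0.conda", 1700000000),
        ("conda-forge/noarch/b-2.0-0.conda", 1701000000),
    ],
)

# Incremental update: resume from the newest timestamp already indexed.
(latest,) = db.execute("SELECT MAX(timestamp) FROM artifacts").fetchone()
```

Anything upstream with a timestamp greater than `latest` is new and needs processing; everything else can be skipped.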
This task consists of building a SQLite database with all the package metadata, equivalent to the deprecated `regro/libcfgraph:/artifacts` repository.