Currently, each benchmark is newly created by the Python package whenever a `get_benchmark` method call is executed.
This could be improved, since many benchmarks are already created and available in the database of the MQT Bench webserver.
Therefore, the following steps would be useful when `get_benchmark` is called:
1. Check whether the MQT Bench database has been downloaded locally. If so, check whether the benchmark is already part of it.
2. If the database has not been downloaded locally but the benchmark is part of it, fetch the benchmark from the MQT Bench webpage.
3. Only if 1) and 2) are not successful, create the benchmark locally, as is currently implemented.
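The fallback chain above could be sketched roughly as follows. All names here (`LOCAL_DB`, `lookup_local`, `fetch_online`, `generate_locally`) are hypothetical placeholders, not the actual MQT Bench API:

```python
from pathlib import Path
from typing import Optional

# Hypothetical location of a locally downloaded MQT Bench database.
LOCAL_DB = Path.home() / ".mqt_bench" / "benchmarks"


def lookup_local(name: str) -> Optional[str]:
    """Step 1: return the cached benchmark file if the local database has it."""
    candidate = LOCAL_DB / f"{name}.qasm"
    return str(candidate) if candidate.exists() else None


def fetch_online(name: str) -> Optional[str]:
    """Step 2: try to download the benchmark from the MQT Bench webserver.

    Placeholder: a real implementation would issue an HTTP request here
    and return None on failure (e.g., no network access or unknown benchmark).
    """
    return None


def generate_locally(name: str) -> str:
    """Step 3: fall back to generating the benchmark, as currently implemented."""
    return f"generated:{name}"


def get_benchmark(name: str) -> str:
    """Try the local database, then the webserver, then local generation."""
    return lookup_local(name) or fetch_online(name) or generate_locally(name)
```

Each step only runs if the previous one fails, so existing callers would see no behavioral change beyond faster results when a cached copy is available.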
This should most likely be an option rather than the default behavior, so that a user can choose the behavior they want, e.g., `caching=off|local|online|automatic`, where `local` means 1), `online` means 2), and `automatic` is the combination of both.
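The proposed option could be modeled as an enum that maps each mode to a lookup order. This is only an illustrative sketch; the names `CachingMode` and `resolve_strategies` are assumptions, not part of the package:

```python
from enum import Enum


class CachingMode(Enum):
    OFF = "off"              # always generate locally, no caching
    LOCAL = "local"          # 1) only consult the local database
    ONLINE = "online"        # 2) only query the MQT Bench webserver
    AUTOMATIC = "automatic"  # try 1), then 2), then generate


def resolve_strategies(mode: CachingMode) -> list[str]:
    """Return the lookup order implied by a caching mode (sketch).

    Local generation always remains the final fallback, so a benchmark
    is produced regardless of cache availability.
    """
    order = {
        CachingMode.OFF: [],
        CachingMode.LOCAL: ["local"],
        CachingMode.ONLINE: ["online"],
        CachingMode.AUTOMATIC: ["local", "online"],
    }
    return order[mode] + ["generate"]
```

Keeping `generate` as the unconditional last step means even `caching=off` users and fully offline users get the current behavior.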
I am not yet sure whether this feature should be opt-in or opt-out and what the appropriate default should be.
People might want to use the package offline, or might not want it to access the internet or download a large number of files. This has to be kept in mind and should be reflected in the implementation.
(Optional, wild and crazy idea) Could there be a way to submit generated benchmarks to the online repository/website? This would need some kind of approval from us as maintainers. Could you somehow programmatically generate a pull request that "proposes" a new zip file with additional files? Or trigger a GitHub workflow that takes the latest zip file and augments it with some newly added files and attaches them to a release that states the added files in the release notes? Probably, this should be its own issue, but since it's a rather crazy idea, I'll just keep it here for now.