Is your feature request related to a problem? Please describe.
Researchers repeatedly run the same simulations on the benchmark datasets with the available models, wasting time and resources. These simulations play an important role in choosing models for their own datasets, whether those come from the same domains as the benchmark datasets or from different ones.
Describe the solution you'd like
An archive of simulation results (maybe a GitHub repo or a page in the documentation) might be helpful for quickly going through the performance of different models for benchmark datasets. This can help in decision-making and avoid repeated simulations on benchmark datasets by new researchers trying to use ASReview for their own datasets.
Teachability, Documentation, Adoption, Migration Strategy
I think a filterable table is an ideal way to present the simulation results. It should contain fields for the model details (feature extractor, classifier, balancer, and query strategy used), the simulation metrics (recall at different levels, WSS, ERF, and ATD), and dataset information (name of the dataset, topic(s), number of records, and number and percentage of included records). Other useful information could also be included, such as who performed the simulation, the random seed, and the time required (including details of the hardware used). Adding the recall plots would be a plus.
This can allow researchers to quickly see which models they should try first for their own simulations depending upon their domain and other factors such as the number of records and expected relevant records.
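To make the idea concrete, a minimal sketch of such an archive as a flat, filterable table in plain Python. All field names and metric values here are illustrative assumptions, not an existing ASReview API or real benchmark results; the dataset name is only an example label.

```python
# Hypothetical flat-table schema for the proposed simulation archive.
# Field names (classifier, wss_95, ...) and all values are illustrative only.
results = [
    {"dataset": "example_benchmark", "n_records": 6189, "classifier": "nb",
     "feature_extractor": "tfidf", "query_strategy": "max", "balancer": "double",
     "wss_95": 0.83, "recall_10": 0.70, "seed": 535},
    {"dataset": "example_benchmark", "n_records": 6189, "classifier": "logistic",
     "feature_extractor": "tfidf", "query_strategy": "max", "balancer": "double",
     "wss_95": 0.79, "recall_10": 0.65, "seed": 535},
]

def filter_results(rows, **criteria):
    """Return only the rows matching every field=value criterion."""
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

# e.g. all naive Bayes runs on this dataset, best WSS@95 first
nb_runs = sorted(filter_results(results, classifier="nb"),
                 key=lambda r: r["wss_95"], reverse=True)
```

In practice the same records could live as a CSV/JSON file in a GitHub repo and be rendered as a sortable table in the documentation, so the filtering happens client-side rather than in user code.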
Hi @rohitgarud. We've been playing with this idea for a while, and we would love your input on the project. Let's set up this project as a collaboration!