Move computation of p-values out of get_pairwise_comparisons()
#750
Labels: question
Currently, three functions exist that do something related to pairwise comparisons:

- `get_pairwise_comparisons()` outputs a data.table with three different things
- `add_relative_skill()` adds relative skill scores to an existing `scores` object (calling `get_pairwise_comparisons()`)
- `plot_pairwise_comparisons()` visualises either mean score ratios or the p-values

Should the calculation of p-values and mean score ratios/relative skill scores be done by the same function?
Pro:

- both mean score ratios and p-values require a similar mechanic in which two models are compared against each other
- the function is currently called `get_pairwise_comparisons()`, so it makes sense to keep those two things together

Contra:

- `get_pairwise_comparisons()` and `plot_pairwise_comparisons()` currently have code for both things

In terms of currently suggested workflows we have the following:
- For getting relative skill scores, you call `as_forecast(data) |> score() |> add_relative_skill()`.
- For visualising mean score ratios, you call
- For visualising p-values, you call
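To make the discussion concrete, a sketch of how these pieces currently fit together might look like the following. This is an assumed workflow, not a definitive one: the example dataset name (`example_quantile`) and the `type` argument values of `plot_pairwise_comparisons()` are assumptions based on the package's documented API at the time and should be checked against the current docs.

```r
library(scoringutils)

# Score the forecasts (example_quantile is assumed to ship with the package)
scores <- example_quantile |>
  as_forecast() |>
  score()

# Option A: attach relative skill scores to the scores object directly
# (internally calls get_pairwise_comparisons())
scores_with_skill <- add_relative_skill(scores)

# Option B: compute the pairwise comparison table explicitly;
# this currently contains both mean score ratios and p-values
comparisons <- get_pairwise_comparisons(scores)

# Visualise either quantity -- the `type` values below are assumptions
plot_pairwise_comparisons(comparisons, type = "mean_scores_ratio")
plot_pairwise_comparisons(comparisons, type = "pval")
```

The fact that both plots are driven off the same comparison table is what makes splitting the p-value computation out of `get_pairwise_comparisons()` a non-trivial API decision.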
We previously even had a nice plot that showed both p-values and mean score ratios in a single plot (using the upper and lower triangle), but that broke and we ditched it a while ago.
Options:

- Rename `get_pairwise_comparisons()` to `get_score_ratios()`; re-introduce the p-value functionality later.