Difficulty specifying k for HitsAtK metric in an evaluator #1382
-
Hey, I have a model trained on my own KG. It works well, and now I want to generate results for certain metrics; specifically, I'd like to measure its hits@25, hits@50, and hits@100, but I am having trouble getting an evaluator to work for that use case. I can generate the default rank-based results just fine with this:

```python
from pykeen.evaluation import evaluator_resolver, RankBasedEvaluator
from pykeen.triples import TriplesFactory
from pykeen.metrics.ranking import HitsAtK
import torch

# load the trained model
loaded_model = torch.load("Models/rotate_monarch_filtered.trained_model.pkl", map_location=torch.device('cuda'))

# load the test triples
test_triples_factory = TriplesFactory.from_path('ELs_for_Rotate/Monarch_KG_Filtered/test.txt')

evaluator = evaluator_resolver.make("rankbased", clear_on_finalize=False)
evaluator.evaluate(model=loaded_model, mapped_triples=test_triples_factory.mapped_triples)
res = evaluator.finalize_with_confidence(n_boot=10)
print(res)
```
But when I try to create my own hits@50 metric with this code:

```python
evaluator = evaluator_resolver.make(RankBasedEvaluator(metrics=HitsAtK(k=50)), clear_on_finalize=False)
evaluator.evaluate(model=loaded_model, mapped_triples=test_triples_factory.mapped_triples)
res = evaluator.finalize().to_dict()
print(res)
```

I get this error:

```
File "/scratch/Shares/layer/workspace/michael_sandbox/LinkPrediction/del.py", line 31, in <module>
    res = evaluator.finalize().to_dict()
  File "/Users/mibr6115/anaconda3/envs/link/lib/python3.9/site-packages/pykeen/evaluation/rank_based_evaluator.py", line 350, in finalize
    result = RankBasedMetricResults.from_ranks(
  File "/Users/mibr6115/anaconda3/envs/link/lib/python3.9/site-packages/pykeen/evaluation/rank_based_evaluator.py", line 145, in from_ranks
    for metric, pack in itertools.product(metrics, rank_and_candidates)
  File "/Users/mibr6115/anaconda3/envs/link/lib/python3.9/site-packages/pykeen/evaluation/rank_based_evaluator.py", line 120, in _iter_ranks
    c_ranks = np.concatenate([ranks_flat[side, rank_type] for side in sides])
  File "<__array_function__ internals>", line 200, in concatenate
ValueError: need at least one array to concatenate
```

Anyone know how I can accomplish this / what I am doing wrong?
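(For anyone landing here via the same traceback: the final `ValueError` is raised by NumPy itself, which refuses to concatenate an empty list of arrays; in this case the evaluator was finalized without any stored ranks. A minimal reproduction of just that NumPy behavior, independent of pykeen:)

```python
import numpy as np

# np.concatenate requires at least one input array; concatenating an
# empty list (analogous to an evaluator holding no ranks) raises the
# same ValueError seen at the bottom of the traceback.
try:
    np.concatenate([])
    msg = None
except ValueError as e:
    msg = str(e)

print(msg)
```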
-
Hi @MSBradshaw,

I think the issue lies in

```python
evaluator = evaluator_resolver.make(RankBasedEvaluator(metrics=HitsAtK(k=50)), clear_on_finalize=False)
```

Since the first argument is already an instance of an evaluator, it will just pass it through. So either directly instantiate without the resolver:

```python
evaluator = RankBasedEvaluator(metrics=HitsAtK(k=50), clear_on_finalize=False)
```

or pass a combination of class / name of class + parameters to the resolver:

```python
evaluator = evaluator_resolver.make(RankBasedEvaluator, metrics=HitsAtK(k=50), clear_on_finalize=False)
```
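To see why the pass-through happens, here is a hypothetical, stripped-down resolver in plain Python. This is not pykeen's actual resolver implementation (which comes from the `class_resolver` package), just an illustration of the convention it follows: `make` only constructs an object when given a class or a name, so when it receives an already-built instance, the extra keyword arguments never reach a constructor.

```python
# Toy sketch of the resolver convention (illustration only, not pykeen code).
class ToyEvaluator:
    def __init__(self, clear_on_finalize=True):
        self.clear_on_finalize = clear_on_finalize

class ToyResolver:
    def __init__(self, classes):
        self.lookup = {cls.__name__.lower(): cls for cls in classes}

    def make(self, query, **kwargs):
        if isinstance(query, type):               # a class: instantiate with kwargs
            return query(**kwargs)
        if isinstance(query, str):                # a name: look up, then instantiate
            return self.lookup[query.lower()](**kwargs)
        return query                              # already an instance: pass through

resolver = ToyResolver([ToyEvaluator])

# Instance in, kwargs silently ignored -- the situation from the question:
e1 = resolver.make(ToyEvaluator(), clear_on_finalize=False)
print(e1.clear_on_finalize)  # True: the kwarg never reached a constructor

# Class + kwargs works as intended:
e2 = resolver.make(ToyEvaluator, clear_on_finalize=False)
print(e2.clear_on_finalize)  # False
```

Also note that, if I remember correctly, the `metrics` argument accepts a list as well, so something like `metrics=[HitsAtK(k=25), HitsAtK(k=50), HitsAtK(k=100)]` should cover all three cutoffs in one run.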