
Weird runtime variations - are there any caching effects? #24

Open

flackbash opened this issue Jan 26, 2024 · 0 comments

@flackbash

Dear Tom,

First of all thank you for publishing this awesome and easy to use entity linker.

I've been running experiments with ReFinED for a while, but only started using it on GPU a few days ago. I noticed some weird variations in the runtime on GPU (maybe they were there on CPU as well and I just didn't pay close attention to the runtime before, but I think I would have noticed):
If I run ReFinED over a benchmark for the first time (or for the first time after linking over several other benchmarks), it takes quite a while, in fact at least as long as on my CPU-only machine: 76s for the Wiki-Fair benchmark. If I run it again immediately on the same benchmark, it is lightning fast and links the whole thing in 4s.

Is there any caching used that might explain this behavior? If so, can I disable it to get comparable runtime measurements?

The loading of the model does not count towards my time measurement. The model is loaded before the measurement is started:

self.refined = Refined.from_pretrained(model_name=model_name, entity_set=entity_set)
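
In case it helps, this is roughly the kind of timing I have in mind (a minimal standalone sketch rather than my actual ELEVANT setup; `benchmark_texts` is just a placeholder, and I'm assuming `process_text` is the right entry point for single documents):

```python
import time

import torch
from refined.inference.processor import Refined

# Model loading stays outside the timed section, as in my setup above.
refined = Refined.from_pretrained(model_name="aida_model", entity_set="wikipedia")


def time_linking(texts):
    """Link a list of documents and return the elapsed wall-clock time in seconds."""
    # CUDA work is asynchronous, so synchronize before reading the clock.
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for text in texts:
        refined.process_text(text)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return time.perf_counter() - start


# Placeholder documents standing in for one benchmark's articles.
benchmark_texts = [
    "Angela Merkel met Emmanuel Macron in Paris.",
    "Barack Obama was born in Hawaii.",
]

first = time_linking(benchmark_texts)   # includes any one-time warm-up cost
second = time_linking(benchmark_texts)  # should reflect steady-state speed
print(f"first pass: {first:.2f}s, second pass: {second:.2f}s")
```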

I'm using ReFinED from inside the ELEVANT entity linking evaluation tool with the AIDA model and the 33M entity set.

Thanks in advance,
Natalie
