Benchmarking of all pre-trained weights #1792

Open
adamjstewart opened this issue Jan 1, 2024 · 4 comments
Labels
  documentation (Improvements or additions to documentation)
  good first issue (A good issue for a new contributor to work on)
  models (Models and pretrained weights)

Comments

@adamjstewart
Collaborator

Issue

To select the best pre-trained weights, users need reliable benchmark scores on a number of datasets. We have these scores for all Landsat weights and some Sentinel-2 weights, but we are still missing benchmark datasets and evaluation scores for most other weights.

Fix

For Sentinel-2, we should fill in the remaining blanks in the benchmark table. For NAIP, Sentinel-1, fMoW, etc., we should decide on a set of benchmark datasets and evaluate all models on them.

adamjstewart added the documentation and good first issue labels on Jan 1, 2024
@kvenkman

kvenkman commented Feb 5, 2024

I'd be interested in contributing to this; could I have it assigned to me? I'm happy to run experiments to help with the benchmarking effort.

@adamjstewart
Collaborator Author

Hi @kvenkman, thanks for volunteering! How much compute do you have access to? This issue will likely require a lot of GPU time.

@kvenkman

kvenkman commented Feb 6, 2024

Hi @adamjstewart, I have access to a GTX 1070 Ti and a GTX 1080 on separate desktop computers. They're relatively old machines, but I do have uninterrupted access to them.

@adamjstewart
Collaborator Author

Thanks @kvenkman. That's not a lot, but it should be possible to work on Sentinel-1, NAIP, and/or fMoW since there is only a single model per satellite that needs to be evaluated.

Here are the steps I think you need to complete for each one of those:

  1. Find ~3 benchmark datasets that include images from that satellite and target labels
  2. Add those datasets to TorchGeo (if they aren't already there)
  3. Add data modules for those datasets (if they aren't already there; see the sketch after this list)
  4. Using our CLI, try various hyperparameters to get optimal performance on each dataset (see the training sketch at the end of this comment)
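
To make steps 2 and 3 concrete, here is a minimal sketch of a new data module. MyBenchmarkDataset is a hypothetical dataset class (the one you would add in step 2), and the NonGeoDataModule base class signature may vary across TorchGeo releases, so treat this as a sketch rather than a drop-in file:

```python
# Minimal sketch of a TorchGeo data module for a new benchmark dataset.
# MyBenchmarkDataset is hypothetical (added in step 2); NonGeoDataModule's
# signature may differ across TorchGeo versions.
from typing import Any

from torchgeo.datamodules import NonGeoDataModule
from torchgeo.datasets import MyBenchmarkDataset  # hypothetical dataset


class MyBenchmarkDataModule(NonGeoDataModule):
    """LightningDataModule wrapper around the hypothetical MyBenchmarkDataset."""

    def __init__(self, batch_size: int = 64, num_workers: int = 0, **kwargs: Any) -> None:
        # Remaining kwargs (e.g. root, download) are forwarded to the dataset.
        super().__init__(MyBenchmarkDataset, batch_size, num_workers, **kwargs)
```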

This is obviously a big project, but even if you only manage to finish steps 1 and 2 for a model, that gets us a long way there. Let me know if you have any questions.
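
To make step 4 concrete, here is a minimal sketch of a single training run using the Python API directly, with the existing EuroSATDataModule standing in for a benchmark dataset; exact argument names (e.g. lr) have changed between TorchGeo releases, so treat it as illustrative:

```python
# Minimal sketch of one hyperparameter trial with Lightning.
# EuroSATDataModule and ResNet18_Weights.SENTINEL2_ALL_MOCO are real TorchGeo
# objects, but argument names may differ across TorchGeo versions.
from lightning.pytorch import Trainer

from torchgeo.datamodules import EuroSATDataModule
from torchgeo.models import ResNet18_Weights
from torchgeo.trainers import ClassificationTask

datamodule = EuroSATDataModule(
    batch_size=64, num_workers=4, root="data/eurosat", download=True
)
task = ClassificationTask(
    model="resnet18",
    weights=ResNet18_Weights.SENTINEL2_ALL_MOCO,  # weights under evaluation
    in_channels=13,  # Sentinel-2 has 13 spectral bands
    num_classes=10,  # EuroSAT classes
    lr=1e-3,         # one of the hyperparameters to sweep
)
trainer = Trainer(max_epochs=50)
trainer.fit(task, datamodule=datamodule)
trainer.test(task, datamodule=datamodule)
```

The CLI mentioned in step 4 (in recent TorchGeo releases, torchgeo fit --config config.yaml) drives the same task/data module/trainer combination from a YAML config, which makes hyperparameter sweeps easier to script.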

adamjstewart added the models label on Apr 4, 2024