[FEATURE]: GeoJSON from model predictions #445

Open
srmsoumya opened this issue Jan 21, 2022 · 2 comments

Comments

@srmsoumya
Collaborator

Is your feature request related to a problem? Please describe.
Deep learning models have a limit on the size of the imagery they can take as input, so we have to split large AOIs into smaller tiles and feed them to the model. This introduces artifacts when a single building spans multiple tiles.

Describe the solution you'd like
Once I have all the vector tiles from the model predictions, I would like to intelligently merge the polygons that represent the same object and generate a single GeoJSON file for the entire AOI.
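
For context, a minimal sketch of one way this merging could be done with geopandas and shapely, assuming the per-tile predictions have already been written out as GeoJSON files. The `predictions/tile_*.geojson` paths are hypothetical, and the blanket `unary_union` (which fuses any touching or overlapping polygons) is a simplification of the "intelligent" merge being requested:

```python
import glob

import geopandas as gpd
import pandas as pd
from shapely.ops import unary_union

# Load every per-tile prediction into one GeoDataFrame (paths are hypothetical).
tiles = [gpd.read_file(path) for path in sorted(glob.glob("predictions/tile_*.geojson"))]
all_polys = gpd.GeoDataFrame(pd.concat(tiles, ignore_index=True), crs=tiles[0].crs)

# Fuse polygons that touch or overlap across tile boundaries, so a building
# split over several tiles becomes a single geometry.
merged = unary_union(list(all_polys.geometry))

# unary_union returns one (Multi)Polygon; explode it back into one row per object.
merged_gdf = gpd.GeoDataFrame(geometry=[merged], crs=all_polys.crs).explode(index_parts=False)

# Write a single GeoJSON covering the whole AOI.
merged_gdf.to_file("aoi_predictions.geojson", driver="GeoJSON")
```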

@remtav

remtav commented Feb 2, 2022

My team has the same need, except we don't do overlapping predictions. Currently, we read an image tile by tile (with rasterio's Window), then stack the predictions in a large array. My colleague Vic has adapted code to smooth the predictions at the borders between tiles before we write this large array to a final raster.

Here's our inference script in case it is of any use. We're well aware that our code is not as clean as what a professional developer would come up with, so please be indulgent.
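
For reference, a rough sketch of the tile-by-tile workflow described above; it is not the linked script. The tile size, the `model_predict` stand-in, and the Gaussian blur used in place of the actual border-smoothing code are all placeholders:

```python
import numpy as np
import rasterio
from rasterio.windows import Window
from scipy.ndimage import gaussian_filter

TILE = 512  # tile edge in pixels (hypothetical)

def model_predict(tile: np.ndarray) -> np.ndarray:
    """Placeholder for the actual model: returns one probability per pixel."""
    return np.zeros(tile.shape[1:], dtype=np.float32)

with rasterio.open("scene.tif") as src:
    mosaic = np.zeros((src.height, src.width), dtype=np.float32)

    # Read the scene tile by tile (non-overlapping windows) and stack the
    # per-tile predictions into one large array.
    for row in range(0, src.height, TILE):
        for col in range(0, src.width, TILE):
            win = Window(col, row, min(TILE, src.width - col), min(TILE, src.height - row))
            tile = src.read(window=win)   # (bands, h, w)
            probs = model_predict(tile)   # (h, w)
            mosaic[row:row + win.height, col:col + win.width] = probs

    profile = src.profile.copy()
    profile.update(count=1, dtype="float32")

# Smooth the stitched predictions before writing the final raster. A plain
# Gaussian blur is used here as a stand-in; the linked script smooths
# specifically at the borders between tiles.
mosaic = gaussian_filter(mosaic, sigma=2)

with rasterio.open("scene_predictions.tif", "w", **profile) as dst:
    dst.write(mosaic, 1)
```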

@rbavery
Contributor

rbavery commented Feb 4, 2022

Thanks for the share @remtav! We're aware of similar implementations such as https://github.com/obss/sahi and https://github.com/BloodAxe/pytorch-toolbelt, and we're still considering if/when it would be useful to integrate this functionality into this library.

I think it might be, since it would decouple the prediction-merging step from the inference step: one wouldn't need to wait for a whole scene to be inferenced and merged before running inference on the next scene, which could bring some performance gains (e.g., one lambda function for inference and another for merging). It would also make it easier to test different merging strategies, since the dependencies of each step would be decoupled and experiments could target each step independently.
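
A hypothetical sketch of what that decoupled interface could look like, just to illustrate the idea; none of these names exist in the library and the bodies are stubs:

```python
from typing import List

def predict_scene(scene_path: str, out_dir: str) -> List[str]:
    """Tile a scene, run the model on each tile, and write one vector file
    per tile. Returns the per-tile prediction paths. (Stub.)"""
    raise NotImplementedError

def merge_predictions(tile_paths: List[str], out_path: str) -> str:
    """Merge the per-tile polygons into a single GeoJSON for the AOI, e.g.
    with the unary_union approach sketched above. (Stub.)"""
    raise NotImplementedError
```

Because the two steps would only communicate through file paths, inference on the next scene could start while the previous scene is still being merged, and each step could be tested or deployed on its own.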
