
Da-TRASH: Depth-appended Tabletop Recycling Algorithm for Segmenting Havoc

Link to Paper

In this work, our team reproduces and extends the paper Learning RGB-D Feature Embeddings for Unseen Object Instance Segmentation by Y. Xiang et al. That work uses non-photorealistic, synthetic RGB + depth data to produce surprisingly accurate instance segmentation masks of unknown objects. In this report, we validate the paper's specific finding that combining the RGB and depth feature vectors elementwise is most effective for this task. We also extend the work by adapting the model to produce adequate results on plain RGB images: a separate machine learning model first predicts the corresponding depth image. This extension allows the instance segmentation model to run on simpler cameras without depth-sensing capabilities. We show improved segmentation accuracy of our new model, Da-TRASH, on trash segmentation datasets as well as traditional tabletop datasets.
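The elementwise fusion strategy validated here can be illustrated with a small sketch. The array shapes and encoder outputs below are illustrative stand-ins, not the dimensions used in the paper: the point is that elementwise addition keeps the embedding dimension fixed, while concatenation doubles it.

```python
import numpy as np

# Hypothetical per-pixel feature embeddings from separate RGB and depth
# encoder branches (dimensions are illustrative, not the paper's).
H, W, C = 4, 4, 8
rgb_feat = np.random.rand(H, W, C)
depth_feat = np.random.rand(H, W, C)

# Elementwise addition: the fused embedding keeps the same channel
# dimension C, so downstream layers need no change.
fused_add = rgb_feat + depth_feat

# Concatenation, by contrast, doubles the channel dimension to 2C.
fused_cat = np.concatenate([rgb_feat, depth_feat], axis=-1)

print(fused_add.shape)  # (4, 4, 8)
print(fused_cat.shape)  # (4, 4, 16)
```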

Da-TRASH Architecture

[Figure: Da-TRASH architecture diagram]

Da-TRASH Results Example

[Figure: example segmentation results on the OSD dataset]

To run Da-TRASH on custom images, first generate depth maps using ZoeDepth.ipynb, then produce segmentation labels with the test_sample function in OSDTesting.ipynb.

To train the model, use Overfit.ipynb.
