AbhishekRS4/see_before_act_for_grasping

Learning to See before Learning to Act

Implementation Notes

  • This repo contains the work carried out as part of the Master's course Cognitive Robotics at the University of Groningen
  • It implements a variation of the idea presented in the Learning to See before Learning to Act paper; more details can be found in References
  • First, a model is trained on a passive vision task to learn to detect objects. We chose segmentation, in particular object grasp affordance segmentation, instead of the foreground segmentation used in the paper
  • The trained passive vision model is then transferred to learn an active vision task, namely grasping
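The transfer step described above — reusing the encoder learned on the passive segmentation task to initialize the active grasping model — can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: all class names are hypothetical, a plain NumPy matrix stands in for a convolutional backbone, and no training loop is shown.

```python
import numpy as np

class Encoder:
    """Shared feature extractor; a stand-in for a conv backbone."""
    def __init__(self, in_dim=64, feat_dim=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(in_dim, feat_dim))

    def forward(self, x):
        # ReLU features; in practice this would be a deep network
        return np.maximum(x @ self.w, 0.0)

class SegmentationModel:
    """Passive task: encoder + per-pixel affordance head."""
    def __init__(self, encoder):
        self.encoder = encoder
        self.head = np.zeros((16, 2))  # e.g. background / graspable

class GraspingModel:
    """Active task: reuses the pretrained encoder, new grasp head."""
    def __init__(self, pretrained_encoder):
        self.encoder = pretrained_encoder  # transferred weights
        self.head = np.zeros((16, 4))      # e.g. grasp pose parameters

# 1. Train on the passive segmentation task (encoder weights learned here).
seg = SegmentationModel(Encoder())
# 2. Transfer the learned encoder to the active grasping task.
grasp = GraspingModel(seg.encoder)
assert grasp.encoder is seg.encoder  # same features are reused
```

The point of the transfer is that the grasping model starts from visual features already shaped by the segmentation task, rather than from random initialization.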

Pretrained model used for passive vision task [segmentation]

Active vision task [grasping]

Instructions to run scripts for passive vision task [segmentation]

  • To list all training options
python3 src/passive_task_segmentation/train.py --help
  • To list all inference options
python3 src/passive_task_segmentation/infer.py --help

Instructions to run scripts for active vision task [grasping]

  • To list all training options
python3 src/active_task_grasping/train.py --help

Simulation experiments to evaluate grasping performance

Contact info of team members

References