# Enhanced-NeoNav

This is the implementation of our RA-L paper *Towards Target-Driven Visual Navigation in Indoor Scenes via Generative Imitation Learning*, with training and evaluation on the Active Vision Dataset (depth only). It is an enhanced version of NeoNav.

## Navigation Model

## Implementation

### Training

- Environment: CUDA 10.0, Python 3.6.4, PyTorch 1.0.1.
- Download the `depth_imgs.npy` file from AVD_Minimal and place it in the current folder (a quick load check is sketched below).
- To train the navigation model from scratch, run `python3 cnetworkd.py`.
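
A quick way to confirm the download is intact is to load the file with NumPy. This is only a hedged sanity check, not part of the repository, and it assumes nothing about the array's internal layout:

```python
# Hypothetical sanity check (not part of the repository): confirm the downloaded
# depth_imgs.npy can be loaded before starting training.
import numpy as np

# allow_pickle=True in case the file stores a Python object (e.g. a dict of
# image name -> depth image) rather than a plain numeric array.
depth_imgs = np.load("depth_imgs.npy", allow_pickle=True)
print(type(depth_imgs), getattr(depth_imgs, "shape", None))
```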

### Testing

- To evaluate our model, run `python3 eva_checkpointd1.py`.
- The video can be downloaded HERE.

## Results

- The time for the robot to finish each locomotion is much longer than a single discrete action step. For example, the move right action predicted by our navigation model is converted to rotating right at 45°/s for 2 s, moving forward at 0.25 m/s for 2 s, and rotating left at 45°/s for 2 s (a sketch of this mapping follows the list).
- This step-wise (saltatory) velocity control results in jerky motions.
- Extending the method to continuous velocity control would make it more applicable to realistic environments.
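
As a rough illustration of this action-to-velocity conversion, the sketch below maps the discrete move right action to the timed velocity commands described above. It is not the code used in the paper; `send_velocity` is a hypothetical placeholder for the real robot's velocity interface.

```python
# Hypothetical sketch of converting a discrete "move right" action into timed
# velocity commands, assuming a robot interface send_velocity(linear, angular).
import time

def send_velocity(linear_m_s, angular_deg_s):
    """Placeholder for the real robot's velocity interface (assumption)."""
    print(f"linear={linear_m_s} m/s, angular={angular_deg_s} deg/s")

def execute_move_right():
    # Rotate right at 45 deg/s for 2 s (90 deg in total).
    send_velocity(0.0, -45.0)
    time.sleep(2.0)
    # Move forward at 0.25 m/s for 2 s (0.5 m in total).
    send_velocity(0.25, 0.0)
    time.sleep(2.0)
    # Rotate left at 45 deg/s for 2 s to restore the original heading.
    send_velocity(0.0, 45.0)
    time.sleep(2.0)
    # Stop.
    send_velocity(0.0, 0.0)

execute_move_right()
```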

## Contact

To ask questions or report problems, please open an issue on the issue tracker.

## Citation

If you use this work in your research, please cite the paper:

@article{wu2020towards,
  title={Towards target-driven visual navigation in indoor scenes via generative imitation learning},
  author={Wu, Qiaoyun and Gong, Xiaoxi and Xu, Kai and Manocha, Dinesh and Dong, Jingxuan and Wang, Jun},
  journal={IEEE Robotics and Automation Letters},
  volume={6},
  number={1},
  pages={175--182},
  year={2020},
  publisher={IEEE}
}