Discovery and Learning of Minecraft Navigation Goals from Pixels and Coordinates
Updated Jun 9, 2021 · HTML
PixelEDL: Unsupervised Skill Discovery and Learning from Pixels
Evaluating pre-trained navigation agents under corruptions
Repository for the ECCV 2020 paper "Active Visual Information Gathering for Vision-Language Navigation"
Implementation of Multiplicative Compositional Policies (MCP)
Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents"
Evaluation tasks for ObjectNav models
🚀 Run AI2-THOR with Google Colab
[ACM MM 2021 Oral] Official repo of "Neighbor-view Enhanced Model for Vision and Language Navigation"
ManipulaTHOR, a framework that facilitates visual manipulation of objects using a robotic arm
NeurIPS 2022 Paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation"
[IROS22 Oral] Optimization of Forcemyography Sensor Placement for Arm Movement Recognition https://arxiv.org/abs/2207.10915
A project space for Embodied Emulated Personas - Embodied neural networks trained by LLM chatbot teachers
🏘️ Scaling Embodied AI by Procedurally Generating Interactive 3D Houses
Paper & Project lists of cutting-edge research on visual navigation and embodied AI.
Good Time to Ask: A Learning Framework for Asking for Help in Embodied Visual Navigation
Transformer + reinforcement learning for navigation in POMDPs
📣 [IEEE IROS 2023] Official Repository of IROS 23 paper "Uncertainty-Aware Lidar Place Recognition in Novel Environments"
📱👉🏠 Perform conditional procedural generation to generate houses like your own!
Official Github repository for "Renderable Neural Radiance Map for Visual Navigation". (CVPR 2023)