Simple and unified interface to zero-shot computer vision models curated for robotics use cases.
Code for RA-L paper "One-shot Learning for Task-oriented Grasping"
Summary of key papers and blogs about diffusion models to learn about the topic. Detailed list of all published diffusion robotics papers.
A paper list of robotic grasping and related works
A public sandbox for simulating grasping in Gazebo with the iCub humanoid
Python module for GQ-CNN training and deployment with ROS integration.
End-to-End Learning to Grasp from Object Point Clouds
Scripts and configuration files to launch when bringing up REEM-C.
3D files for creating shells of split-cylinder artefacts with embedded ATI-IA force sensors
GrabNet: A Generative model to generate realistic 3D hands grasping unseen objects (ECCV2020)
Diffusion models to generate unconstrained and constrained grasps on 3D objects - Acronym and CONG datasets
Official PyTorch implementation of Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations
Automatic Grasp Success/Failure Checker based on Gazebo
Code for our CVPR'23 paper - "FLEX: Full-Body Grasping Without Full-Body Grasps"
ROS package for Robotiq Gripper 85 and Hand-e
Toolbox for our GraspNet-1Billion dataset.
A Smart Suit for Patients with Paralysis using Artificial Intelligence and Robotics
Giving the Baxter robot face/object detection capabilities and basic object grasping