
hal4500

Vision integrated to LLM 🌌🤖

Introducing HAL4500: Bridging the Gap Between Sci-Fi and Reality! 🚀

Greetings, fellow space enthusiasts and tech aficionados! 🚀✨ Remember HAL9000 from the iconic "2001: A Space Odyssey"? HAL9000's legacy lives on as we embark on a journey to bring the future closer to the present with HAL4500! 🤖🌠

Imagine a world where machines understand us, collaborate with us, and assist us in real-time. Well, HAL4500 is here to take us one step closer to that vision. 🌐🔮

🔍 Object Detection Magic: Our journey starts with YOLOv8, a state-of-the-art object detection model trained on the extensive MS COCO dataset. HAL4500, like a digital detective, can effortlessly detect and recognize objects held in your hands. 📦🔍
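
To give a feel for the vision layer, here is a minimal detection sketch using the ultralytics package. The yolov8n.pt weights, webcam index, and confidence threshold are illustrative assumptions, not the exact settings in vision.py.

```python
# Minimal YOLOv8 sketch (assumptions: ultralytics installed, a webcam
# at index 0; not the exact configuration used in vision.py).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained on MS COCO (80 object classes)

# stream=True yields results frame by frame from the webcam.
for result in model.predict(source=0, stream=True, conf=0.5):
    for box in result.boxes:
        label = model.names[int(box.cls)]
        print(f"Detected {label} ({float(box.conf):.2f})")
    break  # one frame is enough for this sketch
```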

🧠 LangChain Logic: But HAL4500 doesn't stop there. Its reasoning is powered by LangChain, a framework for building autonomous agents capable of logic-based decision-making. HAL4500 can understand your voice commands, engage in conversation, and decide when to deploy its digital tools. It's like having a knowledgeable companion at your fingertips. 💬🤯
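
Here's a minimal sketch of that tool-using pattern, built on LangChain's classic initialize_agent API. The Vision tool and its see_objects function are hypothetical stand-ins for HAL4500's actual camera integration.

```python
# Minimal LangChain agent sketch (the see_objects tool below is a
# hypothetical stand-in for the YOLOv8 detector's output).
import os

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI

def see_objects(_: str) -> str:
    """Hypothetical placeholder for the detector's latest observation."""
    return "A hand holding a coffee mug."

tools = [
    Tool(
        name="Vision",
        func=see_objects,
        description="Reports the objects currently visible to the camera.",
    )
]

llm = OpenAI(openai_api_key=os.environ["OPEN_AI_API"], temperature=0)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

# The agent itself decides when the Vision tool is worth calling.
print(agent.run("What am I holding right now?"))
```

The zero-shot ReAct agent reasons in a think-act loop, so it only calls the Vision tool when the question actually requires looking.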

Instructions

Create a .env file in the root and add:

```
HEARING_PORT = ****
OPEN_AI_API = ********************************
```
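
A typical way for the scripts to pick these values up is python-dotenv; the sketch below is an assumption about the loading pattern, not code copied from the repo.

```python
# Loading the .env values with python-dotenv (an assumed pattern, not
# necessarily how vision.py/hearing.py/main.py actually do it).
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file from the project root

hearing_port = int(os.environ["HEARING_PORT"])
openai_key = os.environ["OPEN_AI_API"]
```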
  1. Install the dependencies using environment.yaml or environments.txt
  2. Run the vision.py script
  3. Run the hearing.py script
  4. Run the main.py script

Demo

hal_demonstration.mp4

Creators

  1. Aditya Agarwal (@adi611)
  2. Akash Parua (@AkashParua)
