
Hi, welcome to Noushiq's repo

GitHub LinkedIn

AI and Computer Vision Enthusiast | Autonomous Systems | Data Engineering | Automotive Engineering

AI and Computer Vision engineer working at the intersection of perception, machine learning, and autonomous systems. I enjoy diving deep into data, writing practical code, and shaping ideas into deployable solutions.

  • 🌍 I'm based in Germany
  • ✉️ You can contact me at noushikayilan@gmail.com
  • 🧠 I'm currently learning Vision-Language Models, Sensor Fusion, and World Models
  • 👥 I'm looking to collaborate on Computer Vision and AI related projects

Python · C++ · Git · GNU Bash · VS Code · FastAPI · MySQL · Linux · Ubuntu · Amazon Web Services · PyTorch · Docker · Hugging Face

📚 Publication

Enhancing LLM-based Autonomous Driving with Modular Traffic Light and Sign Recognition

Venue

Projects

TLSR: Modular Traffic Light & Sign Recognition

Enhancing LLM-based Autonomous Driving

This work introduces TLSR, a modular architecture designed to enhance LLM-based autonomous driving systems through explicit traffic light and traffic sign reasoning. The proposed framework integrates seamlessly with existing LLM-driven planners such as LMDrive and BEVDriver and operates in a closed-loop simulation environment using CARLA. A state-of-the-art object detection model is pre-trained and fine-tuned to accurately detect traffic lights and traffic signs within the simulation. To improve robustness, the architecture incorporates a relevance prediction algorithm and a state validation mechanism to reduce misclassifications. Detected traffic cues are transformed into structured natural language representations and injected into the LLM input, enforcing attention to safety-critical elements. The framework is plug-and-play, model-agnostic, and supports both single-view and multi-view camera configurations. Extensive evaluation on the LangAuto benchmark demonstrates driving performance improvements of up to 14% over LMDrive and 7% over BEVDriver, alongside a consistent reduction in traffic light and traffic sign infractions.

  • Key Tech: Computer Vision, Autonomous Driving VLMs, Python, PyTorch, OpenCV
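As a rough illustration of the "structured natural language" injection step described above, the sketch below serializes detected traffic cues into a text snippet that could be prepended to an LLM planner's input. The schema, field names, and phrasing are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrafficCue:
    """A detected traffic light or sign (hypothetical schema)."""
    kind: str        # e.g. "traffic_light" or "stop_sign"
    state: str       # e.g. "red", "green"; "n/a" for signs
    relevant: bool   # output of the relevance-prediction step

def cues_to_prompt(cues: List[TrafficCue]) -> str:
    """Turn relevant cues into a natural-language snippet for the LLM input."""
    relevant = [c for c in cues if c.relevant]
    if not relevant:
        return "No relevant traffic lights or signs detected."
    parts = []
    for c in relevant:
        if c.kind == "traffic_light":
            parts.append(f"the traffic light ahead is {c.state}")
        else:
            parts.append(f"a {c.kind.replace('_', ' ')} is ahead")
    return "Attention: " + "; ".join(parts) + "."
```

Filtering by the relevance flag before serialization mirrors the paper's idea of suppressing misclassified or ego-irrelevant detections so the planner only attends to safety-critical cues.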


Pinned repositories

  1. camera_caliberation (Public)

    The repository contains files that can be quickly and effectively integrated into any camera calibration workflow. It provides the camera's intrinsic and extrinsic parameters, further helping …

    Python

  2. videoDetector_v1.0 (Public)

    This project presents a simple MATLAB implementation for object detection and tracking in videos using an Aggregate Channel Features (ACF) detector. The aim of this program is to demonstrate how AC…

    MATLAB

  3. opendilab/LMDrive (Public)

    [CVPR 2024] LMDrive: Closed-Loop End-to-End Driving with Large Language Models

    Jupyter Notebook · 892 stars · 76 forks

  4. NOHA-Projects/localChatbot (Public)

    Developing a chatbot with LangChain and Streamlit

    Python

  5. micro_pen (Public)

    This is a simple form-filler agent that uses embedded data to fill in details in a form, such as an Excel sheet or a web browser.

    Python
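The intrinsic and extrinsic parameters recovered by a calibration workflow like camera_caliberation fit the standard pinhole camera model. A minimal numpy sketch of how those parameters map a 3D world point to pixel coordinates (all numeric values below are illustrative, not taken from the repo):

```python
import numpy as np

def project_point(K, R, t, X_world):
    """Project a 3D world point to pixel coordinates via the pinhole model.
    K: 3x3 intrinsic matrix; R (3x3), t (3,): extrinsic rotation/translation."""
    X_cam = R @ X_world + t   # world frame -> camera frame (extrinsics)
    x = K @ X_cam             # camera frame -> homogeneous image coords (intrinsics)
    return x[:2] / x[2]       # perspective divide -> (u, v) in pixels

# Illustrative intrinsics: fx = fy = 800 px, principal point (320, 240);
# identity rotation and zero translation as the extrinsics.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
uv = project_point(K, R, t, np.array([0.1, 0.05, 2.0]))  # -> array([360., 260.])
```

In practice a tool such as OpenCV's `cv2.calibrateCamera` estimates K, the distortion coefficients, and per-view R and t from detected chessboard corners; the sketch above only shows what those outputs mean.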