Use a deep learning model to produce a pixel-by-pixel classification of images and identify roads for autonomous driving vehicles
Chatbot using Transformer Model and DialoGPT
Bahdanau Attention Mechanism | Tensorflow Custom Layers/Model/Loss Function/Metrics | LSTM | Encoder | Decoder | Cross-Attention | Language Translation | Bleu Score | Dropout
Train a Seq2Seq Model with Attention to Translate from One Language to Another
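Several of the translation projects above use Bahdanau (additive) attention, where the decoder scores each encoder state with a small feed-forward network and takes a softmax-weighted sum as the context vector. A minimal NumPy sketch of that scoring step is below; the parameter matrices `W1`, `W2`, and `v` are illustrative stand-ins for the trained weights, not taken from any of the listed repositories:

```python
import numpy as np

def bahdanau_attention(query, keys, W1, W2, v):
    """Additive attention: score(s, h_j) = v^T tanh(W1 s + W2 h_j)."""
    # query: (d,) decoder hidden state; keys: (T, d) encoder hidden states
    scores = np.tanh(query @ W1 + keys @ W2) @ v   # (T,) one score per time step
    weights = np.exp(scores - scores.max())        # numerically stable softmax
    weights /= weights.sum()
    context = weights @ keys                       # (d,) weighted sum of encoder states
    return context, weights

# Usage with random toy weights
rng = np.random.default_rng(0)
d, T = 4, 5
context, weights = bahdanau_attention(
    rng.standard_normal(d), rng.standard_normal((T, d)),
    rng.standard_normal((d, d)), rng.standard_normal((d, d)),
    rng.standard_normal(d),
)
```

The attention weights also give the alignment between source and target tokens, which is what makes this mechanism useful for inspecting translation models.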
A SegNet model trained to segment drivable lanes for automobiles.
This project is about training a deep neural network to identify and track a target in simulation using Udacity's RoboND drone simulator. 🛸 Applications like this are key to many fields of robotics and the techniques applied can be extended to scenarios like advanced cruise control in autonomous vehicles or human-robot collaboration. 👨🏫
A deep learning model that achieves video super-resolution tasks with temporal and spatial attention in cascade
This repository implements Neural Machine Translation (NMT) with sequence-to-sequence networks, improving translation quality through progressively more advanced architectures.
This repository implements a deep learning model that generates spoken captions for images.
Deep Convolutional Encoder-Decoder Architecture implemented along with max-pooling indices for pixel-wise semantic segmentation using CamVid dataset.
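SegNet's distinguishing trick is storing the argmax positions from each max-pooling layer so the decoder can upsample by placing values back at their original locations instead of learning a transposed convolution. A minimal NumPy sketch of that pooling/unpooling pair, assuming a single-channel 2D feature map and non-overlapping 2x2 windows:

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """Max-pool a (H, W) map, recording the flat index of each max."""
    H, W = x.shape
    pooled = np.zeros((H // k, W // k))
    idx = np.zeros((H // k, W // k), dtype=int)
    for i in range(H // k):
        for j in range(W // k):
            win = x[i*k:(i+1)*k, j*k:(j+1)*k]
            flat = np.argmax(win)
            pooled[i, j] = win.flat[flat]
            r, c = divmod(flat, k)
            idx[i, j] = (i*k + r) * W + (j*k + c)  # position in the full map
    return pooled, idx

def unpool_with_indices(pooled, idx, shape):
    """SegNet-style upsampling: scatter values back to their recorded positions."""
    out = np.zeros(shape)
    out.flat[idx.ravel()] = pooled.ravel()
    return out
```

In PyTorch the same behavior is available via `nn.MaxPool2d(..., return_indices=True)` paired with `nn.MaxUnpool2d`; keeping only indices rather than full feature maps is what makes SegNet memory-efficient.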
My implementation of autoencoders
A systematic approach to converting verbatim medical terms with the proposed encoder-decoder-plus-attention solution. (Python, TensorFlow)
Generate Images from text prompt using Stable Diffusion Model
A seq2seq model with an LSTM-based encoder-decoder architecture that generates metadata from code snippets, streamlining the software maintenance process
Image captioning with a benchmark of CNN-based encoder and GRU-based inject-type (init-inject, pre-inject, par-inject) and merge decoder architectures
Annotated vanilla implementation in PyTorch of the Transformer model introduced in 'Attention Is All You Need'
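The core operation of the Transformer from 'Attention Is All You Need' is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal single-head NumPy sketch (no batching or masking, which the paper's full model adds):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -> (n_q, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # scale to keep softmax gradients usable
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax over keys
    return weights @ V
```

Multi-head attention simply runs several of these in parallel on learned projections of Q, K, and V and concatenates the results.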
Enhances image and video quality with an encoder-decoder neural network