
A groundbreaking initiative aimed at enhancing the independence and quality of life of blind and visually impaired individuals. Navigating the world with limited vision presents numerous challenges, and our project addresses these difficulties by integrating artificial intelligence and computer vision technologies.


Smart Glass for the Blind

Image Captioning

The goal of image captioning is to convert a given input image into a natural-language description. The encoder-decoder framework is widely used for this task. The image encoder is a convolutional neural network (CNN); in this project we use a ResNet-152 model pretrained on the ILSVRC-2012-CLS image classification dataset. The decoder is a long short-term memory (LSTM) network.

(Figure: the CNN encoder and LSTM decoder architecture for image captioning)
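
As a rough, non-authoritative sketch of this architecture in PyTorch (the class names, layer sizes, and the frozen-CNN choice are illustrative assumptions, not necessarily the repository's exact code):

import torch
import torch.nn as nn
import torchvision.models as models

class EncoderCNN(nn.Module):
    """Pretrained ResNet-152 that maps an image to a fixed-size feature vector."""
    def __init__(self, embed_size):
        super().__init__()
        resnet = models.resnet152(pretrained=True)
        # Drop the final classification layer; keep the pooled 2048-d features.
        self.resnet = nn.Sequential(*list(resnet.children())[:-1])
        # Project the features to the LSTM's input dimension.
        self.linear = nn.Linear(resnet.fc.in_features, embed_size)
        self.bn = nn.BatchNorm1d(embed_size)

    def forward(self, images):
        with torch.no_grad():  # keep the pretrained CNN frozen
            features = self.resnet(images)
        features = features.reshape(features.size(0), -1)
        return self.bn(self.linear(features))

class DecoderRNN(nn.Module):
    """LSTM language model conditioned on the image feature vector."""
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # The image features act as the first input step; caption embeddings follow.
        embeddings = self.embed(captions)
        inputs = torch.cat((features.unsqueeze(1), embeddings), dim=1)
        hiddens, _ = self.lstm(inputs)
        return self.linear(hiddens)  # per-step scores over the vocabulary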

Training phase

For the encoder part, the pretrained CNN extracts the feature vector from a given input image. The feature vector is linearly transformed to have the same dimension as the input dimension of the LSTM network. For the decoder part, source and target texts are predefined. For example, if the image description is "Giraffes standing next to each other", the source sequence is a list containing ['<start>', 'Giraffes', 'standing', 'next', 'to', 'each', 'other'] and the target sequence is a list containing ['Giraffes', 'standing', 'next', 'to', 'each', 'other', '<end>']. Using these source and target sequences and the feature vector, the LSTM decoder is trained as a language model conditioned on the feature vector.
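
To make the source/target scheme concrete, here is a minimal single-step training sketch. It reuses the EncoderCNN and DecoderRNN classes sketched above; the toy vocabulary and hyperparameters are purely illustrative:

import torch
import torch.nn as nn

# Toy vocabulary for the example caption (illustrative; the real vocabulary
# is built by build_vocab.py from the caption annotations).
vocab = {'<start>': 0, '<end>': 1, 'Giraffes': 2, 'standing': 3,
         'next': 4, 'to': 5, 'each': 6, 'other': 7}
words = ['Giraffes', 'standing', 'next', 'to', 'each', 'other']

# Source = caption with <start> prepended; target = caption with <end> appended.
source = torch.tensor([[vocab['<start>']] + [vocab[w] for w in words]] * 2)
target = torch.tensor([[vocab[w] for w in words] + [vocab['<end>']]] * 2)

encoder = EncoderCNN(embed_size=256)
decoder = DecoderRNN(embed_size=256, hidden_size=512, vocab_size=len(vocab))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    list(decoder.parameters()) + list(encoder.linear.parameters()), lr=1e-3)

images = torch.randn(2, 3, 224, 224)  # dummy image batch for illustration
features = encoder(images)            # (batch, embed_size)
outputs = decoder(features, source)   # (batch, len(source)+1, vocab_size)
# Step 0 of the output comes from the image features alone; steps 1.. are the
# next-word predictions for each source token and should match the target.
loss = criterion(outputs[:, 1:].reshape(-1, len(vocab)), target.reshape(-1))
loss.backward()
optimizer.step()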

Usage

1. Clone the repository

git clone https://github.com/nsuryaa/smart-glass-for-blind.git
cd smart-glass-for-blind

2. Install the dependencies and download the dataset

pip install -r requirements.txt
chmod +x download.sh
./download.sh

3. Preprocess the data

python build_vocab.py   # build the word-to-index vocabulary from the caption annotations
python resize.py        # resize the training images to the input size expected by the CNN

4. Train the model

python train.py   # train the CNN encoder and LSTM decoder described above

5. Test the model

python sample.py --image='png/example.png'   # generate a caption for the given image
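
For intuition about this test step, here is a minimal greedy decoding loop of the kind sample.py presumably implements; the function and its use of the decoder internals are assumptions, not the script's actual code:

def generate_caption(encoder, decoder, image, vocab, max_len=20):
    """Greedily pick the most likely next word until <end> is produced.
    Assumes encoder and decoder are in eval() mode (illustrative sketch)."""
    inv_vocab = {i: w for w, i in vocab.items()}
    features = encoder(image)        # (1, embed_size)
    inputs = features.unsqueeze(1)   # the image features start the sequence
    states = None
    words = []
    for _ in range(max_len):
        hiddens, states = decoder.lstm(inputs, states)
        scores = decoder.linear(hiddens.squeeze(1))
        predicted = scores.argmax(dim=1)   # greedy choice of the next word
        word = inv_vocab[predicted.item()]
        if word == '<end>':
            break
        words.append(word)
        inputs = decoder.embed(predicted).unsqueeze(1)  # feed the word back in
    return ' '.join(words)

Beam search would typically produce better captions than this greedy loop, but greedy decoding keeps the sketch short.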

Pretrained model

If you do not want to train the model from scratch, you can use a pretrained model.
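
A short sketch of how such weights could be loaded, assuming they ship as separate encoder and decoder state dicts; the file names below are placeholders, not actual release artifacts:

import torch

# Placeholder checkpoint paths; substitute the files you actually downloaded.
encoder.load_state_dict(torch.load('encoder-ckpt.pth', map_location='cpu'))
decoder.load_state_dict(torch.load('decoder-ckpt.pth', map_location='cpu'))
encoder.eval()  # disable dropout/batch-norm updates for inference
decoder.eval()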

More about the project

Visit the following link
