Create an app that recognises static Indian Sign Language (ISL) hand gestures fed through the webcam. Each gesture may be made with either hand and corresponds to a letter of the English alphabet. The latency between two consecutive gesture recognitions is kept as low as possible.
This is a computer vision problem, so convolutional neural networks are a natural fit. A VGG16 architecture was used to train the model.
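As a rough sketch of how a VGG16-based classifier for 26 static letter gestures could be set up in Keras (the exact head layers, layer sizes, and whether ImageNet weights were used are assumptions, not details from this README):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

NUM_CLASSES = 26  # one class per English alphabet letter

def build_model(input_shape=(224, 224, 3)):
    # VGG16 convolutional base; weights=None here to avoid a weight download,
    # though a real run would more likely start from weights="imagenet".
    base = VGG16(weights=None, include_top=False, input_shape=input_shape)
    for layer in base.layers:
        layer.trainable = False  # freeze the convolutional base
    x = Flatten()(base.output)
    x = Dense(256, activation="relu")(x)  # assumed head size
    out = Dense(NUM_CLASSES, activation="softmax")(x)
    return Model(inputs=base.input, outputs=out)

model = build_model()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the base and training only the new dense head is a common choice when the gesture dataset is small relative to what VGG16 was originally trained on.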
The ISL dataset is available on the following website:
The trained model was later saved and used in the backend of the app.
Keras, OpenCV, Tkinter
An accuracy of 81.2% was obtained after training the model.
The accuracy of the model could be improved. A pipeline for better preprocessing of the dataset and edge detection (using IBM Watson) is underway. Additional functionality, such as merging the detected letters into words and meaningful sentences, is planned for the web app.
Machine Learning, Deep Learning, Computer Vision and CNNs
● Piyush Ingale - firstname.lastname@example.org
● Dhanwin Rao - email@example.com
● Parth Dodiya - firstname.lastname@example.org
● Raghav - email@example.com