PROBLEM STATEMENT:
Create an app that recognises static Indian Sign Language (ISL) hand gestures fed through the webcam. Each gesture may be performed with either hand and corresponds to a letter of the English alphabet. The time between two consecutive gesture recognitions should be kept as low as possible.
PROPOSED SOLUTION:
This is a computer vision problem, so convolutional neural networks (CNNs) are a natural fit. A VGG16 architecture was used to train the model.
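A VGG16-based classifier for this task might be set up in Keras roughly as below. This is a hedged sketch, not the team's actual training code: the 26-class output (A–Z), the 224×224 input size, the frozen-base transfer-learning setup, and all hyperparameters are illustrative assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 26        # assumption: one class per English letter A-Z
IMG_SIZE = (224, 224)   # VGG16's default input resolution

def build_model():
    # VGG16 convolutional base; weights=None keeps this sketch
    # self-contained (real training would likely start from "imagenet").
    base = keras.applications.VGG16(
        weights=None, include_top=False, input_shape=IMG_SIZE + (3,)
    )
    base.trainable = False  # freeze the base for transfer learning

    model = keras.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

Training would then be a standard `model.fit(...)` on the Kaggle images, followed by `model.save(...)` to produce the file used by the app.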
The ISL dataset is available on Kaggle:
https://www.kaggle.com/dodiyaparth/isl-dataset-double-handed
The trained model was later saved and used in the backend of the app.
TECHNOLOGY USED
Keras, OpenCV, Tkinter
RESULTS
An accuracy of 81.2% was obtained after training the model.
FUTURE WORK
The accuracy of the model could be improved. A pipeline to better preprocess the dataset and detect edges (using IBM Watson) is underway. Additional functionality is planned for the web app, such as merging the detected letters into words and meaningful sentences.
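The planned letter-to-word merging could be sketched as a simple debounce: a letter is committed only after it has been predicted for several consecutive frames, and a run of empty frames ends the word. The class name, threshold, and `None`-as-no-gesture convention below are illustrative assumptions, not the team's design.

```python
class WordBuilder:
    """Accumulate per-frame letter predictions into words.

    A letter is committed only after it is predicted for `stable_frames`
    consecutive frames; a frame with no detected gesture (None) ends the
    current word and returns it.
    """

    def __init__(self, stable_frames=10):
        self.stable_frames = stable_frames
        self._candidate = None
        self._count = 0
        self.word = ""

    def update(self, letter):
        """Feed one frame's prediction; return a finished word or None."""
        if letter is None:  # no gesture: flush the current word
            finished, self.word = self.word, ""
            self._candidate, self._count = None, 0
            return finished or None
        if letter == self._candidate:
            self._count += 1
        else:
            self._candidate, self._count = letter, 1
        if self._count == self.stable_frames:  # commit once per stable run
            self.word += letter
        return None
```

The debounce also suppresses single-frame misclassifications, which should help keep the stream of committed letters clean without adding noticeable latency.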
KEY LEARNINGS
Machine Learning, Deep Learning, Computer Vision and CNNs
TEAM
● Piyush Ingale - ingalepiyush459@gmail.com
● Dhanwin Rao - dhanwinrao18@gmail.com
● Parth Dodiya - parthdodiya999@gmail.com
● Raghav - raghavraghuwanshi0101@gmail.com