Inspiration
The motivation behind our idea is to ease the loneliness felt by the deaf community by bridging the communication gap between them and others through our sign language translator. Our project aims to help them convey their thoughts and ideas by translating sign language into English, with both text and audio transcripts.
What it does
From a real-time video feed, our model detects a sign language gesture and displays the corresponding English letter on the screen.
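A minimal sketch of that loop is below. The model file name (`asl_cnn.h5`), the 64x64 grayscale input size, the fixed region of interest, and the A-Z label set are illustrative assumptions rather than our exact project values.

```python
import string
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = list(string.ascii_uppercase)   # one class per English letter (assumed)
model = load_model('asl_cnn.h5')        # hypothetical trained classifier

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Fixed region of interest where the signer holds their hand (assumed).
    roi = frame[100:300, 100:300]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    x = cv2.resize(gray, (64, 64)).astype('float32') / 255.0
    probs = model.predict(x.reshape(1, 64, 64, 1), verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    # Overlay the predicted letter on the live feed.
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.putText(frame, letter, (100, 90),
                cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow('SignSavvy', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```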
How we built it
We built this entirely in Python, using machine learning and computer vision libraries such as MediaPipe, OpenCV (cv2), TensorFlow, and Keras.
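As a rough illustration of how MediaPipe fits into the pipeline, its Hands solution returns 21 3-D landmarks per detected hand, which can either localize the hand or serve directly as a feature vector for the classifier. The helper below is a hypothetical sketch, not our exact code.

```python
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=False,
                                 max_num_hands=1,
                                 min_detection_confidence=0.5)

def hand_features(frame_bgr):
    """Return a flat (63,) array of hand landmark (x, y, z) coordinates,
    or None if no hand is visible in the frame."""
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark
    return np.array([(p.x, p.y, p.z) for p in lm], dtype=np.float32).flatten()
```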
Challenges we ran into
Detecting sign language gestures in real time is challenging because of variations in lighting, backgrounds, and camera conditions. To tackle this, we enhanced our dataset with augmented images generated under conditions that mimic those found in live feeds.
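The snippet below sketches this kind of augmentation with Keras's ImageDataGenerator; the exact transform ranges and the `train_dir` folder layout (one subdirectory per letter) are illustrative assumptions, not our tuned values.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Simulate the lighting and framing variation seen in live webcam feeds.
augmenter = ImageDataGenerator(
    rotation_range=10,            # slight hand tilt
    width_shift_range=0.1,        # hand not perfectly centred
    height_shift_range=0.1,
    zoom_range=0.2,               # varying distance from the camera
    brightness_range=(0.4, 1.4),  # dim rooms vs. bright daylight
    shear_range=0.1,
    fill_mode='nearest',
)

# train_dir is a hypothetical dataset folder with one subdirectory per letter.
train_flow = augmenter.flow_from_directory(
    'train_dir', target_size=(64, 64), color_mode='grayscale',
    class_mode='categorical', batch_size=32)
```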
Accomplishments that we're proud of
We combined machine learning and computer vision in this project, and it was our first time building anything like it.
What we learned
We learned various image processing techniques and pipelines such as MediaPipe for object detection (hands, in our case). We also gained experience building machine learning models, tuning parameters to improve performance, and testing a convolutional neural network on real-time data.
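For a sense of what "tuning the parameters" means here, below is a small Keras CNN of the kind we experimented with; the layer sizes, dropout rate, and 64x64 grayscale input are illustrative assumptions rather than our final configuration.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),                     # one of the knobs to tune
    layers.Dense(26, activation='softmax'),  # one output per letter A-Z
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```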
What's next for SignSavvy: Unlocking Communication-Your Bridge to Silent World
We plan to extend this model to detect word-level gestures and translate them into English, and to implement functionality that generates both text transcripts and audio translations of recognized signs.
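For the planned audio output, one offline option is the pyttsx3 library; the sketch below assumes that choice, since we have not yet committed to a specific text-to-speech engine.

```python
import pyttsx3

def speak(transcript: str) -> None:
    """Read a recognized transcript aloud using the system TTS voice."""
    engine = pyttsx3.init()
    engine.say(transcript)
    engine.runAndWait()

speak("HELLO")  # e.g. the letters recognized so far
```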