Inspiration

As young adults who have grown up in a diverse environment, we have noticed that many people around us are deaf or have speech impairments. Several people close to us live with these disabilities, and we find it difficult to communicate with them: they cannot rely on spoken language, and we do not understand sign language. So we decided to build a way to understand them: a sign language translator!

What it does

Our app recognizes which American Sign Language (ASL) letter, from A to Z, a hand is signing! It runs live, and all you need is a camera or webcam of any sort. Run the program, try some hand signs, and the computer will translate them!
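A minimal sketch of that live loop, assuming the cvzone/OpenCV setup described under "How we built it" and a Teachable Machine export saved as keras_model.h5 / labels.txt (hypothetical filenames):

```python
# Sketch of the live translation loop. Assumes: opencv-python and cvzone are
# installed, a webcam is attached, and a Teachable Machine model was exported
# as keras_model.h5 with labels.txt (filenames are illustrative).

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # the 26 ASL letters

def index_to_letter(index: int) -> str:
    """Map a classifier output index (0-25) to its letter."""
    return LABELS[index]

def run_live():
    # Imports live inside the function so the pure helper above has no
    # dependencies; call run_live() to start the camera.
    import cv2
    from cvzone.HandTrackingModule import HandDetector
    from cvzone.ClassificationModule import Classifier

    cap = cv2.VideoCapture(0)                    # default webcam
    detector = HandDetector(maxHands=1)          # track a single hand
    classifier = Classifier("keras_model.h5", "labels.txt")

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hands, frame = detector.findHands(frame)  # locate (and draw) the hand
        if hands:
            _, index = classifier.getPrediction(frame)
            cv2.putText(frame, index_to_letter(index), (30, 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 0, 255), 3)
        cv2.imshow("CogniSign", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```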

How we built it

We built it entirely in Python, relying mainly on OpenCV and TensorFlow. We started with the cvzone library, whose functions made hand detection easy, and also used NumPy and Python's time module. We took 300 pictures of each hand sign, trained a TensorFlow/Keras model on them with Google's Teachable Machine, and loaded that model into our program.
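Before the pictures go to Teachable Machine, each detected hand is usually cropped and standardized so every training image has the same shape. A sketch of that preprocessing, assuming the common cvzone workflow of centering the crop on a fixed-size white square (300x300 here; the helper names are our own):

```python
# Standardize a cropped hand image for training. Two pure helpers:
# fit_dims scales the crop so its longer side fills the canvas while
# preserving aspect ratio; paste_center places the resized crop on a
# white square. (The actual resize would be done with cv2.resize.)
import numpy as np

def fit_dims(w: int, h: int, canvas: int = 300) -> tuple[int, int]:
    """Return (new_w, new_h) so the longer side equals `canvas`."""
    if h >= w:
        return round(canvas * w / h), canvas
    return canvas, round(canvas * h / w)

def paste_center(crop: np.ndarray, canvas: int = 300) -> np.ndarray:
    """Center an already-resized crop on a white canvas-by-canvas image."""
    out = np.full((canvas, canvas, 3), 255, dtype=np.uint8)
    h, w = crop.shape[:2]
    y, x = (canvas - h) // 2, (canvas - w) // 2
    out[y:y + h, x:x + w] = crop
    return out
```

Padding to a uniform white square keeps tall and wide hand poses comparable, so the model never sees distorted aspect ratios.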

Challenges we ran into

We initially ran into many compatibility issues between the libraries, languages, and APIs we were using. We also made the mistake of starting with more complex platforms for object detection and machine learning, which left us working around large errors in our code. After a lot of research, we found resources such as cvzone and Teachable Machine that simplified the process and reduced the room for error. We also struggled to collect enough pictures to train our models accurately.

Accomplishments that we're proud of

We are proud of the multitude of signs our program can identify: all 26 letters, A through Z. Collecting the data and training for every letter was by far the most time-consuming part of the project, so we are proud of the result.

What we learned

We learned how to use OpenCV and TensorFlow, as most of us were inexperienced with both libraries. We also learned to apply object detection to find hands in a frame and identify different hand signs effectively.

What's next for CogniSign

The next step for CogniSign is to make the program more accurate by training more precise models. This can be done by increasing the number of images used to train each letter and by adding variety to the training data, such as photos from multiple angles. After that, CogniSign aims to recognize common ASL phrases, since we are aware the ASL community doesn't communicate exclusively by fingerspelling letters. This will require much more data and machine learning.
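One cheap way to multiply the training set while new photos are being collected is simple augmentation of the existing images. A sketch, assuming each picture is a NumPy array (the `augment` helper is hypothetical; small shifts and lighting changes are stand-ins, not a substitute for genuinely new camera angles):

```python
# Generate simple variants of one training image: a small horizontal shift
# (padded with white, matching a white-canvas background) and a brightness
# change. Note: horizontal *flips* are deliberately avoided, since mirroring
# a hand can change which ASL letter it represents.
import numpy as np

def augment(img: np.ndarray, shift: int = 10, brightness: int = 20) -> list[np.ndarray]:
    """Return [shifted, brightened] variants of a training image."""
    variants = []
    # Shift right by `shift` pixels, filling the gap with white.
    shifted = np.full_like(img, 255)
    shifted[:, shift:] = img[:, :-shift]
    variants.append(shifted)
    # Brighten uniformly, clipping at 255 to stay a valid uint8 image.
    bright = np.clip(img.astype(np.int16) + brightness, 0, 255).astype(np.uint8)
    variants.append(bright)
    return variants
```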

Built With

Python, OpenCV, TensorFlow, Keras, cvzone, NumPy, and Google's Teachable Machine.
