MoodMirror: Bridging Emotions and Technology

Inspiration

  • Recognizing emotions can be particularly difficult for neurodivergent children, and we wanted to create a therapeutic tool to assist them.
  • Our goal was to address barriers to emotional understanding by building a tool that could be used at home to foster better communication and empathy.

What it Does

  • Detects faces in an image using a pre-trained model.
  • Feeds each detected face into a deep convolutional neural network to classify the emotion being displayed.
  • Integrates with a Swift-based video calling application that sends frames to a backend for emotion prediction.
  • Displays text and emojis corresponding to the predicted emotion, overlaying them on the video.

How We Built It

  • Used OpenCV with a pre-trained model to detect faces.
  • Trained a deep convolutional neural network using NumPy and TensorFlow on the FER-2013 dataset to classify faces into seven emotion categories (angry, disgust, fear, happy, sad, surprise, neutral).
  • Developed a multi-platform application for macOS, iPhone, and iPad using Swift.
  • Sent video frames over HTTP to a Flask backend, which processed each frame and returned the predicted emotion.
  • Overlaid emotion text and emoji onto the video feed in real-time.

Challenges We Ran Into

  • Developing and testing the model on a diverse dataset of faces to improve accuracy.
  • Connecting the Flask backend to the Swift frontend, especially since none of us had prior experience with Swift.
  • Choosing between Swift and React Native: React Native was more familiar, but Swift was ultimately chosen for its platform-native capabilities despite the steeper learning curve.
  • Setting up webcam access and handling app permissions for capturing video frames.
  • Sending video frames to the backend quickly enough for real-time processing.
  • Building a video calling/conferencing feature was challenging due to deprecated and outdated documentation for existing SDKs.

Accomplishments That We're Proud Of

  • Successfully creating a functional app in a limited amount of time.
  • Building and integrating a backend that connects seamlessly with the frontend.
  • Implementing video conferencing features from scratch.
  • Training and deploying a model to classify emotions from facial expressions.
  • Overcoming obstacles by finding alternatives for outdated SDKs.
  • Transitioning from React Native to Swift for the project, despite the learning curve.

What We Learned

  • Mobile development differs significantly from web development and brings its own unique challenges.
  • Quickly adapting to and migrating between different technologies is a valuable skill.
  • Leveraging pre-trained models can simplify complex machine learning tasks and accelerate development.

What's Next for MoodMirror

  • Implement real-time spatial recognition to overlay emojis directly onto detected faces in the video feed.
  • Develop a multimodal machine learning architecture to combine speech and image data for more accurate emotion classification.
  • Further enhance the app for broader usability and improve the user experience.

Built With

  • Python, OpenCV, NumPy, TensorFlow, Flask, Swift