Our Inspiration

What if a single conversation could change the course of someone's life?

Talk 2 Me VR grew out of our own encounters with the silent struggles of friends and family members. Mental health challenges don't announce themselves with fanfare; they whisper through missed workouts, withdrawn smiles, and late-night restlessness. When we saw the hackathon theme of "Healthcare," we felt compelled to tackle mental health in a way that was both urgent and deeply human.

We set out to create a solution that married technical innovation with genuine empathy. By immersing users in a VR environment, we offer a safe space to practice tough conversations—no real-world stakes, just real-world impact.

To make our virtual companion resonate, every gesture and line of dialogue is rooted in research-backed indicators of early depression: subtle mood shifts, disrupted sleep or eating routines, and growing feelings of isolation. By weaving these cues into the experience, Talk 2 Me VR empowers users to recognize warning signs and start conversations before it's too late.

What It Does

Talk 2 Me VR is an immersive simulation game that helps users build the skills to recognize and respond to the early signs of depression in a friend. Players engage in branching conversations with an AI-driven character, whose subtle emotional and behavioral shifts reflect real mental health indicators. The game emphasizes emotional intelligence, empathy, and communication—critical tools for supporting someone who might be silently struggling.

How We Built It

We developed the frontend in Unity with full VR support and used a Flask backend to manage AI interactions. Our architecture includes:

  1. Gemini & MCP: Powers the agent’s behavior using tool-calling for dynamic dialogue, emotional expression, and real-time feedback.
  2. Whisper: Converts the user's spoken input into accurate text.
  3. ElevenLabs: Generates realistic, emotionally nuanced voice responses for the AI character.
  4. Custom tooling: Enables facial animations, user behavior tracking, post-interaction feedback emails, and performance evaluations based on user engagement and empathy.
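The flow described above can be sketched as a single Flask endpoint that chains the three services. The route name and helper functions here are illustrative stand-ins, not the project's real API; the actual Whisper, Gemini, and ElevenLabs calls are stubbed out.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def transcribe(audio_bytes):
    # Stand-in for a Whisper speech-to-text call.
    return "I haven't been sleeping well lately."

def generate_reply(transcript):
    # Stand-in for a Gemini call that returns dialogue plus an
    # emotion tag the Unity client can map to a facial animation.
    return {"text": "That sounds rough. Want to talk about it?",
            "emotion": "concerned"}

def synthesize(text):
    # Stand-in for an ElevenLabs text-to-speech call.
    return b"<audio bytes>"

@app.route("/talk", methods=["POST"])
def talk():
    transcript = transcribe(request.data)   # speech -> text
    reply = generate_reply(transcript)      # text -> dialogue + emotion
    audio = synthesize(reply["text"])       # dialogue -> voice
    return jsonify({"transcript": transcript,
                    "reply": reply["text"],
                    "emotion": reply["emotion"],
                    "audio_len": len(audio)})
```

Keeping the emotion tag in the JSON response lets the Unity frontend drive facial animation and voice playback from one round trip.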

Challenges We Ran Into

Our primary hurdle wasn't just cutting latency; it was balancing depth and speed so that conversations stayed both rich and responsive. High-capacity models understand more but reply slowly; lightweight models reply fast but can feel shallow.

To strike the right balance, we:

Built a multi-model Gemini pipeline

  • Gemini 1.5 Flash: handles immediate conversational turns with sub-second response times
  • Gemini 2.5 Flash: engages for more nuanced dialogue when context demands deeper understanding
  • Gemini 2.5 Pro: reserved for post-conversation analysis and scoring, invoked only once the user completes an interaction segment
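The three-tier routing above can be sketched as a small dispatch function. The model names match the writeup; the specific heuristics (turn length, history size) are assumptions made for the sketch, not the project's exact rules.

```python
def pick_model(user_turn: str, history_turns: int, end_of_segment: bool) -> str:
    """Route each request to the cheapest Gemini tier that fits the job."""
    if end_of_segment:
        # Heavy post-conversation analysis and scoring runs once,
        # off the conversational hot path.
        return "gemini-2.5-pro"
    if len(user_turn.split()) > 40 or history_turns > 20:
        # Long, context-heavy turns justify a slower, deeper model.
        return "gemini-2.5-flash"
    # Default: keep ordinary conversational turns snappy.
    return "gemini-1.5-flash"
```

Because the expensive tier is only reachable at segment boundaries, worst-case latency inside a conversation is bounded by the mid-tier model.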

Leveraged coroutines & multithreading in Unity

  • Parallelized network requests and AI inference so that “heavier” model calls run off the main thread—keeping the UI smooth and responsive even during complex processing.
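Our Unity version uses coroutines plus worker threads; this Python sketch shows the same idea in miniature: submit the slow model call to a worker and poll for completion, so the "main" loop (the stand-in for Unity's render thread) never blocks.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_model_call(prompt):
    # Stand-in for network latency plus AI inference time.
    time.sleep(0.05)
    return f"reply to: {prompt}"

executor = ThreadPoolExecutor(max_workers=2)

def main_loop():
    # Kick off the heavy call on a worker thread.
    future = executor.submit(slow_model_call, "How are you feeling?")
    frames = 0
    while not future.done():
        # Main thread stays free: each iteration stands in for
        # one rendered frame of UI / animation work.
        frames += 1
        time.sleep(0.005)
    return frames, future.result()
```

In Unity the polling loop is naturally expressed as a coroutine that yields each frame until the worker's result is ready.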

We also enlisted fellow hackers to demo the game and give feedback on their experience talking to the AI character.

Accomplishments We're Proud Of

This project pushed the boundaries of our technical skills and creativity. We are proud of the seamless integration between speech-to-text, AI logic, and emotionally expressive text-to-speech, all in real time within a fully immersive VR experience.

More importantly, we are proud of how Talk 2 Me VR addresses a deeply personal and socially relevant issue. The conversations feel realistic, the emotions are subtle but meaningful, and the experience gives users practical tools they can apply in real life.

What We Learned

The best way to learn something that you are unfamiliar with is to jump straight into the deep end. For two of our three teammates, this was the first time using Unity. It was a rapid learning experience that taught us how to think spatially, optimize for performance, and create emotionally resonant interactions.

For our third teammate, it was the first time building for VR specifically, which brought new insights into how presence, gesture, and eye contact affect emotional communication in 3D space.

We also gained a deeper appreciation for how AI and game design can work together to create experiences that are not only interactive but emotionally educational.

What’s Next for Talk 2 Me VR

While our minimum viable product is functional and immersive, there is still so much more we hope to build. Future improvements we are exploring include:

  • Adding new characters with different identities, cultural backgrounds, and mental health experiences.
  • Expanding the range of settings to include schools, workplaces, and public spaces where these conversations might naturally occur.
  • Including more mental health challenges beyond depression, such as anxiety, burnout, or grief.
  • Integrating user progress tracking, follow-up resources, and educational content to deepen the learning experience.

Our long-term goal is to make Talk 2 Me VR a widely accessible tool that helps people recognize emotional pain in others and respond with empathy, care, and confidence.
