Inspiration

Chime blends wearable tech with a familiar, friendly form factor: a plush toy!

By pairing Ray-Ban Meta glasses for hands-free leadership with playful wearable duckling companions, we keep groups together through calm signals instead of screens.

What it does

Chime supports groups moving through shared spaces: family travel, school field trips, guided tours, and busy events. The leader wears Ray-Ban Meta glasses and issues voice commands (“Hey Chime, find Sally” or “ring the group”) to locate someone or broadcast an alert. Each participant carries a small ESP32-based companion that monitors proximity and plays gentle audio cues when someone drifts too far. The system UI is audio-first, and the group leader can keep count of their nearby Chimes with a passive or active check-in cue. Chimes keep everyone connected without stealing attention from the experience.

How we built it

Leader device interface: an Android + Jetpack Compose app themed with our brand colors, running on Ray-Ban Meta glasses. Voice control is handled via Android's SpeechRecognizer with wake-word detection, command parsing, restart backoff, and calibration-aware debouncing.
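The wake-word gate plus debouncing described above can be sketched as pure logic, independent of the Android recognizer. This is a minimal illustration, not the actual app code; the class name `WakeWordGate` and the cooldown value are assumptions. The idea is that a transcript only becomes a command if it starts with a wake word and enough time has passed since the last accepted command, so repeated partial results can't fire twice.

```kotlin
// Sketch: gate recognizer transcripts behind a wake word, and debounce
// repeats so partial results can't trigger the same command twice.
// WakeWordGate and the 1500 ms cooldown are illustrative assumptions.
class WakeWordGate(
    private val wakeWords: Set<String> = setOf("hey chime"),
    private val cooldownMs: Long = 1500,
    private val now: () -> Long = System::currentTimeMillis
) {
    private var lastFiredAt = 0L

    /** Returns the command portion if the transcript starts with a wake word
     *  and the cooldown has elapsed; otherwise null. */
    fun accept(transcript: String): String? {
        val text = transcript.trim().lowercase()
        val wake = wakeWords.firstOrNull { text.startsWith(it) } ?: return null
        if (now() - lastFiredAt < cooldownMs) return null
        lastFiredAt = now()
        return text.removePrefix(wake).trim().ifEmpty { null }
    }
}
```

Injecting the clock (`now`) keeps the debounce testable without real delays.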

Audio & voice pipeline: A GlassesAudioManager routes HFP audio to the glasses mic, with continuous listening, exponential backoff on recognizer errors, and permission/state surfacing to the UI. VoiceCommandProcessor normalizes wake-word variants (“chime/time/thyme”) and maps short phrases into Chime actions.
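The variant normalization that VoiceCommandProcessor performs can be illustrated with a small standalone sketch. The object name `VoiceCommandSketch`, the regex, and the action set are assumptions for illustration, not the project's actual API: the recognizer often hears "chime" as "time" or "thyme", so the variants are folded into the canonical wake word before short phrases are matched to actions.

```kotlin
// Sketch of wake-word-variant normalization and phrase-to-action mapping.
// Names and the action set are illustrative, not the actual app code.
enum class ChimeAction { FIND, RING, CHECK_IN, UNKNOWN }

object VoiceCommandSketch {
    // Common misrecognitions of "chime" observed in practice.
    private val wakeVariants = Regex("""\b(chime|chimes|time|thyme)\b""")

    fun normalize(transcript: String): String =
        wakeVariants.replace(transcript.lowercase(), "chime").trim()

    fun toAction(normalized: String): ChimeAction = when {
        "find" in normalized -> ChimeAction.FIND
        "ring" in normalized -> ChimeAction.RING
        "check in" in normalized || "check-in" in normalized -> ChimeAction.CHECK_IN
        else -> ChimeAction.UNKNOWN
    }
}
```

Normalizing first keeps the action matcher trivially simple, which matters when phrases are short and noisy.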

Group devices: Participant companions are ESP32/Arduino-based, delivering proximity cues and playing alerts. BLE is used for proximity and command delivery; we added calibration flows to map RSSI to comfort/far thresholds.
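The RSSI-to-threshold mapping above can be sketched as follows. This is a simplified stand-in for the actual calibration flow: the class name, the exponential-moving-average smoothing, and the two calibration points (`comfortRssi`, `farRssi`) are assumptions. The point is that raw BLE RSSI is noisy, so samples are smoothed before being compared against thresholds captured during calibration.

```kotlin
// Sketch: smooth noisy RSSI samples with an exponential moving average,
// then classify against thresholds captured during a calibration step.
// RssiClassifier and its parameters are illustrative assumptions.
enum class ProximityBand { NEAR, COMFORT, FAR }

class RssiClassifier(
    private val comfortRssi: Int,   // RSSI measured at a comfortable distance
    private val farRssi: Int,       // RSSI measured at the alert distance
    private val alpha: Double = 0.3 // EMA smoothing factor
) {
    private var ema: Double? = null

    fun classify(sampleRssi: Int): ProximityBand {
        val prev = ema
        val next = if (prev == null) sampleRssi.toDouble()
                   else alpha * sampleRssi + (1 - alpha) * prev
        ema = next
        return when {
            next >= comfortRssi -> ProximityBand.NEAR
            next >= farRssi -> ProximityBand.COMFORT
            else -> ProximityBand.FAR
        }
    }
}
```

Because of the smoothing, a single outlier reading drops the band at most one step rather than jumping straight to FAR.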

Experience design: Branding across UI and launcher, voice-first flows, and a console-style log for debugging/demo.

Challenges we ran into

Balancing safety cues with non-intrusive behavior: making alerts gentle, rare, and context-aware.
Speech reliability in noisy environments: handling recognizer error 11, partial/no-match spam, and ensuring continuous listening doesn't loop forever.
Mapping RSSI to human-friendly distance: needing a clear calibration flow that doesn't mis-trigger on partial “next” utterances.
Constraining ESP32 audio + BLE timing so commands remain simple and responsive.

Accomplishments that we're proud of

A cohesive, audio-first loop: “Hey Chime…” on the glasses → parsed command → immediate sound/alert on the companion, without pulling out a phone.
Robustness improvements: permission gating, restart backoff, UI state surfacing for audio errors, and debounced calibration so flows don't skip steps.
A friendly, branded UI and launcher that matches the approachable hardware concept, while keeping the on-glasses experience minimal and glance-free.
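The restart backoff mentioned above, which keeps continuous listening from looping forever on recognizer errors, can be sketched like this. The class name and the delay constants are illustrative assumptions: each consecutive error doubles the delay before the next restart attempt up to a cap, and a successful result resets it.

```kotlin
// Sketch: exponential backoff for restarting a speech recognizer after
// consecutive errors. RestartBackoff and its constants are assumptions.
class RestartBackoff(
    private val baseMs: Long = 250,
    private val maxMs: Long = 8_000
) {
    private var failures = 0

    /** Delay to wait before the next restart attempt; doubles per failure. */
    fun nextDelayMs(): Long {
        val delay = (baseMs shl failures.coerceAtMost(10)).coerceAtMost(maxMs)
        failures++
        return delay
    }

    /** Call on a successful recognition to return to fast restarts. */
    fun reset() { failures = 0 }
}
```

Capping both the shift and the delay avoids overflow and keeps worst-case recovery latency bounded.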

What we learned

Voice UX for groups needs forgiveness: debounce, cooldowns, and clear fallbacks matter more than perfect parsing. Calibration must be resilient to partial speech and ambient noise; short utterances need guardrails. There's an important balance to strike between inevitably missed commands and false-positive voice UI interactions.

What’s next for Chime

A QR code on the beacon for quick pairing.
Improve proximity accuracy (multi-sample smoothing, adaptive thresholds, per-user calibration profiles).
Refine the sonic language and add multilingual/locale-aware command sets.
Harden speech in the wild: collect real-world logs, tune wake-word variants, and consider on-device models if available.
Expand field testing in real scenarios (school trips, tours, large events) and iterate on comfort vs. alert thresholds.
Explore richer companion feedback (haptics/LED patterns) that stays aligned with the calm, toy-like personality.
