Hearth
Neighbors helping neighbors. Post a need, meet a helper, get rewarded.
Tracks
Main track: Social Impact
Sponsor tracks:
- Gemini API — need parsing, volunteer matching, and semantic embeddings are all powered by Gemini 2.5 Flash and embedding-001
- ElevenLabs — full voice layer: STT via scribe_v1 for posting needs by voice, TTS via eleven_multilingual_v2 for reading results aloud
- Solana — on-chain volunteer rewards transferred from a community pool wallet on fulfillment, with immutable Memo program receipts
- Vultr — both production servers (frontend + API) run on Vultr Cloud Compute instances, provisioned via OpenTofu
- MongoDB — all community data stored in MongoDB Atlas with 2dsphere geospatial indexes for proximity matching
Inspiration
Every neighborhood has people who need help and people who want to give it. They almost never find each other.
That gap is worst for older adults. The AARP estimates that nearly 90% of adults over 65 want to age in place, in their own homes and communities. But isolation, mobility limitations, and unfamiliarity with modern apps make that harder every year. And the platforms that exist aren't built for them. Craigslist is noisy and sketchy. TaskRabbit requires a credit card and assumes you're comfortable navigating a marketplace. Facebook groups are chaotic and easy to miss. None of these ask "what do you need?" in a way that a 74-year-old who's never downloaded an app can actually answer.
Research on older adult technology use is pretty clear: natural language works better than structured forms, verbal feedback helps people feel confident the system understood them, and interfaces that punish hesitation or imprecision get abandoned fast. They're not less capable; they're just not who the designer had in mind. We built Hearth with that person in mind from the start.
On the volunteer side, the problem is different but just as real. People who want to give their time have no easy way to signal what they can actually do, no way to get matched to something they're suited for, and no acknowledgment that showing up matters.
We wanted to build something that felt less like a marketplace and more like a neighborhood.
What it does
Hearth is a hyperlocal mutual aid platform built around how people actually communicate, especially older adults who prefer natural language over forms and menus.
Post a Need
You describe what you need however comes naturally: typing a sentence, speaking into the mic, or both. No dropdowns, no required fields, no form that rejects you for using the wrong format. ElevenLabs transcribes voice input; Gemini 2.5 Flash parses the free text into a structured request with a category, urgency level, description, and time window. Then it shows you back what it understood in plain English before anything gets posted. If you said "I need someone to pick up my prescriptions sometime this week, nothing urgent," that's what gets posted, interpreted correctly, without you touching a single picker.
Get Matched
When a user opens a need, the platform matches them against available volunteers using semantic embeddings. Volunteer profiles are embedded as vectors using Google's embedding-001 model, so the ranking is based on meaning, not keywords. A need for "help carrying groceries upstairs" correctly surfaces volunteers with transportation and errand skills even if those exact words don't appear in their profile. Gemini 2.5 Flash re-ranks the top candidates with full context before showing them to the requester.
Chat, Fulfill, and Reward
Once matched, the requester and helper get a private in-thread chat to work out the details in plain conversational language. When the need is done, the backend transfers 0.01 devnet SOL from the community donation pool to the helper's Solana wallet. That wallet is auto-generated on signup so volunteers never have to set up a crypto wallet themselves. The transaction shows up on screen with a Solana Explorer link and an immutable on-chain memo receipt.
Community members can donate to keep the reward pool funded via Stripe. Volunteers can also connect a bank account through Stripe Connect Express for fiat payouts.
Voice throughout
TTS reads match results and fulfillment confirmations aloud at three points in the flow. An older adult with arthritis, low vision, or just no patience for small text on a phone screen can complete the full loop without reading a single line of UI copy.
How we built it
Designing for older adults
Before picking a stack we asked: what does someone who has never used a mutual aid app actually need from an interface? No jargon. No account setup before seeing value. No multi-step forms. Big text. Immediate confirmation that the system understood you. The onboarding flow is two questions. Posting a need is one text box or one button. Every action has a clear, plain-English outcome.
The data model reflects this too. Needs are stored as both the original raw text the user typed or spoke and the structured parsed output. We keep the raw text because it carries information the structured fields don't: tone, context, the specific way someone described their situation. If we ever add a care coordinator view or a moderation layer, that raw text is the most human signal we have.
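To make the dual-storage idea concrete, here is a sketch of what such a document could look like. The field names are illustrative assumptions, not the real schema; the source only says that raw text and parsed output are both kept, and that MongoDB 2dsphere indexes are used for proximity matching (which requires GeoJSON-shaped location fields):

```typescript
// Illustrative sketch of a Need document keeping both the raw utterance and
// the structured parse. Field names are assumptions, not the real schema.
interface NeedDoc {
  rawText: string; // exactly what the user typed or spoke
  parsed: {
    category: string;
    description: string;
    urgency: "low" | "medium" | "high";
    timeWindow: string;
  };
  // GeoJSON Point, the shape a MongoDB 2dsphere index expects
  location: { type: "Point"; coordinates: [number, number] };
  createdAt: Date;
}

const example: NeedDoc = {
  rawText:
    "I need someone to pick up my prescriptions sometime this week, nothing urgent",
  parsed: {
    category: "errands",
    description: "Pick up prescriptions from the pharmacy",
    urgency: "low",
    timeWindow: "this week",
  },
  location: { type: "Point", coordinates: [-104.99, 39.74] },
  createdAt: new Date(),
};
```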
Natural language pipeline
The core of the data pipeline is Gemini 2.5 Flash doing structured extraction. Free text comes in, a validated JSON object comes out: { category, description, urgency, timeWindow }. We parse into one of 14 categories covering the most common mutual aid needs. The prompt is designed to be charitable: it infers urgency from phrasing like "as soon as possible" or "no rush," fills in reasonable defaults, and never rejects a request for being ambiguous.
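A minimal sketch of the post-processing step that could sit after the model call, under the assumption that the raw JSON is validated and normalized rather than rejected. The category list and defaults here are illustrative (the real pipeline uses 14 categories), and `normalizeParsedNeed` is a hypothetical helper name:

```typescript
// Hedged sketch: normalize Gemini's JSON output instead of rejecting it.
// Categories and defaults are illustrative, not the project's real list.
type Urgency = "low" | "medium" | "high";

interface ParsedNeed {
  category: string;
  description: string;
  urgency: Urgency;
  timeWindow: string;
}

const CATEGORIES = [
  "groceries", "transportation", "errands",
  "home-repair", "companionship", "other",
];

function normalizeParsedNeed(raw: unknown): ParsedNeed {
  const obj = (typeof raw === "object" && raw !== null ? raw : {}) as Record<string, unknown>;
  // Unknown categories fall back to "other" instead of erroring
  const category = CATEGORIES.includes(String(obj.category))
    ? String(obj.category)
    : "other";
  // Charitable default: missing urgency becomes "medium", never a rejection
  const urgency: Urgency = ["low", "medium", "high"].includes(String(obj.urgency))
    ? (String(obj.urgency) as Urgency)
    : "medium";
  return {
    category,
    description: String(obj.description ?? "").trim(),
    urgency,
    timeWindow: String(obj.timeWindow ?? "flexible"),
  };
}
```

The design choice this illustrates: every branch fills in a reasonable value, so an ambiguous request still produces a postable need.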
Volunteer profiles go through a parallel pipeline. A text embedding of their skills and availability note is generated using embedding-001 and stored as a vector in MongoDB. At match time, cosine similarity against the need's embedding narrows the field before Gemini re-ranks the top candidates.
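The shortlisting step described above can be sketched as plain cosine similarity over stored vectors. This is a simplified model (tiny vectors, a hypothetical `shortlist` helper); real embedding-001 vectors are hundreds of dimensions long, and the top-k result is what gets handed to Gemini for context-aware re-ranking:

```typescript
// Minimal sketch of embedding-based shortlisting before LLM re-ranking.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank volunteers by similarity to the need's embedding; keep the top k.
function shortlist<T extends { embedding: number[] }>(
  needVec: number[],
  volunteers: T[],
  k: number,
): T[] {
  return [...volunteers]
    .sort(
      (x, y) =>
        cosineSimilarity(needVec, y.embedding) -
        cosineSimilarity(needVec, x.embedding),
    )
    .slice(0, k); // this top-k list is what the re-ranker sees
}
```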
Frontend
Next.js 14 (App Router) with Tailwind CSS, deployed in Docker behind Caddy on Vultr. Caddy handles automatic TLS via Let's Encrypt. Mapbox GL JS powers the live neighborhood map.
Backend
Express + TypeScript running on Bun, containerized on a separate Vultr instance. All API routes are typed end-to-end using shared Zod schemas in a monorepo libs package, so request and response shapes are validated at runtime and inferred at compile time with no duplication.
Voice
ElevenLabs scribe_v1 for speech-to-text (raw binary audio streamed from the browser via MediaRecorder), and eleven_multilingual_v2 with Charlotte's voice for text-to-speech.
Blockchain
Solana Web3.js on devnet. One community pool wallet funded by donations. On fulfillment, the backend signs a SOL transfer to the helper's wallet and writes a JSON memo to the chain as an immutable receipt. Keypairs are auto-generated for new users at signup.
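A sketch of what the memo receipt payload might look like. The field names and the `buildMemoReceipt` helper are assumptions for illustration; the source only says a JSON memo is written on-chain. In the real flow, this string would be attached as a Memo program instruction on the same transaction as the SOL transfer:

```typescript
// Illustrative receipt payload destined for the Solana Memo program.
// Field names are assumptions; the source only says a JSON memo is written.
interface FulfillmentReceipt {
  needId: string;
  helperWallet: string;
  lamports: number;    // 0.01 SOL = 10,000,000 lamports
  fulfilledAt: string; // ISO timestamp
}

const LAMPORTS_PER_SOL = 1_000_000_000;

function buildMemoReceipt(
  needId: string,
  helperWallet: string,
  sol: number,
): string {
  const receipt: FulfillmentReceipt = {
    needId,
    helperWallet,
    lamports: Math.round(sol * LAMPORTS_PER_SOL),
    fulfilledAt: new Date().toISOString(),
  };
  return JSON.stringify(receipt); // becomes the Memo instruction's data
}
```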
Payments
Stripe Payment Intents for fiat donations. Stripe Connect Express for volunteer bank onboarding and payouts.
Infrastructure
Fully provisioned with OpenTofu: two Vultr vc2-1c-2gb instances, MongoDB Atlas M0, firewall groups, SSH keys. GitHub Actions builds Docker images, pushes to GHCR, and SSHes into each instance to hot-swap the container on every push to main.
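A minimal sketch of what one of the two instances might look like in OpenTofu, using the Vultr provider. Labels, region, and OS image are illustrative assumptions, not the project's actual values:

```hcl
# Hedged sketch of one compute instance; only the plan name comes from the
# writeup, everything else here is an illustrative placeholder.
terraform {
  required_providers {
    vultr = {
      source = "vultr/vultr"
    }
  }
}

resource "vultr_instance" "api" {
  label  = "hearth-api"     # assumption: any label works
  plan   = "vc2-1c-2gb"     # the plan named in the writeup
  region = "ewr"            # assumption: any Vultr region ID
  os_id  = 1743             # assumption: an Ubuntu image ID
}
```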
Challenges we ran into
Getting voice right for real users, not demos. STT that works for a 25-year-old speaking clearly into a laptop mic is not the same as STT that works for a 72-year-old speaking slowly, with pauses, on a phone in a noisy room. We tested ElevenLabs scribe_v1 with deliberate pacing, mid-sentence corrections, filler words, and accented English. It held up better than we expected. We also made sure the parsed output is always shown back to the user in plain language before anything is submitted, so when transcription or parsing does get something wrong, there is a confirmation step to catch it.
Middleware ordering broke our voice feature. ElevenLabs STT takes raw binary audio. Express's raw() body parser needs to run on that route before json() does. What we missed: cors() also has to come before the raw body route, or the browser's CORS preflight fails before the audio arrives. We spent way too long tracing that back to a two-line ordering issue in the middleware stack.
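The ordering lesson can be modeled without Express itself. Below is a deliberately simplified stand-in for an Express-style middleware chain (not real Express code), showing why `cors()` must run before the raw body route: an OPTIONS preflight carries no body, so a body parser that runs first rejects it before the CORS headers ever go out:

```typescript
// Simplified model of a middleware chain to illustrate the ordering bug.
// This is NOT Express; it just mimics the next()-driven call sequence.
type Req = { method: string; path: string; body?: Uint8Array };
type Res = { status?: number; headers: Record<string, string>; ended: boolean };
type Middleware = (req: Req, res: Res, next: () => void) => void;

function run(middlewares: Middleware[], req: Req): Res {
  const res: Res = { headers: {}, ended: false };
  let i = 0;
  const next = () => {
    if (i < middlewares.length && !res.ended) middlewares[i++](req, res, next);
  };
  next();
  return res;
}

// cors(): answers OPTIONS preflights immediately with CORS headers.
const cors: Middleware = (req, res, next) => {
  res.headers["Access-Control-Allow-Origin"] = "*";
  if (req.method === "OPTIONS") {
    res.status = 204;
    res.ended = true;
    return;
  }
  next();
};

// rawBody(): expects binary audio; a body-less preflight gets rejected.
const rawBody: Middleware = (req, res, next) => {
  if (!req.body) {
    res.status = 400;
    res.ended = true;
    return;
  }
  next();
};

// Correct order: cors first, so the preflight succeeds with 204.
const preflightOk = run([cors, rawBody], { method: "OPTIONS", path: "/api/stt" });
// Wrong order: the body parser 400s the preflight before cors can answer it.
const preflightBroken = run([rawBody, cors], { method: "OPTIONS", path: "/api/stt" });
```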
Next.js caches aggressively in non-obvious ways. After a user completes onboarding, we call router.refresh() to re-fetch their profile from the server. It did nothing, because the profile page was being served from the static cache. The fix was one line (export const dynamic = "force-dynamic"), but finding it took a while.
Stripe rejects self-transfers. We originally wired donations to use transfer_data[destination] pointing at our own platform account ID. Stripe flat-out rejects that. Donations now go directly to the platform account with no destination, which is the right pattern for a community pool.
Gemini's embedding endpoint changed. text-embedding-004 returns a 404 on the v1beta API. The working model is embedding-001. We found this at runtime.
Solana race condition. We were calling writeMemoReceipt() fire-and-forget, then immediately creating the Fulfillment record. The solanaTxHash was always null in the database because the memo transaction hadn't resolved yet. We awaited it and wrapped it in a try/catch so a slow devnet doesn't break the fulfillment flow.
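The shape of the fix can be sketched in isolation. Function and field names here are illustrative (the real code presumably touches the database and Solana RPC), but the pattern is the one described: await the memo write, and swallow its failure so fulfillment still completes:

```typescript
// Sketch of the race-condition fix: await the memo write instead of
// fire-and-forget, but tolerate failure so slow devnet can't block
// fulfillment. Names are illustrative, not the real implementation.
interface FulfillmentRecord {
  needId: string;
  solanaTxHash: string | null;
}

async function fulfill(
  needId: string,
  writeMemoReceipt: () => Promise<string>, // resolves to the tx signature
): Promise<FulfillmentRecord> {
  let solanaTxHash: string | null = null;
  try {
    // Previously un-awaited, so the record was created before this resolved
    // and solanaTxHash was always null in the database.
    solanaTxHash = await writeMemoReceipt();
  } catch {
    // Devnet hiccup: keep the fulfillment, just without an on-chain receipt.
  }
  return { needId, solanaTxHash };
}
```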
Accomplishments that we're proud of
- The full loop works in production: speak a need, get matched, chat to coordinate, mark it done, see a Solana transaction confirm on screen.
- The natural language pipeline handles how people actually talk. "My back's been acting up and I could really use someone to grab a few things from King Soopers this weekend" parses cleanly into a grocery need, medium urgency, weekend time window, with no form fields touched.
- ElevenLabs STT handled slow, hesitant, accented speech reliably in our testing. TTS makes the system feel responsive even if you never look at the screen.
- Every volunteer gets a Solana wallet auto-generated on signup. No seed phrases, no MetaMask, no setup. The blockchain reward is completely invisible until the moment you earn it, which is how it should work for an audience that didn't grow up with crypto.
- The matching is genuinely semantic. Cosine similarity on volunteer embeddings combined with Gemini re-ranking means the top result for "help moving furniture" is a volunteer who listed "home repair and errands," not one who happens to have the word "help" somewhere in their bio.
- The infrastructure is fully reproducible from scratch with tofu apply. Zero manual steps.
What we learned
Design for the hardest user first. Every decision we made to serve older adults (no mandatory structured input, voice-first interaction, plain-language confirmation at every step, large touch targets, immediate feedback) also made the product better for everyone else. Accessible design isn't a constraint. It's a forcing function for clarity.
Natural language is a data quality decision, not just a UX preference. Structured forms produce clean data but also produce abandoned forms. When users describe needs in their own words, they include context ("my doctor appointment is Tuesday and I can't drive since my surgery") that a dropdown never captures. Storing both the raw text and the parsed structure gives you the best of both: machine-readable data for matching and the full human signal for anything that requires judgment.
Gemini 2.5 Flash is really good at structured extraction. Our need-parsing prompt barely needed iteration. It reliably pulls category, urgency, description, and time window from messy, informal language and handles ambiguity gracefully rather than erroring.
ElevenLabs scribe_v1 is production-ready for accessibility use cases. We were genuinely skeptical it would handle slow, hesitant speech. It does.
Infrastructure as code from day one pays off fast. We provisioned a second Vultr instance for the frontend mid-hackathon in about twenty minutes. If we'd been clicking through a dashboard that would have taken much longer and been impossible to reproduce.
What's next for Hearth
Deeper accessibility work. We want to test Hearth with actual older adult users, not our assumptions about them, and instrument where the voice flow breaks down. Confidence scores from STT, re-prompt flows when parsing is ambiguous, and larger-text display modes are all on the list.
Longitudinal data on who gets helped. Right now we store needs, fulfillments, and users. We don't yet track patterns over time: which categories go unfulfilled most often, which neighborhoods have volunteer gaps, what urgency levels are going unmatched. That data is the foundation for connecting with social services and care coordinators who can fill the gaps Hearth can't.
Real Solana rewards. Devnet is a demo. Mainnet with small, real SOL rewards changes the psychological dynamic for volunteers in a way a leaderboard badge doesn't.
Push notifications. A background job that pings nearby volunteers when a high-urgency need posts, especially medical or transportation, would close the loop much faster than relying on volunteers to browse.
Neighborhood groups. Real mutual aid organizing happens in bounded communities: a housing complex, a block, a senior center. Scoped groups with their own moderators and donation pools are the natural next step.
Multilingual support. Gemini handles translation natively and ElevenLabs supports 30+ languages. The full voice-to-fulfillment flow could work in Spanish, Mandarin, or Arabic with relatively small changes, which matters a lot for the communities where mutual aid is most needed.
Partnerships with existing mutual aid networks. The tech works, but adoption is a community-organizing problem. Partnering with established local mutual aid groups, senior centers, libraries, and community health workers who already have trust is the path to real impact.