Inspiration
Scholarships are one of the few ways students can radically change their financial reality, but the process feels like a second unpaid job.
When we talked to international and first-generation students, a pattern kept coming up:
- 30–40 browser tabs open with half-read eligibility pages
- Fear of “wasting” a great essay on the wrong scholarship
- Confusion about what actually makes a winning application
Existing tools mostly scrape lists and dump them into a search box. They don’t answer the real questions students have:
“Which scholarships are truly for me? How do I write something that actually wins?”
GoGetScholarship is our attempt to build the tool we wish we had: an AI coach that sits beside you from match → plan → essay, not just a directory of links.
What the project does
GoGetScholarship is an AI-powered scholarship application assistant that:
Finds real matches, not spammy lists
- Students fill out a short onboarding form (academics, background, activities, constraints).
- A matching engine combines hard filters (eligibility rules) with vector search + LLM reranking to surface scholarships that are:
- Likely eligible
- Aligned with the student’s profile and goals
- Worth the effort (amount vs workload)
- Each result shows:
- A plain-language “Why this fits you” explanation
- “Eligibility flags” (e.g., “citizenship unclear”, “minimum GPA not confirmed”).
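The hard-filter → vector-search → rerank pipeline above can be sketched roughly as follows. This is a minimal illustration, not our production code: the field names, the inline cosine similarity, and the treatment of unclear fields are all assumptions, and the LLM rerank step is left as a comment.

```typescript
// Sketch of the match pipeline: hard eligibility filters first, then
// embedding similarity to rank the survivors. Field names are illustrative.
interface Profile { gpa: number; citizenship: string; embedding: number[]; }
interface Scholarship {
  name: string;
  minGpa?: number;            // undefined = eligibility unclear
  citizenships?: string[];    // undefined = open to all
  embedding: number[];
}

function passesHardFilters(p: Profile, s: Scholarship): boolean {
  if (s.minGpa !== undefined && p.gpa < s.minGpa) return false;
  if (s.citizenships && !s.citizenships.includes(p.citizenship)) return false;
  return true; // unclear fields pass here and get a UI "eligibility flag" later
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function matchScholarships(p: Profile, all: Scholarship[]): Scholarship[] {
  return all
    .filter(s => passesHardFilters(p, s))
    .map(s => ({ s, score: cosine(p.embedding, s.embedding) }))
    .sort((x, y) => y.score - x.score)
    .map(x => x.s); // an LLM reranker would then re-order this top-k list
}
```

Note that scholarships with *unclear* eligibility are not dropped by the hard filter; they survive to the ranking stage and surface an eligibility flag in the UI instead.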
Turns matches into an application game-plan
- From the match page, students can swipe / shortlist scholarships Tinder-style.
- Saved items appear on a Scholarship Planner dashboard:
- Deadlines laid out on a timeline
- Auto-generated task checklist (references, transcripts, essays, forms)
- Simple workload estimate so students don’t overload one week.
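One simple way to compute the "don't overload one week" signal is to bucket saved scholarships by deadline week and flag weeks whose summed effort exceeds a budget. A minimal sketch, where the per-task hour estimates and the 10-hours-per-week budget are assumptions, not our tuned values:

```typescript
// Sketch of a per-week workload estimate: bucket saved scholarships by
// ISO deadline week and flag weeks whose summed effort exceeds a budget.
interface SavedItem { name: string; deadline: Date; effortHours: number; }

function isoWeekKey(d: Date): string {
  const t = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate()));
  t.setUTCDate(t.getUTCDate() + 4 - (t.getUTCDay() || 7)); // shift to the week's Thursday
  const yearStart = new Date(Date.UTC(t.getUTCFullYear(), 0, 1));
  const week = Math.ceil(((t.getTime() - yearStart.getTime()) / 86400000 + 1) / 7);
  return `${t.getUTCFullYear()}-W${week}`;
}

function overloadedWeeks(items: SavedItem[], budgetHours = 10): string[] {
  const perWeek = new Map<string, number>();
  for (const it of items) {
    const k = isoWeekKey(it.deadline);
    perWeek.set(k, (perWeek.get(k) ?? 0) + it.effortHours);
  }
  return [...perWeek.entries()].filter(([, h]) => h > budgetHours).map(([k]) => k);
}
```

The planner can then nudge students to start flagged weeks' tasks earlier rather than cutting scholarships.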
Coaches essays instead of auto-writing them
- A dedicated Essay Workspace with a rich-text editor.
- The AI coach reads:
- The scholarship prompt and rubric
- The student’s profile + uploaded résumé/activities
- Relevant winner stories & successful essay snippets from similar awards
- It can:
- Propose structured outlines
- Suggest concrete story angles from the student’s own background
- Give rubric-aware feedback (“your leadership example is vague”, “impact not quantified”)
- Show before/after line edits with explanations, not just rewrites.
Leverages “winner stories” as live teaching material
- We index public scholarship winner stories and example essays in our RAG system.
- The coach can say:
“Here’s how a past winner answered a similar leadership prompt, and what made it strong. Now let’s adapt the structure—not the wording—to your experience.”
The result is a guided flow: discover → prioritize → plan → draft → refine.
How we built it
Architecture
Frontend
- React + TypeScript with TanStack Start, Tailwind, and shadcn/ui.
- Key screens:
- Onboarding & profile
- Match / swipe interface
- Scholarship detail + “Why this fits you”
- Planner dashboard with task list
- Essay Workspace (rich editor + AI sidebar)
- Winner Stories library.
Backend & AI service
- Node.js service exposing the core AI operations:
- `match_scholarships(profile)`
- `explain_match(user, scholarship)`
- `plan_application(user, scholarship)`
- `coach_essay(draft, prompt, user_profile)`
Data & Retrieval
- Normalized scholarship JSON schema (amounts, eligibility, effort, categories).
- Postgres + pgvector-style store for:
- Scholarship embeddings
- Winner-story / essay chunk embeddings
- Ingestion scripts to:
- Clean messy text
- Extract structured eligibility fields
- Tag scholarships (STEM, leadership, community, etc.).
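A minimal version of the normalized record and the tagging step might look like the following. The field names, the keyword lists, and the substring-matching tagger are illustrative assumptions; our real ingestion combines heuristics with LLM extraction.

```typescript
// Sketch of the normalized scholarship record plus a naive keyword tagger
// used during ingestion. Fields and keyword lists are illustrative.
interface ScholarshipRecord {
  id: string;
  title: string;
  amountUsd: number | null;      // null = amount not stated in the source text
  eligibility: { minGpa?: number; citizenships?: string[] };
  effort: "low" | "medium" | "high";
  categories: string[];
}

const CATEGORY_KEYWORDS: Record<string, string[]> = {
  STEM: ["engineering", "science", "math", "technology"],
  leadership: ["leader", "leadership", "president"],
  community: ["volunteer", "community", "service"],
};

function tagCategories(description: string): string[] {
  const text = description.toLowerCase();
  return Object.entries(CATEGORY_KEYWORDS)
    .filter(([, words]) => words.some(w => text.includes(w)))
    .map(([cat]) => cat);
}
```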
AI techniques
- Embedding-based retrieval to build an initial candidate list.
- LLM reranking with an explicit scoring rubric (eligibility, fit, impact, effort).
- RAG for essays:
- Context = {prompt, rubric, student profile, relevant winner snippets}.
- Guardrails:
- The model cannot fabricate eligibility (“I’m not sure” instead of guessing).
- No “one-click essay”; the UX emphasizes revision and reflection.
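The explicit rerank rubric can be sketched as the LLM returning per-candidate subscores that we combine with fixed weights. The weights, the 0–5 scale, and the inverted effort term below are illustrative assumptions, not our tuned values:

```typescript
// Sketch of rubric-based reranking: the LLM scores each candidate on the
// rubric dimensions (0-5) and we combine them with fixed weights.
interface RubricScores { eligibility: number; fit: number; impact: number; effort: number; }

const WEIGHTS = { eligibility: 0.4, fit: 0.3, impact: 0.2, effort: 0.1 };

function combinedScore(r: RubricScores): number {
  // effort is inverted: a lower-effort application scores higher
  return WEIGHTS.eligibility * r.eligibility
       + WEIGHTS.fit * r.fit
       + WEIGHTS.impact * r.impact
       + WEIGHTS.effort * (5 - r.effort);
}

function rerank<T extends { rubric: RubricScores }>(candidates: T[]): T[] {
  return [...candidates].sort((a, b) => combinedScore(b.rubric) - combinedScore(a.rubric));
}
```

Keeping the rubric explicit (rather than asking the model for a single opaque score) is also what lets the UI show "Why this fits you" per dimension.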
Challenges we faced
Data quality and eligibility ambiguity
Scholarship descriptions are often vague (“preference given to…”, “typically awarded to…”).
We had to design prompts and heuristics that:
- Extract useful structure without over-claiming certainty.
- Propagate uncertainty forward into the UI (flags, confidence notes).
Balancing fairness and personalization
It’s easy for a recommender to keep surfacing only “obvious” big awards while hiding smaller niche ones.
We iterated on:
- A scoring function that explicitly balances fit, equity, and effort.
- Surfacing a mix of “safe”, “reach”, and “hidden gem” scholarships.
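One way to guarantee that mix is to bucket ranked results and interleave the buckets instead of showing a single sorted list. A sketch under assumed thresholds (the 0–1 `fit`/`popularity` scores and cutoffs below are illustrative):

```typescript
// Sketch of mixing "safe", "reach", and "hidden gem" results so small
// niche awards are not crowded out by obvious big ones.
type Bucket = "safe" | "reach" | "hidden-gem";
interface Ranked { name: string; fit: number; popularity: number; } // both 0-1

function bucketOf(s: Ranked): Bucket {
  if (s.fit >= 0.7 && s.popularity < 0.3) return "hidden-gem";
  return s.fit >= 0.5 ? "safe" : "reach";
}

function mixedList(items: Ranked[], perBucket = 2): Ranked[] {
  const buckets: Record<Bucket, Ranked[]> = { safe: [], reach: [], "hidden-gem": [] };
  for (const s of items) buckets[bucketOf(s)].push(s);
  // interleave: one item from each bucket per round, up to perBucket rounds
  const out: Ranked[] = [];
  for (let i = 0; i < perBucket; i++)
    for (const b of ["safe", "hidden-gem", "reach"] as Bucket[])
      if (buckets[b][i]) out.push(buckets[b][i]);
  return out;
}
```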
Latency and context limits
- Combining scholarship data, user profile, and winner stories can explode the prompt size.
- We built a thin retrieval layer that trims context to only:
- The top-k most relevant scholarship fields
- 2–3 winner snippets
- The minimal user profile needed for the current task.
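The trimming layer above amounts to ranking candidate context pieces and greedily packing them into a budget. A minimal sketch, where k, the character budget, and the packing order are assumptions:

```typescript
// Sketch of the thin retrieval layer: keep only the top-k winner snippets,
// then greedily add context parts until a character budget is hit.
interface Snippet { text: string; score: number; }

function buildContext(
  scholarshipFields: string[],
  snippets: Snippet[],
  profileSummary: string,
  maxChars = 6000,
  k = 3,
): string {
  const topSnippets = [...snippets]
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(s => s.text);
  // profile first, then scholarship fields, then winner snippets
  const parts = [profileSummary, ...scholarshipFields, ...topSnippets];
  const kept: string[] = [];
  let used = 0;
  for (const p of parts) {
    if (used + p.length > maxChars) break;
    kept.push(p);
    used += p.length;
  }
  return kept.join("\n\n");
}
```

Ordering parts by importance before packing means that when the budget is tight, it is always the lowest-value snippets that get dropped first.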
Accomplishments we’re proud of
An end-to-end demo where a student can:
- Onboard,
- See tailored matches with explanations,
- Add them to a planner,
- Start an essay and get rubric-aware feedback, all using the same underlying AI backbone.
Integrating winner stories not just as static reading material, but as a live part of the AI’s reasoning and suggestions.
A UX that positions AI as a coach, not a ghostwriter, which is crucial for academic integrity and for students actually understanding their own story.
What we learned
Matching is only half the problem.
Students don’t just need to know what to apply to; they need help turning their lived experiences into compelling narratives.
Good explanations beat “magic”.
Early tests where the AI simply said “this is a strong match” felt untrustworthy. Adding explicit reasoning (“because you…”) dramatically changed how confident students felt.
What’s next
If we keep building this beyond the hackathon, we’d like to:
- Scale the scholarship dataset and automate ingestion from more regions.
- Add outcome tracking (submitted, shortlisted, won) to learn which matches and essay patterns actually work.
- Build a counselor/mentor view so advisors can see a student’s planner and essay drafts and add their own feedback.
- Experiment with fairness metrics (e.g., making sure under-represented students still see high-quality opportunities).
Our goal is to make “I don’t know where to start” disappear from scholarship conversations—and replace it with a clear plan, a confident story, and a realistic path to funding.
Built With
- claude
- neon
- pgvector
- postgresql
- react
- tailwind
- tanstack
- typescript