Atryn
Get rid of those cold emails. Get rid of those AI cover letters. Find research using your real personality.
An AI-powered research discovery platform that turns natural language into matched professor connections, with video introductions built in.
Inspiration
Every undergraduate researcher at the University of Toronto goes through the same ritual. You want to get involved in research. You have interests. You open a browser tab.
Then it hits you: there is no front door.
You end up in a maze of departmental websites, each formatted differently, each last touched years ago. You're reading faculty bios that say "interests include machine learning" without telling you whether they take undergrads, whether they're funded, or whether your email will disappear into a void. You send cold emails into the dark. You wait.
The university has hundreds of research labs. The information exists. It's just scattered, buried, and inaccessible to anyone who doesn't already know where to look. Seniority and social capital determine who gets a position, not capability, not fit, not interest.
We kept asking: why does this have to be a cold email problem? So we built the front door that should have existed already.
What It Does
Atryn is a research discovery and application platform for the University of Toronto. Students have a natural language conversation with an AI assistant, get matched to relevant labs, and submit video introductions directly. Professors get a dashboard to review applicants and update their status.
Student's View
| Step | What Happens |
|---|---|
| Chat | Type "I'm interested in machine learning and NLP" and Atryn responds with matched labs and inline cards |
| Browse | Click a lab card for full detail: description, research areas, professor contact |
| Apply | Hit "Express Interest" to record a 60-second video intro or upload a file |
| Track | Dashboard shows every submission with status: pending / shortlisted / rejected |
Professor's View
| Step | What Happens |
|---|---|
| Review | See all submissions for their lab, tabbed by status |
| Watch | Click to watch each student's recorded video intro |
| Decide | One click to shortlist or reject |
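Under the hood, that one-click decision presumably maps to a single status update in DynamoDB. A minimal sketch of how the update parameters could be built, assuming a `Submissions` table keyed by `submissionId` (the table name, key, and attribute names are illustrative, not Atryn's actual schema):

```typescript
// Hypothetical sketch: one-click shortlist/reject as a DynamoDB status update.
// Table name, key, and attribute names are assumptions, not the real schema.

type SubmissionStatus = "pending" | "shortlisted" | "rejected";

interface UpdateParams {
  TableName: string;
  Key: { submissionId: string };
  UpdateExpression: string;
  ExpressionAttributeNames: Record<string, string>;
  ExpressionAttributeValues: Record<string, SubmissionStatus>;
}

// Builds input suitable for an UpdateCommand (@aws-sdk/lib-dynamodb).
// "#s" aliases "status", which is a DynamoDB reserved word.
function buildStatusUpdate(
  submissionId: string,
  status: SubmissionStatus,
): UpdateParams {
  return {
    TableName: "Submissions",
    Key: { submissionId },
    UpdateExpression: "SET #s = :s",
    ExpressionAttributeNames: { "#s": "status" },
    ExpressionAttributeValues: { ":s": status },
  };
}
```

Keeping the params builder pure makes the Lambda handler a thin wrapper: parse the route, call `buildStatusUpdate`, send the command.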
How We Built It
Architecture
```
Next.js Frontend (App Router, TypeScript, React, Framer Motion)
        |
AWS Amplify (hosting)
        |
AWS API Gateway (API endpoints)
        |
AWS Lambda (API handlers)
        |
  ┌─────────┼──────────┐
Bedrock     S3      DynamoDB
(Claude)  (videos)    (data)
```
Conversational Discovery
Every chat message runs through a three-step pipeline: keyword scoring against the labs database (topic match +3, category match +2, name match +2, free-text match +1), context injection into the Bedrock Converse API call alongside the full conversation history, and finally the response text returned with structured lab cards rendered inline.
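The scoring step can be sketched as a pure function using the weights above. This assumes a simple case-insensitive substring match per field; the `Lab` shape and field names are illustrative, not the actual database schema:

```typescript
// Sketch of the keyword-scoring step. Lab shape and matching strategy
// (case-insensitive substring) are assumptions, not the real implementation.

interface Lab {
  name: string;
  category: string;
  topics: string[];
  description: string;
}

// Weights from the pipeline: topic +3, category +2, name +2, free text +1.
function scoreLab(lab: Lab, keywords: string[]): number {
  let score = 0;
  for (const kw of keywords) {
    const k = kw.toLowerCase();
    if (lab.topics.some((t) => t.toLowerCase().includes(k))) score += 3;
    if (lab.category.toLowerCase().includes(k)) score += 2;
    if (lab.name.toLowerCase().includes(k)) score += 2;
    if (lab.description.toLowerCase().includes(k)) score += 1;
  }
  return score;
}
```

Ranking is then just a sort by score, with the top labs injected into the model's context.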
The system prompt is explicitly constrained: "Only discuss research labs, professors, and research topics." This keeps the model from drifting into campus services, mental health resources, and other adjacent-but-irrelevant territory.
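Putting the two steps together, the context-injection call might assemble a Converse request like the following. The shape matches the `ConverseCommand` input in `@aws-sdk/client-bedrock-runtime`; `buildConverseInput`, the `ChatTurn` type, and the exact context formatting are assumptions for illustration:

```typescript
// Illustrative sketch of assembling a Bedrock Converse request.
// buildConverseInput and ChatTurn are hypothetical names; the object shape
// follows the ConverseCommand input (modelId, system, messages).

interface ChatTurn {
  role: "user" | "assistant";
  text: string;
}

function buildConverseInput(
  modelId: string,
  labContext: string,
  history: ChatTurn[],
) {
  return {
    modelId,
    // Constrained system prompt plus the scored-lab context.
    system: [
      {
        text:
          "Only discuss research labs, professors, and research topics. " +
          "Relevant labs:\n" + labContext,
      },
    ],
    // Full conversation history is re-sent on every turn.
    messages: history.map((t) => ({
      role: t.role,
      content: [{ text: t.text }],
    })),
  };
}
```

The handler would pass this object to `ConverseCommand` and return the model's reply text alongside the structured lab cards.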
Video Introductions
The VideoRecorder component uses the browser's native MediaRecorder API. Students record up to 60 seconds in the browser (WebM), preview it, re-record if needed, and submit. Videos upload to S3 and the public URL is stored with the submission record.
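The recording flow above can be sketched as follows. This is a browser-only sketch: the 60-second cap and WebM container come from the description, while the helper names (`pickMimeType`, `recordIntro`) and codec candidates are illustrative. The MIME-type check is factored out so it stays testable outside a browser:

```typescript
// Browser-only sketch of the VideoRecorder flow. Helper names and codec
// candidates are illustrative; only the 60s/WebM constraints come from Atryn.

// Pick the first MIME type the browser can record; the isSupported predicate
// is injectable so this logic can run without a real MediaRecorder.
function pickMimeType(
  candidates: string[],
  isSupported: (t: string) => boolean,
): string | undefined {
  return candidates.find(isSupported);
}

async function recordIntro(maxMs = 60_000): Promise<Blob> {
  // Ask for camera + mic, then record into in-memory chunks.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  const mimeType =
    pickMimeType(
      ["video/webm;codecs=vp9", "video/webm"],
      MediaRecorder.isTypeSupported,
    ) ?? "video/webm";
  const recorder = new MediaRecorder(stream, { mimeType });
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  return new Promise((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((t) => t.stop());
      resolve(new Blob(chunks, { type: mimeType }));
    };
    recorder.start();
    setTimeout(() => recorder.stop(), maxMs); // enforce the 60-second cap
  });
}
```

The resulting Blob can then be previewed via `URL.createObjectURL` before being uploaded to S3.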
What's Next
- Extending the professor connection beyond a one-click shortlist/reject decision
- UI polish and refinement
- Broader UX improvements and more efficient data storage in AWS
Built With
- amazon-bedrock
- amazon-dynamodb
- amazon-web-services
- anthropic
- aws-amplify
- aws-lambda
- next.js
- react
- tailwindcss
- typescript
