Inspiration

The inspiration for building Resignal, an AI-powered mock interview app, comes from a simple yet persistent gap in how people prepare for interviews. Interviewing is a skill that improves through deliberate practice and timely feedback, yet most candidates prepare in isolation, rely on generic advice, or receive limited and subjective input from friends. As a result, many interviews remain one-off experiences, and valuable learning opportunities are lost.

This app aims to turn every interview into a learning opportunity. By simulating realistic interview scenarios, providing structured and actionable feedback, and helping users reflect on what went well and what didn’t, the app enables interviewees to practice with purpose.

What it does

This app provides a realistic, AI-powered mock interview experience. It evaluates responses in real time and delivers structured, actionable feedback on areas such as answer structure, clarity, and content quality, while allowing users to ask follow-up questions based on that feedback. Each session is recorded and summarized so users can reflect on their performance, track progress over time, and learn from every interview. By turning practice into a continuous feedback loop, the app helps candidates build stronger interview skills and approach real interviews with confidence.

How we built it

Resignal is built on a modular AI architecture designed to support thoughtful reflection rather than generic automation. The system intentionally separates interview simulation, response analysis, and feedback generation into distinct components, allowing each part to evolve independently while keeping the overall experience focused and coherent.

At its core, Resignal uses large language models to analyze interview transcripts through structured prompts aligned with predefined evaluation rubrics. We currently rely on Gemini AI for its strong reasoning capabilities and consistency in long-form analysis. Instead of asking the model to simply “rate” answers, we guide it to examine clarity, structure, confidence, and communication patterns in a way that mirrors how experienced interviewers reflect after a conversation.
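A minimal sketch of what a rubric-aligned prompt like this could look like, assuming a small set of dimensions; the dimension names, guidance wording, and `buildAnalysisPrompt` helper are illustrative, not Resignal's actual prompts:

```typescript
// Hypothetical sketch: assembling a rubric-aligned analysis prompt.
// Dimensions and wording are illustrative assumptions.
interface RubricDimension {
  name: string;
  guidance: string;
}

const rubric: RubricDimension[] = [
  { name: "clarity", guidance: "Is the main point stated early and unambiguously?" },
  { name: "structure", guidance: "Does the answer follow a recognizable arc (e.g. situation, action, result)?" },
  { name: "confidence", guidance: "Note hedging, filler, or self-undermining phrasing." },
];

function buildAnalysisPrompt(transcript: string): string {
  const dimensions = rubric.map((d) => `- ${d.name}: ${d.guidance}`).join("\n");
  return [
    "You are reviewing an interview transcript as an experienced interviewer would.",
    "For each dimension below, describe the patterns you observe, quoting short evidence.",
    "Do not assign numeric scores or a pass/fail verdict.",
    "",
    "Dimensions:",
    dimensions,
    "",
    "Transcript:",
    transcript,
  ].join("\n");
}
```

Framing the instructions around observable patterns, rather than ratings, is what keeps the output closer to an interviewer's post-conversation reflection.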

On the client side, the iOS app is built entirely with SwiftUI, following a clean MVVM architecture. This allowed us to design a minimal, distraction-free interface that keeps users focused on speaking, reflecting, and understanding feedback rather than navigating complex UI. Recording sessions are always user-initiated and clearly bounded, reinforcing trust and intentional use.

The backend is implemented with Fastify and deployed on Vercel; it acts as a lightweight orchestration layer rather than a heavy server. Its primary role is to securely package transcripts, interview context, and relevant attachments, then route them to the Gemini API for analysis. This design keeps the system flexible, scalable, and easy to iterate on while minimizing unnecessary data retention.
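The packaging step can be sketched as a pure function that builds the Gemini `generateContent` request; a Fastify route would forward this payload with `fetch` and discard it once the response is returned. The `jobDescription` field and the model choice here are illustrative assumptions, not the actual schema:

```typescript
// Sketch of the request the orchestration layer might send to Gemini.
// The endpoint follows the public generateContent REST shape; field
// names like `jobDescription` are our own illustrative assumptions.
interface AnalysisRequest {
  transcript: string;
  jobDescription?: string;
}

function buildGeminiRequest(input: AnalysisRequest): {
  url: string;
  body: { contents: { parts: { text: string }[] }[] };
} {
  const prompt = [
    "Analyze this interview transcript against the evaluation rubric.",
    input.jobDescription ? `Job context:\n${input.jobDescription}` : "",
    `Transcript:\n${input.transcript}`,
  ]
    .filter(Boolean)
    .join("\n\n");

  return {
    url:
      "https://generativelanguage.googleapis.com/v1beta/models/" +
      "gemini-1.5-flash:generateContent",
    body: { contents: [{ parts: [{ text: prompt }] }] },
  };
}
```

Because the function is pure, the route handler stays thin: it authenticates, calls `buildGeminiRequest`, forwards the result, and retains nothing.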

In addition to structured feedback, Resignal includes an Ask feature that allows users to follow up with specific questions about their performance. This supports deeper reflection by letting users explore alternative answers, clarify weaknesses, or rehearse improved responses, turning feedback into an ongoing conversation rather than a static report.

Throughout development, we were careful not to design Resignal as a tool that “optimizes” interviews in a shallow way. Instead, the system is built to surface signal from noise: helping users notice patterns they might otherwise miss, without replacing their own judgment. The technical choices reflect this philosophy: modular, transparent, and intentionally constrained.

Challenges we ran into

Defining the Right Scope

The hardest product decisions weren’t about what Resignal should do, but about what it should not do.

Early on, it was tempting to expand into real-time coaching, scoring systems, or hiring recommendations. Each addition pushed the product away from reflection and toward authority. We repeatedly asked: What role should this app play in the interview process?

We chose a narrow but powerful answer: Resignal exists after the interview, not during it, and never on behalf of the interviewer. Its value is helping candidates extract signal from ambiguity, not telling them what to think.


Naming the Product

Naming the app became a boundary-defining exercise.

We wanted a name that evoked reflection, signal extraction, and reframing. Resignal came from the idea that interviews are noisy, emotional, and often unclear. The app helps users “re-signal”: reinterpret what happened with distance and structure.

The name clarified positioning: this is not a hiring tool, not a score, and not an automation engine. It’s a thinking tool.


Avoiding Over-Automation

A constant temptation was to let the app “decide” more.

LLMs make it easy to summarize, score, and conclude. But doing so reduced trust and increased dependence. We intentionally avoided:

  • Pass/fail judgments
  • Single numeric scores
  • Absolute, prescriptive next steps

This restraint required more design effort, not less. The goal was to be informative without being authoritative.


Balancing Simplicity and Depth

Interviews are complex, but the product couldn’t feel heavy.

The MVP focused on:

  • One interview at a time
  • One core transcript
  • One feedback surface
  • One follow-up entry point (“Ask”)

Any new feature had to justify its value against this minimal baseline. If it increased cognitive load, it usually didn’t make the cut.


Context Control on the Backend

More context doesn’t always produce better answers.

We had to decide:

  • Which transcript segments actually matter
  • When job descriptions help versus distract
  • How much history to include without bloating prompts

This led to a two-step approach:

  1. Classify user intent
  2. Assemble only the relevant context

The result was smaller prompts, better responses, and a system that was easier to reason about.
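The two steps above can be sketched roughly as follows; the intent labels, the keyword classifier standing in for a model call, and the per-intent context rules are all illustrative assumptions, not the real taxonomy:

```typescript
// Hedged sketch of the two-step context pipeline: classify the user's
// question first, then assemble only the context that intent needs.
type Intent = "clarify_feedback" | "rehearse_answer" | "compare_to_role";

interface SessionContext {
  transcriptSegments: string[];
  jobDescription: string;
  priorFeedback: string;
}

// Step 1: a cheap keyword classifier standing in for a model call.
function classifyIntent(question: string): Intent {
  const q = question.toLowerCase();
  if (q.includes("rehearse") || q.includes("better answer")) return "rehearse_answer";
  if (q.includes("role") || q.includes("job")) return "compare_to_role";
  return "clarify_feedback";
}

// Step 2: include only what that intent needs, keeping prompts small.
function assembleContext(intent: Intent, ctx: SessionContext): string[] {
  switch (intent) {
    case "clarify_feedback":
      return [ctx.priorFeedback];
    case "rehearse_answer":
      return [...ctx.transcriptSegments, ctx.priorFeedback];
    case "compare_to_role":
      return [ctx.jobDescription, ctx.priorFeedback];
  }
}
```

The payoff of the split is that each half can be tuned independently: the classifier can become a model call without touching the assembly rules, and vice versa.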


Future-Proofing Without Overcommitting

We made architectural choices to keep Resignal flexible early on:

  • Keep the backend thin
  • Avoid model-specific logic
  • Treat prompts and rubrics as evolving artifacts

Building Resignal reinforced a core lesson: software shapes behavior not only through what it does, but through what it intentionally refuses to do.

Accomplishments that we're proud of

We’re proud to have built an end-to-end AI interview system that goes beyond simple question generation to deliver consistent, actionable feedback at scale. A key accomplishment is our structured evaluation framework, which translates open-ended interview responses into stable, rubric-based signals across multiple dimensions, including strengths, areas for improvement, and key observations.
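One way stable, rubric-based signals like these can be enforced is to validate the model's output against a fixed shape before it ever reaches the UI; this sketch assumes a hypothetical `StructuredFeedback` schema built from the three dimensions named above, not Resignal's actual types:

```typescript
// Illustrative shape of a session's structured feedback; field names
// are assumptions derived from the dimensions described in the text.
interface StructuredFeedback {
  strengths: string[];
  areasForImprovement: string[];
  keyObservations: string[];
}

// Reject any model output that doesn't match the expected shape, so
// downstream code only ever sees stable, rubric-based signals.
function parseFeedback(raw: string): StructuredFeedback | null {
  try {
    const data = JSON.parse(raw);
    const ok = ["strengths", "areasForImprovement", "keyObservations"].every(
      (k) =>
        Array.isArray(data[k]) &&
        data[k].every((item: unknown) => typeof item === "string")
    );
    return ok ? (data as StructuredFeedback) : null;
  } catch {
    return null;
  }
}
```

Returning `null` on malformed output gives the caller a single retry point instead of scattering defensive checks through the UI.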

We also successfully implemented real-time response analysis with contextual follow-ups, enabling users to engage in a dynamic feedback dialogue rather than receiving static summaries. This interaction loop significantly enhances learning efficiency and mirrors the iterative guidance a candidate would receive from a human coach.

Additionally, we built a flexible foundation that supports rapid iteration without sacrificing simplicity, allowing us to continuously improve the system and expand its capabilities.

What we learned

Working on Resignal reinforced several lessons that weren’t obvious at the beginning and, in some cases, ran counter to our initial instincts.

Minimal interfaces demand more thinking, not less. Removing features doesn’t simplify the work; it concentrates it. Every missing button forces a clearer decision about intent, flow, and responsibility. The restraint in Resignal’s UI required more product thinking than adding dashboards, scores, or toggles ever would have.

Decision-support tools must respect human judgment instead of replacing it. We deliberately avoided letting the system issue verdicts or final conclusions. Interviews are subjective, emotional, and context-heavy. Our role wasn’t to decide for the user, but to surface signal in a way that preserved agency and reflection.

Constraints aren’t limitations; they’re protection. By defining boundaries early (post-interview only, no scoring, no automated decisions), we protected the product from drifting into something louder, heavier, and less thoughtful. Many of the hardest decisions were about saying no to features that would have made the app more impressive, but less honest.

Clarity is something you have to design for deliberately. It doesn’t emerge automatically from more data, larger models, or deeper automation. Clarity came from choosing what context to include, what to exclude, how to frame questions, and when to stop the system from doing more, even when it could.

Resignal didn’t just teach us how to build an AI-powered product. It forced us to confront how software subtly shapes judgment, and how careful design can create space for people to think more clearly, rather than less.

What's next for Resignal

Resignal’s next phase will focus on depth over breadth. Rather than expanding aggressively, we will prioritize:

  • Refining core workflows based on real usage patterns to ensure the app is intuitive and effective.
  • Calibrating evaluation rubrics across roles, seniority levels, and industries, including improving scoring consistency, expanding domain-specific knowledge, and establishing stronger benchmarks for high-quality answers.
  • Enhancing the session-level learning system by aggregating insights across interviews, turning isolated practice sessions into a longitudinal learning experience where users can track progress, identify recurring gaps, and build confidence over time.

Built With

  • SwiftUI
  • Fastify (deployed on Vercel)
  • Gemini API
