Inspiration
We noticed a critical gap in AI: models often overlook voices from marginalized communities because their knowledge is underrepresented in mainstream training datasets. This perpetuates bias in AI reasoning, especially around sustainability and social justice. EquiNet was inspired by the idea of a universal embedding space that centers these underrepresented voices so AI can reason more inclusively.
What it does
EquiNet transforms fragmented, marginalized knowledge into a shared neural representation space. Users can:
- Query this knowledge space using natural language.
- Retrieve results from grassroots blogs, local news, NGO reports, oral histories, and academic sources.
- Visualize the results in an interactive fairness map that shows bias and representation.
- Save queries, track fairness over time, and generate insight reports.
At its core, EquiNet is an AI-powered fairness explorer that bridges knowledge gaps to make AI reasoning more equitable.
How we built it
We combined:
- Data Collection: Crawled grassroots blogs and local news sites, and compiled PDF reports from NGOs and think tanks.
- Data Processing: Extracted text and metadata, cleaned and normalized datasets.
- Representation Learning: Used a multilingual embedding model to vectorize the content.
- Bias Alignment: Applied embedding normalization to equalize representation of voices.
- Search & Retrieval: Implemented a FAISS-based similarity search for efficient querying.
- Frontend: Built an intuitive interface with query input, results feed, fairness visualization, and dashboards.
- Authentication: Integrated Auth0 for user profiles and saved queries.
- Backend: Designed REST APIs to handle querying, saving, visualization generation, and user dashboards.
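The retrieval core of the pipeline above (vectorize, normalize, then similarity search) can be sketched in a few lines. This is a toy illustration with random vectors standing in for the multilingual model's output, and it uses plain NumPy in place of FAISS; a FAISS `IndexFlatIP` over L2-normalized vectors computes the same cosine ranking at scale.

```python
import numpy as np

def l2_normalize(vectors: np.ndarray) -> np.ndarray:
    """Scale each embedding to unit length so inner product equals cosine similarity."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / np.clip(norms, 1e-12, None)

def search(query_vec: np.ndarray, corpus: np.ndarray, k: int = 3) -> list[int]:
    """Return indices of the k most similar documents, best match first."""
    corpus_n = l2_normalize(corpus)
    q = query_vec / max(np.linalg.norm(query_vec), 1e-12)
    scores = corpus_n @ q               # cosine similarity against every document
    return np.argsort(-scores)[:k].tolist()

# Toy 4-dim "embeddings" standing in for multilingual model output.
rng = np.random.default_rng(0)
docs = rng.normal(size=(5, 4))
query = docs[2] + 0.01 * rng.normal(size=4)  # near-duplicate of document 2
print(search(query, docs, k=1))
```

With real data, `docs` would come from an embedding model and `search` would be backed by a FAISS index, but the normalization-then-inner-product logic is the same.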
Challenges we ran into
- Data Diversity: Finding high-quality datasets for marginalized voices in different languages and formats was difficult.
- Bias Alignment: Ensuring embeddings fairly represent minority voices without losing context required experimentation.
- Visualization Complexity: Designing a fairness visualization that’s both intuitive and scientifically meaningful was tricky.
- Integration: Coordinating multiple components (data pipeline, embeddings, backend, frontend) in a short timeframe was challenging.
Accomplishments that we're proud of
- Built a fully functioning prototype of EquiNet in under 36 hours for Hack the Valley 2025.
- Integrated authentic grassroots and NGO sources into the dataset, giving real representation to marginalized voices.
- Designed a unique fairness visualization tool to make bias transparent and actionable.
- Created a user dashboard that stores queries, visualizations, and fairness progress over time.
What we learned
- The importance of representation in datasets — bias starts long before model training.
- Embedding normalization is a powerful but nuanced method for bias alignment.
- Designing for inclusivity requires balancing technical precision with intuitive user experience.
- Rapid prototyping forces creativity and practical problem-solving under pressure.
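The nuance we found in embedding normalization can be shown with a two-dimensional toy example (hypothetical vectors, not our real data): a dominant source with a large-magnitude embedding can outscore a better-aligned minority source under raw inner product, while normalizing first compares direction (meaning) only.

```python
import numpy as np

# Two "voice" embeddings: a dominant source with large magnitude, and a
# minority source that is actually closer to the query's direction.
dominant = np.array([10.0, 1.0])
minority = np.array([0.6, 0.8])
query = np.array([0.0, 1.0])

norm = lambda v: v / np.linalg.norm(v)

# Raw inner product rewards sheer magnitude: the dominant source wins.
raw = [float(dominant @ query), float(minority @ query)]

# Cosine similarity (normalize first) compares direction: the ranking flips.
cos = [float(norm(dominant) @ norm(query)), float(norm(minority) @ norm(query))]

print(raw, cos)
```

The flip in ranking is the whole point, and also the nuance: normalization equalizes magnitude but cannot fix directional bias already baked into the embedding space, which is why we still see adversarial debiasing as future work.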
What's next for EquiNet
- Expand the dataset with more multilingual and oral-history sources.
- Improve bias alignment with advanced adversarial debiasing techniques.
- Enhance the fairness visualization into a real-time interactive map of knowledge equity.
- Add collaboration features so communities can contribute their own knowledge.
- Package EquiNet as an open-source tool for researchers, NGOs, and educators.