Inspiration
What is the most valuable thing you own? Your phone? Your home? Your memories? The ability to recognise your face in the mirror?
We live in an age of medical miracles: we can map the human genome, edit our own DNA, and perform surgery with robots. Yet stroke remains a leading cause of death and disability worldwide. During a stroke, a person loses 1.9 million neurons every minute. Every second of inaction is a tiny, irreversible death of the self.
The problem is not a lack of medical knowledge; the problem is a gap in time. A gap between the moment a symptom appears (a drooping smile, slurred speech, a sudden weakness) and the moment treatment begins. In that gap, fear, confusion and denial take hold. A person lies down to "wait it out". A family member hesitates, unsure if it is serious. Precious neurons die.
Current solutions rely on awareness campaigns and acronyms like "FAST". They are vital, but they depend on a human being, often panicked and untrained, to correctly interpret neurological symptoms in real time. We are asking people to solve a complex medical puzzle in the worst moment of their lives.
Meanwhile, the technology we carry in our pockets is capable of launching rockets, translating languages in real time and recognising our faces in a crowd. It is a sensor-rich supercomputer, yet for the stroke patient it remains silent. It waits for us to recognise the danger, rather than recognising it for us.
This is a failure of design. A failure to point our technological power at the moments that truly matter.
We need a tool that does not wait. A tool that turns the passive device in our pocket into an active guardian. A tool that uses the camera to see what we miss, the microphone to hear what we cannot, and the intelligence of AI to interpret the subtle signs of a brain under siege.
Not to replace the doctor. Not to offer a diagnosis. But to close that deadly gap in time. To transform uncertainty into action. To give a family the gift of a minute. Because in a stroke, a minute is not just time. It is memory. It is movement. It is identity.
We need to stop building technology that steals our attention, and start building technology that protects our humanity.
What it does
NeuroSpot is a simple website that allows users to:
- Determine whether the person in a photo portrait, provided via screenshot or upload, shows signs of a stroke.
- Uploaded photos and screenshots are sent to the AI model via a REST API served by a Flask web server.
- After the analysis, the results display on screen as a percentage indicating the likelihood of a stroke, with an alarm sound effect playing if one is detected.
- Provides common causes of stroke at the bottom of the page.
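The upload-and-analyse flow above can be sketched as a minimal Flask endpoint. This is an illustrative sketch only: the `/predict` route name, the `image` form field, and the `predict_stroke()` helper are assumptions, not the project's actual code.

```python
# Minimal sketch of the upload -> REST API -> result flow (assumed names).
import io

from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_stroke(image_bytes: bytes) -> float:
    """Placeholder for the CNN inference call; returns a probability in [0, 1]."""
    return 0.0  # the real model would run here

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded photo or screenshot from the multipart form field.
    image_bytes = request.files["image"].read()
    probability = predict_stroke(image_bytes)
    # Return the percentage the frontend displays on screen.
    return jsonify({"stroke_probability": round(probability * 100, 1)})
```

The frontend would POST the image as multipart form data and render the returned percentage, triggering the alarm sound when it crosses a threshold.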
How we built it
We built NeuroSpot using a modern tech stack integrated with a custom-built CNN model:
- Frontend: React with TypeScript, styled with TailwindCSS, and Vite as the development server for previewing code changes in real time.
- Backend: Flask, hosting our REST API infrastructure and connecting to the CNN model, which is also written in Python for convenience.
- CNN model: a 4-block Convolutional Neural Network with 3 dense layers, ReLU activation, and the Adam optimizer (learning rate = 0.0001), all built in PyTorch; training ran for 20 epochs with early stopping.
- Deployment: Vercel to host our client-side interface, and Ngrok for both our API server and CNN.
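The model described above can be sketched in PyTorch roughly as follows. The 4-block/3-dense-layer shape, ReLU activation, and the Adam learning rate come from the write-up; the channel widths, kernel sizes, dense-layer sizes, and the 224x224 input resolution are illustrative assumptions.

```python
# Sketch of a 4-block CNN with 3 dense layers (sizes are assumptions).
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """One convolutional block: conv -> ReLU -> max-pool."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class StrokeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(      # 4 convolutional blocks
            conv_block(3, 16),
            conv_block(16, 32),
            conv_block(32, 64),
            conv_block(64, 128),
        )
        self.classifier = nn.Sequential(    # 3 dense layers with ReLU
            nn.Flatten(),
            nn.Linear(128 * 14 * 14, 256),  # 224x224 input -> 14x14 after 4 pools
            nn.ReLU(),
            nn.Linear(256, 64),
            nn.ReLU(),
            nn.Linear(64, 1),               # single stroke-probability logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = StrokeCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)  # lr from the write-up
```

Training would loop over batches for up to 20 epochs, stopping early when validation loss stops improving.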
Challenges we ran into
We ran into many problems along the way:
- Ensuring the model and the backend server work together
- Deploying the model, since the hosting service does not work well with PyTorch
- The notification pop-up on stroke detection could not be implemented in time, so we moved on from that idea
Accomplishments that we're proud of
Despite all the roadblocks, we achieved significant milestones:
- Building a Convolutional Neural Network (CNN) from scratch using PyTorch
- Deploying the CNN on a web server for the first time through Vercel
- Building a responsive UI that connects with the client through their camera or an uploaded image
- Successfully connecting the CNN to the frontend and backend
What we learned
This was a massive learning experience for our team:
- The process of building a CNN from scratch
- Model training and how to improve model accuracy
- Ways to deploy AI models on Vercel
- What time pressure feels like when building a big project
What's next for NeuroSpot
We are not done! More to come:
- Implementing a notification pop-up when the chance of having a stroke is above 99%
- Enhancing our CNN for more accurate stroke predictions
- Establishing partnerships with major charitable organisations worldwide
- Turning this into a mobile application and deploying it on the App Store
Built With
- express.js
- javascript
- node.js
- python
- pytorch
- react.js
- typescript
- vercel
- vite