Inspiration

Millions of people living with motor impairments retain their thoughts and intentions, yet lose the ability to perform basic daily tasks. Something as simple as feeding oneself can become dependent on constant assistance. This loss is not just physical. It affects dignity, confidence, and autonomy.

Assistive feeding devices have improved quality of life for many people with motor impairments. However, most current systems rely on residual physical input or voice commands, which can be unreliable and difficult to repeat. We were inspired by the gap between intention and execution. If someone can focus, that intent should be enough to initiate meaningful action. MindAssist translates simple neural signals directly into a complete assistive routine. Rather than requiring continuous control, it enables a single intentional mental state to trigger a safe, autonomous sequence.

What it does

MindAssist is a brain-controlled assistive feeding system. Using a real-time EEG headset, a sustained focus signal activates a robotic arm that performs a complete feeding routine safely and autonomously.

No physical interaction,
no repeated voice commands,
just focus-driven control.

The system continuously processes EEG signals using thresholding, smoothing, and sustained activation detection to distinguish deliberate intent from noise. When a sustained focus signal is detected, the robotic arm activates: it moves from a neutral position to pick up food, uses computer vision to detect and align with the user’s mouth, delivers the bite safely, detects completion, and returns to neutral. The system then waits for the next intentional activation.

For environmental awareness, we integrated a camera-based computer vision module. The system performs real-time face detection and mouth localization, and the detected coordinates are used to adjust the final approach of the robotic arm, allowing dynamic alignment rather than fixed positioning.

We structured the entire system as a finite state machine. This ensured safe transitions between idle, activation, feeding, and reset states, preventing repeated or unintended motion.
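The thresholding, smoothing, and sustained-activation logic can be sketched roughly as follows. This is a minimal illustration, not our production code: the threshold, window size, and sustain count here are hypothetical placeholders, whereas the real values were tuned against live headset data.

```python
from collections import deque

# Hypothetical tuning constants -- the real values were calibrated on live EEG data.
FOCUS_THRESHOLD = 0.6   # normalized attention score above which a sample counts
SMOOTH_WINDOW = 5       # moving-average window length, in samples
SUSTAIN_SAMPLES = 10    # consecutive smoothed samples required to activate

class FocusDetector:
    """Thresholding + smoothing + sustained-activation detection."""

    def __init__(self):
        self.window = deque(maxlen=SMOOTH_WINDOW)
        self.streak = 0

    def update(self, raw_score: float) -> bool:
        """Feed one EEG attention sample; return True once intent is sustained."""
        self.window.append(raw_score)
        smoothed = sum(self.window) / len(self.window)  # moving average
        if smoothed >= FOCUS_THRESHOLD:
            self.streak += 1
        else:
            self.streak = 0  # any dip below threshold resets the sustain counter
        return self.streak >= SUSTAIN_SAMPLES
```

Requiring a streak of smoothed samples, rather than a single spike, is what separates deliberate focus from momentary noise.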

How we built it

Our team brought together hardware and software engineers working at the intersection of robotics, signal processing, and computer vision.

On the hardware side, we built and tuned the robotic arm, defined safe motion cases, and developed repeatable feeding trajectories. We experimented with different sensors and mounting configurations to improve reliability and stability. We also developed the EEG processing pipeline: testing signal thresholds, implementing smoothing and sustained-activation logic, and addressing connectivity instability. Bluetooth pairing was initially inconsistent, so we automated the connection process to make the system robust and repeatable.
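The automated connection process amounts to a retry loop around the headset's pairing call. A minimal sketch, assuming a `connect_fn` that stands in for the (hypothetical) pairing call and raises `ConnectionError` on failure:

```python
import time

def connect_with_retry(connect_fn, max_attempts=5, delay_s=1.0):
    """Retry a flaky pairing call until it succeeds or attempts run out.

    `connect_fn` is a stand-in for the headset's Bluetooth pairing call;
    it should return a connection object on success or raise on failure.
    """
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return connect_fn()
        except ConnectionError as err:
            last_error = err
            time.sleep(delay_s)  # back off briefly before retrying
    raise ConnectionError(f"pairing failed after {max_attempts} attempts") from last_error
```

Wrapping pairing this way meant a dropped link recovered on its own instead of requiring a manual restart mid-demo.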

On the vision side, we implemented real-time face detection and object localization. We processed live camera frames, extracted coordinates, and translated those into spatial adjustments for the robotic arm. Latency optimization was critical to ensure that the arm’s movement felt responsive and aligned with the user’s position.
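The coordinate-to-adjustment step can be illustrated with simple geometry. This sketch assumes the camera is roughly centered on the arm's approach axis; the `gain` constants that convert normalized image offset to a metric correction are hypothetical and would be tuned on the physical rig.

```python
def mouth_to_arm_offset(mouth_px, frame_size, gain=(0.05, 0.05)):
    """Map a detected mouth center (pixels) to an arm adjustment (meters).

    `mouth_px` is the (x, y) pixel center from the face/mouth detector;
    `frame_size` is the (width, height) of the camera frame;
    `gain` is a hypothetical tuning constant per axis.
    """
    (x, y), (w, h) = mouth_px, frame_size
    # Normalized offset from image center, in [-0.5, 0.5] per axis.
    dx = (x - w / 2) / w
    dy = (y - h / 2) / h
    return (dx * gain[0], dy * gain[1])
```

A mouth detected at the image center yields a zero correction; any drift to one side nudges the arm's final approach in that direction.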

The integration phase required unifying three asynchronous systems: neural input, mechanical actuation, and visual feedback. We built a structured software architecture to coordinate them deterministically, prioritizing safety, clarity, and reliability over complexity.
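The coordinating state machine can be sketched as a transition table keyed by (state, event) pairs. The state and event names below are illustrative, not our exact identifiers; the key property is that any event not listed for the current state is ignored, which is what rules out repeated or unintended motion.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    ACTIVATION = auto()
    FEEDING = auto()
    RESET = auto()

# Allowed transitions only -- any (state, event) pair outside this
# table leaves the machine where it is, blocking unintended motion.
TRANSITIONS = {
    (State.IDLE, "focus_sustained"): State.ACTIVATION,
    (State.ACTIVATION, "arm_ready"): State.FEEDING,
    (State.FEEDING, "bite_complete"): State.RESET,
    (State.RESET, "arm_home"): State.IDLE,
}

class FeedingFSM:
    def __init__(self):
        self.state = State.IDLE

    def handle(self, event: str) -> State:
        """Apply an event; unknown or out-of-order events are ignored."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Because each subsystem (EEG, arm, vision) only emits events, and the table alone decides what those events mean, the three asynchronous inputs can never race the arm into an unsafe state.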

Challenges we ran into

EEG signals are inherently unstable and sensitive to noise. Bluetooth pairing and data streaming were initially inconsistent, so we automated and stabilized the connection pipeline. Synchronizing EEG input, servo motion, and vision processing demanded strict control architecture to avoid timing conflicts. The robotic arm itself required precise tuning due to sensitive servo behavior. Achieving smooth, controlled movement while maintaining responsiveness was a key engineering challenge.

Accomplishments that we're proud of

This is an ambitious project, especially for a team including first-time hackers. We chose to work with inherently noisy EEG signals in a high-stakes and complex assistive context. Building a system that translates brain signals into physical motion required careful engineering and disciplined design decisions.

What we learned

We learned that in assistive technology, simplicity often outperforms complexity. That insight guided our decision to use a simpler EEG interface with strong filtering and structured activation logic rather than a more complex but less stable signal classification system. We designed for reliability over sophistication.

What's next for MindAssist

Aging populations and motor impairment represent a growing global challenge, making this a highly scalable problem space. While we focused on feeding, the same intent-driven control framework can extend to tasks such as medication management, object retrieval, communication aids, and environmental control. Logging the robotic arm’s actions in a database would also be valuable, enabling caregivers to monitor usage patterns and allowing the data to inform healthcare decisions and long-term care optimization.

Our immediate next step is engaging directly with the mobility-impaired community. Due to the constraints of a 36-hour build, we were not able to gather user feedback during development. We are eager to iterate based on real-world input and refine the system in collaboration with the individuals it is designed to support.
