Building a Mobile App for a Consumer EEG Headset: Lessons from RE-AK and CLEIO
Consumer EEG headsets are entering the market faster than the software ecosystem can support them. Hardware teams design elegant wearable devices with impressive analog front-ends, then discover that the companion mobile app is the bottleneck. The app is where raw electrode signals become meaningful feedback for the user, and it is where most neurotech products either succeed or fall apart.
At DEVSFLOW, we have built companion apps for multiple EEG headset programs, including work with the RE-AK neurofeedback platform and the CLEIO clinical monitoring system. This post shares the architectural decisions that shaped those projects and the lessons we learned along the way.
The First Decision: On-Device vs. Cloud Processing
Every EEG app team faces this question early. Do you process the signal on the phone, send raw data to a cloud backend, or split the work between both? The answer depends on your latency requirements, your regulatory context, and your users' connectivity assumptions.
When On-Device Processing Wins
For neurofeedback applications, latency is the deciding factor. The user needs to see or hear feedback within 100 to 200 milliseconds of the neural event. A round trip to a cloud server, even a nearby one, adds 50 to 150 ms of variable latency that makes closed-loop feedback unreliable. For RE-AK, we chose to run the entire signal processing pipeline on the phone: bandpass filtering, artifact rejection, FFT computation, and band power extraction all happen locally.
Modern smartphones handle this comfortably. An iPhone 13 or a mid-range Android device with a Snapdragon 7-series chip can run a 256-point FFT on 8 channels at 250 Hz with less than 2% CPU utilization. We use Apple's Accelerate framework on iOS and the Eigen library (compiled via NDK) on Android for vectorized DSP operations. The processing load is negligible compared to what the GPU does for rendering.
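To make the last stage of that pipeline concrete, here is a simplified band power extraction in plain C++ (the production versions use Accelerate and Eigen). It assumes the FFT has already produced a power spectrum, and the function name is ours for illustration:

```cpp
#include <vector>
#include <cstddef>
#include <cmath>
#include <cassert>

// Sum spectral power over a frequency band [fLo, fHi] Hz.
// powerSpectrum[k] holds |X[k]|^2 for bin k of an fftSize-point FFT
// sampled at fs Hz; bin k corresponds to frequency k * fs / fftSize.
double bandPower(const std::vector<double>& powerSpectrum,
                 double fs, std::size_t fftSize,
                 double fLo, double fHi) {
    const double binHz = fs / static_cast<double>(fftSize);
    double total = 0.0;
    for (std::size_t k = 0; k < powerSpectrum.size(); ++k) {
        const double f = k * binHz;
        if (f >= fLo && f <= fHi) total += powerSpectrum[k];
    }
    return total;
}
```

With fs = 250 Hz and a 256-point FFT, each bin is about 0.98 Hz wide, so the alpha band (8 to 13 Hz) spans bins 9 through 13. Per-band sums like this, computed once per FFT frame, are what drive the feedback display.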
When Cloud Processing Makes Sense
If your product involves machine learning classification (sleep staging, seizure detection, cognitive state estimation), the model size and complexity may exceed what you want to deploy on a phone. For CLEIO's clinical monitoring features, we run lightweight feature extraction on the device and send computed features (not raw EEG) to a cloud endpoint for classification. This reduces bandwidth requirements by roughly 95% compared to streaming raw data, and it keeps PHI exposure minimal because the raw neural signals never leave the device.
The hybrid approach works well, but it introduces a synchronization challenge. Your app must continue to function when connectivity drops. We implement a local queue that buffers computed features with timestamps and drains to the backend when connectivity resumes. The backend must handle out-of-order and duplicate feature vectors gracefully.
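A minimal in-memory sketch of that store-and-forward queue is below. Type and field names are hypothetical, and a production version would also persist the queue to disk so buffered features survive an app restart. The sequence number is what lets the backend deduplicate retried uploads:

```cpp
#include <deque>
#include <vector>
#include <cstdint>
#include <cstddef>
#include <functional>
#include <cassert>

// One computed feature vector, stamped for ordering and deduplication.
struct FeatureVector {
    std::uint64_t sequence;      // monotonically increasing, for dedup
    std::uint64_t timestampUs;   // peripheral-clock timestamp
    std::vector<float> features; // computed features, never raw EEG
};

class FeatureQueue {
public:
    void enqueue(FeatureVector fv) { pending_.push_back(std::move(fv)); }

    // Attempt to upload everything; `upload` returns true on success.
    // Items stay queued on failure so a later drain retries them.
    std::size_t drain(const std::function<bool(const FeatureVector&)>& upload) {
        std::size_t sent = 0;
        while (!pending_.empty()) {
            if (!upload(pending_.front())) break;  // connectivity lost
            pending_.pop_front();
            ++sent;
        }
        return sent;
    }

    std::size_t size() const { return pending_.size(); }

private:
    std::deque<FeatureVector> pending_;
};
```

Because a drain can be interrupted mid-batch and retried, the same feature vector may reach the backend twice, which is why the backend-side dedup mentioned above is not optional.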
Multi-Sensor BLE Peripheral Management
Consumer EEG headsets are rarely just EEG. A typical device includes EEG electrodes, an accelerometer or IMU for motion artifact detection, a PPG sensor for heart rate, and sometimes a temperature sensor. Each sensor may expose its own GATT characteristic or share a multiplexed data stream.
The Single-Peripheral, Multi-Characteristic Approach
We strongly recommend exposing all sensors through a single BLE peripheral with separate GATT characteristics for each data stream. This is what we implemented for both RE-AK and CLEIO. The alternative, treating each sensor as a separate BLE peripheral or using multiple connections, creates connection management complexity that is not worth whatever architectural cleanliness it appears to buy.
With a single connection, you manage one MTU negotiation, one connection interval, and one set of connection event timing. You subscribe to multiple characteristics via setCharacteristicNotification and receive interleaved callbacks. On the app side, you demultiplex by characteristic UUID and route each stream to its processing pipeline.
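The demultiplexing step is simple in structure. Here is an illustrative C++ sketch (the UUID and handler wiring are made up for the example; on Android this would sit behind the BluetoothGattCallback, on iOS behind the CoreBluetooth delegate):

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>
#include <cstdint>
#include <cstddef>
#include <cassert>

// Routes notifications from a single BLE connection to per-stream
// processing pipelines, keyed by characteristic UUID.
class CharacteristicRouter {
public:
    using Handler = std::function<void(const std::vector<std::uint8_t>&)>;

    void route(const std::string& uuid, Handler h) {
        handlers_[uuid] = std::move(h);
    }

    // Called from the BLE callback path with each notification payload.
    // Returns false for characteristics no pipeline has claimed.
    bool dispatch(const std::string& uuid,
                  const std::vector<std::uint8_t>& payload) const {
        auto it = handlers_.find(uuid);
        if (it == handlers_.end()) return false;  // unknown stream
        it->second(payload);
        return true;
    }

private:
    std::map<std::string, Handler> handlers_;
};
```

Each registered handler hands its payload off to the appropriate background pipeline (EEG, IMU, PPG), so the dispatch itself stays trivially fast on the callback thread.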
Timestamp Synchronization Across Sensors
EEG data is only useful in context. If your app correlates EEG with accelerometer data for motion artifact rejection, the timestamps must be aligned. Do not rely on the phone's receive timestamp for this. BLE delivery timing is bursty and variable. Instead, have the firmware embed a shared peripheral clock timestamp in each sensor's data packets. The app reconstructs aligned timelines using these embedded timestamps.
We use a 32-bit microsecond counter on the peripheral that wraps roughly every 71.6 minutes. The app detects wraps by comparing consecutive timestamps and adjusts accordingly. This approach gives us sub-millisecond alignment between EEG and IMU data, which is essential for reliable artifact rejection.
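The wrap handling reduces to a few lines. This sketch (function name ours) extends the 32-bit counter to 64 bits; it assumes at most one wrap between consecutive packets, which holds trivially when packets arrive far more often than once every 71 minutes:

```cpp
#include <cstdint>
#include <cassert>

// Extends a 32-bit microsecond counter (wraps at 2^32 us, ~71.6 min)
// into a monotonically increasing 64-bit timeline. `last` carries the
// previous extended timestamp between calls.
std::uint64_t unwrapTimestamp(std::uint32_t raw, std::uint64_t& last) {
    const std::uint32_t prevRaw = static_cast<std::uint32_t>(last);
    std::uint64_t base = last - prevRaw;       // completed wrap cycles
    if (raw < prevRaw) base += (1ULL << 32);   // raw went backwards: wrapped
    last = base + raw;
    return last;
}
```

Each sensor stream runs its own unwrap state, and because all streams share the same peripheral clock, the extended timelines are directly comparable for alignment.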
Architecture Patterns That Scale
A neurotech mobile app is a real-time data pipeline with a user interface attached to it. Treat it that way from the start.
- Separate your BLE communication layer, your signal processing layer, your data persistence layer, and your UI layer into distinct modules with well-defined interfaces. We use a reactive pipeline (Combine on iOS, Kotlin Flow on Android) to connect them.
- Never process BLE data on the main thread. This sounds obvious, but it is the single most common performance mistake we see in neurotech apps. The main thread is for rendering. Everything else happens on dedicated background queues or coroutine dispatchers.
- Design your data persistence format for append-only writes. EEG sessions can last 30 minutes to several hours. You cannot buffer the entire session in memory. Write samples to disk continuously in a format that supports efficient appending. We use EDF+ (European Data Format) for clinical applications and a custom binary format with a JSON manifest for consumer products.
- Build the recording pipeline before the visualization pipeline. Users will forgive a laggy real-time display. They will not forgive a corrupted recording of a 45-minute meditation session.
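To make the append-only point concrete, here is a deliberately simplified framing scheme, not our actual production format: each chunk is a length prefix followed by raw sample bytes, so a crash mid-write loses at most the final partial chunk and everything before it stays readable. (The length prefix is written in native byte order, which is little-endian on the ARM and x86 targets phones use.)

```cpp
#include <ostream>
#include <sstream>
#include <vector>
#include <cstdint>
#include <cstring>
#include <cassert>

// Appends length-prefixed sample chunks to an output stream. Readers
// walk the file chunk by chunk; a truncated trailing chunk is simply
// discarded on recovery.
class AppendOnlyRecorder {
public:
    explicit AppendOnlyRecorder(std::ostream& out) : out_(out) {}

    void appendChunk(const std::vector<std::int16_t>& samples) {
        const std::uint32_t byteCount =
            static_cast<std::uint32_t>(samples.size() * sizeof(std::int16_t));
        out_.write(reinterpret_cast<const char*>(&byteCount), sizeof(byteCount));
        out_.write(reinterpret_cast<const char*>(samples.data()), byteCount);
        out_.flush();  // persist each chunk before buffering the next
    }

private:
    std::ostream& out_;
};
```

In a real recorder the stream would be a file opened in append mode, with the JSON manifest (channel count, sample rate, gain) written separately so the sample file itself never needs in-place edits.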
Clinical Validation Considerations
If your EEG headset is targeting clinical use or wellness claims that reference clinical evidence, the mobile app is part of the regulated system. This has concrete implications for how you build it.
Data Integrity and Audit Trails
For any data that may be used in a clinical context, your app must guarantee integrity from the BLE receive buffer to the storage layer. We compute a rolling CRC-32 over each recording session and embed checksums at regular intervals in the data file. If any segment is corrupted (due to a crash, storage error, or BLE data loss), the checksum identifies exactly which portion is affected.
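The mechanism is standard CRC-32 applied per segment. The bitwise implementation below is for illustration only; production code would use a table-driven or hardware-accelerated variant, and the segmenting helper is a hypothetical simplification of what we embed in the data file:

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>
#include <algorithm>
#include <cassert>

// Plain bitwise CRC-32 (IEEE polynomial, reflected form 0xEDB88320).
// Caller passes the running value to continue a rolling checksum.
std::uint32_t crc32(const std::uint8_t* data, std::size_t len,
                    std::uint32_t crc = 0xFFFFFFFFu) {
    for (std::size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int b = 0; b < 8; ++b)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return crc;
}

// Checksums a recording in fixed-size segments. On readback, the first
// mismatching checksum pinpoints which segment was corrupted.
std::vector<std::uint32_t> segmentChecksums(
        const std::vector<std::uint8_t>& recording, std::size_t segmentLen) {
    std::vector<std::uint32_t> sums;
    for (std::size_t off = 0; off < recording.size(); off += segmentLen) {
        const std::size_t n = std::min(segmentLen, recording.size() - off);
        sums.push_back(crc32(recording.data() + off, n) ^ 0xFFFFFFFFu);
    }
    return sums;
}
```

The per-segment granularity is the point: a single whole-file checksum can only say "something is wrong," while interval checksums let the rest of the session remain usable.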
Software as a Medical Device (SaMD)
If your signal processing pipeline makes clinical claims (for example, detecting abnormal EEG patterns), the app likely qualifies as SaMD under FDA and Health Canada regulations. This means you need documented software development lifecycle processes, risk analysis per ISO 14971, and traceability from requirements to implementation to verification.
Plan for this from the beginning. Retrofitting traceability and risk documentation onto an existing codebase is expensive and painful. We structure our projects with requirements tracked in a lightweight system from day one, with each requirement linked to specific test cases. This adds minimal overhead during development and saves enormous effort during regulatory submissions.
Algorithm Validation
Any signal processing algorithm that drives user-facing results must be validated against a reference dataset. For RE-AK's neurofeedback features, we validated our on-device FFT and band power calculations against MATLAB reference implementations using a curated dataset of 200 EEG sessions. The acceptance criterion was less than 0.1% deviation from the reference for all computed features. Document this validation thoroughly, as regulators will ask for it.
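The acceptance check itself is a simple relative-tolerance comparison. This sketch (names and the epsilon guard are ours) shows the criterion applied feature by feature against the reference values:

```cpp
#include <cmath>
#include <vector>
#include <algorithm>
#include <cstddef>
#include <cassert>

// Returns true when every computed feature is within relTol (relative)
// of its reference value; 0.001 corresponds to the 0.1% criterion.
// A small epsilon guards against division by a zero reference.
bool withinTolerance(const std::vector<double>& computed,
                     const std::vector<double>& reference,
                     double relTol = 0.001) {
    if (computed.size() != reference.size()) return false;
    for (std::size_t i = 0; i < computed.size(); ++i) {
        const double denom = std::max(std::abs(reference[i]), 1e-12);
        if (std::abs(computed[i] - reference[i]) / denom > relTol)
            return false;
    }
    return true;
}
```

Running this check in CI against the full reference dataset, and archiving the results, is what turns a one-time validation into the kind of ongoing evidence regulators expect.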
Platform-Specific Realities
Cross-platform frameworks like Flutter and React Native are tempting for neurotech apps because they promise a single codebase for iOS and Android. In our experience, they work well for the UI layer but poorly for the BLE and DSP layers. We use native code (Swift and Kotlin) for all BLE communication and signal processing, and share business logic through a Kotlin Multiplatform or C++ core where appropriate.
The reason is control. When you are debugging a packet loss issue that only manifests on Samsung Galaxy S22 devices running Android 13, you need direct access to the platform's BLE APIs and logging infrastructure. An abstraction layer between you and BluetoothGattCallback is a liability, not an asset.
Building a companion app for an EEG headset or neurotech wearable? DEVSFLOW Neuro has shipped mobile apps for neurofeedback, clinical monitoring, and consumer brain-sensing products. Visit neuro.devsflow.ca to learn how we can accelerate your product development.