Recently I’ve been writing Java classes to manage NeuroSky’s native packet format for an Android deployment. The packet format is more unusual than those of other sensors I’ve programmed for (e.g. Nexus, Biosemi, Affectiva, BioHarness). It has clearly been designed to be future proof, with its variable-length payload and effectively unlimited descriptor space. Normally sensor variables are described either by their location in the packet or by a fixed-size code identifier; NeuroSky uses a variable-sized identifier, so it can package whatever it likes into its payload. This means software that implements the NeuroSky format correctly is likely to be compatible with future hardware releases, even ones that introduce new data types, as the software should only read the data it supports and ignore the rest. It’s certainly a welcome design, though after going through the packet format I’ve yet to see anything that tackles lost packets (e.g. sensors like the Zephyr range of heart monitors include a packet counter which can be used to identify lost packets; the HxM is rather cool as its payload incorporates several seconds of recordings for redundancy along with a packet counter).
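To give a flavour of the design, here’s a minimal sketch in Java of the “read what you know, skip the rest” idea. It assumes the payload has already been pulled out of a packet (i.e. the sync bytes, length and checksum were handled elsewhere) and is well formed; the code values are the ones I’ve seen in NeuroSky’s ThinkGear documentation, so treat them as illustrative rather than definitive:

```java
// Minimal sketch of a forward-compatible NeuroSky payload parser.
// Assumes the payload has already been extracted and checksummed.
public final class PayloadParser {

    private static final int EXCODE      = 0x55; // extends the code space
    private static final int POOR_SIGNAL = 0x02;
    private static final int ATTENTION   = 0x04;
    private static final int MEDITATION  = 0x05;
    private static final int BLINK       = 0x16; // blink strength

    public void parse(byte[] payload) {
        int i = 0;
        while (i < payload.length) {
            // Count any EXCODE bytes; each one shifts the code into a new,
            // effectively unlimited, descriptor space.
            int excodeLevel = 0;
            while (i < payload.length && (payload[i] & 0xFF) == EXCODE) {
                excodeLevel++;
                i++;
            }
            int code = payload[i++] & 0xFF;
            // Rows with code >= 0x80 declare their own length;
            // all other rows carry a single value byte.
            int length = (code >= 0x80) ? (payload[i++] & 0xFF) : 1;

            if (excodeLevel == 0 && length == 1) {
                handleRow(code, payload[i] & 0xFF);
            }
            // Unrecognised rows are simply hopped over by their declared
            // length, which is what makes the format forward compatible.
            i += length;
        }
    }

    private void handleRow(int code, int value) {
        switch (code) {
            case POOR_SIGNAL: /* contact quality, 0-200 */  break;
            case ATTENTION:   /* eSense attention, 0-100 */ break;
            case MEDITATION:  /* eSense meditation, 0-100 */ break;
            case BLINK:       /* blink strength */          break;
            default:          /* unsupported row: ignore */ break;
        }
    }
}
```

Because every multi-byte row declares its own length, a parser written this way can skip anything it doesn’t recognise, which is exactly the compatibility property described above.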
Normally my work has concentrated on the detection of mental and emotional states for passive adaptation (e.g. adapting game difficulty to mental workload); I’ve not really developed systems that detect muscle movement for voluntary control. Over the weekend I gave eye blink detection a shot and wrote a basic algorithm to test my code. One of the cool things about the NeuroSky is that its lag is exceptionally small. I just didn’t expect it to be “I can detect your eye closing, display an image and remove it before your eyes open” small. Anyway, I’ve been having great fun testing the algorithm the only way it should be done: by having the Weeping Angels from Doctor Who hunt me down as I blink.
Here’s some test video. Remember: don’t blink.
Beyond upgrading the algorithm some more (it needs configuring per person at the moment), I would really like to see this built into a VR experience. Screenshots of angels have caused a few jumps, but a 3D VR experience? Now that would be terrifying!
Apologies for the video quality; my usual screen recorder just stopped working on all my machines. :(
Wow, this looks awesome.
This looks very interesting. What happens if you blink naturally, without intending to change the page or image?
Is there any way for it to distinguish natural blinks from exaggerated blinks?
Hi Jonjo,

Thanks for the questions.
Hmm, I’ve not looked at discriminating between natural and exaggerated blinks. The above game concept treats both the same (as it should, given the premise), so any type of blink would cause the image to change. I would imagine that exaggerated blinks produce an amplified voltage change and so could be discriminated using a threshold. If this were not the case, we could just change how exaggerated blinks are generated (e.g. a double blink).
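To make that concrete, a threshold check could be as simple as the sketch below. The blink-strength input and both thresholds are hypothetical; real values would have to come out of calibration:

```java
// Hypothetical threshold-based discrimination of blink strength.
// Both thresholds are made-up values needing per-person calibration.
public final class BlinkClassifier {

    public enum BlinkType { NONE, NATURAL, EXAGGERATED }

    private static final int NATURAL_THRESHOLD = 40;     // hypothetical
    private static final int EXAGGERATED_THRESHOLD = 70; // hypothetical

    public BlinkType classify(int blinkStrength) {
        if (blinkStrength >= EXAGGERATED_THRESHOLD) return BlinkType.EXAGGERATED;
        if (blinkStrength >= NATURAL_THRESHOLD) return BlinkType.NATURAL;
        return BlinkType.NONE;
    }
}
```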
I’m guessing this is in reference to using this system as a mouse click replacement? (i.e. I assume you’re the same commenter I’ve been talking to on YouTube.) Off the top of my head, double blinking would be the easiest way to generate mouse clicks using this set-up. My algorithm doesn’t currently capture fast blinking, but I am working on this, as well as a calibration process. The system is hard-coded to me (given I’m the main user so far) and doesn’t work as well for others, but I hope to rectify this shortly.
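For what it’s worth, the double-blink case mostly reduces to timing two blink events against a window. A rough sketch, where the window length is a guess and would need tuning per user:

```java
// Sketch of a double-blink "click" detector: two blinks arriving within
// a short window fire a single click. The window is a hypothetical value.
public final class DoubleBlinkDetector {

    private static final long WINDOW_MS = 500; // hypothetical window

    private long lastBlinkAt = -1; // -1 means "no pending blink"

    /** Call on each detected blink (e.g. with System.currentTimeMillis());
     *  returns true when this blink completes a double blink. */
    public boolean onBlink(long nowMs) {
        boolean isDouble = lastBlinkAt >= 0 && (nowMs - lastBlinkAt) <= WINDOW_MS;
        // Reset after firing so a triple blink doesn't count as two clicks.
        lastBlinkAt = isDouble ? -1 : nowMs;
        return isDouble;
    }
}
```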
Once I have the algorithm capable of discriminating non-command from command blinks, I’ll upload a demonstration video. I did quickly test whether I could build such a system, and it is possible. If you’re interested in talking further, please feel free to e-mail me at gilleade(@)gmail.com and we can see if we can develop something more tailored to your needs.