Time Keeps on a Slippin'

Originally posted on PhysiologicalComputing.net.
Most people I know who work in the field of physiological computing purchase off-the-shelf sensors for their research. There's nothing specifically wrong with this; most of us are not engineers, nor do we have the time to become one, as our interests lie elsewhere. At LJMU all our equipment is off-the-shelf, and we have some damn fine devices which we've used in our work (e.g. see my review of the BM CS-5 chest strap). However, I've noticed we place a lot of faith (and money) in these devices to do what they say on the tin (e.g. see the issue I raised last year about the software bundled with the BioHarness). Personally I like to know the limitations of any equipment I'm using, and if I find anything outside the spec I'll try to figure out why (sometimes to my detriment, as you'll see below). It's not that I'm particularly troubled if a sensor has defects, as I don't expect them to be perfect; the problem I have is with defects I don't know about, as they can make things problematic, to say the least. For example, the first off-the-shelf sensor I ever worked with was the WaveRider Pro, a four-channel biofeedback device which had a slight problem with counting time.

The WaveRider API was originally designed for a 16-bit operating system, but a 32-bit version had been made available by a third party. My first affective game made use of this driver. The game used the player's heart rate to adapt the gameplay, which the driver derived from the raw ECG data (i.e. I didn't use the raw data myself to derive the player's heart rate). At this point all was good. Later on I wanted to use the WaveRider in an experimental setting and thus required the raw ECG data. Now, the driver didn't time-stamp the data stream, which is typical of most sensing devices I know of, as it's easier to reconstruct the time series using the sampling rate specified by the manufacturer (e.g. if the device has a sampling rate of 128 Hz then each sample has a time index of [N x 1/128] seconds, where N is the sample number)*. To ensure everything was working I simulated a hardware clock to check I was getting the correct number of samples, in this case 128 per second, and this is where everything started to go wrong.
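For the curious, here is a minimal sketch in Python of both the timestamp reconstruction and the simulated-clock check. The names are mine, not the WaveRider API's; in particular, read_raw_ecg is a hypothetical stand-in for whatever call your driver exposes to drain its buffer of new samples:

    import time

    SAMPLE_RATE_HZ = 128  # the manufacturer's nominal sampling rate

    def reconstruct_timestamps(samples, rate_hz=SAMPLE_RATE_HZ):
        """Give sample N a time index of N * (1 / rate) seconds."""
        return [(n / rate_hz, value) for n, value in enumerate(samples)]

    def count_samples_per_second(read_raw_ecg, seconds=10):
        """Simulate a hardware clock: count how many samples actually
        arrive in each wall-clock second, for comparison with the spec."""
        counts = []
        for _ in range(seconds):
            deadline = time.monotonic() + 1.0
            received = 0
            while time.monotonic() < deadline:
                received += len(read_raw_ecg())
            counts.append(received)
        return counts  # should hover around SAMPLE_RATE_HZ on a healthy device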
Instead of 128 samples per second I was getting a high level of variance, which I could not attribute to the various layers of software an API call had to pass through (a variance of a few samples I'd let slip, but not tens). To cut a long story short, I had to take apart the 32-bit driver and found a buffering bug in the API call to the raw feed. At the top level, a programmer using the derived heart rate function would have been none the wiser to this loss of data**. Nor would anyone using the raw ECG data through the device's bundled application, as its GUI doesn't harbour the bug. The only way of identifying this bug was by simulating a hardware clock and checking that the correct number of samples had been received over a set period. At this point, the design choices taken in the programming of the 32-bit driver had led to the data loss in the raw signal.

Once I fixed this I got a consistent number of samples per second, and this is where the manufacturer's specification came into play. Instead of receiving 128 samples per second I was consistently getting 129. By this time I had ripped apart the API; there was nothing else on the software side I could think of that would cause me to be gaining samples, or to put it another way, time. Subsequently I left the sensor to run for an hour to see if the sample rate was consistently 129 Hz, which indeed it was.
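That hour-long run amounts to nothing more than dividing the total sample count by the elapsed time. A sketch, again assuming the hypothetical read_raw_ecg driver call from above:

    import time

    def estimate_sample_rate(read_raw_ecg, duration_s=3600.0):
        """Run the sensor for a set period and estimate its true sampling
        rate from the total sample count. A device that is on spec should
        come out at roughly 128; mine came out at a steady 129."""
        start = time.monotonic()
        total = 0
        while time.monotonic() - start < duration_s:
            total += len(read_raw_ecg())
        return total / (time.monotonic() - start)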
The only conclusion I could come up with was that the clock in the physical device (i.e. the oscillator) was not to specification. It was a hertz out. Now, a single hertz was not really going to affect any of my signal analysis, and by this time my little bug hunt had eaten up an inordinate amount of time for a relatively minor issue, but it was enough to make me wary of trusting off-the-shelf sensors at face value. In my line of work (interactive systems and signal analysis) I need to know what issues, if any, a sensor has so I know whether I can compensate for them. For example, if I know the oscillator is consistently out by a few hertz (or even tens) then all I need do is account for the different sampling rate, which is pretty much me altering a single line of code. Problems arise when I don't have the correct information about a sensor, not when it's different from what it says on the tin (obviously I would like some things to be correct, either that or a discount).
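To put a number on why even a single hertz is worth knowing about: the fix really is one constant, but left uncorrected the error compounds over a recording. Using the figures from above:

    # The one-line fix: reconstruct timestamps with the measured rate
    # rather than the rate on the tin.
    MEASURED_RATE_HZ = 129  # what the hour-long run actually showed

    # Left uncorrected, an hour of samples arriving at a true 129 Hz,
    # reconstructed at an assumed 128 Hz, stretches to
    # (129 * 3600) / 128 = 3628.125 seconds of apparent time.
    drift_per_hour_s = (129 * 3600) / 128 - 3600
    print(f"{drift_per_hour_s:.3f} s of drift per hour")  # prints 28.125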
Over the years I've become accustomed to a variety of off-the-shelf sensors, some good and some bad. The majority have their own little quirks which I (and others) have had to work around. The trick for us in this line of work is not just deciding what counts as an acceptable issue, but figuring out whether there are issues in the first place. And so the lesson of this tale of woe becomes: when selecting a sensor for your project, decide on what limitations are acceptable for the environment you wish to use that device in; it makes everything so much less problematic.
* For wireless devices a timer (or a packet count) is preferable, in order to compensate for any data loss in transmission.
** Because of the way the WaveRider API works, calls to the derived heart rate do not suffer from the buffering bug; only the raw feed does.
