Cross posted on PhysiologicalComputing.net
Last month I attended the inaugural Quantified Self Europe conference in Amsterdam. I was there to present a follow-up to a talk I gave back in 2010 at Quantified Self London, in which I described my experiences tracking my heart rate and publishing it in real-time over the Internet.
The Body Blogger system, as it became known after a term Steve came up with back in 2009, was only really intended to demonstrate what could be done with the BM-CS5 heart monitors we’d recently purchased. As these devices allow wireless real-time streaming from multiple heart rate monitors to a single PC, there were a number of interaction projects we wanted to try out, and using web services to manage the incoming data and provide a platform for app development seemed the best way to realise our ideas (see here and here for other things we’ve used The Body Blogger engine for).
Having tracked and shared my heart rate for over a year now, I’ve pretty much exhausted what I can do with the current implementation of the system, which I didn’t spend a whole lot of time developing in the first place (about a day on the core), and so my Amsterdam talk was pretty much a swan song to my experiences. During the summer I stopped tracking my heart rate (and then fell sick for the first time since wearing the device; go figure, I’d have liked to have captured that) so I could work on the next version of the system, and loaned the current one to Ute, who was interested in combining physiological monitoring with a mood-tracking service. Ute was also in Amsterdam to present on her experiences with body blogging and mood tracking (for more information see the proposal and first impressions posts).
As with my previous reflections post, I’ve posted rather late, so if you’re interested in what happened at the conference check out the following write-ups: the Guardian, Tom Hume, and Alexandra Carmichael. I imagine the videos of the day will be online in the coming month or so, so if you’re interested in the particulars of my talk you’ll need to wait just a little bit longer, though I did cover a few of the same topics during my CHI 2011 talk (video) on the issues associated with inference and the sharing of physiological data.
Instead, I’m going to briefly cover some of the interesting items I observed during the event. In no particular order, they are as follows:
Emergence of Analysis Middleware
If you’ve ever bought a self-tracking device such as a pedometer or weight scale in the past year or so, it’s probably been tied to a specific online service. As I mentioned in my predictions for 2011 post, self-tracking devices tend to be bound to a single visualisation and analysis service, typically made by the manufacturer of the device. As such, you’re rather dependent on that company remaining interested in servicing your data. And even if they are, the visualisations and analyses they produce may not be the ones you’re particularly interested in using. Hence the importance of such devices supporting data portability: without it, you can’t explore your own data outside the sandbox you’ve been provided.
As such it was nice to see a range of middleware solutions being demonstrated at the conference which provided visualisation and analysis services. One in particular drew my interest, CommonSense, a data stream management service (e.g. like Pachube) which allows you to build composite sensors from individual data streams as well as provide data analysis to recognise certain states evident in the data streams (e.g. predicting illnesses from physiological data streams). While I’ve not had time to play around with the platform, I’m very excited about the possibilities it opens for both application developers (R&D prototyping should be a lot easier) and the wider community of self-hackers as it provides a more powerful tool set to explore their data (especially longitudinal).
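To make the composite-sensor idea concrete, here is a minimal sketch of deriving a new stream from two base streams. This is purely illustrative and not the actual CommonSense API; the streams, the `CompositeSensor` class, and the crude "high heart rate but low movement" rule are all my own invented examples.

```python
class CompositeSensor:
    """Illustrative composite sensor: fuses base data streams into a derived stream.

    Generic sketch of the concept only -- not how CommonSense itself is built.
    """

    def __init__(self, combine, *streams):
        self.combine = combine    # function that fuses one sample from each stream
        self.streams = streams    # iterators over the base data streams

    def read(self):
        # Pull the next sample from each base stream and fuse them
        return self.combine(*(next(s) for s in self.streams))


# Toy base streams: heart rate (bpm) and step count per interval
heart_rate = iter([72, 95, 110])
steps = iter([0, 40, 120])

# A crude derived indicator: flag samples where heart rate is elevated but
# movement is low, which (very naively) might suggest stress rather than exercise
flag_stress = CompositeSensor(lambda hr, st: hr > 90 and st < 50, heart_rate, steps)
readings = [flag_stress.read() for _ in range(3)]
# readings -> [False, True, False]
```

The appeal of a middleware platform is that compositions like this live server-side, so any client application can subscribe to the derived stream rather than re-implementing the fusion logic.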
Glance-ability

This term was brought up during the “Personal Data Visualisation” breakout session: how easy is it to observe items of interest in a given data set using a given visualisation format? In other words, what is the glance-ability of the visualisation?
From my own experience developing visualisations for the body blogging data, drawing any kind of meaning from a surface-level inspection of the data relies on the format of the visualisation being appropriate for the data set. For example, presenting heart rate over a period of seconds, minutes, or hours as a time series graph allows all sorts of items of interest to be drawn out (e.g. stress responses and physical activity). However, once the period expands into days and months, a time series graph falls apart and becomes a convoluted mess. To solve this I abstracted my data into a heat map, making it easier (at least for me, which I’ll talk about next) to observe longitudinal trends such as my sleep patterns and the effects of lifestyle changes and major events (e.g. Christmas).
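The abstraction step behind such a heat map can be sketched roughly as follows. This is an assumed reconstruction of the general idea rather than my original code: timestamped readings are binned into a day-by-hour grid of averages, so each cell summarises an hour instead of plotting every sample. The function name and toy data are invented for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta


def heart_rate_heatmap(samples):
    """Bin (timestamp, bpm) samples into a {date: [24 hourly means]} grid.

    Hours with no samples are left as None so gaps in the record stay visible.
    """
    buckets = defaultdict(list)  # (date, hour) -> list of bpm readings
    for ts, bpm in samples:
        buckets[(ts.date(), ts.hour)].append(bpm)

    grid = {}
    for (day, hour), readings in buckets.items():
        row = grid.setdefault(day, [None] * 24)
        row[hour] = sum(readings) / len(readings)
    return grid


# Toy example: two readings in one hour, one reading two hours later
start = datetime(2011, 12, 1, 8, 0)
samples = [(start, 70),
           (start + timedelta(minutes=30), 80),
           (start + timedelta(hours=2), 60)]
grid = heart_rate_heatmap(samples)
# grid[1 Dec 2011] has mean 75.0 at hour 8, 60.0 at hour 10, None elsewhere
```

Rendering each row of the grid as a strip of coloured cells (one row per day) gives the months-at-a-glance view that a raw time series can’t: sleep shows up as nightly bands of low values, and lifestyle changes shift the overall colour of whole rows.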
I imagine this topic and its many facets have already been covered in depth by the data visualisation community. If I were still working at Lancaster’s Computing Department I could “ask the audience” for a comment (there were a few data visualisation experts there in my time), but until I know better or find the time to delve into that research space, glance-ability is the metric I’ll be using to evaluate my own visualisations 🙂
Being the Expert in Your Self
When designing a visualisation, it is important to note that the format will shape the viewer’s experience when they come to inspect the data. I’ve talked about this before with respect to physiological data used in an interactive setting, where meanings can easily be inferred from even the most basic presentation format (e.g. a time series graph) without any meaning actually being there; take lie-detection games, for example. During the conference it was very exciting to see the many different approaches people took to visualising their data sets. Unfortunately, during several of the talks I found it difficult, if not impossible, to see what the presenter saw in their data given the visualisation they had chosen.
When I gave my CHI 2011 talk, an audience member brought to my attention that the heat maps I used, while visually interesting, were difficult to interpret, and it was only thanks to the spoken narrative I provided that any of it made sense. For me this raises an interesting question: as I developed the visualisations for my data set, had I become so much of an expert in my data that, when I shared my work with others, the only person who could actually see anything of interest was myself? I imagine other quantifiers have run into this problem as well, given the ease with which one can slip into over-analysing one’s own data.
Essentially the lesson here is: when sharing your own visualisation, take care to explain the interface as clearly as possible, as any trends are likely to be obvious only to the creator, who is the expert both in themselves and in their interface. Other people, such as an audience, need to be guided into the visualisation in order to see as you do. Hopefully I remedied the CHI problem in my Amsterdam talk, first by introducing my data set and the visualisations I developed in stages (like a tutorial), and second by using a reduced colour palette to make areas of interest easier to see.
Anyway, all in all I had a great time at the conference. The blend of people from different lines of work and research backgrounds made for an excellent melting pot of ideas to charge the mind, and I’d highly recommend attending even if you’re only remotely interested in self-tracking; just try to avoid some of the more exotic foods put out for consumption. I’m still regretting trying the wheatgrass shake.