Thinking back on last week’s ISWC & ISMAR, three areas of wearables research strike me as especially ripe for the next few years:
- Fusion of Wearables and Ubicomp: This is an area I’ve thought was ripe for a while, but apart from location beacons and markers for AR (Augmented Reality) there’s surprisingly little research that combines Ubiquitous Computing and Wearables. There are exceptions, like Georgia Tech’s work with the Aware Home and some work on adaptive “universal remote controls” for the disabled, but it feels like there should be good work to be done combining the localization of Ubicomp with the personalization of Wearables. It also fits nicely with Buxton’s argument that the key design work to be done is in the seamless and transparent transitions between different context-specific interfaces (a rough sketch of what I mean follows this list).
- Social Network Computation, Visualization & Augmentation: This research has been going on for a while, especially at the University of Oregon and more recently at the MIT Media Lab, but it seems to be getting traction lately. It looks at what can be done with a community of networked wearables users. Typical applications include automatic match-making (along the lines of the Love Getty that was a craze in Japan several years ago), keeping a log of chance business meetings at conferences and trade shows, understanding the social dynamics of a group, such as whether one person dominates the conversation, and visualizing those dynamics in real time (see the second sketch below).
- AugCog / Wearable Brain-Scanning: As I mentioned in a previous post, this is potentially a big breakthrough. I don’t mean in the sense that it solves a problem the wearables field has been struggling with, but rather that it could open a whole new branch of research. Neuroscience has taken off in the past 10 years with advances in brain-imaging technology like functional MRI. The downside is that you can only see what the brain is doing while the subject performs tasks in a lab setting; it’s studying the brain in captivity. Wearable sensors give us the ability to study the brain in the wild, and to correlate that brain activity with other wearable sensors. That, plus the lower price, should enable all sorts of new research into how we use our brains in our everyday lives. That, in turn, will hopefully lead to new ways to augment our thinking processes, whether by modifying our interfaces to match our cognitive load (see the third sketch below), providing biofeedback to help treat conditions like ADHD or perhaps addiction, or even physically stimulating the brain to treat conditions like Parkinson’s.
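
To make the first item a bit more concrete, here’s a minimal sketch of the kind of thing I mean: the environment broadcasts location beacons, and the wearable picks a context-specific interface while the personalization stays with the user. Everything here (the beacon IDs, the `UserProfile` fields, the interface names) is made up for illustration, not any real system’s API.

```python
# Toy sketch: a wearable hears room-level location beacons (Ubicomp's
# localization) and swaps in a context-specific interface, while the user's
# preferences ride along on the wearable itself (Wearables' personalization).

from dataclasses import dataclass


@dataclass
class UserProfile:
    """Personalization that lives on the wearable, not in the room."""
    name: str
    prefers_audio: bool  # e.g. a user who wants spoken prompts instead of a HUD


# Hypothetical mapping from beacon IDs broadcast by the environment
# to the interface the wearable should present in that context.
CONTEXT_INTERFACES = {
    "beacon:kitchen": "recipe-and-timer panel",
    "beacon:living-room": "universal remote control",
    "beacon:office": "calendar and messaging HUD",
}


def interface_for(beacon_id: str, user: UserProfile) -> str:
    """Pick the interface for the current location, tailored to this user."""
    base = CONTEXT_INTERFACES.get(beacon_id, "default wearable home screen")
    modality = "audio prompts" if user.prefers_audio else "head-up display"
    return f"{base} via {modality}"


if __name__ == "__main__":
    user = UserProfile(name="alex", prefers_audio=True)
    # Simulate the wearable hearing a sequence of location beacons.
    for beacon in ["beacon:office", "beacon:kitchen", "beacon:unknown-hallway"]:
        print(f"{beacon!r} -> {interface_for(beacon, user)}")
```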
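
The second item’s “does one person dominate the conversation” question can also be sketched in a few lines, assuming each wearable can log when its own wearer is speaking. The turn data and the 50% “dominance” cutoff below are invented for the example.

```python
# Toy sketch of conversation-dominance detection: given speaking turns logged
# by each person's wearable microphone, compute each person's share of the
# total talk time. The turn data below is made up for illustration.

from collections import defaultdict

# Each tuple is (speaker, start_seconds, end_seconds): hypothetical output of
# a wearable's own-voice detector during one meeting.
turns = [
    ("ana", 0, 40), ("ben", 40, 55), ("ana", 55, 140),
    ("carla", 140, 150), ("ana", 150, 230), ("ben", 230, 250),
]


def talk_shares(turns):
    """Return each speaker's fraction of the total speaking time."""
    totals = defaultdict(float)
    for speaker, start, end in turns:
        totals[speaker] += end - start
    grand_total = sum(totals.values())
    return {speaker: t / grand_total for speaker, t in totals.items()}


if __name__ == "__main__":
    shares = talk_shares(turns)
    for speaker, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        flag = "  <- dominating?" if share > 0.5 else ""
        print(f"{speaker:>6}: {share:5.1%}{flag}")
```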
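
And for the third item, here’s one toy take on “modify the interface to match cognitive load”: hold back non-urgent notifications while some load estimate is high. How that load would actually be sensed is exactly the open research question, so the numbers passed in below are just stand-ins.

```python
# Toy sketch of a load-aware interface: the wearable defers non-urgent
# notifications while an estimated cognitive load (however it's sensed;
# the values used here are stand-ins) exceeds a threshold.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Notification:
    text: str
    urgent: bool = False


@dataclass
class LoadAwareNotifier:
    load_threshold: float = 0.7      # above this, defer non-urgent items
    deferred: List[Notification] = field(default_factory=list)

    def deliver(self, note: Notification, cognitive_load: float) -> None:
        """Show urgent items immediately; queue the rest while load is high."""
        if note.urgent or cognitive_load < self.load_threshold:
            print(f"[show] {note.text} (load={cognitive_load:.2f})")
        else:
            print(f"[hold] {note.text} (load={cognitive_load:.2f})")
            self.deferred.append(note)

    def flush(self) -> None:
        """Release held notifications once the wearer has some slack."""
        for note in self.deferred:
            print(f"[show, deferred] {note.text}")
        self.deferred.clear()


if __name__ == "__main__":
    notifier = LoadAwareNotifier()
    notifier.deliver(Notification("New email from the team"), cognitive_load=0.9)
    notifier.deliver(Notification("Battery critically low", urgent=True), cognitive_load=0.9)
    notifier.deliver(Notification("Lunch with Sam at noon"), cognitive_load=0.3)
    notifier.flush()
```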
That’s not to say there aren’t broader and potentially frightening aspects to this technology, but the issue that concerns me most applies to our recent understanding of the brain in general: I don’t think our society is prepared yet to deal with the coming neuroscience revolution. Our justice system, our religions and even our system of government are based on the worn-out Cartesian idea that our minds are somehow distinct from the wetware of our brains and bodies. It has been clear for decades that that assumption is false, but so far we’ve tried to ignore it in spite of warnings from science fiction and emerging policy debates about mental illness, psychoactive medication, addiction as illness and the occasional the-twinkies-made-me-do-it defense. The applications envisioned by AugCog are going to force the issue further, and society doesn’t make a shift like that without serious growing pains.