Trends in Wearables

Thinking back on last week’s ISWC & ISMAR, I see three areas of wearables research that look especially ripe for the next few years:

  • Fusion of Wearables and Ubicomp: This is an area I’ve thought was ripe for a while, but apart from location-beacons and markers for AR (Augmented Reality) there’s surprisingly little research that combines Ubiquitous Computing and Wearables. There are exceptions, like Georgia Tech’s work with the Aware Home and some work in adaptive “universal remote controls” for the disabled, but it feels like there should be some good work to be done combining the localization of Ubicomp with the personalization of Wearables. It also nicely fits with Buxton’s argument that the key design work to be done is in the seamless and transparent transitions between different context-specific interfaces.

  • Social Network Computation, Visualization & Augmentation: This research has been going on for a while, especially at the University of Oregon and more recently at the MIT Media Lab, but it seems to be getting traction lately. This sort of research looks at what can be done with multiple networked wearables users in a community. Typical applications include automatic match-making (along the lines of the Love Getty that was the craze in Japan several years ago), keeping a log of chance business meetings at conferences and trade shows, understanding social dynamics of a group like whether one person dominates the conversations, and real-time visualization of those social dynamics.

  • AugCog / Wearable Brain-Scanning: As I mentioned in a previous post, this is potentially a big breakthrough. I don’t mean in the sense that it solves a problem the wearable field has been struggling with, but rather that this could open a whole new branch of research. Neuroscience has taken off in the past 10 years with advances in brain-imaging technology like functional MRI. The downside is that you can only see what the brain is doing when performing tasks inside a lab setting — it’s studying the brain in captivity. Wearable sensors give us the ability to study the brain in the wild, and to correlate that brain activity with other wearable sensors. That plus the lower price should enable all sorts of new research into understanding how we use our brains in our everyday lives. That, in turn, will hopefully lead to new ways to augment our thinking processes, whether by modifying our interfaces to match our cognitive load, providing bio-feedback to help treat conditions like ADHD or perhaps addiction, or even physically stimulating the brain to treat conditions like Parkinson’s.

    That’s not to say there aren’t broad and potentially frightening aspects to this technology, but the issue that concerns me most applies generally to our recent understanding of the brain: I don’t think our society is prepared yet to deal with the coming neuroscience revolution. Our justice system, religion and even our system of government are based on the worn-out Cartesian idea that our minds are somehow distinct from the wetware of our brains and bodies. It’s been clear for decades that that assumption is false, but so far we’ve tried to ignore that fact in spite of warnings from science fiction and emerging policy debates about mental illness, psychoactive medication, addiction as illness and the occasional the-twinkies-made-me-do-it defense. The applications envisioned by AugCog are going to force the issue further, and societies don’t make a shift like that without serious growing pains.
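On the social-dynamics point above: in its simplest form, the “does one person dominate the conversation” question reduces to tallying each speaker’s share of total talk time from diarized audio. A toy sketch of that idea — the segment data, speaker labels and function name are all made up for illustration:

```python
from collections import defaultdict

def dominance_shares(segments):
    """Given diarized (speaker, start_sec, end_sec) audio segments,
    return each speaker's fraction of the total talk time."""
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    grand_total = sum(totals.values()) or 1.0  # avoid divide-by-zero on empty input
    return {speaker: t / grand_total for speaker, t in totals.items()}

# Hypothetical diarization output from a short meeting recording
segments = [("alice", 0, 30), ("bob", 30, 40), ("alice", 40, 70), ("carol", 70, 80)]
shares = dominance_shares(segments)
# alice ends up with 0.75 of the talk time — a clear dominance signal
```

A real system would of course have to do the hard part first (who is speaking, and when, from each wearable’s microphone), but the downstream statistics and visualization start from something like this.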


AugCog

One of the most exciting talks for me was the joint ISWC/ISMAR keynote by Dr. Dylan Schmorrow, one of the program managers for DARPA. The program managers are the guys who decide what research projects DARPA should fund — the best-known PM was probably J.C.R. Licklider, who funded the Intelligence Augmentation research that led to the invention of the Internet, the mouse, the first(?) hypertext system, etc. The current program Dylan talked about was Augmented Cognition, which I’m now convinced could become the biggest breakthrough in wearable computing yet.

Intelligence Augmentation tried to support human mental tasks, especially engineering tasks, by interacting with a computer through models of the data you’re working with — that was really the start of the shift from the mainframe batch-processing model to the interactive computer model. AugCog is about supporting cognitive-level tasks like attention, memory, learning, comprehension, visualization abilities and basic decision making by directly measuring a person’s mental state.

The latest technology to come out of this effort is a sensor about the size of your hand with several near-infrared LEDs on it in the shape of a daisy, with a light sensor in the center. The human skull is transparent to near-IR (that’s how you get rid of all the heat your brain produces), so when it’s placed on the scalp you can detect back-scatter from the surface of the brain. By doing signal processing on the returned light you can detect blood-flow and thus brain activity, up to about 5cm deep (basically the cortex).

They’ve already got some promising data on detecting understanding — one of the things DARPA is especially interested in is being able to tell a soldier “Do this, then that, then the other thing… got that?” And even if he says “Yup” his helmet can say “no, he didn’t really get it….” Outside of military apps (and getting a little pie-in-the-sky), sometime down the road I can imagine using this kind of data to build interfaces that adapt to your cognitive load in near real-time, adjusting information displayed and output modalities to suit. In the more near-term, these devices are starting to be sold commercially and cost on the order of thousands of dollars, not tens or hundreds of thousands. That means a lot more brain-imaging science can be performed by a lot more diverse groups.
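The talk didn’t go into the math, but in the published near-infrared spectroscopy literature that signal processing typically rests on the modified Beer-Lambert law: a drop in detected light intensity relative to a resting baseline is converted to a change in optical density, and from there to a change in blood-chromophore concentration along the optical path. A minimal sketch of that chain — the function names and every number here are illustrative, not from the talk:

```python
import math

def delta_optical_density(intensity, baseline_intensity):
    """Change in optical density relative to a resting baseline:
    dOD = -log10(I / I0). More absorption (more blood) gives dOD > 0."""
    return -math.log10(intensity / baseline_intensity)

def delta_concentration(d_od, extinction_coeff, source_detector_cm, dpf):
    """Modified Beer-Lambert law: relate dOD to a concentration change,
    scaled by the chromophore's extinction coefficient, the
    source-detector separation, and a differential pathlength factor."""
    return d_od / (extinction_coeff * source_detector_cm * dpf)

# Illustrative readings only: baseline detector level 1000.0, level 950.0
# during a task (dimmer return as increased blood flow absorbs more light)
d_od = delta_optical_density(950.0, 1000.0)
d_c = delta_concentration(d_od, extinction_coeff=2.0, source_detector_cm=3.0, dpf=6.0)
```

Real systems measure at two or more wavelengths so they can separate oxygenated from deoxygenated hemoglobin, but the per-wavelength arithmetic is essentially this.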

For more info check out www.augmentedcognition.org, or go to the Augmented Cognition conference being held as a part of HCI-International in Las Vegas July 22-27, 2005.


Buxton at ISWC: it’s the transitions, stupid!

[I’ve been trip-blogging this past week but haven’t had convenient net access, so I’m afraid the real-time aspects of blogging are lacking… now that I’m hooked into the wireless at DEAF04 here’s some of my backlog.]

Bill Buxton’s ISWC keynote made a lot of points, but the one that struck me most was derived from three basic laws:

  1. Moore’s Law: the number of transistors that can fit in a given area will double approximately every 18 months.
  2. God’s Law (aka the complexity barrier): the number of brain cells we have to work with remains constant.
  3. Buxton’s Law: technology designers will continue to promise functionality proportional to Moore’s Law.

The problem then is how to deliver more functionality without making the interface so unwieldy as to be completely unusable. Buxton went on to talk about the trade-off between generality and ease-of-use: the more specifically-designed an interface the easier it is to use but the more limited its scope.

The key, he argues, is to make lots of specific applications with interfaces well-suited for their particular niche. Then you don’t need a single general interface, but instead can concentrate on the seamlessness and transparency of transitions between interfaces.

It’s a nice way of thinking about things, especially when thinking about the combination of wearables and ubicomp (see next post).


More on Red vs. Blue

Kevin Drum’s latest comment on Tom Wolfe rings very true for me:

In other words, they [Red-State folk] disagree with us, but not so much that they can’t be brought around or persuaded to vote for us based on other issues. Too often, though, a visceral loathing of being lectured at by city folks wins out and they end up marking their ballots for people like George Bush.

I think that’s spot-on — and it works both ways too. My step-dad and I are a great example I think (hi Frank!) — we get along great and pretty much share the same core values when it comes to life, but are completely at loggerheads when it comes to arguing politics. My sense (and he’s welcome to correct me here) is the thing that sets him arguing most is any argument that smacks of intellectual/long-haired-hippie/lecturing elitism — almost regardless of the policy in question. I’m on the other side of that equation — I claim to hate Bush because of his incompetence and policy (and to some extent I do), but what really gets my teeth on edge about him is the anti-intellectualism he sides with and stands for. That more than anything is what drives me, a third-party-voting fiscal conservative who thought Iraq was a threat that needed to be dealt with, further and further toward the position of the Left.

Don’t think for a minute that the pundits of both sides aren’t doing this to us on purpose…


A mini-mandate, but for what?

I remember back when Reagan was running against Carter the word mandate meant a clear sign from the people that they supported a candidate, but it seems the word has eroded to the point that today it means “squeaked by with a 3% margin.” But at least he got more votes than the other guy this time, so I suppose that’s at least a mini-mandate. The question is, what’s it a mini-mandate for?

I’m pretty sure it’s not a mandate for:

  • torturing and sodomizing our prisoners-of-war
  • borrow-and-spend economic policy
  • having a choice of if and when to go to war, but going in without proper planning anyway
  • lying to cover your ass instead of admitting a mistake and moving on
  • US imperialism

I expect most Bush supporters would agree on those points, though they may take offense that I’d even bring the topics up. I’m not nearly as certain it wasn’t a mandate for these other points though:

  • the erosion of First and Fourth amendment rights in the name of security
  • weakening of environmental regulations
  • putting an end to the suppression and misery long endured by the rich and powerful
  • imprisonment or just plain “disappearing” people without trial, again in the name of security
  • isolationism wrt Europe & the UN
  • nation-building (ironically enough) wrt the Middle East
  • discrimination against gays
  • giving tax money to religious organizations


Red vs. Blue, by population

Electoral-Vote.com has a nice pictorial map of how the states came out, normalized by population. Makes me feel a little less outnumbered than the traditional map does, especially considering California is over 12% of the nation’s population.

I’d also like to point out to those who keep talking about “Liberal California” that the split was only 54.6% to 45% for Kerry — lower than Hawaii or Illinois. It’s a big state, we contain multitudes.


Fog Screen

OK, I don’t know where I’d put it or exactly what I’d do with it yet, but I want one of these. FogScreen is a large wall of fog kept in a thin sheet using laminar flow, then used as a projector screen. That part has been around for a while, but they’ve recently added the ability to “write” on the screen like some wizard writing runes in the air, using the same ultrasound-tracked pens used in virtual-whiteboard systems. Check out their video.


Where are the new innovations in AR?

As in previous years, the big theme here at ISMAR (the International Symposium on Mixed and Augmented Reality) seems to be registration and tracking — how to detect where objects and people are in the physical world so you can overlay graphics as accurately as possible. AR isn’t my main field, but I’ve had a couple of conversations so far about how we’re really reaching a point of diminishing returns. It’s great that we’re seeing minor incremental improvements in this area, but what we’re really lacking are new, innovative uses of AR to push the field further. Unfortunately, it sounds like a lot of these new innovations didn’t make the cut for the conference, at least in part because they lacked strong evaluation or a quantifiable contribution to the field — it’s much easier to judge the quality of a new camera-based image-registration method than it is to judge the usefulness of a brand new application.

The Software Agents field was a response to a similar stagnation in Artificial Intelligence. AI researchers had a lot of good but imperfect tools that had been developed over the years, but kept trying to solve the really hard general problems. Software Agents grew out of the idea that it was OK if your algorithm wasn’t perfect in every condition so long as you cleverly constrained your application domain and designed your user interface to cover for those imperfections. It was a struggle to get acceptance of the idea at first, and in the end a few of the big players in the new domain went and founded their own conference rather than try to fit their own work to the evaluation metrics used for more traditional AI papers. Hopefully it won’t take such a dramatic move on the part of AR researchers to breathe new life into this field.


Wearable on an iPaq

Every year I think it’ll finally be the year we wearables folk can swap out our custom hardware for an off-the-shelf palmtop with a head-mounted display and one-handed keyboard connected to it, and every year it’s just not quite there. Looks like we’re finally getting there: Kent Lyons from Georgia Tech has now swapped out his CharmIT-PRO for an iPaq.

It’s still not quite plug-and-play: he had to hack the original Twiddler-1 (the serial-port one, not the current PS/2 version) with a different power connector, and the CF-IO card he’s using to connect the iPaq to his MicroOptical display has fairly limited bandwidth, so he had to hack his X server to blit out only the changes to the active window. Oh yeah, and he wrote a new Twiddler driver for the iPaq.

He’s promised to put up a how-to guide on the Web soon — I plan to keep bugging him till he does :).



