Augmented reality meets Twitter meets CafePress? Something like that — squidder.com has hacked together PaperTweet3d, basically a barcode that encodes your Twitter username on your shirt plus an augmented-reality system that automatically overlays your latest tweet over it.
There are exactly 52 playing cards in a standard deck. There are also exactly 52 shots in the famous shower scene in Alfred Hitchcock's movie Psycho. From this amazing coincidence comes 52 Card Psycho, a new augmented-reality experimental film piece my brother recently designed in collaboration with the Future Cinema Lab at York University:
52 Card Psycho is an installation-based investigation into cinematic structures and interactive cinema viewership; the concept is simple: a deck of 52 cards, each printed with a unique identifier, are replaced in the subject's view by the 52 individual shots that make up Hitchcock's famous shower scene in Psycho. The cards can be manipulated by the viewer: stacked, dealt, arranged in their original order or re-composed in different configurations, creating spreads of time, and allowing a material interaction with the 'cinema screen'— an object which normally is removed and exalted, and unchangeable in its linearity.
Back in 2001 I was a guest speaker at the Third International Spearman Seminar on Extending Intelligence, a seminar hosted by the Educational Testing Service's R&D Division and Sydney University's Department of Psychology. Most of the other speakers were from education or psychology, but I and a couple of other speakers were brought in to provide a view from outside the field. I'm pleased to say those talks have finally been turned into a textbook, Extending Intelligence: Enhancement and New Constructs, including my own chapter "Challenges and Opportunities for Intelligence Augmentation."
Sunglasses company Rodenstock presented Informance, a "design study for the spectacles of the future," at this year's OPTI trade fair in Munich. Made in conjunction with UK-based Cambridge Consultants, these are a pair of sunglasses with a 160 x 120 pixel head-up display embedded in the glasses. The design looks similar to the prism-in-glasses approach that MicroOptical (now Myvu) took about 10 years ago.
One of the big problems with the old in-glasses-style MicroOpticals was that the field-of-view was small enough that you really needed a custom fit to put the display in the right location, and even then it was hard to use when walking or running. MicroOptical's display was at the center rather than the edge of the glasses, which may make a difference — given that Rodenstock intends athletes to use this to see heart-rate, time, etc., I certainly hope so.
I used to love my locally-cached copy of Wikipedia on my Treo 650 (after two years, a new update of the TomeRaider version was created back in February), but now that I've turned in my Treo for an iPhone I'm looking for a replacement reader. I'm not there yet, but this desktop-based offline Wikipedia reader is a good start to the project. As the author puts it:
Isn't the world of Open Source amazing? I was able to build this in two days, most of which were spent searching for the appropriate tools. Simply unbelievable... toying around with these tools and writing less than 200 lines of code, and... presto!
More on my efforts to get a locally-cached copy of Wikipedia readable from my iPhone (perhaps with the iPhone book-reader app that was recently hacked together) once it comes back from repairs...
(Thanks to Fairyshaman for the link!)
There's a call for papers out for the 11th International Symposium on Wearable Computers (ISWC 2007), which will be held in Boston on October 11-13 (tentative dates). Abstracts are due April 22nd, papers due April 26th.
Powercast (formerly Firefly Power Technologies, a spin-off based on University of Pittsburgh research) made a splash at CES this year with their dime-sized receiver that harvests RF energy from a nearby wall-wart transmitter. Based on their patent and related tech from Pitt, the technology looks pretty darned simple (so simple I'm surprised there's no prior art, but then this isn't my field). It's basically just an antenna with a bunch of taps, each tap consisting of an inductor to resonate with the desired RF frequency and a rectifying diode to turn the energy into DC. That DC voltage is integrated across a series of capacitors and stored in another capacitor.
I've not seen any detailed specs on how efficiency drops off with range from the transmitter, though a Business 2.0 write-up claims their range is only about 3 feet, with voltages too small for laptops but good enough for small devices. Their tech has also been tested for recharging wireless sensors at the Pittsburgh Zoo, and Philips is apparently coming out with a wirelessly-powered lightstick using the technology later this year.
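The write-up doesn't give numbers, but basic free-space physics shows why the range is so short: received power falls off with the square of the distance from the transmitter. Here's a back-of-the-envelope sketch using the Friis transmission equation — the 1 W / 915 MHz figures are my own guesses, not Powercast's specs:

```python
# Back-of-the-envelope: ideal free-space received power vs. distance
# (Friis equation). All numbers are my assumptions, not Powercast specs.
import math

def friis_received_power_mw(pt_mw, freq_hz, dist_m, gt=1.0, gr=1.0):
    """Ideal free-space received power; real indoor numbers will be lower."""
    wavelength = 3e8 / freq_hz
    return pt_mw * gt * gr * (wavelength / (4 * math.pi * dist_m)) ** 2

# Guessing a 1 W transmitter in the 915 MHz ISM band:
for feet in (1, 3, 10):
    pr = friis_received_power_mw(1000.0, 915e6, feet * 0.3048)
    print(f"{feet:>2} ft: ~{pr:.3f} mW available before rectifier losses")
```

Even before the rectifier's own losses you're down to under a milliwatt at 3 feet, which squares nicely with the "small devices only" claim.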
Arcade Reality is a cute little Augmented Reality game for the Palm Treo cellphone. Video is displayed from the phone's camera, and monsters are (randomly) pinned to points in the real world. Move your cellphone around to center them in the screen, then blast them with your lasers. (Thanks to Stig for showing off the game...)
One fashion faux pas I noticed in Apple's iPhone presentation on Tuesday: the basic black bluetooth earpiece. Come on Steve — where's the bright flashing LED? How am I supposed to assert my dominance over other geeks if I can't blind them from across the room just by turning my head? Heck, I can find my car keys in the dark with my year-old Plantronics headset — at least give me laser beams or something! (And if you can make it beep with an original Star Trek communicator sound when I get a call, now that would really show 'em who's the king of fashion!)
Bruce Schneier's Crypto-Gram points to some impressive work done by researchers at the University of Washington showing how Apple's Nike + iPod kit can be used to track people. The kit consists of a transmitter that you put in your shoe and a receiver you plug into your iPod. The transmitter wakes up whenever it gets shaken and sends out pedometer info every second, and the receiver then uses that info to give voice and visual feedback on your pace and how far you've run. The UW team discovered that each transmitter sends out a unique ID so the receivers can distinguish among several in the area, and then built several PDA-sized units to listen for IDs and log the data either to flash memory or retransmit it over Wi-Fi or SMS. They also built software that would trigger a USB camera whenever a particular ID went by, and wrote a visualization tool that shows either historical or real-time overlays of sensor IDs and/or pictures taken on top of Google Maps. Details are in their paper, and they also have a video.
The threat models they lay out aren't government surveillance so much as jealous boyfriends, exes and stalkers, and to some extent professional thieves and muggers, unethical organizations tracking their members (or their competition's members), and stores tracking their customers. Except for muggers (which just involves detecting whether a passing jogger is likely to have an iPod or other cool gadgets on them), all the scenarios they discuss involve the use of a network of their relatively cheap sensors, each one adding a single location to the overall surveillance network. A stalker would place trackers at strategic locations, then wait for them to phone home with the unique IDs they see. To link a unique ID with a particular person he just has to get close to his target (or for that matter just watch her jog by) and then note the ID that's being broadcast. Or he can leave one tracker in the bushes by his target's front door and note what ID it picks up (he learns when she comes and goes that way, too). And since consumers are encouraged to "just drop the sensor in their Nike+ shoes and forget about it," the trackers will work even when the target isn't actually jogging or using the device.
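To make concrete just how little machinery the attack needs, here's a toy sketch of the data model (mine, not the UW team's software): each cheap node just logs IDs with timestamps, and a single linkage of ID to person turns the whole log into a movement history.

```python
# Toy illustration of the tracking scheme described above -- not the UW
# team's code. Each node logs (timestamp, node_location) per sensor ID;
# linking an ID to a person once makes every sighting a location fix.
import time
from collections import defaultdict

sightings = defaultdict(list)   # sensor_id -> [(timestamp, node_name)]
identities = {}                 # sensor_id -> person, learned out of band

def log_sighting(sensor_id, node_name):
    sightings[sensor_id].append((time.time(), node_name))

def link(sensor_id, person):
    """E.g., watch the target jog past one known node."""
    identities[sensor_id] = person

def track(person):
    ids = [sid for sid, who in identities.items() if who == person]
    return sorted(hit for sid in ids for hit in sightings[sid])

log_sighting("0x1234CAFE", "bushes-by-front-door")
link("0x1234CAFE", "target")
print(track("target"))   # full movement history from one linkage
```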
The work is impressive, but I feel like by focusing on the Nike + iPod design it's pointing to the smoke instead of the fire. Yes, Apple probably could have designed their system to make this sort of tracking more difficult. Ditto the RFID chips in smart cards, passports, highway toll-payment boxes, quick-payment key fobs and consumer products, not to mention Bluetooth devices and cellphones. But the main technology trend that's making this sort of tracking possible, I would argue, is not the plethora of remotely-readable unique IDs we carry everywhere we go so much as the small, cheap hardware that even a moderately technical attacker can turn into his very own sensor network. RFID and transmitters are a ready-made "fingerprint" that such sensor networks can read easily, but as machine vision and pattern recognition technology improves there will be an increasing number of features that uniquely identify you to a sensor network, including minor differences in hardware you carry, how you walk or what you look like. This is not to say we shouldn't encourage companies to make tracking by RFID harder to do, but I think it's at best going to buy us 5-10 years before you'll be able to buy your own automatic person-tracking sensor network at any online spy-shop. We'd better be thinking now about what kind of social and legal systems we'll want once that day comes.
The overall winner of this weekend's Open Hack Day at Yahoo! was Blogging in Motion, which mounts a camera and pedometer in a handbag and then uses the Flickr API (and I presume a cellphone) to automatically blog one picture every minute. Sounds like a purse version of Steve Mann's Wearable Wireless Webcam and, more recently, Microsoft Research Cambridge's SenseCam system, all hacked together in just one 24-hour marathon.
C|Net Asia has a review of Levi's RedWire DLX Jeans, which include a watch pocket for your iPod Nano and a mini joystick on the outside for controlling it. Looks like Levi's also groks that the iPod is as much a fashion accessory as it is an MP3 player, and matches accordingly:
The material is rather like a pair of Levi's 523s. Tough and with a yielding woven pattern. In affirmation of the MP3 player it carries, the DLX's detailing are colored a classic iPod white; from rivets to the button-fly and right down to the use of white embroidered threads.
(Thanks to Aileen for the link!)
Solestrom Swimwear has a new bikini with a built-in UV Meter so you can figure out how long before you've had too much sun. (Looks like it's just a meter — it would impress me more if it let you input how sun-sensitive you are and it gave you a countdown of how long you had left before burning.)
From their press release:
The bikini collects UV data through a smart fabric belt, and reports the UV index to the wearer with 0.01 accuracy. The electronic components are neatly built into the removable belt, and can be worn even underwater. Next on the list is a lower cost cousin, the SmartSwim™ UV Index Detector Bikini, which has UV sensitive beads that change color with the level of UV intensity. The reading gives more of a range rather than an accurate number, but for those who simply need to know if the UV is low, moderate or high, this bikini fits the bill.
(Link via Retrospectacle.)
My friends Bill & Amy have set up a page for their Personal Aura Device, a set of sound-reactive LED poi and clothing they're designing and building for Burning Man this year. Seeing them in action is amazing — they have one controller with a microphone that wirelessly controls boards fitted with extremely bright red, green and blue LEDs. The main music mode ties the intensity of each color to a different frequency band in the audio, so bass and drums beat in the blues, mid-tones in the greens, and vocals and guitar are followed by the red. It's pretty hypnotic to watch, especially when they've got two sets of poi plus costuming all pulsing in unison to the music.
Apple and Nike are pre-promoting the Nike + iPod Sport Kit, a Nike shoe with a built-in pocket for a pressure sensor that wirelessly sends your pace to your iPod Nano. The iPod will then provide "workout-based voice feedback" and "Nike sport music content." Due out in late June for a suggested retail price of $29.
(Thanks to David Merrill for the link...)
From the publicity chair for this year's ISWC: "Submissions now open for the Tenth IEEE International Symposium on Wearable Computers! Submissions can include full papers (8 pages), short papers (4 pages), poster papers (2 pages), demonstrations, tutorials and workshops, and exhibits. All submissions are due on April 21st at http://iswc.net."
See the call for papers for more details.
Wearables in 2005
Bradley Rhodes and Kenji Mase
In July 1996, one year before the first International Symposium on Wearable Computers, DARPA sponsored a workshop entitled "Wearables in 2005" (www.darpa.mil/MTO/Displays/Wear2005). Attendees predicted how wearable computers might be used in 2005 and identified key technology gaps that needed to be filled to make their vision a reality. In October 2005, the 9th Annual International Symposium on Wearable Computers was held in Osaka, Japan, the first to be held in Asia. Participants presented a wide range of research from both industry and academia, spanning 13 countries and weaving together such diverse fields as interface design, hardware and systems, gesture and pattern recognition, textiles, augmented reality, and clothing design.1 Many of the themes would have sounded familiar in 1996, with continuing improvements in ergonomics and power management as well as gesture recognition and augmented reality.
As you would hope, the field has also developed in new directions in the past decade, with a much greater emphasis on large-scale recording and annotation of everyday activities, on the science and engineering of clothing design, and on performing thorough quantitative evaluations of potential input devices. We have also seen a large increase in the use of accelerometers, smart phones, and RFID readers as researchers leverage continuing drops in cost and size in the consumer electronics world.
As the largest primary conference for wearables researchers, ISWC provides a good snapshot of the state of the field. So, with the benefit of hindsight, here are some highlights of how wearables research actually looked in 2005.
My friend Jay got me an SeV Sport TEC jacket for Christmas — I haven't used the "Personal Area Network" channels for iPod headphones or the like (yet), but man is it nice to have all these pockets. I gave it a test drive in Joshua Tree National Park last weekend, and it was great to have one pocket for the camera, another for the wallet, a third for "little important things" like matches, LED flashlight & pocketknife, a fourth for trail maps, a fifth for trail mix, a springy lanyard for the car key, a back pouch for the removable sleeves, etc. I kept finding new pockets all through the trip, each with a little card in it printed with suggestions for what I might use it for. Definitely the great geek-gift of the season!
Denim giant Levi Strauss said on Tuesday it had designed jeans compatible with the iPod music player, featuring a joystick in the watch pocket to operate the device.
The Levi's RedWire DLX Jeans for men and women, which will be available this fall, also have a built-in docking cradle for the iPod and retractable headphones.
BBC News reports the jeans will be launched around August for about $200.
(Update: forgot to thank Aileen for the link!)
One of the gadgets announced at CES last week was Celestron's SkyScout, a hand-held viewfinder that identifies stars being viewed, based on GPS plus a compass and accelerometer to tell your location and where in the sky you're looking. Cute concept — assuming they did a good job on the implementation, it's a nice example of hand-held augmented reality that avoids most of the normal difficulties: the environment being tagged (the night sky) is extremely well-modeled and predictable, the user tends to be looking in one place rather than walking around or moving his viewfinder, it's always outdoors with a good view of the sky so GPS always works, and it's night so you don't have to worry about the sun washing out the display (it also uses both text and audio, so presumably you can also avoid having the display wash out your night vision).
(Link via B.K. DeLong.)
Marc Davis and others working at UC Berkeley's Garage Cinema Research group have some interesting work on using a person's context when taking a photo with a cellphone (specifically time, location and people who are around) to predict who that photo is likely to be sent to [paper, video]. They're using that prediction to offer a "one-click" list of people with whom to share a photo that's just been taken, and report that 70% of the time the correct sharing recipients are within the top 7 people listed. In their study, they found that time was the best predictor of who a likely recipient would be, even beating out what other people were around (determined by detecting other cellphones in the area via Bluetooth).
It's interesting to compare this to my own work [paper] using the Remembrance Agent on a wearable computer, where I found relatively little benefit in using either location or people in the area to suggest notes I had taken in previous conversations that might be useful in the new situation. It's clear that the application and the user's lifestyle make a huge difference. All my notes were taken when I was a grad student, so over a third of my notes were taken in just one of three locations: my office, the room just outside my office and the main classroom at the Media Lab. That's too clumped to help distinguish among the wide variety of topics I'd talk about in those locations. On the other hand, people in the area had the reverse problem: since I'd be giving demos and talks all the time, over a third of the people I was with when taking notes showed up only once. The "people who are around" feature was too sparse to be helpful. (I never did test time-of-day or day-of-week as feature vectors, because I dropped that feature from the RA when I wrote version 2, but I suspect they would have the same problem location does.)
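A minimal diagnostic makes the two failure modes concrete: a feature can be too clumped to discriminate, or too sparse to ever recur. This sketch uses illustrative toy data, not my actual notes:

```python
# For a candidate context feature, measure clumping (share of mass in the
# few most common values) and sparsity (share of values seen only once).
# Illustrative sketch with made-up data, not the Remembrance Agent code.
from collections import Counter

def feature_diagnostics(values, top_k=3):
    counts = Counter(values)
    n = len(values)
    clump = sum(c for _, c in counts.most_common(top_k)) / n
    singletons = sum(1 for c in counts.values() if c == 1) / n
    return clump, singletons

locations = ["office"] * 20 + ["hallway"] * 10 + ["classroom"] * 8 + ["cafe", "lab"]
people = [f"visitor{i}" for i in range(30)] + ["advisor"] * 10

for name, vals in [("location", locations), ("people", people)]:
    clump, single = feature_diagnostics(vals)
    print(f"{name}: top-3 share {clump:.0%}, singleton share {single:.0%}")
```

A feature is only a useful predictor somewhere between those extremes — common enough to recur, varied enough to distinguish situations.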
As I mentioned a few weeks ago, the iPod Nano is small enough that the headphones are becoming the limiting factor for size. Now Macally is drawing the obvious design conclusion with their mTune Chordless Stereo Headset by integrating the Nano right into the headphones.
(Thanks to Rawhide for the link.)
Winner of Engadget's Halloween Costume Contest: a functional Canon PowerShot S200 costume, with working rear LCD viewfinder. (Thanks to Nerfduck for the link.)
Wow. Techworld is reporting on a demonstration of wireless communications sent at 3.7Mbit/s to a radius of 18 miles using just 50mW and an omnidirectional antenna, via a technology called xMax developed by xG Technology. If this is for real, that's on the order of 1000 times more efficient than GSM, CDMA or WiMax. The company plans to target long-range wireless, but Princeton EE professor Stuart Schwartz claims he has seen it also demonstrated as a personal-area network, giving 2Mbit/s over 40 feet using just 3 nanowatts.
If this is all true then it's revolutionary. To his great credit, Techworld reporter Peter Judge has a full companion article laying out the several places where reporters have to take the company at its word about the technology and the honesty of the demo, as well as remaining potential hurdles such as preemptive regulation and the possibility of reflections or interference once other transmitters start using the same system. But we'll know soon enough whether it's more than just snake oil, and if so it's going to be darned impressive.
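For what it's worth, textbook formulas say the headline claim sits right at the edge of what ideal free-space physics allows. A sanity-check sketch — the carrier frequency and bandwidth are my assumptions, since xG hasn't published those details:

```python
# Sanity check with textbook formulas: Friis free-space loss, thermal
# noise floor, Shannon capacity. Frequency and bandwidth are my own
# assumptions, not published xMax parameters.
import math

PT_DBM  = 10 * math.log10(50)      # 50 mW transmitter
FREQ_HZ = 900e6                    # assumed ISM-band carrier
DIST_M  = 18 * 1609.34             # 18 miles
BW_HZ   = 3.7e6                    # assume ~1 bit/s per Hz

wavelength = 3e8 / FREQ_HZ
path_loss_db = -20 * math.log10(wavelength / (4 * math.pi * DIST_M))
pr_dbm = PT_DBM - path_loss_db
noise_dbm = -174 + 10 * math.log10(BW_HZ)   # kTB at room temperature
snr = 10 ** ((pr_dbm - noise_dbm) / 10)
capacity = BW_HZ * math.log2(1 + snr)

print(f"received: {pr_dbm:.1f} dBm, noise floor: {noise_dbm:.1f} dBm")
print(f"Shannon limit: {capacity / 1e6:.1f} Mbit/s")
```

With those assumptions the Shannon limit comes out around 7 Mbit/s — so the demo isn't physically impossible, but it leaves essentially zero margin for terrain, interference or receiver noise figure, which is exactly where the skepticism belongs.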
(Thanks to Kurt for the link.)
I just posted pictures from the wearable-technology fashion show that was part of the ISWC 2005 program, sponsored by the KANSAI IT Synergistic Society. This was the third ISWC to include such a show, the first being Beauty and the Bits hosted by the MIT Media Lab at the first ISWC, and the second hosted by Komposite at ISWC 2002 in Seattle.
There were a few practical application garments being shown at this show, but most leaned towards the fashion end, with dance, music and LEDs playing prominent roles. My apologies for the quality of some of the pictures — my little hand-held camera doesn't work well in low lighting.
Yesterday, Socket Communications announced that they'll be coming out with a Bluetooth barcode-scanning ring, with full production in Q1 2006. (No word yet on how it will compare with Symbol Technologies' SRS-1 Ring Scanner, which has already been on the market for several years.)
(Thanks to Nerfduck for the link!)
The winner of this year's best paper award at ISWC (the first ISWC to have such an award) was a paper by Don Patterson from the University of Washington called Fine-Grained Activity Recognition by Aggregating Abstract Object Usage. All the authors got certificates and Don took home a new video iPod as the prize.
This was one of several papers presented that used an RFID reader in a glove, in this case to classify what kind of activity a person is conducting based on the sequence of objects she has touched. This would be useful, for example, for alerting a care worker if a resident of an assistive-living home had stopped eating.
From the abstract:
In this paper we present results related to achieving fine-grained activity recognition for context-aware computing applications. We examine the advantages and challenges of reasoning with globally unique object instances detected by an RFID glove. We present a sequence of increasingly powerful probabilistic graphical models for activity recognition. We show the advantages of adding additional complexity and conclude with a model that can reason tractably about aggregated object instances and gracefully generalizes from object instances to their classes by using abstraction smoothing. We apply these models to data collected from a morning household routine.
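Their models are a sequence of increasingly powerful probabilistic graphical models, which I won't try to reproduce here. But the core intuition — that the objects you touch give away the activity — can be shown with a much dumber toy classifier (my own naive-Bayes-flavored sketch, not the paper's method):

```python
# Toy activity classifier from object-touch events. The paper uses much
# richer probabilistic graphical models; this naive-Bayes sketch just
# shows that object sequences reveal activities.
import math
from collections import Counter

training = {
    "making_tea":  [["kettle", "cup", "teabag", "spoon"],
                    ["kettle", "teabag", "cup"]],
    "eating_meal": [["plate", "fork", "knife", "cup"],
                    ["plate", "spoon", "cup"]],
}

# P(object | activity) with add-one smoothing over the observed vocabulary.
vocab = {o for seqs in training.values() for s in seqs for o in s}
model = {}
for act, seqs in training.items():
    counts = Counter(o for s in seqs for o in s)
    total = sum(counts.values())
    model[act] = {o: (counts[o] + 1) / (total + len(vocab)) for o in vocab}

def classify(touched):
    scores = {act: sum(math.log(probs[o]) for o in touched if o in probs)
              for act, probs in model.items()}
    return max(scores, key=scores.get)

print(classify(["kettle", "cup", "spoon"]))   # -> making_tea
```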
Here are all six nominees for best paper from ISWC'05, which were the top 10% of full papers based on reviewer ratings:
Fine-Grained Activity Recognition by Aggregating Abstract Object Usage (author's PDF), by Donald Patterson, Dieter Fox, Henry Kautz, Matthai Philipose (U. Washington and Intel Research, Seattle)
ReachMedia: On-the-move interaction with everyday objects (author's PDF), by Assaf Feldman, Emmanuel Munguia Tapia, Sajid Sadi, Pattie Maes and Chris Schmandt (MIT Media Lab)
A Design Process for the Development of Innovative Smart Clothing that Addresses End-User Needs from Technical, Functional, Aesthetic and Cultural View Points by Jane McCann, Richard Hurford and Adam Martin (University of Wales)
Pictorial Depth Cues for Outdoor Augmented Reality by Jason Wither and Tobias Höllerer (University of California, Santa Barbara)
A Body-mounted Camera System for Capturing User-view Images without Head-mounted Camera by Hirotake Yamazoe, Akira Utsumi and Kenichi Hosaka (ATR)
The Impacts of Limited Visual Feedback on Mobile Text Entry for the Twiddler and Mini-QWERTY Keyboard (author's PDF) by James Clawson, Kent Lyons, Thad Starner and Edward Clarkson (Georgia Tech)
It's decided: next year's International Symposium on Wearable Computers will be in Montreux, Switzerland on October 11th-13th, with workshops and tutorials after the main conference on October 14th. This'll be co-located with UIST, which has its doctoral symposium on the 15th and main conference October 16th-18th.
The conference, by the way, will be held in Casino Montreux. I wonder if we can get back to our roots and try out some roulette-wheel predicting wearables? ;-)
I've always pushed for more "artistic" papers at ISWC, but there's often a culture and communications gap between the technical and artistic communities. Joanna Berzowska's presentation on her animated kinetic dresses was a wonderful exception. The goal of her project was entirely aesthetic — the hemline of one dress rises and lowers as if betraying (or thwarting) the wearer's secret desires, and brooch flowers open and close of their own accord on the neckline of another dress. But her presentation was full of all the technical details and lessons necessary to accomplish these creations. A couple of examples:
They used Nitinol (memory wire) sewn into felt to cause the motion. After trying many configurations, they determined that a tight coil was the best configuration to "set" the Nitinol, as it created the largest motion.
Felt was the perfect fabric for a number of reasons. It's sturdy, so when the Nitinol relaxes back to its non-set shape the felt will pull the dress or flower back to the normal position. It's thick, so circuitry and wires can be felted into the fabric itself. And it's a good insulator of heat and electricity, so the wearer is protected if there's a short. It's even fairly fire-retardant.
Yesterday I took a tutorial on building a wearable computer from the Intel-based Stargate board. Both the Stargate and, for that matter, the iPaq have a good form factor for a Tin-Lizzy-style wearable (small, low power, and with USB host support for a one-handed keyboard) except for the big problem that they don't have VGA-out to drive a head-up display. Kent Lyons has developed a nice hack to get around this limitation. (Technical summary follows.)
The hardware Kent's using is IO Data's Compact-Flash XGA card. Compact Flash doesn't have enough pins to memory-map, so the CFXGA card uses the BLT interface to send just the pixels that change. (The card is designed for giving PowerPoint presentations from your handheld, so they're not worried about fast-changing scenes.) Kent leverages this by using his own modified X server that can use the BLT interface. It's only 640x480 at 16 bpp, but it's enough for text and simple interfaces on a head-up display. There's code and a brief how-to at Kent's website, as well as an email address where you can bug him to add more detail :).
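The general trick — diff the framebuffer and push only the changed spans across the slow link — is easy to show in miniature. A schematic sketch of dirty-span detection (mine, not Kent's X server code):

```python
# Schematic dirty-span update, the same idea the modified X server uses
# over the CFXGA's BLT interface: diff against the card's last-known
# framebuffer and transmit only the runs of pixels that changed.

def dirty_spans(prev, curr):
    """Yield (row, start_col, pixels) runs that differ between two frames."""
    for y, (old_row, new_row) in enumerate(zip(prev, curr)):
        x = 0
        while x < len(new_row):
            if new_row[x] != old_row[x]:
                start = x
                while x < len(new_row) and new_row[x] != old_row[x]:
                    x += 1
                yield (y, start, new_row[start:x])
            else:
                x += 1

frame0 = [[0] * 8 for _ in range(4)]
frame1 = [row[:] for row in frame0]
frame1[1][2:5] = [7, 7, 7]            # a small region changes

for y, x, pixels in dirty_spans(frame0, frame1):
    print(f"blit row {y}, col {x}: {pixels}")   # blit row 1, col 2: [7, 7, 7]
```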
I picked up an iPod Nano as a birthday gift to myself, and love it. It's small and light enough to fit in my shirt pocket, and I'm finding that even 2 Gig is enough for a wide range of my randomly-sampled music library (plus podcasts, which is really what I want to use it for).
The one big problem I have with it (besides still needing to buy some sort of sleeve to protect its screen) is what to do with the headphones when I'm not using it. The ones that come with it are always a tangled mess after sitting in my pocket, and the Javo Edge retractable kind seemed fine on my normal iPod but now actually takes up more room and is twice as thick as the iPod itself!
The problem of "what do you do with it when it's not being used" is one that watches and belt-clip pagers have solved but iPods and cellphone headsets really haven't yet. Even wireless earpieces for cellphones don't have a place to go when they're not in use, though at least they don't get tangled. It's a harder industrial design problem than you might think, and one that I think often gets overlooked.
I'll be blogging from the 2005 International Symposium on Wearable Computers in Osaka this week. Please enjoy!
This sounds fun: The ROBO CAFE [jp→en] has just opened up in Osaka, Japan, where customers can watch Nuvo dance, view a Segway, and of course have their crumbs cleaned by a Roomba. Sounds like a perfect place to unwind after a day at the wearable-computing conference I'll be attending there in a couple of weeks :).
(Thanks to Rebecca for the link!)
Sun Labs has developed a cute little Java-programmable board called the Sun SPOT (Small Programmable Object Technology), along the lines of the Berkeley Motes project and other small Ubiquitous Computing sensor boards:
Based on a 32 bit ARM-7 CPU and an 11 channel 2.4GHz radio, Sun SPOT radically simplifies the process of developing wireless sensor and transducer applications. The platform enables developers to build wireless transducer applications in Java™ using a sensor board for I/O, an 802.15.4 radio for wireless communication, and use familiar Integrated Development Environments (IDEs), such as NetBeans™ to write code.
The system uses the IEEE 802.15.4 wireless standard that's designed for short-range (< 10 meters, same as Bluetooth) with low data rates but also low latency and ultra-low power consumption — pretty much what you need for individual sensors.
Semapedia is a project to annotate physical locations with 2D barcodes that link to Wikipedia articles. With the Semacode software running on your PDA/cellphone, you scan a barcode and it'll take you to the linked-to article. There've been a lot of attempts at this sort of physical annotation of the world, WorldBoard being one of the earlier ones I remember.
I like the concept in theory, but I'm always disappointed by the quality and variability of the links. Do I really want a link about privacy just because I see a no-trespassing sign, or about the Hofburg Imperial Palace just because I'm standing there? Perhaps, if I'm in the mood for ironic social commentary or I'm a tourist with an interest in architecture, but most people won't be the right audience for any given link. One man's art is another man's graffiti, and the world-annotation systems I've seen are currently little more than virtual spray paint.
The variability is the real key. If 90% of the tags I come across link to something interesting to me, I'll probably follow every one I see. If only 50% link to something interesting, I might look at the human-readable title printed on the tag and then decide whether I think it likely that the article will be well-written and interest me. If 90% of the tags wind up being useless, I won't even bother reading the title — and then it won't matter that there are 10% that I would have enjoyed if I had bothered to look.
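You can put a toy model behind that intuition: reading a tag's title costs a little attention, following a link costs more, and the hit rate decides whether scanning titles is worth it at all. The numbers here are made up, just to show the threshold effect:

```python
# Made-up costs and payoffs for the intuition above: whether reading tag
# titles pays off depends almost entirely on the hit rate.
def payoff_per_tag(hit_rate, value=10.0, read_cost=1.0, follow_cost=3.0):
    """Expected value if you read every title and follow the likely hits."""
    return hit_rate * (value - follow_cost) - read_cost

for rate in (0.9, 0.5, 0.1):
    print(f"hit rate {rate:.0%}: {payoff_per_tag(rate):+.1f} per tag")
```

At a 10% hit rate the expected payoff goes negative, so the rational reader stops looking at titles entirely — and the 10% of genuinely good links die with the rest.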
I'm not totally pessimistic about this sort of technology though. With the right combination of filtering (to make tags I don't care about completely invisible), subtlety (to make the tags I might care about still be unobtrusive in case I don't want to be bothered) and community support (to ensure relevance to me and to bond me to my community regardless of the link quality), I could see something like this finally taking off.
(Thanks to Eugen Leitl on the Wearables mailing list for the link!)
James Fusia at Georgia Tech has just released Twidor, a Java-based typing tutor for the Twiddler 2 one-handed keyboard that's used on so many wearable-computing rigs. Experts can reach around 50-60 wpm on the Twiddler (with one hand tied behind their back, no less).
(Thanks to Thad for the link!)
BlackDog is a great concept. It's a flash-based Debian Linux machine that fits in the palm of your hand (400MHz PowerPC and 256 or 512MB RAM), with just a USB 2.0 plug, SPI and MMC expansion slot, and a thumbprint sensor. There's no battery or power plug — it's powered completely via the USB plug.
Plug the Dog into the USB port of a Windows XP machine and it'll automatically boot up from ROM in about 2 seconds. Then it claims to be a USB CD-ROM drive with an auto-run program, from which it starts up Cygwin and X, and then makes an X connection back to its own server. Presto! Instant use of the host machine's display and keyboard with your CPU, computing environment and data (up to 1GB through the MMC slot). Unplug and the host machine is left just as you found it. Security comes from what you have (the Dog itself) and who you are (the thumbprint reader), though of course you're still susceptible to low-level keyboard, screen and network sniffing attacks from the host machine.
There's a lot we could ask for from a personal server that BlackDog doesn't have, like automatic wireless sync-up with the interfaces around us, but this sounds pretty decent and more importantly it'll actually work with today's infrastructure and machines.
(Thanks to Steve for the link!)
Microsoft Research has announced a Request for Proposals for projects relating to their Digital Memories (Memex) research kit, in the context of "personal lifetime storage." Microsoft's inspiration (and probably the inspiration for everyone else working in this area too, at least indirectly) is Vannevar Bush's 1945 article As We May Think, in which he famously described a kind of personal library-in-a-desk he called the memex:
Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and to coin one at random, "memex" will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.
MSR expects to give 6-9 awards to college and university projects, up to a max of $50,000 per award, and recipients would also be given a SenseCam wearable camera and software from the MyLifeBits, VIBE and Phlat research projects at Microsoft Research. Strings are minimal — they expect semiannual progress reports, want it presented at at least one of their workshops and expect the project to be either dedicated to the public domain or released under an open license such as the BSD license.
Advance registration for the 9th Annual IEEE International Symposium on Wearable Computers, to be held October 18th-21st in Osaka, Japan, is now open. ISWC always brings together a great mix of industry and academic researchers from fields as diverse as interface design, machine vision, hardware and fashion design, and as program committee co-chair I can guarantee this year will be no exception.
Public Radio's Marketplace has a nice piece on the company Actionspeak, which hires people to go shopping while wearing small video cameras. The claim is that the cameras are unobtrusive enough that the research subjects quickly find themselves acting as they always do while shopping, and Actionspeak then analyzes the video to learn ways its clients can improve their presentation or marketing. They'll also do runs where subjects are asked to give a running monologue about what they're thinking as they shop.
These videos might give some straight marketing info (like which family member actually decides the sale, or whether to focus on shelf position, packaging or price), but I bet the real win is in showing designers how their product actually gets used in the wild. The combination of seeing as your customer sees, along with the ability to ask about particular moments afterwards, is really powerful. Not only can you learn things you'd never learn from interviews alone, but the overlay of first-person video with explanatory customer interviews has much more impact on a designer than would a table of survey results containing the same information. (Take a look at the consumer goods video especially for examples.)
The English version of Wikipedia is about 650,000 articles, which comes out to a compressed database of about a gig — that easily fits on a PDA / cellphone these days. I've been thinking for a while now that I should look into loading it all onto my Treo 600, but I see now someone has done all the work for me!
Erik Zachte has produced conversion scripts as well as detailed instructions on how to convert the complete Wikipedia encyclopedia into TomeRaider ebook-reader format for Pocket PC Windows and Palm OS. The text-only version fits in just over half a gig; text + images is 1-2 gig depending on image down-sampling. I also like his "Build, Buy or Borrow" plan: you can use his scripts to build your own latest version for free, buy the latest version on CD or DVD, or download for free his semi-annually updated version direct from the Wikipedia server. That's exactly the sort of "free as in freedom" software business plan I hope winds up succeeding in the new economy.
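The heart of any such conversion is streaming the enormous XML dump without loading it into memory and writing each article into a compact local store. A minimal sketch of that step (mine, not Erik's scripts — no wiki-markup rendering, images or index-building here), assuming the standard pages-articles dump format:

```python
# Stream a Wikipedia pages-articles XML dump and store each article
# zlib-compressed in SQLite. Minimal sketch: no wiki-markup rendering.
import sqlite3
import zlib
import xml.etree.ElementTree as ET

NS = "{http://www.mediawiki.org/xml/export-0.3/}"   # version varies by dump

db = sqlite3.connect("wikipedia.db")
db.execute("CREATE TABLE IF NOT EXISTS articles (title TEXT PRIMARY KEY, body BLOB)")

for event, elem in ET.iterparse("pages-articles.xml"):
    if elem.tag == NS + "page":
        title = elem.findtext(NS + "title")
        text = elem.findtext(NS + "revision/" + NS + "text") or ""
        db.execute("INSERT OR REPLACE INTO articles VALUES (?, ?)",
                   (title, zlib.compress(text.encode("utf-8"))))
        elem.clear()   # free the parsed subtree as we stream
db.commit()
```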
The theme for NPUC this year was The future of portable computing, so naturally there was a lot of talk about location-based applications. Ian Smith's talk on social mobile computing especially focused on using location. Personally I'm getting more and more skeptical about location-based apps. They've been right around the corner for a good decade now, and I'm starting to wonder if location-based apps are like video conferencing — something that sounds like it should be a hit but once they're implemented nobody seems to care.
That said, I think if there's ever going to be a successful location-aware application (outside of the ubiquitous museum-tourguide app) it'll be one that uses location as an excuse to socialize. I'm not sure whether the final winners will look more like Dodgeball, GeoCaching, moblogs, or a cross between LiveJournal and the geospatial web (or all of these), but I'm pretty confident that when you scratch the surface the real point won't be location, it'll be human-to-human interaction that just happens to use location as the medium.
That also fits my general rule of thumb: The killer app is always communications. (That or sex, which is really a subset of communications.)
Technorati tag: npuc2005
Aaron Marcus of AMandA just gave a talk promoting the wrist-top computer as a prime ubiquitous computing platform. I'm skeptical — it feels to me like the wrist is good for quick access to info that's already showing or just a button-press away, but if you have to drill down (pushing small buttons with your wrist in front of your face) then that quick access gets washed out by the slow interaction speed. That leaves a pretty narrow set of applications where you need just a little bit of information with very little cognitive load.
Reasons to work on wrist:
Reasons not to work on wrist:
So what applications have the wrist-top as the clear winner interface? Well, there's telling the time, there's textual alarms, there's ... um ... gimme a second, there's gotta be more ....
Technorati tag: npuc2005
[Photo: Symbol's WSS 1000 (the old, non-wireless version)]
This month's Technology Review has a brief article on how UPS has upgraded their Symbol Technologies ring-scanner wearable computers to use Bluetooth and Wi-Fi instead of a wire to an arm-mounted computer. The article is missing a few details (most notably it makes it look like Symbol came in to oust some other vendor's system, when in fact Symbol made the old system too), but it is a nice update on one of the early commercial wearable-computer success stories.
One bit in the article that I found interesting was UPS's comment on barcodes vs. RFID:
Robert Nonneman, a manager of industrial engineering at UPS, says the company has watched RFID for 15 years but doesn't see it as an imminent solution to the problem of parcel tracking. In test runs, he says, RFID tags did not surpass the accuracy rate of bar code scanners. And an RFID rollout--including tags and a new technological infrastructure--would be costly. "You can't simply replace optical scanners with an RFID reader and expect an improved return on investment," he says. "There have to be process changes to leverage the technology."
I remember years ago Dick Braley from FedEx talking about the possibility of using RFID to ping a room full of packages and determine which (if any) need to be shipped out that day. That sort of room-flooding is a very different application than scanning a single package, and one that barcode readers will have a hard time performing, but it sounds like that's either not what UPS needs, would require too big an upgrade path, or just isn't available yet from RFID technology.
Zarlink has announced a chip designed specifically for wirelessly linking implanted medical devices to hospital base stations. (Props to Eugen on the wearables list for the link.)
I'd like to officially declare that wearable computers have hit the mainstream.
When I started wearing a hat-mounted display connected to a 50 MHz shoulder-strap Linux box back in 1996, I defined a wearable computer as having five features: it's portable while operational, supports hands-free use, has sensors into the environment, can proactively supply information or aid to the wearer, and is always on, always running. When sponsors or journalists would look at my contraption and ask how such a beast could ever become mainstream I'd just point to their cellphones, which strictly by my definition were already wearable computers and which were becoming more wearable-like all the time.
The thing that made cellphones only borderline wearables was that they'd usually be in a pocket or bag rather than worn — that meant their access time was greater than the critical one or two seconds that makes wearables so compelling. With Bluetooth phones and the most recent round of wireless headsets I think there's finally been a shift in how non-researcher non-techies are using cellphones, and it looks a lot like what we self-described cyborgs were doing a decade ago.
I first started noticing about six months ago that people around my lab were wearing these Bluetooth headsets even when not talking on the phone. These weren't just researchers; these were the venture capitalists and financial planners that occupy the rest of our building. Then a few days ago I landed in the Atlanta airport and noticed not one but three people turn on their Bluetooth-enabled cellphones, put on an ear clip and then not talk on the phone. These were early adopters but not techies, and yet they looked just as fashionable and comfortable wearing their headsets as they did in their expensive suits. I talked to two of them about their new fashion accessory, and both gave the same explanation: it's now so comfortable and so simple that they prefer to wear the headset just in case a call comes in. The woman I talked to said it was just more convenient to wear the device than find a place to carry it, and now she never had to go hunting for her cellphone when she got a call. It was so light and comfortable, she said, that she soon just forgot it was there altogether. The man talked about what a pain it used to be to be carrying suitcases on the escalator when the phone rang, and how now he just pushes the button on his ear and starts to talk. He also showed off the hands-free dialing feature: just tap your ear and say "office" and the phone's speech recognition system automatically connects you.
This sort of technology has been creeping up on us for years, so it's easy to miss the progress. It's nice to take a step back and see how seamlessly these people integrate with their technology compared to when I was "packing iron" on a daily basis. In many ways, they're far more cyborg than I ever was.
The Company also announced that the Company was contacted Friday, April 22 by the U. S. Attorney's Office for The Eastern District of Virginia, which is opening an investigation. In addition, the Audit Committee, through its legal counsel, has contacted the Securities and Exchange Commission in connection with the previously disclosed Audit Committee investigation and findings. The Company will cooperate fully in these investigations and any others.
The Company also affirmed that it continues to face a severe liquidity crisis and possible insolvency. There can be no assurances that the Company will have sufficient cash to meet its financial obligations or fund continuing operations. The Office of the Chairman of the Board is authorized to retain a consultant with financial and management restructuring expertise. The Company intends to work with such adviser to reduce costs, conserve cash, and obtain advice regarding restructuring and other alternatives to maximize shareholder value.
I remember several years ago hearing grumbling (unconfirmed by me) that folks at Xybernaut were pumping their stock with misleading press releases and then selling on the bump, but this is looking like much larger chickens coming home to roost.
Mobile and wearable computer hardware vendor Xybernaut Corp. said Wednesday (April 20) it had fired several top-level officers and announced the resignation of its accounting firm after an independent audit revealed widespread management corruption, including the use of company funds for personal expenses and nepotism by the company's CEO.
BBC Radio Four's Science Frontiers has a nice show this week about neuroprosthetics, including interviews with Miguel Nicolelis at Duke and folks at the Donoghue Lab at Brown. Streaming audio is here for another day or so. (Link by way of Mind Hacks.)
I'm curious whether this kind of technology will win out in the long run. It's clear it fills a need — the PDA/cellphone small screen is fine up to a point, but in general we want big screen real estate in a small package, and you just can't get that with today's rigid screens. There are a few competing models though, each with their own strengths and weaknesses.
Ordered from most personal & on-the-move to most public & in situ:
Of course we might also wind up with several systems and use whatever works best in a given situation, just like we have both laptops and PDAs today. But if one niche winds up being vital (say, everyone needs information while on-the-move so everyone wears an HMD) and if it winds up being good enough for the other niches then that tech will eat the others, just like we're seeing laptops more and more often being used as desktop replacements today.
We've just posted the Call For Papers for the 9th Annual IEEE International Symposium on Wearable Computers (ISWC 2005), to be held October 18 - 21st in Osaka, Japan. This will be the first ISWC in Asia, and I'm proud to be co-program chair along with Professor Kenji Mase-san from Nagoya University.
Ignoring things like the wrist watch, the earliest wearable computer was built back in 1961 by Ed Thorp (father of the theory of card-counting in Blackjack) and Claude Shannon (father of information theory) to answer a question that had plagued mankind for generations: is there any way I can cheat reliably at roulette?
Now over 43 years later, history repeats itself yet again as a trio has walked away with more than $2.3 million, allegedly having used a cellphone rigged with a laser range-finder to up their odds of winning from 1 in 37 to about 1 in 6. Police have dropped the investigation after deciding there was no interference with the ball in play. (That wouldn't fly in Vegas, where laws were put in place after wearables users in the '70s spooked casinos.)
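The arithmetic behind why casinos care: a straight-up number pays 35 to 1, so improving the hit rate from 1/37 to 1/6 flips a 2.7% house edge into roughly a 500% player edge. A two-line check:

```python
# Expected value of a 1-unit straight-up roulette bet (pays 35:1).
for label, p in [("fair wheel", 1 / 37), ("laser-assisted", 1 / 6)]:
    ev = p * 35 - (1 - p)
    print(f"{label}: {ev:+.2f} units per unit bet")   # -0.03 vs. +5.00
```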
(Thanks to Steve Schwartz for the link!)
Man, I can think of all sorts of mischief I could get in with one of these things...
From Personal Tech Pipeline (and thanks to Thad for the link):
Your favorite rodent has learned that Siemens is working on an all-purpose gadget that simply pays attention to what's happening nearby, and notifies you by SMS when something is strange.
Called the MyAy, the experimental device has a keypad but no display. It monitors its environment with a microphone, an infrared sensor, a temperature sensor, and an acceleration sensor (to tell if the MyAy itself is being moved).
Yesterday I did a quick scan of the one-handed keyboards that are available, and figured I'd post a quick summary:
And of course there's the plethora of cellphone / PDA keyboards like the one-thumbed "chicklet keyboards" on the Treo-600/650 and Blackberry or using Multitap or T-9 on a standard 12-button cellphone keyboard. I'm not a big fan of Multitap or predictive systems like T-9, but I've liked the Treo keyboard even for one-handed typing. I expect I'd have more trouble using it eyes-free than I do with the Twiddler, but then again I don't have years of experience using the Treo to type SMSs under the table when the teacher isn't looking either...
A couple non-commercial things of interest:
The Data Egg was an integrated PDA & five-button chording keyboard designed and prototyped back in the early '90s, but it got black-holed after the inventor lost control of his IP. Never tried one myself, but I've always liked the idea as a sort of chording-keyboard sleeve over a PDA.
Something else I like the look of is Chordite, which interests me mostly because of its unique hand-fit. Prototype only, researcher claims about 33 wpm.
There's a good article in today's NYTimes about Dr. Paul Bach-y-Rita's work in remapping human sensation — allowing the blind to "see" via tactile feedback on the tongue for example. Sounds like there have been some breakthroughs recently in terms of miniaturization and wearability (no surprise there), plus some good results in allowing people with damaged vestibular systems to regain normal balance unaided.
Fusion of Wearables and Ubicomp: This is an area I've thought was ripe for a while, but apart from location-beacons and markers for AR (Augmented Reality) there's surprisingly little research that combines Ubiquitous Computing and Wearables. There are exceptions, like Georgia Tech's work with the Aware Home and some work in adaptive "universal remote controls" for the disabled, but it feels like there should be some good work to be done combining the localization of Ubicomp with the personalization of Wearables. It also nicely fits with Buxton's argument that the key design work to be done is in the seamless and transparent transitions between different context-specific interfaces.
Social Network Computation, Visualization & Augmentation: This research has been going on for a while, especially at the University of Oregon and more recently at the MIT Media Lab, but it seems to be getting traction lately. This sort of research looks at what can be done with multiple networked wearables users in a community. Typical applications include automatic match-making (along the lines of the Love Getty that was the craze in Japan several years ago), keeping a log of chance business meetings at conferences and trade shows, understanding the social dynamics of a group (like whether one person dominates the conversations), and real-time visualization of those social dynamics.
AugCog / Wearable Brain-Scanning: As I mentioned in a previous post, this is potentially a big breakthrough. I don't mean in the sense that it solves a problem the wearables field has been struggling with, but rather that this could open a whole new branch of research. Neuroscience has taken off in the past 10 years with advances in brain-imaging technology like functional MRI. The downside is that you can only see what the brain is doing when performing tasks inside a lab setting — it's studying the brain in captivity. Wearable sensors give us the ability to study the brain in the wild, and to correlate that brain activity with other wearable sensors. That plus the lower price should enable all sorts of new research into understanding how we use our brains in our everyday lives. That, in turn, will hopefully lead to new ways to augment our thinking processes, whether by modifying our interfaces to match our cognitive load, providing bio-feedback to help treat conditions like ADHD or perhaps addiction, or even physically stimulating the brain to treat conditions like Parkinson's.
That's not to say there aren't broad and potentially frightening aspects to this technology, but the issue that concerns me most applies generally to our recent understanding of the brain: I don't think our society is prepared yet to deal with the coming neuroscience revolution. Our justice system, religion and even our system of government are based on the worn-out Cartesian idea that our minds are somehow distinct from the wetware of our brains and bodies. It's been clear for decades that that assumption is false, but so far we've tried to ignore that fact in spite of warnings from science fiction and emerging policy debates about mental illness, psychoactive medication, addiction as illness and the occasional the-twinkies-made-me-do-it defense. The applications envisioned by AugCog are going to force the issue further, and societies don't make a shift like that without serious growing pains.
One of the most exciting talks for me was the joint ISWC/ISMAR keynote by Dr. Dylan Schmorrow, one of the program managers for DARPA. The program managers are the guys who decide what research projects DARPA should fund — the best-known PM was probably JCR Licklider, who funded the Intelligence Augmentation research that led to the invention of the Internet, the mouse, the first(?) hypertext system, etc. The current program Dylan talked about was Augmented Cognition, which I'm now convinced could become the biggest breakthrough in wearable computing yet.
Intelligence Augmentation tried to support human mental tasks, especially engineering tasks, by interacting with a computer through models of the data you're working with — that was really the start of the shift from the mainframe batch-processing model to the interactive computer model. AugCog is about supporting cognitive-level tasks like attention, memory, learning, comprehension, visualization abilities and basic decision making by directly measuring a person's mental state. The latest technology to come out of this effort is a sensor about the size of your hand with several near-infrared LEDs on it in the shape of a daisy, with a light sensor in the center. The human skull is transparent to near-IR (that's how you get rid of all the heat your brain produces), so when it's placed on the scalp you can detect back-scatter from the surface of the brain. By doing signal processing on the returned light you can detect blood-flow and thus brain activity, up to about 5cm deep (basically the cortex). They've already got some promising data on detecting understanding — one of the things DARPA is especially interested in is being able to tell a soldier "Do this, then that, then the other thing... got that?" And even if he says "Yup" his helmet can say "no, he didn't really get it...." Outside of military apps (and getting a little pie-in-the-sky), sometime down the road I can imagine using this kind of data to build interfaces that adapt to your cognitive load in near real-time, adjusting information displayed and output modalities to suit. In the more near-term, these devices are starting to be sold commercially and cost on the order of thousands of dollars, not tens or hundreds of thousands. That means a lot more brain-imaging science can be performed by a lot more diverse groups.
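The "signal processing on the returned light" is, as far as I can tell, the standard functional near-infrared (fNIRS) approach: measure optical-density changes at two wavelengths and solve a small linear system (the modified Beer-Lambert law) for changes in oxygenated vs. deoxygenated hemoglobin. A sketch of that last step — the extinction coefficients and path-length numbers here are placeholders, not real values:

```python
# Modified Beer-Lambert law, the usual fNIRS workhorse: from optical-
# density changes at two near-IR wavelengths, solve for changes in oxy-
# and deoxy-hemoglobin. Coefficients are PLACEHOLDERS; use published tables.
import numpy as np

# rows: wavelengths; cols: [HbO2, Hb] extinction coefficients (placeholder)
E = np.array([[1.0, 3.0],    # ~760 nm: deoxy-Hb absorbs more
              [2.5, 1.5]])   # ~850 nm: oxy-Hb absorbs more

path = 3.0 * 6.0             # source-detector separation (cm) x DPF, assumed

def hb_changes(delta_od):
    """delta_od: optical-density changes at the two wavelengths."""
    return np.linalg.solve(E * path, np.asarray(delta_od))

d_hbo2, d_hb = hb_changes([0.01, 0.02])
print(f"dHbO2 = {d_hbo2:+.2e}, dHb = {d_hb:+.2e} (arbitrary units)")
```

Blood-flow changes show up as swings in those two numbers, which is the kind of signal a "did he really get it?" detector would be built on.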
[I've been trip-blogging this past week but haven't had convenient net access, so I'm afraid the real-time aspects of blogging are lacking... now that I'm hooked into the wireless at DEAF04 here's some of my backlog.]
Bill Buxton's ISWC keynote made a lot of points, but the one that struck me most was derived from three basic laws:
The problem then is how to deliver more functionality without making the interface so unwieldy as to be completely unusable. Buxton went on to talk about the trade-off between generality and ease-of-use: the more specifically-designed an interface the easier it is to use but the more limited its scope.
The key, he argues, is to make lots of specific applications with interfaces well-suited for their particular niche. Then you don't need a single general interface, but instead can concentrate on the seamlessness and transparency of transitions between interfaces.
It's a nice way of thinking about things, especially when thinking about the combination of wearables and ubicomp (see next post).
Like in previous years, the big theme here at ISMAR (the International Symposium on Mixed and Augmented Reality) seems to be registration and tracking — how to detect where objects and people are in the physical world so you can overlay graphics as accurately as possible. AR isn't my main field, but I've had a couple of conversations so far about how we're really reaching a point of diminishing returns. It's great that we're seeing minor incremental improvements in this area, but what we're really lacking are new, innovative uses of AR to push the field further. Unfortunately, it sounds like at least in part a lot of these new innovations didn't make the cut for the conference because they lacked strong evaluation or a quantifiable contribution to the field — it's much easier to judge the quality of a new camera-based image-registration method than it is to judge the usefulness of a brand-new application.
The Software Agents field was a response to a similar stagnation in Artificial Intelligence. AI researchers had a lot of good but imperfect tools that had been developed over the years, but kept trying to solve the really hard general problems. Software Agents grew out of the idea that it was OK if your algorithm wasn't perfect in every condition so long as you cleverly constrained your application domain and designed your user interface to cover for those imperfections. It was a struggle to get acceptance of the idea at first, and in the end a few of the big players in the new domain went and founded their own conference rather than try to fit their own work to the evaluation metrics used for more traditional AI papers. Hopefully it won't take such a dramatic move on the part of AR researchers to breathe new life into this field.
Every year I think it'll finally be the year we wearables folk can swap out our custom hardware for an off-the-shelf palmtop with a head-mounted display and one-handed keyboard connected to it, and every year it's just not quite there. Looks like we're finally getting there: Kent Lyons from Georgia Tech has now swapped out his CharmIT-PRO for an iPaq.
It's still not quite plug-and-play: he had to hack the original Twiddler-1 (the serial-port one, not the current PS/2 version) with a different power connector, and the CF-IO card he's using to connect the iPaq to his MicroOptical display has fairly limited bandwidth, so he had to hack his X server to blit out just the changes to the active window. Oh yeah, and he wrote a new Twiddler driver for the iPaq.
He's promised to put up a how-to guide on the Web soon — I plan to keep bugging him till he does :).
Over the next two weeks I'll be blogging from the International Symposium on Wearable Computers, the International Symposium on Mixed and Augmented Reality and the Dutch Electronic Arts Festival (busy couple weeks!).
I've already run into one other blogger here, Kerry Bodine at StyleBorg — she'll be blogging over the next couple days as well.