Reporting on Kodak's retirement of its famed Kodachrome film for NPR's All Things Considered, Melissa Block interviewed photographer Steve McCurry (emphasis mine):
I'm looking at one of your most iconic images, this is the photo of a young Afghan girl... she's wearing a brick-red head scarf and there's a green background and her eyes are just popping off the screen...
I think that just about says it all. You can also view an online gallery of what some of the great photos taken with Kodachrome look like after they've been scanned, digitized, and re-rendered on whatever computer monitor you happen to have. Such vivid colors!
Taiwanese display company Prime View International announced today that it will purchase E Ink for about $215 million. PVI has been a long-time partner of E Ink, and supplies the backplane for the Amazon Kindle and Sony's eReader, both of which use E Ink's electronic-ink technology.
TechCrunch has a nice video showing off Plastic Logic's new prototype e-reader based on E Ink. Plastic Logic's main advantage is its plastic backplane (rather than glass), which is lighter and less fragile. They're also pitching their interface at business use — in particular, the ability to annotate documents (using a touchscreen) and a sidebar that lets users jump to different pages more quickly.
This is just a prototype, so this is probably an unfair criticism, but I do note that when the demonstrator selects a different document and says "we are able to quickly move from any of the last five documents you've been reading," there is an 8.5-second delay before the new document comes up.
BROOKE GLADSTONE: Now, you testified about your negotiations with Amazon regarding the Kindle electronic reader. Could you tell us about that?
JIM MORONEY: Somebody was bringing up the Kindle as the solution we should all be focused on. And I love the Kindle. I read books on it all the time. My problem is that after negotiating and negotiating and negotiating, the very best deal we could get from Amazon was to split revenues for whatever price we decided to charge. We could get 30 percent of that money. They get 70 percent.
BROOKE GLADSTONE: Wow.
JIM MORONEY: I could have probably lived with that, but there was another clause in there that they would not give me relief on, and that said that they have the right to relicense my content to any portable device, not just an Amazon-owned device, any portable device. In essence, I was giving them a complete licensing agreement for nothing for all of my content, period.
I'm sort of – that’s - give away my future, you know.
If Amazon came back – I thought maybe they'd call today – and said, do you know what, we'll give up on that little clause about the relicensing of your IP, I would have said, okay, you know what - I'll try this thing at 70/30 and see if it works. But nobody called today, as far as I can tell.
Compare that to Apple, which keeps about 35% to 40% of the 99-cent purchase price of a song sold on iTunes. Of course, Apple's main business model is selling iPods while Amazon's main business model is selling content, but even so I'm surprised Amazon is demanding such a high percentage in what still amounts to an untested market. Maybe they figure (probably correctly) that newspapers are desperate enough to go for it?
I missed Obama's press conference on Wednesday and wanted to listen to it on my long commute home yesterday. To my surprise, it was easy to find full video and typically a full-text transcript of the conference from sites like The Huffington Post and NPR.org, as well as from YouTube and directly from the White House Blog, but no audio-only sources. Eventually I had to use Farkie, a free online video converter, to download the YouTube video and convert it to MP3.
Am I missing some obvious source, or has video made such headway now that nobody even bothers making audio-only versions available anymore?
Apparently, the White House Press Corp shares my interest in technological shifts from paper to digital. I'm a research scientist at a company that makes photocopiers — I wonder what their excuse is?
Microsoft Office Labs has come out with a very nice "vision of the future video" called "2019" — Long Zheng over at iStartedSomething has posted both a short montage and longer 5 minute version (I recommend the longer).
I've always loved these sorts of concept videos from corporate research labs, and as the medium goes I'd rate this one pretty high. The production value is top-notch (as in most other such videos from Microsoft). As you'd expect, there are many kernels of ideas that have been around before — I was especially reminded of Hiroshi Ishii's ClearBoard, Jun Rekimoto's Pick-and-Drop, and various aspects of Bruce Tognazzini's Starfire concept video — but there were still many concepts that were new to me. And unlike so many concept videos out there, they seem to have mostly avoided the trap of assuming that devices will have a combination of strong AI and psychic powers.
Nicholas Carlson at The Silicon Alley Insider does the math.
Very cool article and movie about how the makers of Coraline used 3D printers to create individual facial expressions and other effects. So would that make Coraline a CG stop-motion animation?
Typealyzer claims to be able to determine the Myers-Briggs personality type of someone just by analyzing their blog. It's still in beta, which may be why they list me as an ISTJ ("The Duty Fulfillers") -- the exact opposite of how every other Myers-Briggs personality test classifies me.
Medialogy.net is a new net criticism initiative publishing micro-essay insights into current trends in media and visual culture. It is an open forum for new critical voices; we are continuously seeking articles and comments with fresh perspectives on emerging media phenomena. Medialogy.net was begun with the desire to distribute, expand, and textually manifest a rolling conversation between young media researchers and artists globally networked through the university and gallery system. We seek to match a cynical perspective with critical intelligence, and a constant willingness to pull down old paradigms and icons of media philosophy and cultural criticism.
It's especially interesting to see discussion about the same media trends and subjects I tend to link to (OK, when I'm posting at all), but from the perspective of people who come first from the media side (in this case film) and second from the technology side rather than the other way around.
My lab has just released a beta of iCandy, an application for Mac and PC that lets you associate an image and a two-dimensional QR bar code with any iTunes song, YouTube video or Flickr photo, and to print them out as postcards, business cards, posters or photo albums. Then you can just hold the barcode up to a webcam to automatically bring up the photo or play the song or movie.
The app itself is pretty cute (we've been using it internally for a few months now) and they've recently set up a community network site for sharing your playlists and media pics with others too. The online Flash-based version seems to be broken at the moment, but check out the app.
There are exactly 52 playing cards in a standard deck. There are also exactly 52 shots in the famous shower scene in Alfred Hitchcock's movie Psycho. From this amazing coincidence comes 52 Card Psycho, a new augmented-reality experimental film piece my brother recently designed in collaboration with the Future Cinema Lab at York University:
52 Card Psycho is an installation-based investigation into cinematic structures and interactive cinema viewership; the concept is simple: a deck of 52 cards, each printed with a unique identifier, are replaced in the subject's view by the 52 individual shots that make up Hitchcock's famous shower scene in Psycho. The cards can be manipulated by the viewer: stacked, dealt, arranged in their original order or re-composed in different configurations, creating spreads of time, and allowing a material interaction with the 'cinema screen'— an object which normally is removed and exalted, and unchangeable in its linearity.
This may be old news, but it looks like the New York Times is developing an API for accessing their content:
The goal, according to Aron Pilhofer, editor of interactive news, is to "make the NYT programmable. Everything we produce should be organized data."
Once the API is complete, the Times' internal developers will use it to build platforms to organize all the structured data such as events listings, restaurants reviews, recipes, etc. They will offer a key to programmers, developers and others who are interested in mashing-up various data sets on the site. "The plan is definitely to open [the code] up," Frons said. "How far we don't know."
Pilhofer and Frons both declined to give any specific dates, but Pilhofer said the API itself will be done "within a matter of weeks." In the next six months, "we'll have some of the major pieces — a restaurant guide, weekend events listings and books," Frons added.
(Link by way of the IdeaLab Blog.)
My coworker Steve Savitzky has some interesting musings on the Kindle, Amazon's new ebook reader:
If you want everyone else's opinion, see the links after the cut. Here's mine: interesting play, but it's in the wrong game.
You see, Kindle is Amazon's attempt at an iPod for books. They're using what they hope is an elegant, convenient, and reasonably-priced piece of hardware (which I'd guess that they're selling at pretty close to cost when you factor in the pre-paid data plan) to sell digital copies of books (which are fairly expensive considering all the atoms they don't have to handle compared with their dead-tree counterparts).
Apple, on the other hand, is using convenient access to an extensive collection of audio tracks (which they sell at pretty close to cost) to sell a particularly elegant and convenient, but overpriced, piece of hardware. Apple isn't even in the hardware business, really: they understand that they're in the fashion business, and have made it really easy for other companies to sell accessories for iPods.
Hands up, who's going to build fashion accessories for the Kindle? Don't all speak at once... How many people are going to buy a Kindle for each of their kids? Is anybody going to let their kids loose on a piece of hardware that lets them buy books at $10/pop at the click of a button? That's what I thought.
Sounds pretty spot-on to me...
Having trouble explaining what a wiki is to your mom or tech-impaired coworker? Try showing them this 4-minute "Wikis in Plain English" video from CommonCraft. (They've also got quick "Plain English" guides on New Light Bulbs, Social Bookmarking, Social Networking and, just in time for Halloween, Zombies.)
(Thanks to KB for the link.)
The picture on the left is cropped from an image of a man sexually abusing children in Vietnam and Cambodia in 2002 or 2003. The face was digitally scrambled, and the image posted to the Net along with about 200 others. The picture on the right is the same picture, digitally unscrambled by Germany's federal police. Interpol has posted four unscrambled images of the man's face on their website, and have asked the public at large for help in identifying him. They've already reportedly received hundreds of tips.
That part's pretty cool, and I hope they catch the guy, but I have some trepidation at the idea of casting such a wide net using just a few photographs. Say you know somebody who looks remarkably like this guy -- maybe that creepy guy you see on the subway every morning. How likely is it that he really is the guy they're looking for?
If you were looking for a local criminal, say someone who robbed the neighborhood 7-11, it would probably be pretty likely you'd found the right guy. After all, it's pretty darned rare for two unrelated people to look so similar that even after close inspection you mistake one for another. The trouble is, even a very rare event becomes extremely likely when you're sampling the entire world: if there's only a one-in-a-hundred-million chance that two randomly-chosen people look really similar, then every person on the planet has approximately 67 doppelgangers running around. It's not that we can't distinguish between those one-in-a-hundred-million pairs, it's just that our brains only specialize our ability to recognize things as far as necessary. That's why people from another part of the world "all look alike" until you actually start to live with them, and why it becomes trivial to distinguish between 'identical' twins once you've known them for a couple months. But nobody's brain is specialized enough to distinguish between one-in-a-hundred-million chance similarity, because it never comes up in our lives.
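The arithmetic behind that back-of-the-envelope claim is worth making explicit. A minimal sketch, where the one-in-a-hundred-million figure is the post's illustrative number, not a measured statistic:

```python
# Back-of-the-envelope check of the doppelganger claim above.
# Assumption (hypothetical figure from the post): a one-in-a-hundred-million
# chance that two randomly chosen people look confusably similar.
p_lookalike = 1e-8          # P(two random strangers are near-identical)
world_population = 6.7e9    # rough world population at the time of writing

# For any given person, the expected number of look-alikes among
# everyone else on the planet is just p * (N - 1).
expected_doppelgangers = p_lookalike * (world_population - 1)
print(round(expected_doppelgangers))  # → 67
```

The point is that an event rare enough to never come up in one person's life still happens dozens of times over when the candidate pool is the whole planet.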
Interpol seems to recognize they're taking a risk in publishing these photos, and they caution that law enforcement would have to positively identify any suspects (with additional photos and corroborating data at their disposal). Still, I see two risks where innocent look-alikes could get caught up in this. The first is that, regardless of the advice to wait for positive ID, people are naturally going to be suspicious of any look-alike, and may take action. Though fear of terrorism now tops the list, fear of child molesters in our midst will always be right up there in terms of emotion-stirring boogiemen. The second, even more dangerous risk is that Interpol itself will try to apply its usual "one-in-a-million" criterion for reasonable doubt to a one-in-a-hundred-million situation. That, I'd argue, would repeat the fiasco the FBI created when it arrested a Portland lawyer for the Madrid bombing, based on a close partial-fingerprint match and (presumably) the fact that he was Muslim.
You know why movies seem to be continuous motion even though they're really just static images being shown at 48 frames per second — persistence of vision, right? Well, there's a great article on the Grand Illusions site about how that explanation is 'simple to understand', 'elegant', even 'poetic'... and also fundamentally incorrect.
Apparently psychologists have known for almost as long as film has been around that the explanation was bunk, but somehow film historians find it too compelling an explanation to give up. Somehow, the idea that images persist... persists.
Remember the 300-page AT&T / iPhone bill video that made the rounds a couple weeks ago? Looks like AT&T got the message:
AT&T free msg: We are simplifying your paper bill, removing itemized detail. To view all detail go to att.com/mywireless. Still need full paper bill? Call 611.
I used to love my locally-cached copy of Wikipedia on my Treo 650 (after two years, a new update of the TomeRaider version was created back in February), but now that I've turned in my Treo for an iPhone I'm looking for a replacement reader. I'm not there yet, but this desktop-based offline Wikipedia reader is a good start to the project. As the author puts it:
Isn't the world of Open Source amazing? I was able to build this in two days, most of which were spent searching for the appropriate tools. Simply unbelievable... toying around with these tools and writing less than 200 lines of code, and... presto!
More on my efforts to get a locally-cached copy of Wikipedia readable from my iPhone (perhaps with the iPhone book-reader app that was recently hacked together) once it comes back from repairs...
(Thanks to Fairyshaman for the link!)
Wonder if there's a general law to be learned about the median time between the release of a public image server and the first reports of something embarrassing being discovered in the database?
Shame their demo doesn't work on the Mac, but given that it's now coming out of Microsoft Labs that's not surprising :)
(Thanks to Aileen for the link!)
Today's NYT has a blurb on Livescribe, the new company founded by LeapFrog's Jim Marggraff to turn the Anoto-based FLY Pentop Computer into a note-taking application for students. His application is basically Lisa Stifelman's 1997 Audio Notebook system but without all the extraneous hardware that was necessary back then: take notes on paper while the pen records the lecture. Tap on a note later and the pen plays back whatever it recorded just before you wrote it.
As the article notes, pen-based input has had a long and difficult life, but I've always thought that if anything will be the killer app that brings it into the mainstream, this would be it. If their implementation is good, they've got a chance of really making a big splash.
In case you haven't seen it yet, here's video of the flexible, full-color OLED display that Sony unveiled at last week's SID conference.
My local Shell station has decided to augment its super-low prices of just $3.60 a gallon with some alternate revenue: automatic full-video and audio advertisements blasted at you while you pump gas.
At least there's some satisfaction in the movie trailer they were showing in the rotation — after being subjected to several annoying ads there's something satisfying about seeing explosions playing out on your gas pump.
Here's a video of an incredible talk Hans Rosling gave at last year's TED conference. On one level it's a talk about trends in world health (Rosling is a professor of international health at the Karolinska Institute in Sweden), but at another level it's about the need for much better visualization tools so people can make sense out of all the data we already have freely available in public databases. The whole talk is an example, using tools developed by the non-profit Rosling founded called Gapminder.
After watching the video, check out Gapminder World, being hosted by Google.
(Thanks to my dad for the link!)
Wells Fargo is using optical scanning and OCR to improve how their customers deposit checks in ATMs. No more empty envelope drawers and out-of-ink pens; now you just put all your checks and cash in a stack and insert it into the slot. The ATM automatically scans each one in, does optical character recognition to tell how much each is for and puts up a verification screen. After you correct the amounts, the machine will either spit out a receipt with a summary line for each transaction or a printed image of each scanned check. From their press release:
"With the new technology, you don't need to spend time writing on an envelope or keying in a deposit amount. You just insert your money into a slot and the machine sorts, counts and verifies it," said Jonathan Velline, head of Wells Fargo's ATM Banking division. "Our Envelope-Free ATMs also converts paper checks into a digital image which then appears on the ATM screen and receipt, so you know your check was received. You can't get this in the traditional envelope world."
I used one of their machines in Alameda recently and it was pretty slick, though I had to insert each of my three checks individually since it couldn't handle my differently-sized and somewhat wallet-wrinkled stack.
|Photo credit: Jason de Fillippo|
Nice earthquake preparedness ad campaign from the Bay Area Red Cross: they've got a billboard truck in San Francisco's Justin Herman plaza that when viewed from the right angle overlays right on top of the real architecture. More on the campaign here. (Via Jason de Fillippo, thanks to my cousin Aaron for the link.)
From an NPR interview with Principal Ed Kovochich, who has banned cell phones in his Milwaukee high school because they've been causing flash crowds at what would otherwise have been a simple one-on-one fight:
Quite a bit of the school was text messaged where the fight was taking place, and soon there were hundreds, and they were cheering and jeering and usually you get into that mob violence mentality. And suddenly what was 3-on-1 became 3-on-2 and then 3-on-3 and etc. and before you knew it we had a lot of kids fighting.
The New York Times discusses the new trend towards building your own custom television commercials via the Web:
They can automatically add names of local sales agents or dealership addresses, and they can change the content of the ad, depending on where it is showing, to appeal to various demographic groups. Among the companies that have used these services are Wendy’s, Ford Motor, Coldwell Banker and Warner Independent Pictures... The automated system it is offering to advertisers, called Pick-n-Click, is currently available only for automotive advertisers and has 150,000 components —like voice-overs, video footage and text options.
Polymervision has announced a partnership with Telecom Italia to roll out (pun intended, sorry) an e-book reader with a flexible display:
While smaller than a typical mobile phone, the new device features a display which extends up to 5-inches and may simply be stored away after use by folding it, thanks to the flexibility of the polymer based display material. The device features the largest display available in the industry for the same form factor, the 16 grey levels combined with a high contrast and high reflectivity display for paper like reading experience enables comfortable reading, even in bright sunlight. Future developments include colour and moving image capable display.
(Thanks to Dirk for the link.)
Zink prints a 2-by-3-inch picture in 30 seconds -- somewhat slower than an inkjet printer -- and the print comes out dry. It brings back the instant gratification of 1970s-era Polaroid pictures, without forcing you to wait for them to develop. And it's a much better quality print than Polaroids were.
With Zink devices, the plastic paper has layers of plastic in the middle with millions of tiny crystal dyes that can be activated by heat. If you heat the paper a certain amount, the dyes melt and you get yellow. If you heat it less but for a slightly longer time, you get magenta. If you heat it a little less and slightly longer, you get cyan. Those colors can be mixed to print any color. If you think of microwaving a frozen dinner, you get the idea.
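That heat-versus-time scheme can be sketched as a simple lookup. This is an illustrative toy only: the thresholds below are entirely hypothetical, since the real Zink thermal profiles aren't public — the point is just that each dye layer responds to a different combination of pulse temperature and duration.

```python
# Illustrative sketch of the layered-dye idea described above.
# All temperature/duration thresholds are made-up numbers.
def dye_activated(temp_c, duration_ms):
    """Pick which dye layer a single heat pulse would trigger
    (hypothetical thresholds, hottest/shortest layer checked first)."""
    if temp_c >= 200 and duration_ms < 1:
        return "yellow"    # hottest, shortest pulse melts the top layer
    elif temp_c >= 150 and duration_ms < 5:
        return "magenta"   # cooler, slightly longer pulse reaches the middle layer
    elif temp_c >= 100:
        return "cyan"      # coolest, longest pulse reaches the bottom layer
    return None            # not enough heat: the paper stays white

print(dye_activated(210, 0.5))  # → yellow
print(dye_activated(160, 3))    # → magenta
print(dye_activated(120, 10))   # → cyan
```

In the real device, a thermal print head sweeps across the paper firing a tuned pulse per pixel, mixing the three primaries to produce full color.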
The special paper is still a little expensive (about 80 cents for a 4-by-6-inch print) because it has to be doped with ink over the entire surface, but the company hopes to reduce the cost in the future.
The original Choose Your Own Adventure books are now available as audio ebooks for your iPod. If you'd like to try a free download, turn to page 22. If you'd like to continue walking until you reach the end, turn to page 42. (Thanks to Janie for the link.)
It'll be interesting to watch how Apple's iTV + iTunes competes with Tivo in the long run. The big difference is that iTV is inherently narrowcast — play podcasts and downloads from the iTunes store — while Tivo's main schtick is to provide the advantage of narrowcast on top of a legacy broadcast video distribution system (cable).
Long-term I always bet on narrowcast, but there's still a big question of timing: when does enough content become available on the Internet that you no longer need your cable TV subscription? And how much can Apple do to make that day come a little sooner?
I've been yawning about the rumors of a phone that's also an iPod — music is the least of the apps that I use on my phone, and I'm quite happy with my Treo 650. But a quad-mode phone that runs OS X, including dashboard widgets and Safari, with GSM, EDGE, Wi-Fi and Bluetooth? Now that's a big deal! (And just as the future of Palm OS was looking a little shaky — looks like now I can continue with my life-long dream of never having to use any form of Windows. :)
What I'm especially looking forward to playing with is the two-fingered "pinch" interface for resizing, something that's possible because their touchscreen can handle multiple touches at once — I've wanted something like that since I saw Sun's Starfire concept video back in 1993...
Swivel looks like it might be interesting. They're billing their service as "YouTube for Data," where you can upload your data sets and then graph or compare them to other sets. In its best form I can imagine something like this supporting open-source-style research, especially if they support ways to explain and present your data (that, or a good API for bloggers to link in data). In its worst form I could see any sensible analysis of the data sets getting buried under a pile of meaningless correlation statistics.
Swivel Co-founders Dmitry Dimov and Brian Mulloy start off by describing their company as “YouTube for Data.” That’s a good start for someone trying to understand it, because the site allows users to upload data - any data - and display it to other users visually. The number of page views your website generates. Or a stock price over time. Weather data. Commodity prices. The number of Bald Eagles in Washington state. Whatever. Uploaded data can be rated, commented and bookmarked by other users, helping to sort the interesting (and accurate) wheat from the chaff. And graphs of data can be embedded into websites. So it is in fact a bit like a YouTube for Data.
But then the real fun begins. You and other users can then compare that data to other data sets to find possible correlation (or lack thereof). Compare gas prices to presidential approval ratings or UFO sightings to iPod sales. Track your page views against weather reports in Silicon Valley. See if something interesting occurs.
The New Scientist has a write-up on an EU-funded prototype system called Tai-Chi that can turn ordinary surfaces into a touch-pad input device just by attaching a tiny piezoelectric sensor (i.e. a microphone) to the surface. In one configuration, the system figures out where you're touching or tapping by listening to how vibrations are distorted by the object and then comparing them to a database of vibration "fingerprints." The method requires calibration to create the database, but they're claiming accuracy to within a few millimeters.
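The fingerprint-matching configuration amounts to nearest-neighbor lookup against the calibration database. Here's a toy sketch of that idea — the signatures and locations below are invented; a real system would record actual piezo waveforms during a calibration pass and use a richer distance measure:

```python
# Minimal sketch of the "vibration fingerprint" matching described above.
import math

def nearest_tap_location(sample, database):
    """Return the calibrated location whose stored fingerprint is closest
    (by Euclidean distance) to the observed vibration sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda loc: dist(sample, database[loc]))

# Calibration database: tap location -> hypothetical vibration signature,
# built by tapping each known spot during a calibration pass.
calibration = {
    "top-left":     [0.9, 0.1, 0.3],
    "center":       [0.4, 0.8, 0.5],
    "bottom-right": [0.1, 0.2, 0.9],
}

print(nearest_tap_location([0.5, 0.7, 0.4], calibration))  # → center
```

The clever part of the real system is that the distortion a tap's vibration picks up traveling through the material is distinctive enough, per location, for this kind of matching to localize touches to within a few millimeters.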
From some email spam mail I just got:
Hi thisisjusttestletter. How are you ? Call me. Poor you, i don't even think how much spam you are recive. when they can
That makes two of us.
In his closing plenary at this year's CSCW, Bill Buxton made a provocative point about how to make a difference in the research world. His key point was that people often think of technology as alchemy, creating gold out of nothing. But alchemy (the creation of brand new ideas) is very hard and very rare, and is ultimately a fool's game. Most progress comes not from alchemy but from prospecting, the recognition of good ideas that are already out there, the understanding of which ideas are ripe for exploitation and the ability to marshal the right resources to get them into the world. He quotes Alan Kay: "It takes almost as much creativity to understand a good idea as to have it in the first place."
The example he gave was of the Blackboard, which was invented in 1801 and which Buxton claims revolutionized education more than every other technology introduced into schools since then put together. Before 1801 each child had his or her own slateboard, which he or she used to mark and correct answers before copying them down on paper. Buxton noted as an aside the irony that we're now trying to reintroduce slates into the classroom in the form of tablet PCs, but his main point was the fact that there're very few differences between a slate and a blackboard: a blackboard is just a slate that's been made an order of magnitude larger and hung on the wall. A technologist looking for novel innovation might overlook such a "minor" modification, and yet that slight change made all the difference.
...it will be as cheap to buy, per square foot, 100 dpi full-color displays as the same square footage of whiteboard today. In 7 years, displays with on the order of 20 times more pixels than are on that screen right now [pointing to a 15' x 15' projector screen] but the same size will be cheaper than that screen is right now without the projector. It's going to be about one to ten dollars a square foot for a 100 dpi full-color display that's 6mm thick. And the only question is which of the six or so competing technologies is gonna get there first.
And now, what does that mean? That's a technological affordance, it doesn't mean anything except that it's interesting because I'm a technologist. But as a designer, as a citizen, as a father, I care because now I can't think about watches, mobile phones, or any of these other devices out of the context of these portable wearable types of things moving around in space collectively and relating to those things there on the wall. What's that mean for education, what's it mean for business, how do we conduct our meetings? And that is CSCW, or a different branch of it. And the amount of effort put to that, to me, is still really low.
Personally I think he's being a little optimistic about the time scale, but not by a lot, and he's certainly right that researchers need to be thinking about how that changes the environments in which we work and live. And he has a little built-in slack in his prediction: CSCW only meets every other year, so even if he's wrong we won't be able to collect on our drink until 2014.
According to this graph of spam volume by spam blacklister TQMcube, spam volume has increased more than tenfold in the past six months. I'm not sure if this is some kind of attempt to overwhelm spam-filters and blacklisting services or just another ratcheting up, but I do find it disheartening that doing a news search for "major increase in spam" results in posts and news reports that span several years. (Thanks to Jeff for the link to the graph.)
A few days ago Reuters opened a bureau in Second Life, the online virtual world that's more second home than game to some 400,000 (presumably part-time) residents. Adam Pasick is bureau chief and sole reporter, and is dedicated full-time to Second Life. As science fiction writer Charlie Stross put it a month ago, "Truth stranger than fiction? Must write faster, the clowns are gaining ..." (Via NPR's Marketplace.)
A Fox News cameraman was about 20 blocks away when the New York small-plane crash occurred last week, so he broadcast live via his Palm Treo smart-phone. (Thanks to Jamey for the link.)
You've probably already heard about the cell phone that screams after it's reported as stolen. My friend GirlPurple has suggested the perfect add-on market: Custom Scream Tones.
From a NYT article on the efforts of credit card companies to cut out child-pornography sites from their networks:
Among purveyors of child pornography, Mr. Christenson said, there is a “growing trend toward steering visitors of these sites to various alternative payment methods.”
Mr. Christie said one of those methods involved granting access to Web sites in return for explicit photographs of children. “That phenomenon is something that we are very concerned about,” Mr. Christie said.
Tim May's original BlackNet concept warned that modern crypto can make illegal trafficking in pure information nearly impossible to trace. The main obstacle to making BlackNet-like networks a reality at a consumer level has been handling payment: anonymous e-cash systems never really got traction, and non-anonymous financial services leave a trail right to a criminal's door.
What remains is a system of barter, or "CryptoCredits" as the BlackNet post describes them. Back when it was written digital information wasn't all that fungible: there were a limited number of things that one could exchange in pure-digital form, and the BlackNet post mostly described a market for high-stakes digital goods like trade secrets and business intelligence. But bits have become much more fungible in the past thirteen years, and nowadays an illegal info-trader can find pure-digital goods at all levels of illegality. He might trade kiddie porn for digital movies, blackmail info for stolen credit card numbers, control over zombied PCs for World of Warcraft gold, or passwords to porn sites for validated spam addresses. He might even contract for specific services, ranging from mundane transcription of documents to decoding of CAPTCHAs to obtaining the phone records of an HP board member.
Engadget has the dish on the new "Amazon Kindle" eBook reader being developed by Amazon, complete with wireless for instant eBook purchases and what looks like an eInk display. (Let's hear it for people who regularly troll for new FCC Filings reports!)
This may be old hat to some of you, but it was new to me — I just got an email spam that includes subliminals. The whole ad is an animated GIF designed such that the word BUY! flashes over the email for a split second every 30 seconds (including briefly as the email loads). I doubt this'll actually make the spam any more effective (and in this case it's a stock-push-scam, so the spammer-scammer won't know either), but it's interesting to see what they're up to these days.
Neven Vision comes to Google with deep technology and expertise around automatically extracting information from a photo. It could be as simple as detecting whether or not a photo contains a person, or, one day, as complex as recognizing people, places, and objects. This technology just may make it a lot easier for you to organize and find the photos you care about. We don't have any specific features to show off today, but we're looking forward to having more to share with you soon.
Neven Vision's page now redirects to the Google blog post, but a cached copy in The Wayback Machine indicates they've been focusing on face recognition technology of late, and C|NET mentions their iScout software for mobile phones that uses images shot with a camera phone to access additional content. (Link via John Battelle's Searchblog, with some nice extra info at SearchEngineWatch.)
Each year IBM Almaden hosts the New Paradigms in Using Computers workshop. This year's theme was Web 2.0, which in this case roughly meant the mix of community sites, blogs and wikis that make up the supposed "next wave" of the Net.
Below the cut are my notes on this year's meeting. They're still in rough form (and of course are just based on my own recollection and what I managed to type as I was listening), but please enjoy!
Technorati tag: npuc2005
Dr. Miller talked about ChickenFoot, a Firefox extension that makes it (somewhat) easier for non-programmers to customize web pages they come to. The idea is to let people create a bookmark for things like "my latest bank statement," or add a link on every Amazon book review page to the MIT library website's listing for the given book.
Their main contribution is in a few functions that let you specify things like click the "I feel lucky" button instead of having to see what that button is called internally in the page's raw HTML code. They're now working on a version that does full keyword spotting and highlights the buttons as you specify them.
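A toy sketch (not Chickenfoot itself, and much cruder than real keyword spotting) of the kind of matching such a tool needs: given the visible labels of a page's buttons, pick the one that best matches what the user typed, so the user never has to look at the raw HTML.

```python
def best_button(keywords: str, button_labels: list[str]) -> str:
    """Return the button label sharing the most words with the query."""
    query = set(keywords.lower().split())

    def score(label: str) -> int:
        # Count how many of the user's words appear in this label.
        return len(query & set(label.lower().split()))

    return max(button_labels, key=score)

labels = ["Google Search", "I'm Feeling Lucky", "Advanced Search"]
print(best_button("feeling lucky", labels))  # I'm Feeling Lucky
```

A real implementation would also need fuzzy matching ("lucky" vs. "Lucky!") and a way to break ties sensibly, but the word-overlap idea is the core of it.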
There's clearly a tension between web-page authors and users here: authors might not want users modifying their webpages because it hurts their business model (like the Amazon example above), because a customization is pounding their server (as a GreaseMonkey script did to GMail last year), or because bugs in a customization get blamed on the provider. To this Miller says we've been down this road before with ad blockers, framing of content, and deep linking, and that content providers are fighting a losing battle here: those that fight their users will lose their users.
My favorite talk of the bunch. Ross is the founder of Socialtext, which provides enterprises with a wiki and offers hosting services. One piece of news is that they just announced Socialtext Open, an open-source (MPL 1.1) version of their main software that's identical to their non-big-enterprise version.
Here are a few key concepts and quotes; check out his slides for more details.
It's not about the tools, it's about the practices people develop for using the tools.
One of his case studies (DrKW Wiki, an intranet for the investment bank Dresdner Kleinwort) had three inflection points of adoption that corresponded to additional features: single-sign-on to the wiki (from the same sign-on as the rest of the intranet, I presume), WYSIWYG editing of pages so non-techies could participate, and mobile access. Traffic to the wiki was greater than the rest of the intranet in just 6 months. CIO of DrKW: "For early adopters, email-volume on related projects is down 75%; meeting times have been whacked in half."
"PDF is where knowledge goes to die."
Open Source (and Wikipedia in particular) is kept strong by the constant threat of the "Right To Fork." At any time, anyone can copy the Wikipedia software and content and fork, and they've had to stay relevant to fight that off.
He's now working with Dan Bricklin on wikiCalc. Some questions he's asking: What happens when a document is a cell and a cell is a document? And anyone can change a cell? And each one has an RSS feed? And they compute / interact with nearby cells in some way? Distributed?
Photography used to be about memory preservation, now it's about communication & connection.
People aren't generating "content." What they're doing (and motivated by) is:
Interesting statistic: about half their traffic is on their API rather than their webpage (about 10-12M API calls / day).
His main predictions (based mostly on watching his two teenagers and generally being a bright guy):
Adults see the Web as important. Teens create ugly web pages. Teens see the Web as transient — like IM. Email is only for talking to parents and teachers, and the Web is rapidly heading this way. POTS (Plain Old Telephone Service) will probably go that way too: a legacy technology for communicating with geezers who haven't made the jump.
Limits on who will publish? Depends on your definition of "publish," but probably only a few extroverts will publish globally. Most will publish things only readable by friends.
Future: ubiquitous access to the net. Free or flat fee (today: Skype to Skype is free). RSS, small chunks of text (blogging), more audio, video, IM integrated with community-ware (e.g. LJ-Jabber). All via cellphone. Desktops will be docking stations and used for offline editing.
Zero-cost publishing means the cost of failure is zero. Ready-aim-fire becomes Ready, Fire-Aim-fire-aim-fire-aim...
Who loses: Those that control the "last mile" (cable, phone land-lines).
Who wins: People with opinions (extroverts). People who are always online. Those that can deploy quickly, and update quickly. Perpetual beta. Those that collapse development and operations. Asynchronous, Open Source, VOIP. The half-life of concept to End-Of-Life is approaching 5 years.
The trouble with community-generated information: blog spam, "IP looting," and a marketplace for fake reviews. Many (most?) hotel managers have someone working non-stop to plant false reviews for their hotels on the various online review sites.
RealTravel is a web service that lets people post their travel logs, tips and reviews online. The idea is to offer more trustworthy information (and fuller information) because it's tied to a full profile including pictures, maps of where someone went, travel logs, etc.
The hard part is motivating participation: why should I share my feedback & advice with strangers? Answer: Do it for your friends & family. With style (i.e. with tools to make the write-up look really professional). Add auto-generated maps, recommendations for hotels & restaurants, embedded photos, etc.
Principle of design: motivate through enlightened self-interest. Design services that reward individual behavior that has global benefit. Communicate the value proposition to people who would recognize that value.
Key Motivators (design these, and target audiences with these motivators):
Why should users do things that benefit the community? is the wrong question. Make doing the right thing low-friction. "Snap to grid," e.g. have auto-complete of all the cities in the world, snap "diving" to the main-taxonomy tag "diving & snorkeling."
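The "snap to grid" idea above is easy to sketch. Here's a minimal illustration (the taxonomy and the matching rule are mine, made up for the example): nudge a free-form tag onto the nearest entry in a site's canonical taxonomy, so "diving" lands on "diving & snorkeling" without the user doing anything.

```python
import difflib

# A tiny stand-in for a real site taxonomy.
TAXONOMY = ["diving & snorkeling", "hiking & trekking", "food & wine"]

def snap_tag(tag: str) -> str:
    """Snap a free-form tag to the canonical taxonomy, if possible."""
    tag = tag.lower().strip()
    # Exact-word containment first: "diving" -> "diving & snorkeling".
    for canonical in TAXONOMY:
        if tag in canonical.split(" & "):
            return canonical
    # Otherwise fall back to fuzzy string matching.
    matches = difflib.get_close_matches(tag, TAXONOMY, n=1, cutoff=0.4)
    return matches[0] if matches else tag

print(snap_tag("Diving"))  # diving & snorkeling
print(snap_tag("some totally unrelated tag"))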
At IBM's NPUC workshop yesterday, Ross Mayfield announced that his company has released an Open Source distribution of Socialtext, their flagship wiki software, under a Mozilla Public License (MPL 1.1). I wasn't all that pleased with any wikis I've tried in the past (including SocialText when I played with it over a year ago)... might be time for me to give it another try and see how it looks.
Socialtext Open can be downloaded from Sourceforge.
My friends Bill & Amy have set up a page for their Personal Aura Device, a set of sound-reactive LED poi and clothing they're designing and building for Burning Man this year. Seeing them in action is amazing — they have one controller with a microphone that wirelessly controls boards fitted with extremely bright red, green and blue LEDs. The main music mode ties intensity of each color to a different frequency band in the audio, so bass and drums beat in the blues, mid-tones in the greens and vocalists and guitar are followed by the red. It's pretty hypnotic to watch, especially when they've got two sets of poi plus costuming all pulsing in unison to the music.
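The band-to-color mapping is simple to sketch in software. This is my own toy version, not their firmware (the band edges and sample rate are guesses for illustration): take one frame of audio, compute its spectrum, sum the magnitude in three frequency bands, and scale each band's energy to an LED intensity.

```python
import cmath
import math

def band_levels(samples, sample_rate=8000):
    """Map one frame of audio to (red, green, blue) LED intensities by
    summing spectral magnitude in three bands: blue = bass/drums,
    green = mids, red = vocals/guitar highs."""
    n = len(samples)
    # Naive DFT: fine for a short frame; a real device would use an FFT.
    spectrum = [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)))
                for k in range(n // 2)]
    hz_per_bin = sample_rate / n
    bands = [(20, 250), (250, 2000), (2000, 4000)]  # blue, green, red
    energies = []
    for lo, hi in bands:
        lo_k, hi_k = int(lo / hz_per_bin), int(hi / hz_per_bin)
        energies.append(sum(spectrum[lo_k:hi_k]))
    peak = max(energies) or 1.0
    blue, green, red = (int(255 * e / peak) for e in energies)
    return red, green, blue

# A pure 100 Hz tone should light mostly blue:
frame = [math.sin(2 * math.pi * 100 * t / 8000) for t in range(256)]
print(band_levels(frame))
```

Run a new frame through this 30 or so times a second and you get the pulsing-to-the-music effect.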
Interesting: Livejournal has just launched a Jabber server, and are developing integrated features like posting via Jabber and of course integrated Friends and Buddy lists. And they'll be federating, so you'll be able to talk to other Jabber-enabled systems (like GMail/GTalk) without the usual mucking about in monopoly-space (you know, like you do with AIM, MSN, Yahoo! Messenger, and all the other dark-age services that still wish it was 1990).
(Thanks to Sunyata__ for the link!)
From OpenDarwin (thanks to Dave for the link):
We explore making virtual desktops behave in a more physically realistic manner by adding physics simulation and using piling instead of filing as the fundamental organizational structure. Objects can be casually dragged and tossed around, influenced by physical characteristics such as friction and mass, much like we would manipulate lightweight objects in the real world. We present a prototype, called BumpTop, that coherently integrates a variety of interaction and visualization techniques optimized for pen input we have developed to support this new style of desktop organization.
I don't know about this being a full desktop replacement, but for some kinds of applications I could see it working quite well. For example, I'd love it for sorting through tens to hundreds of images or other visual media, especially if they added two-handed or multi-handed interaction to it.
If I were to write a How To Succeed In Business Without Really Trying-style guide to giving demos of your research, it would probably include the following list of things to avoid:
Never one to take the easy route, my current research project contains every one of these features. No matter how many successful trials I run, I never really know whether this time it'll go boom.
Apparently there's another Bradley J. Rhodes out there that publishes in a vaguely related field (Cognitive and Neural Systems) and who got his Ph.D. from Boston University at the same time I got mine from MIT. This should cause no end of fun-filled confusion for years to come!
At the Ambidextrous Magazine launch party last night, Chris Tacklind (of D2M, I think) was showing off his laser-diode glove. These things are lots of fun — I remember my group-mate Michael P. Johnson built one when I was at the Media Lab, and got good enough he could make little figure-8s with two fingers while the other dots circled around them.
Something I hadn't seen before and liked even better was a sound-display toy Chris was playing with, but I forgot to take a picture of that one (eit!). It was just a small cardboard tube with a balloon stretched across one end, and a laser diode shining onto a small mirror stuck to the end of the balloon. You'd speak or sing into the tube and the sound vibrations would show up as little laser shows on the wall in front of you. Use it as a drum and you'd get even cooler effects. (Chris goes around teaching kids to make these things — the one he had was made by a 10-year old.)
Now I want to install something like that into the bottom of the little dumbek drum I have. Stretch a membrane across the bottom of the drum and attach a laser pointer to the inside of the drum shining onto a mirror on the membrane such that it reflects up onto the underside of the white translucent drumhead. Aligned correctly, I bet I could get some fun lasershow-style patterns on the drum head on every beat. (Might need to modify the design if the membrane changes the sound too much — we'll see.)
Remember my continuing rant about how it's time to just cache the entire Web and keep it local? A start-up named Webaroo has a similar idea. They're offering free software (Windows only) that caches "webpacks" of pages that make up certain interest areas, and update those caches whenever you re-synch. Their current plan is the usual "pay for it all through advertising" model.
I've not tried it yet and don't know how easy it is to personalize webpacks or how well they handle things like accessing pages that require sign-in, but it definitely looks like a good start. (And if they do the job well, I could easily see them winding up being purchased by one of the big players in search.)
(Thanks to Aileen for the link!)
It took them longer than I expected, but it looks like Google has finally come out with a Related Links feature that lets people add automatically-updated links to related searches, news or web pages to their sites. Think Google AdWords, only with search results instead of pay-for-placement advertisements. The text-box is simple to add to any webpage (it took me all of 30 seconds) and gets updated to whatever info is current when the page is viewed — essentially adding dynamic related content even if your page remains static.
One thing I didn't have to worry about with Margin Notes was how to keep the system from being gamed by spammers and Google-juice stealers, though I did have to worry about relevance to individual readers. Something I'd like to see is a similar system that uses my own RSS subscriptions as the core source of info, plus perhaps one level of linkage out (e.g. take my blog-roll & RSS subscriptions plus the blog-roll and RSS subscriptions associated with each of those sites). That would give me some amount of personalization as well as make it harder to game the system.
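The "one level of linkage out" idea is straightforward to sketch. This is hypothetical code, not anything Google offers; `fetch_blogroll` is a stand-in for real OPML or HTML parsing of each site's link list:

```python
def expand_one_hop(my_feeds, fetch_blogroll):
    """Return my feeds plus every feed my feeds link to (one hop out)."""
    pool = set(my_feeds)
    for feed in my_feeds:
        pool.update(fetch_blogroll(feed))
    return pool

# Toy blogroll data standing in for real fetches; note that
# battelle.example is two hops out, so it stays excluded.
blogrolls = {
    "docbug.com": ["searchblog.example", "engadget.example"],
    "searchblog.example": ["battelle.example"],
}
pool = expand_one_hop(["docbug.com"], lambda f: blogrolls.get(f, []))
print(sorted(pool))  # ['docbug.com', 'engadget.example', 'searchblog.example']
```

Restricting the candidate pool this way gives the personalization for free: a spammer can't get into my results without first getting into a blogroll I already trust.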
(via Google Blogoscoped)
Dang, I'm late to the party as usual. In case you haven't seen it yet, as cures for information overload go this looks like it has potential...
(Thanks to Jim for the link.)
Long before flash mobs became the phonebooth-stuffing of the aughts, science fiction author Larry Niven speculated how the invention of teleportation would cause sudden flash crowds of gawkers to appear wherever a newsworthy event is taking place. Over at Searchblog, John Battelle suggests we're bound to see something similar happen in couch-potato-land once channel surfers can automatically see what show or media event everyone else is watching...
(Image credit: Jeff Han)
Jeff Han at NYU has a very nice demo video of what you can do with a multitouch-sensitive screen (video is also cached at YouTube). This uses frustrated total internal reflection to sense both pressure and location of multiple simultaneous touches.
(Thanks to /amq for the link!)
In many ways I see the problems with Google's centralization as just another facet of a tension that has existed since the Internet started: the tension between the decentralized "every end-user is his own service provider" model and the centralized fiefdom model where you sign up for one of a handful of service providers. I think it was the coming of the Web in the early '90s that finally tipped the scale in favor of the decentralized model, and as a result we saw an explosion of URLs and email addresses that weren't only from AOL, CompuServe or Prodigy. This, I think, was all for the better. But now the proliferation of GMail addresses and Google Base scare me precisely because they smack a little too much of the fiefdom model we so wisely avoided 15 years ago.
From CNN Money:
NEW YORK (FORTUNE) - Aiming to increase access to the Internet, particularly for people away from home, Google, Skype and other leading Internet investors Sunday announced a $21.5 million investment in an innovative new Internet access network called Fon.
The company, founded by Spanish entrepreneur Martin Varsavsky, aims to build a network of WiFi hotspots far larger than those from companies like T-Mobile and Swisscom, none of which have more than 30,000 hotspots worldwide.
Fon aims to exceed that in its first year, and to have one million hotspots in four years. It hopes to achieve this by getting individuals and businesses to contribute their own hotspots to the network in exchange for getting access wherever they go. Those who contribute access get to use the access of others.
If Fon comes through with a simple-to-install and manage setup I can see this idea really gaining traction — it sounds like he's got the right level of incentive to end-users who donate their bandwidth, and the technology for charging non-members for hotspots is already tried & tested. (Heck, if it works well I'd pay $25 just to get his "use only 50% of your bandwidth on outsiders" software; I keep thinking something like that must exist, but I've yet to find one.)
Long as he can jump-start the process and can convince ISPs to let him in on a racket they'd love to control themselves I'd say he's got a good chance. (I see he's already signed on Speakeasy in the US, though they also have one of the most liberal wireless sharing policies around so others may be a harder sell.)
I'm not sure yet what I'd do with it, but the Optimus mini-three keyboard looks very fun: it's an auxiliary keyboard with three keys, each with its own little OLED screen displaying the current function (potentially animated). The most compelling examples are where not only is the button's function displayed but also what effect it'll have in the current context, like what image, song, or PowerPoint slide is coming up next when browsing through media.
USB 1.0, currently Windows only for the configuration software but others are coming. Pre-orderable for $100 until April 2nd, shipped to arrive on May 15th. This is coming out of the Art. Lebedev Studio in Russia — looks like they've also got a complete keyboard coming soon too.
(Thanks to Nerfduck for this link too!)
Via Reuters: UC Irvine professor Beatriz da Costa will be releasing 20 pigeons into the San Jose skies during this year's International Symposium on Electronic Art. Each pigeon will be equipped with a camera, GPS, air-pollution monitor and cellphone, and images and location-based pollution data will be automatically posted to a PigeonBlog. (No word on whether PigeonBlog will comply with RFC 1149.)
Thanks to Nerfduck for the link!
The Wikipedia community is trying to respond to whitewashing of politically-sensitive articles that appear to be coming from congressional staffers themselves (with the staff of Marty Meehan (D, MA) being one of the biggest culprits).
I'm always amazed that Wikipedia works as well as it does — hopefully the bad press Meehan and other congress-critters get over this flap will outweigh any good press specific staffers may have hoped to achieve.
I'm a bit late on this, but I'm psyched to see that last week Google flipped the switch to allow all their Google Talk instant messenger accounts to talk to any other Jabber client out there. I've not verified it yet, but I think that includes people with .Mac accounts using iChat, and BigBlueBall has a nice tutorial on how to use the federation to hook up your GTalk account directly to AIM, Yahoo!, MSN and ICQ using Jabber transport services.
This is the final step I've been waiting for before ditching my AIM account and going entirely to Jabber!
Podzinger is a nice little search engine for podcasts that indexes the podcast audio (using BBN's speech-to-text software) as well as available metadata. They also have a nice interface for showing searched-for words in their transcript, with the words linking to the proper segment in the audio clip.
Intelliseek will be providing a big corpus of spidered and annotated blog posts to attendees at the 3rd Annual Workshop on the Weblogging Ecosystem (held in conjunction with the WWW 2006 Conference in Edinburgh, Scotland):
The data release comprises a complete set of weblog posts for three weeks in July 2005 (on the order of 10M posts from 1M weblogs). This data set has been selected as it spans a period of time during which an event of global significance occurred, namely the London bombings.
The data set includes the full content of the posts plus mark-up. The marked-up fields include: date of posting, time of posting, author name, title of the post, weblog url, permalink, tags/categories, and outlinks classified by type - details may be found here.
Sounds like a great resource for researchers. I'm also amused (in a dark sort of way) by the datashare individual agreement they require people to sign — essentially they admit that there's no way they can get copyright clearance from all million or so bloggers they've collected, so they just ask everyone to agree to remove any posts if anyone complains, not use the results for commercial purposes and not use it past the workshop.
GMail has added Web Clips at the top of their page, showing RSS and Atom feeds plus "relevant sponsored links" to the top of your messages. Unfortunately, it looks like only the sponsored links are actually relevant (which I read to mean "related to the message you're reading"). Clips from your own RSS feeds are still just random.
Hopefully they're busy working on fixing that — I still think automatic annotation of email (and blog entries) with other related entries from a largish set of favorite RSS feeds is a seriously useful application that needs to be exploited. Honestly, I've been expecting it to be just around the corner for about three years now, and I'm not sure why I'm still waiting. (I know, I know... if I really want it done I'd sit down and write one myself...)
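If I did sit down to write one, the core would be something like this. Not Google's feature, just a sketch of the matching I mean: score each feed entry against the text of an email by word overlap (cosine similarity over word counts) and surface the best few.

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words vector for a chunk of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def related_entries(email_text, feed_entries, top_n=3):
    """Return the feed entries most similar to the email's text."""
    email_vec = vectorize(email_text)
    scored = sorted(feed_entries,
                    key=lambda e: cosine(email_vec, vectorize(e["summary"])),
                    reverse=True)
    return scored[:top_n]

feeds = [
    {"title": "E Ink ships flexible display",
     "summary": "flexible e-ink display for ebook readers"},
    {"title": "New pasta recipe",
     "summary": "a quick weeknight pasta with garlic"},
]
email = "Have you seen the new e-ink ebook reader display?"
print(related_entries(email, feeds, top_n=1)[0]["title"])
# E Ink ships flexible display
```

A real version would want TF-IDF weighting and stop-word removal so that common words like "the" don't dominate, but even this crude matching would beat the random clips GMail shows today.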
Two interesting technologies have just been announced in the flexible-computing arena. First (via engadget) is NEC's announcement of their Organic Radical Battery, a 300-micron thick flexible battery with an energy density of about 1 mWh/cm2 and recharge time of just 30 seconds. Then throw in Plastic Logic's announcement of a 10" diagonal SVGA E-Ink display (4-level greyscale) that's both flexible and less than 0.4mm thick. (Thanks to Kurt for the links!)
(Images: NEC's ORB battery and Plastic Logic's E-Ink based display)
Brian Toth's Google Maps plugin is a cute integration for OSX that lets you lookup your Apple Address Book addresses using Google Maps and also get directions between your Address Book addresses. (Thanks to Noel for the link!)
I picked up a Fly Pentop the other day to play with (one of the advantages of being a user interface researcher is all the toys :). Here're a few thoughts.
There're at least four challenges with using a real ink pen as a computer interface:
In spite of these limitations, it's extremely engaging to be able to draw your own functional user interface — as anyone who read Harold and the Purple Crayon or watched Simon in the Land of Chalk Drawings as a kid knows. The effect really hit me when I was making a calculator. First I wrote the letter "C" and circled it to enter calculator mode. Then, as the pen spoke instructions to me, I drew a big rectangle and started to fill it with numbers and arithmetic symbols. I realized about three numbers in that I didn't have to stick to the usual layout and placed the rest of the numbers going up, down and sideways. Then I tapped on the numbers with the pen to type out "22 + 44"... only to discover I'd forgotten to draw an equals sign. I quickly drew one in, then tapped it to hear the pen speak "22 + 44 equals 66". It was as if I were running from something in the land of chalk drawings and someone suggested we draw a door so we could escape!
The interface also feels more magical than it would if it were implemented on a tablet PC. This could be a novelty effect — I'm used to paper being static and non-functional and computer screens being reactive — but I think it's also because it feels like the pen is reacting to my physical environment, rather than simply reacting to the way I interact with it. When I interact with a tablet PC, I think of the computer as being the screen (even if the actual CPU is somewhere else). With the Fly, I think of the pen and speaker as being the device, but not the paper. That means even though a tablet PC and the pentop computer might implement the exact same interface, I feel more of an emotional attachment with the pen because it appears to be observing and sharing my external environment and not just the actions I perform directly on the device.
The NYTimes has a write-up on Leapfrog's Fly Pentop Computer, which essentially merges the Anoto Pen technology with a speaker and what sounds like some very clever games & applications, all wrapped in a $100 pen. Supposedly it's for the 8 to 14-year-old market, but I'm thinking it might be good for this 30-somethinger as well.
(Thanks to Ted for the link.)
Update 11/25/05: I picked one up at Fry's a couple days ago — here're some thoughts on it.
For an example, may I suggest the new Docbug Reader's group :).
(Thanks to Del for the link.)
By way of John Battelle's SearchBlog:
Amazon Mechanical Turk provides a web services API for computers to integrate "artificial, artificial intelligence" directly into their processing by making requests of humans. Developers use the Amazon Mechanical Turk web services API to submit tasks to the Amazon Mechanical Turk web site, approve completed tasks, and incorporate the answers into their software applications. To the application, the transaction looks very much like any remote procedure call: the application sends the request, and the service returns the results. In reality, a network of humans fuels this artificial, artificial intelligence by coming to the web site, searching for and completing tasks, and receiving payment for their work.
The service is legit — http://mturk.amazon.com/ redirects to the main site.
On first blush it sounds similar to OpenMind (which was started by David Stork, a coworker of mine). OpenMind has especially been used to gather human knowledge for training up AIs (especially common-sense knowledge) — I wonder where Amazon expects to go with the idea.
Yesterday Google announced that the first wave of Google Print is up, starting with a large number of books that are in the public domain. (As they say in their PS, see also The Million Book Project's libraries in the US, China and India as well as Project Gutenberg.)
Google Base is Google's database into which you can add all types of content. We'll host your content and make it searchable online for free.
Once again, Google proves they grok that the marginal cost of storage and file transfer is essentially free. By becoming the go-to place for everyone's content they draw more eyeballs for their web ads and position themselves to become the best aggregation service on the Net (if they aren't already).
Britain's National Archives have started putting their government-produced public information films online, starting with films from 1945-1951. Everything from a 1948 cartoon explaining the benefits of the newly formed National Insurance Act (Britain's welfare system) to a film showing how to use the pedestrian crossing (crosswalk).
Material is released under the Crown Copyright license, which is essentially a non-commercial-use license with the added stipulation that the material is "re-used accurately and not used in a misleading context."
(Thanks to Andy for the link.)
Osaka has great infrastructure for helping the blind find their way. Not only is the city covered in these yellow bumpy paths you can follow with a cane or your foot (with differently shaped bumps at intersections so you can tell where to turn), but they've also got braille signs in all the subway stairwells explaining where this passage leads. The best part is how they put the braille in the best possible place for it to be found just by feeling around: wrapped around the handrail itself.
The latest buzz in FM music formats is Jack-FM, a nationally syndicated format that eliminates DJs and replaces them with essentially random shuffle-play (the rough transitions between radically different songs is part of the charm). The playlist is pulled from a library of around 1,200 songs, about 3-4 times that of a traditional station, though all songs have to have been in the top 40 in the last 40 or so years. Jack-FM's website attributes their success to the iPod making people comfortable with shuffle-play:
Random acts of greatness “jack” radio. Several kajillion iPod users can’t be wrong. Thanks to the shuffle feature, hearing different styles of music one after another feels completely natural, and desperate radio programmers have taken notice. The “Jack” format—so named for its Everyman inclusiveness—is popping up in every market to save commercial radio from obsolescence.
I'm skeptical about Jack "saving commercial radio from obsolescence" — it sounds more like the blowing of taps to me. Way back when, before the days of top-40 or Clear Channel, DJs actually added value through their extensive record collections and expert knowledge of who the hot new groups were. But that was then, and by eliminating DJs altogether, Jack is declaring that the job music-radio DJs do today can be done just as well and more cheaply by a random-number generator.
That may be true, but I have to wonder if the radio stations embracing this format have thought this cynical line of thinking all the way to its conclusion. If Jack is so wonderful because it emulates my iPod on shuffle play, then why the heck do I need their advertisement-filled, frequency-hoarding broadcast at all? Sure, 1,200 songs is better than 300, but my iPod holds over ten times that many songs, lets me skip songs, lets me pick my own formats and lets me share my playlists with my friends — all ad-free. The only advantages broadcast has over the iPod are expert DJs (which they're eliminating), installed base of radios (which iPod-like technology will eventually match), and the arcane copyright laws that give radio broadcasters a way to legally broadcast without needing to pay the RIAA or recording artists (though they still pay song writers through BMI or ASCAP.) Even in the slow and bloody copyright wars, that third advantage is also slipping away. Today I can fill my iPod from an all-you-can-eat subscription service, from Creative Commons and other legal free-download sites, or from a number of less legal sources, and other sources keep rising. Once it becomes ubiquitous, why would we as a society keep granting exclusive rights to scarce public radio frequencies for such an archaic way to transmit music?
Siemens is showing off a paper-thin electrochromic display that they hope will eventually lead to an all-in-one device that uses printing technology to lay down the display, circuits and even the battery. According to the New Scientist:
The display is controlled by a printed circuit and can be powered by a very thin printable battery or a photovoltaic cell. The goal is to be able to create the entire device – the display and its power source – using the same printing method, so that manufacturing costs would be as low as possible. Siemens expects to achieve this by 2007.
Also impressive is that the display cost about £30 (just over $50) per square meter of materials.
Update 5pm: added link to Siemens announcement.
To the surprise of few, Steve Jobs announced a Video iPod at his "One More Thing" press conference today. The main iPod now supports H.264 and MPEG-4 video formats, with a capacity of around 150 hours' worth of video on the 60GB model. You can also download movie trailers and purchase music videos at the iTunes Music Store for $1.99 each, and it looks like ad-free episodes of shows from ABC and Disney television are coming soon.
(As is traditional after Apple announcements (regardless of how good the news), AAPL is down five and a half percent so far today.)
Update 3:09pm: TV-show purchase is now up, with episodes for $1.99 and a full season for $34.99. And they've got Pixar shorts up for only $1.99 too! (Not sure if JHymn will work with video like it does with audio — I'll try it out tonight.)
Ignore the fact that podcasting isn't a new medium (it's called audio, guys — it's been around a while). It's also not that different from seven years ago when people just linked to MP3s on their webpages, and search engines like Scour.net and Lycos MP3 Search would find them for you. Technologically the only difference is a little bit of XML to help machines know what's being linked, plus a few tweaks (like RSS subscription) that make the experience more user-friendly.
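For the curious, that "little bit of XML" is basically RSS's enclosure tag — an item that points at an MP3 with its size and MIME type. Here's a minimal sketch of generating one in Python (the episode title and URL are made-up placeholders, not a real feed):

```python
import xml.etree.ElementTree as ET

# Build a minimal RSS 2.0 <item> whose <enclosure> points at an MP3.
# This is essentially all the machinery that turns "a link to an MP3
# on a webpage" into "a podcast episode" for aggregators.
item = ET.Element("item")
ET.SubElement(item, "title").text = "Episode 1"
ET.SubElement(item, "enclosure", {
    "url": "http://example.com/episode1.mp3",  # placeholder URL
    "length": "12345678",                      # file size in bytes
    "type": "audio/mpeg",                      # MIME type of the audio
})
xml = ET.tostring(item, encoding="unicode")
print(xml)
```

Everything else — subscription, automatic downloading, syncing to the iPod — is client-side convenience built on top of that one element.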
What I think has changed in the past 7 years is the number of people producing and distributing their own amateur and semi-pro content, and the accompanying infrastructure to support them. In 1998 almost all the MP3s available on the web were copyrighted songs people had ripped from their CD collections, and so the RIAA and other members of the content cartel could squash whatever infrastructure cropped up in the name of stamping out piracy. Today there're countless MP3s online that are completely legal to download, and that primes the pump for inventing the infrastructure to make it even easier. Moreover, piracy has largely gone to the P2P networks, so now MP3s on the web are harder to paint with the sweeping "it's all piracy" brush.
And that all leads to podcasting, which I'm hearing the media describe as "making your own radio programs for broadcast over the net." This is, of course, the big long-term competition for the content cartel — their big-advertising, mass-produced one-size-fits-all model will have trouble competing with thousands of niche narrowcasts that each have a small personal audience. More importantly, podcasting is online audio that finally isn't being linked with piracy — it's good, happy audio on the web, not at all like those nasty pirated MP3s in the previous decade.
And just think, it only took us seven years to get here...
E Ink just announced it will be offering prototyping kits that include a 6"-diagonal, 170-pixels-per-inch, 4-gray-level E Ink display. Like all E Ink displays, it only needs power to change the display, not to maintain the image. The kit also includes a development board with a 400 MHz Gumstix single-board computer as well as I/O boards for MMC, Bluetooth and USB.
No word yet on prices, though their kits page says order forms will be available soon. Kits will begin shipping November 1st.
Update 9/27/05: fixed typo (I'd said it needs power to change the display but not to update it, which makes no sense).
Update 9/29/05: As Andrew points out in the comments, they've now posted their order form and the kit is $3000. Not cheap, especially considering you can get your own Gumstix for $159 and a Sony Librié for $419. (You could also get a Toshiba DCT-100 for just $229, though I believe that's using one of Kent Displays' ChLCD displays.)
Lately I've noticed a rise in the number of Google search results that just lead to a bunch of ads plus some automatically-generated content copied from other web pages, rather than pages with the original content I'm looking for. This is the latest step in an ongoing arms race between the search engines (and their users) and so-called search engine optimization companies that try to funnel searchers through to their customers' ad-laden sites rather than letting them go direct to the site they want. The SEOs are essentially using Google's own infrastructure against it, creating Google-hosted blogs, generated (I'm guessing) from the results of Google searches, all sprinkled with links to pages containing nothing but Google-supplied ads.
Google's trying to stop folks from gaming the system like this, but I expect there's some kind of fundamental limit to what can be done to stop it. You could probably even describe it as a theorem:
For any automatically-indexed search engine of sufficient size, it is possible to construct a document that has a high page rank for a given query even though the constructed document adds no useful information beyond that which would have been returned without it.
A corollary would be:
The more complete a search engine is in terms of documents indexed, the lower the relevance of its search results will be in terms of the ratio of documents with original content vs. documents that simply copy information from other pages.
If this does, in fact, wind up being a fundamental theorem for search engines, I have a humble suggestion for what we should name it: Göögel's Incompleteness Theorem.
Semapedia is a project to annotate physical locations with 2D barcodes that link to Wikipedia articles. With the Semacode software running on your PDA/cellphone, you scan a barcode and it'll take you to the linked-to article. There've been a lot of attempts at this sort of physical annotation of the world, WorldBoard being one of the earlier ones I remember.
I like the concept in theory, but I'm always disappointed by the quality and variability of the links. Do I really want a link about privacy just because I see a no-trespassing sign, or about the Hofburg Imperial Palace just because I'm standing there? Perhaps, if I'm in the mood for ironic social commentary or I'm a tourist with an interest in architecture, but most people won't be the right audience for any given link. One man's art is another man's graffiti, and the world-annotation systems I've seen are currently little more than virtual spray paint.
The variability is the real key. If 90% of the tags I come across link to something interesting to me, I'll probably follow every one I see. If only 50% link to something interesting, I might look at the human-readable title printed on the tag and then decide whether I think it likely that the article will be well-written and interest me. If 90% of the tags wind up being useless, I won't even bother reading the title — and then it won't matter that there are 10% that I would have enjoyed if I had bothered to look.
I'm not totally pessimistic about this sort of technology though. With the right combination of filtering (to make tags I don't care about completely invisible), subtlety (to make the tags I might care about still be unobtrusive in case I don't want to be bothered) and community support (to ensure relevance to me and to bond me to my community regardless of the link quality), I could see something like this finally taking off.
(Thanks to Eugen Leitl on the Wearables mailing list for the link!)
There's interesting work going on between Brad Fitzpatrick at LiveJournal, Bob Wyman of PubSub and other folks at Six Apart (who make the Movable Type blogging software) about making continuous streams of blogging content, so that large aggregators (like PubSub, Technorati or Google) can get continuous updates from big sources of blog posts like LiveJournal, Six Apart or Blogger.
Or, if you feel like playing yourself, type this into a command prompt to see a continuous stream of "No one understands me. Should I dye my hair pink or blue?":
telnet updates.sixapart.com 8081
GET /atom-stream.xml HTTP/1.0
(followed by a blank line — press Enter twice to complete the HTTP request)
(Extra points for being the first one to plug it into a screensaver :)
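If you'd rather script it than telnet by hand, the core of a client is just buffering the stream and pulling out complete Atom entries as they arrive. A rough Python sketch (the `entries` helper is mine, not anything from Six Apart; the demo runs on a canned sample rather than the live stream):

```python
import re

def entries(chunks):
    """Yield complete <entry>...</entry> blocks from a stream of text
    chunks, buffering across chunk boundaries (the stream never ends
    on its own, so we can't just read-to-EOF and parse)."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        while True:
            m = re.search(r"<entry>.*?</entry>", buf, re.DOTALL)
            if not m:
                break
            yield m.group(0)
            buf = buf[m.end():]

# Demo on a canned sample. Against the real service you'd feed this
# from a socket connected to updates.sixapart.com:8081 after sending
# "GET /atom-stream.xml HTTP/1.0\r\n\r\n".
sample = ["<entry><title>Pink or ", "blue?</title></entry><entry>",
          "<title>No one understands me</title></entry>"]
found = list(entries(sample))
print(len(found))  # 2 complete entries
```

A real client would want a proper incremental XML parser rather than a regex, but this shows the shape of the problem: entries can straddle network reads, so you can't treat each read as a document.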
I've been wondering when Google would get around to this. A few days ago they announced Google Blog Search, which indexes blog entries based on RSS or Atom feeds.
Google's playing catch-up to smaller services like Technorati, but seems to have scooped Yahoo! and MSN, both of whom have been rumored to be coming out with an RSS-feed search "any day now" for months (Yahoo! even briefly revealed a test page before they realized it wasn't being firewalled properly).
One feature Google gets right is that every page includes a link to subscribe to an RSS or Atom feed on that query, essentially turning any search phrase into an aggregator. Technorati has something similar with their watchlists, but you have to create an account and go through their page to create a new standing query. Google just creates the contents on the fly — a big win in terms of ease-of-use, since you're most likely to want a standing query right after you've just done the search as a regular one.
One of the folks helping to raise money for the Katrina relief effort is the site Boobs 4 Bourbon Street (it's been slashdotted and is down right now, but check back later). People are donating pictures of their bare breasts (with the website shown in the photo itself, to ensure they were taken with full knowledge and consent of how they'd be used), and anyone who donates $5 or more to one of the main relief charities gets an account and password to view them.
From the site:
Click on any of the charities in the right-hand column, go to their donation page, and make an online donation for $5 or more. (Please note: Habitat For Humanity's minimum online donation amount is $10. If you try to make a smaller donation in their form, you'll get a fairly nondescript error message.) Then, once they send you a confirmation email, forward it to us, at firstname.lastname@example.org. At this point, one of our volunteers will see your email, then create an account with which you can access the gallery, and email you with your login information. The username will be the email address that you emailed us from. The password will be randomly-generated. Write it down or print it out or save the email.
To quote my friend Adam, this has got to be the least efficient way to use the Internet to get pictures of boobs since Archie, but hey — it's for charity, and when I checked a couple days ago they'd already raised $3000.
I also note (so you won't think this post is entirely about boobs) that they've got an interesting trade economy going here, where they're providing a service (in this case light-porn) not for cash but for proof that cash was paid to someone else. It'd be great to build up the infrastructure to make this kind of thing easier, much as Software Ransom sites have done for pooling commissions for software (are you listening, PayPal?).
The full implications of this kind of client-side Wiki didn't really hit me until I briefly wondered where I could download a copy, only to realize I already had just by visiting the site. As their instructions point out, just do "Save Webpage As..." (either of their main page or of a blank version) and you've got your own copy, ready to edit.
Apple announced their new iPod Nano yesterday — 2 or 4 GB (around 500 or 1000 songs, or around 25,000 photos), with 14 hours battery life, a color display and a click-wheel, all squeezed into a 3.5 x 1.6 x 0.27 inch package. That's about 20% of the footprint and only 75% of the thickness of a single standard CD jewel case! Nicely done.
MoveOn.org is hoping to use the same infrastructure they've used in the past for political action parties and call-athons to organize temporary housing for refugees from Hurricane Katrina. Check out www.hurricanehousing.org for more details, especially if you live within about 300 miles of the affected areas.
Participatory Culture Foundation has released DTV, an open source (GPL) video player that combines an RSS aggregator with a BitTorrent client. It's currently in beta for OS X, with Windows and Linux versions coming soon. Combine it with their open source Broadcast Machine to create your own channels, or use del.icio.us to add videos you find on the web to your own published channel.
(Props to Noel for the link.)
Chris Schmandt, the head of MIT Media Lab's Speech Interface Group, has just made a PDF of his now out-of-print 1994 book Voice Communications With Computers: Conversational Systems available for download in PDF form for free off his website.
Chris was one of the readers for my Generals Exams, and naturally this was one of the books on my reading list. It's 12 years old at this point, but most of the issues he talks about are inherent in speech communications regardless of the technology. Highly recommended.
(Thanks to Thad for the link!)
Another back-of-the-envelope calculation, inspired by a comment by my friend Beemer:
Years before a single hard drive will store 1 bit for every atom in the Universe at current doubling rates: 222
Warning: past performance is not necessarily indicative of future results.
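The arithmetic behind that number, for anyone who wants to check it: assume the usual rough estimate of ~10^80 atoms in the observable universe and a circa-2005 drive holding about 1 TB (~8×10^12 bits), doubling in capacity every year (both starting figures are my assumptions, so the answer wiggles by a year or two depending on what you pick):

```python
import math

atoms = 1e80         # rough estimate: atoms in the observable universe
bits_now = 8e12      # ~1 TB drive, circa 2005, in bits
# With capacity doubling annually, the number of doublings needed is
# just the base-2 log of the ratio:
years = math.log2(atoms / bits_now)
print(round(years))
```

Each doubling of the starting capacity shaves exactly one year off, which is why back-of-the-envelope sloppiness barely matters on curves like this.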
I just read a great review of Mac OS X 10.4 (Tiger) over at ars technica. If you're into Macs or finer points of geekery like how file systems should work, especially check out the long discussion on metadata in Tiger.
Yesterday I said that within a decade disk space should be cheap enough to put the entire visible web on your desk for under $1000. I think that's actually a pretty conservative estimate, since it assumes a 100 KB average page size, up to an order of magnitude higher than some estimates.
Here's another back-of-the-envelope: let's say we wanted the equivalent of Google's webcache on your desktop (that is, all the HTML but no images). Another way to calculate it starts with the fact that the 2003 update to Berkeley's How Much Info? study estimated that in 2002 the web was only 167 Terabytes total, with only 30 TB as HTML (69 TB when you include images). Assuming 75% compression, that's just around 8 TB. An OCLC study from that same year calculated that the total number of web pages was only increasing by about 5% per year (with the number of sites actually shrinking, but the number of pages per site growing). That rate had been decreasing ever since the explosion in the mid '90s, but let's assume growth became a steady 5% and will stay at that rate for the next few years. (There are a lot of assumptions going on here, but the nice thing about these kinds of curves is that even if my numbers are off by a factor of two somewhere, so long as disk keeps increasing at the same rate that crossover point only changes by one year.)
Now we've got two trends, and just need to find the intersection point for the price we want:
[Table: year; price of 1 TB of disk; size of the public web (compressed HTML only, assuming 5% growth/year); and the cost to store it]
So given a few assumptions, we'll be able to cache all the raw text on the public web for under $1000 (disk cost) within 3 years!
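The crossover arithmetic itself is short enough to sketch directly — 8 TB of compressed HTML in 2002 growing 5%/year, against disk starting around $500 per terabyte in 2005 (my reading of the ~50¢/GB street price) and halving every 12 months:

```python
web_tb = 8.0          # compressed HTML, 2002 (Berkeley estimate)
price_per_tb = 500.0  # ~$0.50/GB in 2005 (assumed starting price)

# Roll the 2002 web size forward to 2005 at 5%/year:
web_tb *= 1.05 ** 3

year = 2005
while web_tb * price_per_tb > 1000:
    year += 1
    web_tb *= 1.05       # web grows 5% per year
    price_per_tb /= 2    # disk price halves every 12 months
print(year)  # first year the whole cache fits under $1000 of disk
```

Since the price halves every year while the web only grows 5%, the cost falls by nearly half annually — which is why the answer comes out to just a few years, and why a factor-of-two error in any starting number only shifts it by one.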
A paper from January 2005 calculates the publicly indexable Web (the part easily accessible to search engine web-crawlers) as being around 11.5 billion pages. Estimates on average webpage size seem to be all over the map, but let's figure around 100 KB per page, for a total of around a petabyte (one million Gig) for today's indexed web. (I'm assuming text and images, but ignoring other media.)
Disk these days is going for less than 50 cents per Gig, so enough disk to store your own personal Google (and then some) costs around $500,000. With compression you can probably cut that in half. The price of disk is also falling by a factor of two every 12 months, so assuming no major jumps or snags in the disk-price curve, in a little less than a decade we can expect to hold the equivalent of today's indexed web for less than $1000.
Now of course, in that time the web will continue to grow, so we may no longer be satisfied with our measly petabyte-on-the-desk, but I figure the amount of human-generated Web content has a much slower growth rate than our disk-space curve. The number of web sites actually shrank between 2001 and 2002, and though it now seems to be growing again there's only so much content that human beings can create in a day. The real question I have is whether in a decade anyone will see having access to the whole web as being all that interesting — I could easily see the majority of people losing interest in the surface web in favor of personal deep-web niches. The only reason I want the whole web in my pocket is because it's too hard for me to filter out in advance the 99.99% of the web that'll never be of interest to me — the closer we get to that kind of pruning, the less disk we need and the higher-quality the experience will be.
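The "little less than a decade" figure falls straight out of the numbers above — a petabyte at 50 cents a gig, halved for compression, then log-base-2 down to $1000 (the only assumption beyond what's stated is that the halving curve holds):

```python
import math

pages = 11.5e9                         # publicly indexable pages, Jan 2005
kb_per_page = 100                      # assumed average page size
total_gb = pages * kb_per_page / 1e6   # ~1.15 million GB, i.e. ~a petabyte
cost_now = total_gb * 0.50             # at ~$0.50 per GB
cost_compressed = cost_now / 2         # "cut that in half" with compression

# Disk price halves every 12 months, so the years until the bill
# drops under $1000 is the base-2 log of the cost ratio:
years = math.log2(cost_compressed / 1000)
print(round(cost_now), round(years, 1))
```

Note this holds the web's size fixed at today's snapshot; the follow-up estimate that lets the web keep growing (but counts compressed HTML only) lands far sooner.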
Update 8/2/05: doing a different back-of-the-envelope estimate leads to being able to store a compressed-HTML cache (no images) on less than $1000 worth of disk within 3 years...
...and Photoshop takes them right back off.
(Thanks to Spider for the link...)
Microsoft Research has announced a Request for Proposals for projects relating to their Digital Memories (Memex) research kit, in the context of "personal lifetime storage." Microsoft's inspiration (and probably the inspiration for everyone else working in this area too, at least indirectly) is Vannevar Bush's 1945 article As We May Think, in which he famously described a kind of personal library-in-a-desk he called the memex:
Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and to coin one at random, "memex" will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.
MSR expects to give 6-9 awards to college and university projects, up to a max of $50,000 per award, and recipients would also be given a SenseCam wearable camera and software from the MyLifeBits, VIBE and Phlat research projects at Microsoft Research. Strings are minimal — they expect semiannual progress reports, want it presented at at least one of their workshops and expect the project to be either dedicated to the public domain or released under an open license such as the BSD license.
Yesterday a consortium of the major movie studios announced final specs for a new standard digital format for movie theaters. The specification uses JPEG 2000 video compression, which (though it happened before I started working there) I'm proud to say largely came out of work performed at my lab.
The big advantage of JPEG 2000 is that you can "pull out" bits from a code stream to get different resolutions — in this case a 4K distribution (1,302,083 bytes per frame at 48 FPS) and a 2K distribution (651,041 bytes per frame at 48 FPS) can both be generated on-the-fly from the same file, just by discarding segments of the stream.
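A quick sanity check on those odd-looking per-frame byte counts (my arithmetic, not part of the spec announcement) shows they're anything but arbitrary — they work out to round data rates, with the 2K stream exactly half the 4K one, as you'd expect when the 2K is just the 4K codestream with the highest resolution level discarded:

```python
# Per-frame sizes from the announcement, at 48 frames/second:
rates = {}
for name, frame_bytes in [("4K", 1_302_083), ("2K", 651_041)]:
    mbit_per_s = frame_bytes * 48 * 8 / 1e6  # bytes/frame -> megabits/sec
    rates[name] = round(mbit_per_s)
    print(name, rates[name], "Mbit/s")
```

So the byte counts are really just 500 Mbit/s and 250 Mbit/s budgets divided out per frame.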
(Thanks to Mike for the link.)
The display is a passive-matrix, reflective type cholesteric liquid crystal display. Two 3.8-inch diagonal QVGA prototypes, a monochrome display and a color version able to display 512 colors, were shown.
Differing from widely used flat displays that have color filters consisting of red, green and blue pixels, the paper display has a three layered structure in total about 0.8 mm thick. One layer consists of two 0.125 mm-thick films sandwiching liquid crystal. Cholesteric crystals in each layer are twisted in a certain pitch to reflect only red, green or blue light respectively.
Images on the screen can be changed using 10 to 100 milliwatts, depending on scanning speed.
(Thanks to John for the link...)
Microsoft and Google both came out with new versions of free online satellite mapping software this past weekend. Google Maps has added the "hybrid view" that lets you see your driving directions laid out on the map itself, which is the feature I've wanted ever since they came out with satellite-view. Microsoft has just released their web-based Virtual Earth, which doesn't yet support driving directions (coming soon I'm sure) but does include a nice (dare I say "Google Maps-like"?) scrollable interface and switching between maps and aerial photography. They've got an interface for keeping track of multiple locations on a scratch pad, an API for adding your own way-points on the URL line, and a cute zoom-in animation.
One fun feature of Virtual Earth is that some parts of the US have incredible resolution: compare Seattle's space needle from Virtual Earth and Google Maps to see what I mean. Unfortunately, Virtual Earth's image coverage is pretty spotty. In spite of the name, it only covers the USA — I'm guessing they're just using USGS publicly available images right now. Also, for many areas they're using very old black-and-white images that they've then overlaid with color for roads and parks. This leads to a few embarrassing misses like the fact that their map shows Apple Computer's Corporate HQ has yet to be built (I didn't see any horse-and-buggies on the streets though, so it can't be too old).
According to a Data Memo by the Pew Internet & American Life Project, 29% of online Americans have a good idea what phishing means, 13% what podcasting is, and only 9% know what RSS feeds are. Over half knew the terms adware, internet cookies, spyware, firewall and spam.
Of course, the real question in my mind isn't whether people know what phishing means, but whether they know that regardless of what it's called the 22 "You must change your PayPal password!" emails they have in their inbox are attempts at fraud. Still, it'll be interesting to see how these terms spread in the next six months or so.
(Thanks to Rowan for the link.)
"Power House Mechanic" (1920) by Lewis Hine
The New York Times has a story on PhotoMuse.org, a collaboration between the George Eastman House and International Center of Photography Alliance. (The site is currently overwhelmed, but they've got a sampler up at the moment.) From the article:
While there are now dozens of growing digital databases of photography on the Web, many — like Corbis and Getty Images — are commercial sites that do not allow the public unfettered access to their collections. The Photomuse site will join others, like the digital collections of the Library of Congress, the Metropolitan Museum of Art and the National Museum of Photography, Film and Television in Bradford, England, that are beginning to create what amounts to a huge, free, virtual photography museum on the Web.
Anthony Bannon, the director of Eastman House, said one of the biggest hurdles encountered by the project — after overcoming the initial cultural resistance of both institutions to share their collections and expertise — has been converting the images of both Eastman and the center onto a single computer system. (So far, he said, Eastman has digitized almost 140,000 of its photos and the center about 30,000.)
"It's not just like pushing a button and the images slide over," he said, adding that copyright issues with many photographers could also keep many images off the Web for years. "Some are generous and understand the positive result by having the images seen on our Web site but others are worried about losing opportunities for revenue," Mr. Bannon said. "All of us are still learning about how the Web can be used, I think."
It's nice to see traditionally conservative institutions opening up to the idea that on the Web, sharing your art, knowledge or expertise freely often pays you back far better than hoarding it.
MarsFlag is a new search engine in Japan (it went live in March) that presents results as thumbnail images of the returned pages instead of text links, with larger-version pop-ups when you roll over them with the mouse. It supports full-text search (e.g. this search on wearable computer) as well as pictorial topic areas like movies, fashion magazines and motorcycles.
According to Internet Watch [JP → EN], the search engine at least in part determines results ranking using bookmarks kept by the 35,000 subscribers to the Mark Agent web-based bookmarking service that the company also owns. MarsFlag claims this helps thwart attempts to gain page rank by creating link farms, a process called search engine optimization. (Presumably that'll only work until SEO companies start generating fake Mark Agent accounts...)
Tech-On reports that Matsushita (Panasonic) showed off a new prototype color eBook reader with a 5.6", 210 points-per-inch display at the NE Technology Summit 2005 event held in Tokyo yesterday. Given that their current grayscale Σ book uses a bistable display made by Kent Displays, I would hazard a guess that their prototype is using Kent's new color ChLCD display (but that's just a guess). Bistable displays like the ChLCD and E Ink's microcapsule display (used in the Sony Librié) take power to change an image but not to maintain it, so they're incredibly low power for low-framerate applications like eBooks.
Another nice Google Maps reuse: gmapPedometer. Chart your bike/running/walking/tourism route and see how far you've gone (not to mention get a pretty satellite image, and a URL you can use to bookmark or email to friends).
(Thanks to Jill for the link!)
The theme for NPUC this year was The future of portable computing, so naturally there was a lot of talk about location-based applications. Ian Smith's talk on social mobile computing especially focused on using location. Personally I'm getting more and more skeptical about location-based apps. They've been right around the corner for a good decade now, and I'm starting to wonder if location-based apps are like video conferencing — something that sounds like it should be a hit but once they're implemented nobody seems to care.
That said, I think if there's ever going to be a successful location-aware application (outside of the ubiquitous museum-tourguide app) it'll be one that uses location as an excuse to socialize. I'm not sure whether the final winners will look more like Dodgeball, GeoCaching, moblogs, or a cross between LiveJournal and the geospatial web (or all of these), but I'm pretty confident that when you scratch the surface the real point won't be location, it'll be human-to-human interaction that just happens to use location as the medium.
That also fits my general rule of thumb: The killer app is always communications. (That or sex, which is really a subset of communications.)
Technorati tag: npuc2005
Interesting tidbit from Ian Smith (Intel Research) here at NPUC 2005, who says that Beki Grinter at Georgia Tech predicts that within a generation doorbells in Europe will be obsolete. Apparently teens in Europe SMS or phone their friends instead of ringing the doorbell so they don't have to risk talking with parents.
Technorati tag: npuc2005
Quick resource tip: Spark Fun Electronics is a cool little online electronics & prototyping-supply shop. Good descriptions & spec sheets with the hardware hacker in mind. (Thanks to Thad for the link!)
I tested the battery on my iPod yesterday to see if I qualified for the $50 store credit Apple's offering to settle a class-action lawsuit. No dice for me — my 3-year-old first-generation 10 Gig iPod (specced to last 10 hours) does, in fact, still last around 9.5 hours continuous play from a full charge.
About a year ago I mentioned how the "virtual band" The Bots had put up a public-domain database of G.W. Bush audio clips to help would-be remixers get started. Their own rap Fuzzy Math is fun, but IMO succeeds mostly on the novelty of hearing GW saying things he'd never cop to in real life. The mixes over at The Party Party take GW mixing to the next level. The music stands on its own, and they turn the inherent choppiness of the mixing process into an advantage by fitting it to the natural rhythm of the music. (Be sure to especially check out My name is RX, a cross between Bush, Sympathy for the Devil and Slim Shady.)
A couple fun tools that've crossed my path the last few days:
(I was going to post GMerge as well, but it's been taken down after receiving what has to be the most friendly cease-and-desist letter I've ever read...)
The theme of this workshop is ubiquitous computing entertainment, playful social networking, and games. Our goals are to provide a productive forum in which international researchers, members of the entertainment industry, game players, game designers, and game publishers can discuss key issues in ubiquitous gaming, present and future uses of ubiquitous computing that create compelling, playful and socially beneficial gaming experiences, and to facilitate an exchange of ideas that will allow ubiquitous games to break out of their current “niche” and into the mainstream.
The workshop will be September 11th, 2005 in Tokyo.
The PostSecret blog is a "community art project where people mail in their secrets on one side of a homemade postcard." Some are thoughtful, some disturbing, some kinda silly, but almost all are high quality. I think Sarah Boxer's NYT Arts Review nails why that's the case:
The Web site gives people simple instructions. Mail your secret anonymously on one side of a 4-by-6-inch postcard that you make yourself. That one constraint is a great sieve. It strains out lazy, impulsive confessors.
For PostSecret, you write, type or paste your secret on a postcard, and then, if you want, decorate the card with drawings or photographs. Next the stamp and then the mailbox. Yes, it's work to confess. And it should be, if only for the sake of the person who might be listening.
That's a lesson we need to remember as we design for more and more frictionless communication — sometimes a little friction is exactly what you want. (A special thanks to my dad for the link.)
Coming next month to a store near you (via the New York Times):
The cone of silence, called Babble, is actually a device composed of a sound processor and several speakers that multiply and scramble voices that come within its range. About the size of a clock radio, the first model is designed for a person using a phone, but other models will work in open office space.
I'm imagining all sorts of cute hacks you could do with this. I especially want to set one up to cancel out my speech and simultaneously play music or pre-recorded/synthesized speech over it — turn your whole conversation into a badly dubbed movie!
I don't think I've ever seen a more complete, well laid out website devoted entirely to mapping every public toilet on an entire continent. Find toilets based on address or GPS location, plan your trip around public toilets, register for My Toilet Map or even receive a monthly newsletter. Brought to you by Australia's National Continence Management Strategy.
My brother is finishing up his Master of Fine Arts at SUNY Buffalo, and just recently debuted his main dissertation project: a 20-minute experimental film about Eadweard Muybridge called tesseract (downloadable here).
Even to my relatively untrained eye it's a beautiful piece (it just won the award for Best Photography at the Jutro Filmu international film festival in Warsaw), but the part that's most interesting to me is how he's applying ideas from Scott McCloud's Understanding Comics to film. Take a look and see for yourself.
I've run into a few interesting models for information flow-control in the past months:
All three cases create two classes of info-users: people who can disseminate their information (paid subscribers, plus people who read the Times while it's still fresh) and consumers who can read what the first class of people point to but can't go further without paying. While it's a little Tom Sawyerish to essentially make people pay for the privilege of driving potential new subscribers to your site, I can see the basic model working if the balance is right and the model fits the content well. At the very least, it's nice to see models that are more subtle than the all-free / ad-based / subscriber-based triple that's so common today.
Concerned about the influence Google's PageRank algorithm has in determining what information people see? Think that ranking pages by how many people link to them isn't objective so much as automated mob rule? Want a search engine for people who don't want to just follow the herd, or just want to see the dominant paradigm get a little more subverted?
If so, then you'll be interested in Shmoogle, the Google-randomizer developed by Tsila Hassine. Shmoogle forwards your query on to Google and then randomizes the results, presenting them in the same no-nonsense interface you'd expect from Google along with the original rank of each result. Shmoogle is more of an art project than a practical alternative to Google, designed to encourage us to question whether "everyone else thinks this is good, so you should too" is really the best assumption upon which to base the library of first resort. Random order is at least diversifying, but to me it seems so arbitrary — and has me thinking of all sorts of alternatives:
If you can think of more variations feel free to comment...
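The Shmoogle mechanism itself is simple enough to sketch. Here's a minimal toy version in Python of the "shuffle but keep the original rank" idea (the function and field names are mine, not Shmoogle's actual code):

```python
import random

def shmoogle(ranked_results, seed=None):
    """Shuffle a ranked result list Shmoogle-style, tagging each hit
    with its original 1-based rank so the user can still see where
    the herd would have put it."""
    tagged = [{"original_rank": i + 1, "result": r}
              for i, r in enumerate(ranked_results)]
    rng = random.Random(seed)  # seedable for reproducibility
    rng.shuffle(tagged)
    return tagged
```

All the information is preserved; only the presentation order is randomized, which is exactly what makes it an art-project commentary rather than a different ranking algorithm.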
Ed Felten argues that the new Family Movie Act (passed by Congress on Tuesday and likely to be signed by the President) actually protects free speech rather than, as some might claim, promoting censorship. (The act, for those who haven't heard, makes it legal to edit out limited portions of a non-pirated home-viewed movie at the direction of a member of that household — so it's OK to make a DVD player that optionally skips all the sex scenes, scenes with Jar Jar Binks, or for that matter the sex scenes with Jar Jar Binks.)
I agree with Ed here — empowering individuals to choose what they want to watch or not watch doesn't promote censorship any more than movie reviews or the TV remote control do. The only case that would trouble me is if there were a systemic bundling of edits — for example if the only anti-violence filter for a movie also filtered out all the sex scenes. But given that such bundling already happens in the editing room of the movie itself and given that there will likely be competition in this arena (barring broad patents) I don't see that scenario as likely.
Google has indexed around 8 billion web pages, total.
Nearly 15 trillion copies are produced on copiers, printers, and multi-function machines per year.
Update 4/20/05: fixed Google stat from 8 million to 8 billion (what's a few orders of magnitude among friends?) and added copier stat. (Thanks to Mort & Beemer for keeping me honest.)
Another thing I wish I knew about long ago: http://printfreegraphpaper.com/. Good source of PDFs for graph paper in a variety of types, measures and pitches. Prints in faint grey. (By way of the Seattle Times technology section.)
Remember the Doodle Writer, the writing-desk toy with the magnetic stylus that lets kids (or you) write without making a mess? Well, Pilot has the same thing in whiteboard size. It's called the CleanWriter Chalkless Board, and it's mainly being marketed as a whiteboard replacement for clean rooms. A coworker of mine just picked one up for the new playroom he's setting up for his two-year-old — I'll post an update when I find out how she likes it. (I know I'd think it was way cool at age two — or even age 35.)
Nice piece of glue by Paul Rademacher, combining Google Maps with Craigslist to produce a great integrated service. This is exactly what I wish I'd had when I was looking for a house all last year. (Thanks to Adam for the link!)
EFF has posted a short paper on how to blog without getting fired, breaking it down roughly into (1) blog pseudonymously, (2) limit your audience and (3) know your (lack of) legal rights.
It's unfortunate that (4) come to a reasonable agreement with management about what's acceptable wasn't even in the running. That's a tricky negotiation though, both because once you broach the subject it's much harder to go back to being anonymous and because your management might feel OK about looking the other way but when pressed might feel the need to say no rather than yes. And when it comes to protecting themselves from upper management or angry stockholders should your blog embarrass the company, they're probably right.
I'm of two minds when it comes to pseudonymous writing. On the one hand, I still want more choice of soft walls when it comes to managing what I write. Mailing lists and things like LiveJournal's friends lists are good starts, but what I really want is a publish-this-to-everyone-except-those-who-would-get-me-in-trouble-for-what-I-wrote button. But on the other hand, I can't help but see such a button as a kind of cowardly way out. Maybe it just stirs some deep emotion implanted during half-listened-to high-school discussions of Thoreau, but isn't the measure of a writer, at least in some small way, just how much trouble his writing gets him into?
Movable Type has so much effort going into trying to block spam — I wish they'd put even half that effort into making a half-decent interface for just deleting comments in bulk...
On second thought, it could be that the map is correct and the satellite images are skewed to locally fit the Google perspective. Maybe the map is the territory after all!
As a side-note, the Google Maps URL includes GPS coordinates, so given a street address you can get both GPS coords and a satellite map quickly and easily. You can also just erase the q=Blah+blah& part of the URL to get a nice clean satellite image, or just add &t=k to an existing Google Maps URL to turn it into a satellite view. (I really hope that doesn't become an issue for some well-meaning panic-stricken patriot who thinks terrorists couldn't get that info quickly and easily in dozens of other ways — I always missed that GPS feature when it was taken out of earlier mapping software.)
Update 7:40pm: Note that you need to click on "Link to this page" in the upper right-hand corner to get the full URL to show up in the address bar.
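If you'd rather not edit URLs by hand, the same two tweaks are easy to script. Here's a small Python sketch using standard query-string handling — the q and t=k parameter names come from the post above; the function name is my own:

```python
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

def to_clean_satellite(url):
    """Drop the q= search term and force t=k (satellite view)
    on a Google Maps URL, leaving other parameters intact."""
    parts = urlparse(url)
    # keep everything except q= and any existing t= setting
    params = [(k, v) for k, v in parse_qsl(parts.query)
              if k not in ("q", "t")]
    params.append(("t", "k"))  # k = satellite ("keyhole") imagery
    return urlunparse(parts._replace(query=urlencode(params)))
```

For example, feeding it a URL with q=Blah+blah and a ll= coordinate pair returns the same URL minus the search term, with t=k appended.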
OK, this amused me enough I had to share. My fraternity brother Nivi just lost his voice, so he went and purchased a nice-sounding text-to-speech voice for his Mac at Cepstral and piped its output into Skype with Soundflower. Voilà — instant TTS phone.
I remember David Ross once told a story about how the Model T Ford (nicknamed the "Tin Lizzy") was adapted to all sorts of unexpected things, from winching wagons to pumping water. The key was the car's simplicity: it was just a motor on wheels, and it didn't take an expert to adapt that motor for something besides driving. It's a lesson that keeps repeating itself: tools made up of simple, powerful components with straightforward interfaces for linking the pieces together find their own new uses.
I've often heard (and sometimes said) that there are three possible outcomes to the copyright wars:
OurMedia.org (just released in Alpha) is another step forward towards making the third scenario a reality. It's a new web service that's offering to host any sort of creative media (including audio & video). For free. Forever. You own your own copyright, you choose your own license.
This is similar to what The Internet Archive does, and in fact the IA is providing free storage and bandwidth for OurMedia's media files. OurMedia is focusing much more on the general pro/am community though, and includes a free blog & Wiki (all based on Drupal), community-based rating and comment systems, and plans for many more social-networking features.
(Thanks to Seth Finkelstein at Infothought for the link.)
LinkBack looks pretty sweet:
LinkBack is an open source framework for Mac OS X that helps developers integrate content from other applications into their own. A user can paste content from any LinkBack-enabled application into another and reopen that content later for editing with just a double-click. Changes will automatically appear in the original document again when you save.
Looks like it goes about 90% of the way towards the convenience of editable embedded objects, without all the problems associated with that last 10% of trying to get everything to actually be edited in a window within the embedded document. It's also interesting that this is an open-source project, spearheaded by 3rd-party software developers Nisus, OmniGroup and Blacksmith rather than by Apple itself.
LinkBack is currently being integrated into Nisus Writer Express, OmniGraffle, OmniOutliner, ChartSmith, Stone Create and Border, and a plug-in has just been released to paste LinkBack data into Keynote 2.
Sounds like President Jacques Chirac has bought into the French National Library president Jean-Noël Jeanneney's call to make huge swaths of European literature available online. A big nudge came from Google's plans to put some 15M English-language books online, leading Jeanneney to write an editorial in the French paper Le Monde warning that such a service would naturally view the knowledge of the world through an Anglo-American lens. If it became the dominant source of knowledge, that perspective would become equally dominant. (You can see the full editorial in this blogger-cached copy or the Google translation).
He is, of course, quite right in assessing the threat. It's nice to see the French respond with a call to counter-attack rather than protectionism — such a contest can only result in a race to the top, delivering the best each of us has to offer to the betterment of all. It's also nice to see yet another example of culture as something to spread rather than something to protect — that sometimes gets lost with all the copyright wars going on.
Jeanneney also hits on something that's not coming out much in the English press: he's not just afraid English-language texts will be over-represented, but also that the organization of the texts will be seen only through that lens. From a March 4th Le Monde Q&A (auto-translated by Google):
Why are you hostile with the Google project?
Hostile? It is not the word right. When Google announced, December 14, its project of digitalization of 15 million volumes drawn from the funds of several large Anglo-Saxon libraries, we did not doubt that among these works would appear a great number of European titles. But their selection, their hierarchisation in the lists will be defined inevitably starting from a singular glance: that of America. The Anglo-Saxon scientific production will be inevitably overestimated. The American mirror will be the single prism. My remark does not raise of any chauvinism, I do not intend to inform any lawsuit with the opening of Google, I restrict itself to note an obviousness. I would like simply that one can have in the future another point of view, marked by another sensitivity - European - of a glance on the world undoubtedly quite as partial and even partial, but different. What I defend, it is a multipolar vision.
It's not clear to me how Google plans to categorize the vast library they're helping put online, or indeed if they plan to do more than add existing (no doubt US/British-centric) library classifications, offer full-text search and then let the emergent organization of the Web take its course. But the problem is a tricky one, and search-engine bias is both subtle and, honestly, inevitable. We would all benefit from multiple experiments, multiple methods and multiple points of view, and at least for a while that's worth a little duplication of work. However, I do hope that all the sides involved come together at least enough to establish some common data formats and, more importantly, agree to share data with each other. No one would be served by multiple little fiefdoms, each hoarding their little corner of culture out of fear the other side would gain an advantage. Let's keep this a race to the top.
It looks like Ben Stanfield started a blogstorm this weekend by pointing out a new(?) clause in AOL's AIM terms-of-service that states "In addition, by posting Content on an AIM Product, you grant AOL, its parent, affiliates, subsidiaries, assigns, agents and licensees the irrevocable, perpetual, worldwide right to reproduce, display, perform, distribute, adapt and promote this Content in any medium. You waive any right to privacy." AOL has been trying to stamp out the fire, and say the terms aren't meant to apply to person-to-person instant messenger, only to posting in public chat rooms, message boards and other public forums.
Update 3/18/05: I had thought only the Enterprise version of AIM supported encryption (and that may be AOL's intent), but apparently you can just create your own certificate and that'll work too. Thanks to Aleatha for the comment (and also for pointing out that the TOS has, in fact, been around for a year or so in this form).
On a related note, I also notice that iChat 3.0 (shipping with Mac OSX Tiger) will support Jabber as a standard protocol. Yay!
Mitch Kapor has two posts about Microsoft's purchase of Groove Networks. Mitch was founder of Lotus and more recently the Open Source Foundation and was also the first outside investor in Groove, so he has several good insights into the software and how / whether Groove would be as Open Source if it were done today. The quote that got my attention the most was this one though:
With the prospect of open source-based server capabilities of all kinds becoming more like the electrical power and distribution system, universally available on demand in whatever amount is needed, a whole class of objections to client-server architectures such as dependence on non-local, unreliable and inconvenient infrastructure diminishes. Groove's peer-to-peer architecture performs uniquely well in areas where the telecom infrastructure is weak, such as conflict-ridden areas of the Middle East and Asia where both military and humanitarian aid groups have deployed it successfully, but this alone is a niche application.
I like peer-to-peer technology for a whole host of reasons, but I think he's right that the infrastructure arguments for P2P are (and always have been, IMO) weak except in niche applications (bandwidth saving via BitTorrent being the notable exception). But the driver for P2P technology hasn't been about limiting the effects of technical infrastructure failure — it's been about limiting active efforts by an adversary to stop communications. The adversary might be an opposing army, an oppressive government or the RIAA, but the goal is the same — and that's a need that hasn't changed in the past 10 years.
"Unbalanced and Half-true News Opinion and Commentary." What would people be talking about if you controlled the newsroom teleprompters? Choose a professional talking head to speak for you in a freewheeling moderated panel discussion by accessing our dedicated web-connected teleprompters.
I love this sort of remixing art. I wonder if I could make a toolbar that could make their talking heads read all my blognews?
I've not seen the interface yet (it's Windows-only, and I'm, well, not). But assuming (a) it's easy to turn on or off and (b) users can tell what's an auto-link and what's original to the webpage I see the application itself as just one more shift in power from the author to the audience, just like TiVo, ad blocking, style-sheet overrides, those DVD-reediting kits for people who don't like the dirty bits, the remote control and the highlighter pen. I'm in favor of all of them.
There is one thing that does concern me though, and it's not the application itself, but the bundling of the information source with the Google Desktop app itself. There's not much they could do about that (and I trust Google a lot more than I trusted Microsoft when they tried this same trick), but I would feel much better if this were a generic open API in my Firefox, where I could pick and choose who handles each of my rewrite rules. Even a benevolent hegemony is dangerous, both in case it stops being benevolent and because it lacks genetic diversity.
So, who's up for writing an auto-link Firefox plug-in?
National Journal has a great historical look at the National Association of Broadcasters' battle to keep the "beachfront property frequencies" they've enjoyed for free for over 80 years. Well worth the read.
Mitsubishi just announced its Pocket Projector, which is about the size of your hand and can project a 20" image from just over a foot away. The press release doesn't say how bright it is or whether you'll need lights out to see it well — it uses three colored Lumileds LEDs. SRP $699.
If I were any of the other mapping services I'd be scrambling to catch up right about now...
Disk size has nothing to do with importance, but I still get a weird feeling seeing my music collection as a big blue block some 50 times bigger than the project I've been working for almost two years. Now I wish I had a treemap for how I spend my time during the day...
Update: and for Linux there's KDirStat, which is apparently older than either of the other two...
Technology Review recently declared they are trying to get back to being more science & analysis, less breathless hype. Let's hope David Talbot's Terror's Server in the February '05 issue was just still in the pipeline before they made that decision. Here's the letter to the editor I just sent:
David Talbot's "Terror's Server" was the kind of rambling, analysis-free hand-wringing we came to expect from the mainstream press in the mid 90s, not from Technology Review in 2005. Talbot's main point that terrorists are (gasp) using the Internet is obvious and trivial. Terrorists are also using telephones, SUVs, credit cards, textbooks and mail-order catalogs to plan their attacks. Why is there no call for the automobile industry to "fix" their terrorist SUV problem?
The Net amplifies individual voices, be they the voices of civil rights activists, cancer survivors or terrorists. The real issue is not whether terrorists use the Net (just like everyone else does these days), but whether society is better off allowing individual voices to be so easily heard. This is an important debate with historic undertones; Gutenberg's press amplified Luther's 95 theses and led to hundreds of years of war and bloodshed — and to the Protestant Reformation and Renaissance. Please, next time address the issue directly instead of simply hiding behind the terrorism flag.
PhD, MIT Media Lab (2000)
The Makyoh (Japanese for "magic mirror") is an ancient art that can be traced back to the Chinese Han Dynasty (206 BC — 24 AD). The mirrors were made of metal, usually with an intricate pattern carved or cast on the back and the front polished to a mirror finish. The front looks like a smooth reflecting surface, but when sunlight or other bright light is reflected off it onto a wall a glowing pattern emerges. Usually the image seen would be the same as the image on the back of the mirror, often an image of the Buddha or some other focus for meditation. The art later moved to Japan (especially Kyoto), and after missionaries brought Christianity into Japan in the mid 1500s many mirrors were made with secret images of the Holy Cross or of Christ. Because Christianity was persecuted at the time, many Christians wore such magic mirrors as a secret sign of their faith.
I just received a modern makyoh from the Grand Illusions toy shop, a wonderful site for exotic, clever and scientific toys (and they now accept PayPal). One thing I love about Grand Illusions is that they include videos and articles about how their toys work, including the magic mirror. Much as I respect the secrecy magicians have for their tricks, I much prefer the magic scientists perform — real magic isn't spoiled when you know the secret, it's even more amazing.
I've posted a few other pictures on my pictures page.
I just finished hooking up voice over IP so it services all my house phone ports, with the Motorola Voice Terminal hiding in the closet along with the house's patch panel, DSL modem and firewall router where it belongs. I can't say it was totally painless, but most of the effort was just gathering the right tools, connectors & knowledge. Useful resources included this general phone wiring primer and this one specifically on how to distribute VoIP throughout a home. Since I was starting kinda from scratch, I also found this basic page on how the heck to use a punch-down tool useful.
DocBug Exclusive — Google revolutionized the internet. Now it is hoping to do the same with inter-galactic communication.
The company behind the US-based internet search engine looks set to launch a service that turns unused bandwidth into a powerful signal generator capable of sending advertisements to the far reaches of space. Thousands of miles of fiber-optic cable laid during the boom of the late 90s now lies dormant, and this so-called dark fiber capacity is available at a price that industry experts say is ripe for being turned into a giant planet-sized billboard.
Jules Hewlett, senior analyst at a company that talks to reporters about technology, said: "From an intergalactic advertising perspective there is a big appeal in the fact that Google is a search operation — and of course the Google brand is a huge draw." We're not sure what he means by this, but he's very smart so we've quoted him anyway.
Though the project is hush-hush, Google spilled the beans about their new project by posting a job advertisement on their website that calls for a "strategic negotiator" to help the company to provide a "global backbone network" — in other words, an Earth-sized Light Bright.
By investing in capacity, Google could reroute packets to certain parts of the world, lighting up the dark fiber to spell out words or even full phrases that would be visible against the darkness of space for light-years.
Although Google is reluctant to talk about its plans, off the record people close to the company have called reports of the plan "mere speculation," "baseless rumor" and in one case "the biggest load of malarky I've heard since The Times reported we were coming out with telephone service."
Media Lab Europe is closing its doors after just 5 years (here's the NYT article, subscription required). I know folks who were there, but I don't have any deep knowledge of what went down (besides the obvious problem of opening just before the tech crash in 2000), so I'll just raise a Guinness and wish them all a happy landing wherever their next gig lies.
I still think the MP3 player could become a satellite-radio killer, but in the meantime Trax Technologies is coming out with a satellite-radio-to-iPod dock...
Some time ago I posted about the ongoing debate between David Hockney and our Chief Scientist, David Stork, about whether the great painters of the 15th century "cheated" by secretly using optical devices like the camera obscura. Hockney thinks the realism one suddenly sees in paintings around 1430 proves that such devices were used, even though no record of them can be found (they were secret, remember?). Stork thinks it's hogwash, and has both proposed numerous ways the realism could have been achieved using technology known to exist at the time and pointed out reasons the optical techniques Hockney proposes wouldn't have worked anyway.
Now the New Scientist is reporting that evidence of one alternative technology Stork suggested has been found:
Separate findings will be published in March by Thomas Ketelsen, a curator at the Museum of Prints, Drawings and Manuscripts in Dresden, Germany. Hockney has argued that the similarity between Jan van Eyck's drawing Portrait of Niccolò Albergati and a larger oil painting of the same name could only have been achieved using optical projections. But using a microscope, Ketelsen has found evidence of previously unseen pinpricks in the drawing — suggesting the copying method was mechanical, not optical. He suggests that a type of reducing compass called a "reductionzirkel" might have been used.
Falco points out that the pinpricks could have been made 50 years after van Eyck's death by someone wishing to copy it, or even 500 years after. "Holes can't be carbon dated," he says. But Stork thinks the mounting evidence can't be ignored. "The evidence doesn't support Hockney," he says.
"The debate is fascinating," Hockney says. "But it cannot end just because someone found pinpricks."
Hockney's argument was never strong to begin with, but it's starting to sound like he's joined the ranks of creationists, alien abduction followers and conspiracy nuts. If so, he may as well have ended his last sentence after the fourth word...
To quote a coworker of mine, "Apple's going to make unspeakable amounts of money on these." I'm especially glad to see the Mac mini, since it's exactly what I've been searching around for as the media hub of a new entertainment center I'm putting together now that I'm no longer in a one-bedroom apartment. Cheap ($499 starting, a little over $600 for the bluetooth & 80G version I want), small (6.5" x 6.5" x 2"), and quiet — it'll be my combination CD jukebox, DVD player and MIDI munger for my keyboard.
As for the iPod Shuffle, I could really see this becoming the satellite-radio killer. If the iPod is your music collection in your pocket and the iPod mini is your jogging music / music wherever you want, the iPod Shuffle is going to be the daily download. Mix it with iMix, Wiretap Pro and/or a Radio Shark and you've got a personalized commercial-free radio with a "next song" feature. You miss out on realtime info like traffic reports (which really should go to your GPS/nav system anyway) and breaking news, but how much news breaks between the time I leave for my commute and arrive at my destination? I'd also miss out on Howard Stern if I don't go with satellite radio, but for me that's a feature, not a bug.
As is traditional, Apple's stock is down almost 6% as of this writing. Me? I just put in a buy order...
BusinessWeek and Reuters mention that the New York Times is "considering" moving to a pay-for-content model for their web-based news, though they've no immediate plans to do so. Kevin Drum at Washington Monthly comments :
For all the big talk in the blogosphere, if this happened it would pretty much spell the end of political blogging. Without a copious supply of online newspapers and magazines providing the raw material, there are very few bloggers who would have anything left to say.
I doubt that, though honestly I'm not sure it would be a bad thing if it happened. Riffing off my basic belief that the trends towards decentralized communication are too powerful for one company (or even one cartel) to reverse, I see one of two things happening should the NYT make such a move:
Personally, my money's on #2 happening regardless of what the Times does.
Found at the bottom of a PDF online form on the IRS website:
Amazon.com has set up a one-click donation site for the American Red Cross Disaster Relief fund. All proceeds will go to the fund, and they've already collected over $2M. (Get your donation in before the 31st for tax savings this year...)
Before the Net, I would have thought about giving a donation but not gotten around to it — as it is, I just sent $100. I love seeing things like one-click and PayPal making it easy to do good...
The concept of transclusion — the quotation of documents by linking directly to embedded text instead of making a copy — has a lot of interesting possibilities, but in the end it feels to me like it's going in the exact wrong direction for the digital age. Nelson's whole design seems to be based around the idea of ownership: I own the bits I've written, I control the content and modifications, and when you quote from me you owe me a micropayment. That was the shape of publication in the last century, but it's not how 21st-century publication is shaping up. In so far as ownership means control, information in the 21st century has no owner. Information can have hosts, pedigrees, histories, and even generally-accepted custodians, but in the future that's being built "my bits" means not what I've written but what I'm carrying in my hard drive. Like a new joke or a bad cold that travels around the office, mutating as it goes, each copy of information is controlled by the host that holds it in his possession. I can't see any technology that tries to buck that trend winning out in the long run, especially not as we ride the technology trends towards the day when I can store the entire Web in my pocket.
I'm pleased to say I was able to find my old favorite Powers of Ten pretty easily...
The latest for that James-Bond or Peeping-Tom wannabe:
(Thanks to Thad Starner and Ellis Weinberger on the Wearables list for the links...)
(By way of SlashDot) The Wikipedia entry on Sollog is an interesting example of how a community can protect a shared collaborative space. Sollog is a self-proclaimed seer/prophet, who a little over a week ago was the subject of a new anonymously-written Wikipedia entry touting his books and otherwise proclaiming his powers. Since then the entry has been edited, vandalized & blanked back and forth 194 times (several of the vandalizations from the same IP address that wrote the original article), put to a vote on deletion by community members (who decided to keep the article, though in edited form) and finally protected from further edits to keep it from being vandalized.
The part that impresses me most is the amount of work and calm rational discussion that's gotten done over at the discussion thread on the topic (and even pre-refactored version). I wish I had a metric for how much the success of an online community owes to the communication tools at its disposal (protection, easy version-handling, IP-blocking, etc.), a clear mission statement / rules of engagement and smart dedicated people, but I'm betting the breakdown is something like 20% / 30% / 50%...
Google just announced a new partnership with the libraries of Harvard, Stanford, the University of Michigan, the University of Oxford, and The New York Public Library to digitally scan library books and make them searchable online. In one sense they're playing catch-up with Amazon, who started putting text online some time ago and is in a stronger position to turn that into more book sales. I'm speculating a bit here, but I expect Amazon is also in a better position to negotiate for the right to make more copyrighted text available than Google, given the easier read-it-to-buy-it pipeline.
One thing that really strikes me about Google's project is this bit:
Users searching with Google will see links in their search results page when there are books relevant to their query. Clicking on a title delivers a Google Print page where users can browse the full text of public domain works and brief excerpts and/or bibliographic data of copyrighted material. Library content will be displayed in keeping with copyright law. For more information and examples, please visit http://print.google.com/ [URL corrected — 'Bug].
I'm a little biased since my PhD Thesis was about this kind of application, but I can easily see this sort of show-me-information-related-to-what-I'm-doing-now app being the next big interface advancement. (At least once it's integrated with good search, the right data, and most importantly a company that doesn't try to integrate it with an all-too-helpful cartoon character.)
It's a small feature, but just one more thing from the "those guys at Google really get it" port: Google Suggest. Autocomplete of common search terms as you type into the Google bar, along with the number of terms listed.
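The core of a feature like this is just fast prefix lookup over popular queries. Here's a toy sketch in Python of the idea, not Google's actual algorithm (the term list, function name and counts are all hypothetical):

```python
from bisect import bisect_left

def suggest(terms, prefix, k=5):
    """Return up to k completions of prefix, most frequent first.
    terms is a list of (term, count) pairs sorted by term, so a
    binary search finds the block of matching entries quickly."""
    keys = [t for t, _ in terms]
    lo = bisect_left(keys, prefix)
    hi = bisect_left(keys, prefix + "\uffff")  # just past the prefix range
    return sorted(terms[lo:hi], key=lambda tc: -tc[1])[:k]
```

Real systems precompute the top-k list per prefix rather than sorting on every keystroke, but the lookup-and-rank shape is the same.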
PalmSource just announced that their next version of Palm OS will be built with Linux at its core. To this end, they're purchasing China MobileSoft (CMS), which has a phone platform already built on top of their own Linux variant. As the Register puts it:
Like Apple with Mac OS X, PalmSource will keep all the top-layer code proprietary, but it will release any changes it makes to the underlying Linux code — for faster boot times and battery life preservation systems, for example — available to the open source community.
Via the SJ Mercury News (sub. req.), the FDA has approved an RFID chip that you place on (not in) a body part that's to be operated on to identify the procedure and other info:
The system works like this: At an initial visit, the information on the operation is placed in the computer. The patient sees it on a monitor and verifies that it's correct. The data is then printed out on the chip and then re-read by the computer. Again, the patient verifies the data.
On the day of the procedure, the patient once again verifies the chip is correct, and it is then placed on the area to be operated.
At the suggestion of the FDA, the chip will have a notice on it that it should be removed before the procedure.
This would presumably replace the current technique of using a sharpie and writing stuff on the patient like "no, the other leg!" I'm curious but a bit sceptical — on the one hand the RFID tag can hold a lot more info than you can fit on a body part (allergies, etc.), but I don't see that making up for the immediacy of reading what's written on the body part you're about to operate on. Why would an RFID tag be any more likely to be read than the patient's chart?
A major security hole has been found in TWiki which allows anyone with access to the search function to execute arbitrary shell commands with the privileges of the web-server process. Anyone running TWiki should read here and upgrade and/or take countermeasures immediately.
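The class of bug is worth understanding even if you don't run TWiki: user input reaching a shell unsanitized. A hedged sketch in Python (TWiki itself is Perl; this illustrates the pattern, not their actual code):

```python
import subprocess

def search_unsafe(term):
    # DANGEROUS: the search term is interpolated into a shell command,
    # so a term like "x'; rm -rf / #" escapes the quoting and runs
    # arbitrary commands as the web-server user.
    return subprocess.run("grep -r '%s' data/" % term,
                          shell=True, capture_output=True)

def search_safe(term):
    # Safer: pass arguments as a list (no shell), so the term is a
    # single literal argument and shell metacharacters are inert.
    # The "--" also keeps terms starting with "-" from being options.
    return subprocess.run(["grep", "-r", "--", term, "data/"],
                          capture_output=True)
```

The same principle applies whatever the language: never build a command line by string interpolation from untrusted input.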
Google's got a new service for searching journals, conference proceedings and other scholarly writings called (appropriately enough) Google Scholar. Nice clean interface, and like CiteSeer they're pointing not just to the official pay-for-download sites like the ACM and IEEE portal sites but also to the free-for-download versions that authors usually put on their own sites (often in violation of copyright, but the last thing professional orgs want to do is piss off their own community).
Seen at the DEAF'04 festival: Very Slow-Scan Television. Gebhard Sengmüller builds on Slow-Scan TV, a video-transmission system developed to send TV over Ham Radio at around 8 frames per second. Then he hooks it up to a large robotic ink-jet printer that injects cyan, yellow and magenta ink into bubblewrap, producing one frame in about 10 hours.
I'm not giving up using Keynote, but it sounds perfect for retro-folk who still like writing their slides in Emacs or Vi...
OK, I don't know where I'd put it or exactly what I'd do with it yet, but I want one of these. FogScreen is a large wall of fog kept in a thin sheet using laminar flow, then used as a projector screen. That part has been around for a while, but they've recently added the ability to "write" on the screen like some wizard writing runes in the air, using the same ultrasound-tracked pens used in virtual-whiteboard systems. Check out their video.
For many Californians there will be something of a referendum in tomorrow's election that isn't on the ballot: paper or plastic. As you've probably heard, there have been serious and significant security issues with electronic voting machines. That's an implementation problem which is shameful, but not a fundamental limitation of the technology. A more fundamental issue with the smart-card system most of our touchscreen-voting counties are using is that the system lacks any kind of voter-verified paper trail — meaning there's nothing to fall back on if you suspect electronic fraud. The argument I sometimes hear is that getting rid of paper eliminates the problem of hanging chads and the recount problems from Florida 2000. This is true, in the same way eliminating all financial accounting records would reduce fraud convictions.
Here in California, our Secretary of State has insisted that all voters be given the option to vote via a paper ballot... but many counties feel that's an extra burden so they won't inform you of that right, and some counties even plan to further inconvenience paper-ballot voters. My advice to those who are voting in touchscreen counties: ask for paper anyway. My hope is that Wednesday's headlines (under the one that says "Kerry Wins," of course) all report record numbers of voters requesting paper ballots and giving a resounding no-confidence vote in the shoddy technology we have this time around.
I mailed my ballot earlier today, but for all you folks doing meat-to-meat voting on November 2nd, People for the American Way has a website to find your polling place: MyPollingPlace.com. Nice, simple. (Props to Political Animal for the link.)
There's been a lot of talk about how touchscreen voting is a better interface than paper ballots, but that we should not (and should not have to) sacrifice the security, understandability and reliability of having a paper audit trail as well. Now it seems we're seeing interface problems with touchscreen voting.
I expect the voting officials are right that this is a case of "user error" — that's what we call it in our industry when the interface designer didn't do a good enough job and now wants to blame someone else. Having watched technologically-minded researchers get confused when they accidentally trigger our giant presentation touchscreen at work, it doesn't surprise me much either. Unfortunately, with all the cases of actual voter-registration fraud, invalid and highly suspicious selective purging of voters from the rolls, back doors secretly coded into official vote-counting software, and laughable "security" protocols in voting machines, these voters (Democrat and Republican) are right to be skeptical. We need to do better.
FujiFilm recently showed off the F-next Image Viewer, a cute prototype hand-held digital photo album that includes a search-by-face function. Select a face in any picture and the device will show you all the other pictures where that face shows up.
There's a recent buzz around what's being called PODcasting: wrapping web audio with whatever wrappers are necessary to make it convenient to link in a blog and download to your MP3 player of choice for later listening. (See Doc Searls' explanation for a nice intro.)
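The technical "wrapper" here is mostly the RSS 2.0 enclosure element: a feed item points at an audio file, and a podcatcher script grabs whatever's new. A minimal sketch of the parsing side (the feed below is made up for illustration):

```python
import xml.etree.ElementTree as ET

# A made-up RSS 2.0 feed with audio enclosures, for illustration.
RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Audio Blog</title>
  <item><title>Episode 1</title>
    <enclosure url="http://example.com/ep1.mp3"
               length="12345678" type="audio/mpeg"/></item>
  <item><title>Episode 2</title>
    <enclosure url="http://example.com/ep2.mp3"
               length="23456789" type="audio/mpeg"/></item>
</channel></rss>"""

def enclosure_urls(feed_xml):
    """Return the URL of every audio enclosure in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [enc.get("url")
            for enc in root.iter("enclosure")
            if enc.get("type", "").startswith("audio/")]

print(enclosure_urls(RSS))
```

A real podcatcher would fetch the feed over HTTP, remember which enclosures it has already downloaded, and sync the new ones to the player, but this is the heart of it.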
It's a nice meme, and having gotten a lot out of my own browsing through audio links I hope it catches on. I find it interesting and not that surprising that the PODcasting meme seems to mostly involve pointing to educational and intellectual audio rather than the music that drove the P2P music-sharing revolution. Music briefly had its day on the Web, but was rapidly driven off by commercial interests worried overtly about piracy and covertly about both piracy and competition. Education has both a different culture and economic structure, and while educators and lecturers like to make money, somehow there's a much deeper understanding that giving away our best ideas is often in our own best interests.
Unfortunately, even in the academic and public-radio world it looks like we're in a meta-stable state, with many sites offering only streaming audio, either because of legacy licensing issues or, presumably, to maintain some control over distribution. Once the technology to record off a stream becomes ubiquitous (as it surely will), will the remaining barriers to recording and rebroadcasting the audio be enough to placate people who want to distribute their content for free but not let it run wild?
Regardless, this whole thing just reconfirms my original skepticism at the long-term viability of XM Radio as a basic technology. Here we are in the age of personalized, on-demand, time-shifted and place-shifted content... and XM Radio is offering a capital-intensive satellite-based broadcast solution. Maybe I'm underestimating the value of live, up-to-the-minute news and information, and maybe I'm underestimating the long-term value of a big company that can afford to make deals with the RIAA, but I just don't get it...
I was on a panel on wearable computing a couple days ago, and an interesting question came up:
Ten years ago, when you picked up the phone you asked Who is it?
Today, with cell phones and Caller ID, you pick up the phone and ask Where are you?
What question will be asked ten years from now?
My guess is that even when you meet someone face-to-face you'll ask Who are you with?, with the assumption that your friend might have invisible cyberspace tagalongs with her and that it might not be polite to butt in on the middle of their conversation.
Felten's brief analysis:
Where does this leave us? MD5 is fatally wounded; its use will be phased out. SHA-1 is still alive but the vultures are circling. A gradual transition away from SHA-1 will now start. The first stage will be a debate about alternatives, leading (I hope) to a consensus among practicing cryptographers about what the substitute will be.
Note to self: design my systems so it's possible to update crypto algorithms in all my legacy data, should the need arise.
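A minimal sketch of what that note-to-self might look like in practice (the tag format and function names here are my own, not any standard): store the algorithm name alongside each digest, so legacy records hashed with MD5 or SHA-1 can still be verified and then re-hashed once the old algorithm falls.

```python
import hashlib

def tagged_digest(data, algorithm="sha256"):
    """Return a self-describing digest string like 'sha256:9f86d0...'."""
    return "%s:%s" % (algorithm, hashlib.new(algorithm, data).hexdigest())

def verify(data, tag):
    """Check data against a tagged digest, whatever algorithm produced it."""
    algorithm, _, expected = tag.partition(":")
    return hashlib.new(algorithm, data).hexdigest() == expected

def upgrade(data, tag, algorithm="sha256"):
    """Re-hash a record with a stronger algorithm while the old digest
    is still trustworthy enough to confirm the data is unmodified."""
    assert verify(data, tag), "stored digest no longer matches data"
    return tagged_digest(data, algorithm)

old = tagged_digest(b"legacy record", "md5")
new = upgrade(b"legacy record", old)   # now tagged 'sha256:...'
```

The crucial bit is that nothing in the system assumes a particular digest length or name, so swapping algorithms is a data migration rather than a redesign.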
Every year Dan Russell at IBM Almaden hosts a small one-day workshop for HCI and related researchers to schmooze and talk about a particular subject. This year's topic was near & dear to my heart: what do we do with WAAAY too much information? The first half of the day featured talks from what Dan jokingly described as "people making the problem worse"; the second half dealt with specific methods for trying to understand huge amounts of information.
My trip report is now online for anyone who's interested. Dan also promised that the conference video will be posted online — I'll post a link when it's up.
My friend Nick 'Rawhide' Matsakis had some insightful comments on my question about Apple v. Real, and since I've somehow broken my comments form (grumblings on MovableType to come later) and he's a grad-student-with-no-time-for-his-own-blog™ I'm posting them here.
The way I see it, there are three parts to the Apple music triumvirate: the iPod, the iTunes Music Store, and the Fairplay format (AAC+DRM). Each of these supports the other two in a devilish lock-in scheme. Customers don't appear to mind this lock-in so much since the Apple solutions are all at the top of their class and arguably best of breed.
Despite many competitors, no one else has this kind of seamless experience and there is still no one who can. Sony has crashed and burned (see here) and Microsoft will no doubt have an excellent music store, but the only player they have announced is a $400+ media center thingy that plays movies and shows pictures. Meanwhile, Apple will sell millions of iPods this Christmas, with the $250 Mini leading the charge.
Also, despite claims of being proprietary, Apple has opened up this triumvirate. HP will begin selling iPods in a few weeks, Motorola will begin selling Fairplay-enabled cell phones next year, and Audible.com has been selling spoken-word content on iTunes for 9 months. So, what is the problem with Real making its content play on the iPod? The iPod is clearly the big moneymaker for Apple, so making it able to play more content should only be a good thing, right?
On its face, yes, but I think there are two issues here. The first is one of control. Apple has 'opened' up its triumvirate, but only a tiny crack and only in ways that 1) are strategic for Apple and 2) maintain the quality of the experience for users (at least Steve Jobs' vision of a quality experience). Giving Real access to the iPod doesn't appear to offer either of these.
More iPod content is good, but the engineering effort required to maintain interoperability is better spent working with the likes of HP and Motorola, which will each bring the Apple solution to millions of customers (perpetuating the lock-in, etc.) Likewise, the deals with HP and Audible have maintained Apple's control over the experience. I'd be surprised if the Motorola phones didn't have an Apple-designed media player that enforced the Apple brand in its appearance and operation.
Also, and more importantly, Real is a real competitor to QuickTime in the online streaming-media domain. Apple would probably be very happy if Real disappeared completely, leaving them a bigger slice of the cross-platform content-creation market.
In short, I think this is all about Real, and the results might have been much different if another company had approached Apple trying to license Fairplay. Personally, I want to think Apple is being foolish in not trying to get a broader base for MPEG-4/AAC over WMA. However, I think they are adopting the Microsoft battle plan: grab as much land as possible in the beginning, then rent it to the rest of the world at a profit. This plan hurts consumers, but I think it is the only way that Apple will be able to hold off the onslaught of Microsoft.
This whole RealNetworks v. Apple flap over the iPod has me scratching my head. On the surface it's the age-old fight we always see when one company makes money by giving away the razors and selling the blades and another company tries to "free load" and sell their own blades. What confuses me here is that Apple makes most of its profit from selling the razors (iPods), and very little from selling the blades (songs on the iTunes Music Store). (Their Q3 revenue on iPods was 3-4 times their revenue from the iTunes Music Store, and my unfounded guess is that the profit margin is also a lot higher than the estimated ten-cents-on-the-dollar they make on each $0.99 iTunes sale.) That's one of the things I've always liked about Apple's digital-hub strategy — unlike Sony, they don't have to be all schizo about whether they're an electronics company or a content provider.
So assuming they expect to remain the portable-player market leader based on the merits of the iPod's design rather than format lock-in (bad assumption?) then why get bent out of shape that someone else is trying to make their product better?
Users of the Netscape Calendar service had an unpleasant surprise this morning: a note informing them that the service was no longer available and that Netscape apologizes "for any inconvenience this may cause." Small consolation for my coworker who lost access to all his appointments, upcoming talks and meetings for the coming year. A call to Netscape was equally helpful — they were sorry, but quickly pointed out that this had been a free service and that their Terms of Service agreement clearly states they can discontinue it at any time without warning. When asked why they didn't warn customers in advance, the support person made some comment about how when they warned people in advance about changes to their email service they got lots of complaints, so this time they didn't want to warn anyone. And no, there isn't any way for him to recover his data. Eit.
I'll leave speculation as to why Netscape took this action and what it means about the health & direction of the company to others — for me there are two lessons to be learned here. One is that even a trustworthy good-guy company like Netscape can be bought up or go bankrupt without warning. In the end our valuable data is our own responsibility, and we need to insist on the ability to keep and store local copies of our data in non-proprietary formats. This is exactly why a friend of mine refused to use her Gmail account until she installed PGtGM, a program that lets her keep local backups of her Gmail archive.
The second lesson is for companies who provide Web services: even if you think of yourself as a good-guy company and always have the customer's interests at heart, you won't be trusted — and shouldn't be trusted — without real safeguards in place to protect us in the event that you go belly up or turn to the dark side. Even the nicest Web-service company takes collateral damage when someone in the industry does wrong. You need to assure us that our data is safe and is owned by us, not just through words, but by enacting strong legally-binding assurances in your Terms of Service & Privacy Statement, by giving us the ability to export our data, by embracing open standards, and in many cases by making your software open source so we can still use and modify it if you go away. If you do these things and play right by us, we'll gladly use your service and often either subscribe to your premium service or click on your banner ads. If you don't, we'll equally gladly shift over to your competitors who do.
Micro Persuasion has re-posted an interesting article from a public-relations industry newsletter: It's Time to Take Blogs Seriously — and Maybe to Develop One of Your Own.
"To those people who still think that blogs are 'loose cannons,' I'd say that they should embrace the revolution, or become cannon fodder," says [Shift Communications principal Todd] Defren.
Some of the rules [Microsoft blogger Robert Scoble] suggests in his manifesto should be followed by anyone who wants to run a corporate weblog:
- Tell the truth
- Post fast on good news or bad
- Use a human voice
- Have a thick skin
- If you screw up, acknowledge it
- If you don't have the answers, say so
- Never lie
- Never hide information
- Link to your competitors and be nice to them
"The empowering nature of the Internet will allow users to blog with or without corporate permission," Defren says. "The blogger who is encouraged with tools, freedom, and a few simple rules-of-the-road becomes a valuable advocate for the company. The blogger whose ambitions are repudiated simply sets up shop at home and spends their free time gossiping about the company's embarrassing hiccups."
All sounds like good advice — and much nicer for those of us on the receiving end of their messages than the alternative "always be sincere, whether you mean it or not" line we sometimes get.
After interminable waiting, Verizon has finally officially announced they're going to start carrying the Treo 600.
The New York Times Magazine's How to Make a Guerrilla Documentary article about the production of Outfoxed has everything you could ask for in a story about Internet-era guerrilla media: footage gathered by recording Fox News 24/7 for six months straight, volunteer watchdogs identifying and categorizing clips via email, simultaneous editing by five different editors coordinated over a secure Web site, even the risk of being sued for copyright infringement as a way of silencing the work. Throw in Web distribution, coast-to-coast kick-off house parties organized by MoveOn.org, and commentary clips available for download over BitTorrent and what do you get? A hard-hitting political documentary, produced in only four and a half months for only $300,000.
HP's multi-user 441 desktop computer is a cute idea: one Linux box with four monitors and four keyboards, sold to cash-strapped schools. It's a perfect match really — the typical school computer lab is lots of seats close together, with low CPU needs but a tight budget. Currently they're only selling them in South Africa, though there's certainly interest elsewhere.
The strange part is all the nay-saying industry analysts in the Reuters article, with quotes like "As interest in the machine grows, the limited supply has turned a well-intentioned product into a source of confusion among educators and a point of debate among industry analysts, who question whether a major computer maker has an interest in bringing a low-cost alternative to a wider mass market." Out here in Silicon Valley, that's the kind of quote we like to put on the gravestones of large companies who refuse to eat their young.
The hardware is nothing special — it's just a regular Intel box running Mandrake, with four NVIDIA Quadro4 100NVS 64MB DH cards (one AGP, three PCI), one PS/2 keyboard and three USB keyboards, one audio card and three Telex P-500 USB Digital Audio Converters. Sounds like they've done a little bit of software coding to make it all smooth and there's clearly value in buying from a brand-name company like HP, but if they decide it's too risky I bet someone else could be producing near-identical machines within a week. Heck, make it a school project and kill two birds with one stone!
Crime Mapping News has a nice two-page write-up on how the Santa Monica Police are using a GPS-enabled camera to keep track of graffiti abatement (see pages 2 & 3). The cameras (made by the company I work for, Ricoh) include a drop-down menu to tag images with meta-data, including things like the gang affiliation associated with a particular graffiti tag. The images can then be downloaded wirelessly from any city Wi-Fi station and be viewed in aggregate along with other GIS information.
The Bots have put up a public-domain database of full MP3s from G.W. Bush's speeches, debates and other statements. It's mostly laid out for cutting up into remixes (like their latest song, Fuzzy Math) but they're also providing the full linear speeches for whatever people want to do with them. In a couple months they'll be sponsoring a contest for the best use of the database — I'm looking forward to seeing what people come up with.
(Props to Lawrence Lessig for the link.)
help://Volumes/Rootkit/Rootkit.script. The browser passes the request on to the Help Viewer, which will gladly execute code. The exploit is being discussed on the MacNN Forums and has been summarized on TidBITS.
No solution from Apple yet (though apparently they've known about it for two months already — sheesh), but a stop-gap solution is to install MonkeyFood Software's free MoreInternet and then set the helper app for type "help" to some innocuous program like "chess."
On the minus side, it's sad to see OSX suffering the same pain I've teased Windows users about all these years. On the plus side, I'd been meaning to play more chess anyway...
UPDATE: In flaming about the above exploit, the MacNN folk found a variation that doesn't have a full work-around, though you can make it harder for an attacker to get the payload to your machine. See the top of the thread for details.
This is cute — download a special version of open-source software like Evolution, Gaim, The Gimp, Nautilus, or Rhythmbox from The Cooperative Bug Isolation Project and they'll randomly sample usage patterns to try to automatically detect bugs that make software crash. Unlike the usual "this application has unexpectedly quit, shall I email a crash log to the developer" kind of thing, this one collects sparse data from both crashes and normal use, enabling an automated classifier to tease out what the differences were.
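The idea behind that classifier can be sketched in a few lines (a toy illustration of statistical debugging, not the project's actual algorithm): each run reports which sampled predicates it observed to be true plus whether it crashed, and a predicate that holds far more often in failing runs than in successful ones becomes a bug suspect.

```python
from collections import Counter

def suspect_scores(runs):
    """runs: list of (set_of_observed_true_predicates, crashed) pairs.
    Rank predicates by how much more often they hold in failing runs
    than in successful runs."""
    fail_hits, ok_hits = Counter(), Counter()
    failures = sum(1 for _, crashed in runs if crashed)
    successes = len(runs) - failures
    for predicates, crashed in runs:
        (fail_hits if crashed else ok_hits).update(predicates)
    scores = {}
    for pred in set(fail_hits) | set(ok_hits):
        f = fail_hits[pred] / failures if failures else 0.0
        s = ok_hits[pred] / successes if successes else 0.0
        scores[pred] = f - s       # crude "failure correlation"
    return sorted(scores, key=scores.get, reverse=True)

# Made-up sampled runs: two crashes, two clean exits.
runs = [
    ({"ptr == NULL", "n > 0"}, True),
    ({"ptr == NULL"}, True),
    ({"n > 0"}, False),
    ({"n > 0"}, False),
]
print(suspect_scores(runs)[0])  # "ptr == NULL" tops the list
```

The sparse-sampling trick is that each run only reports a random subset of predicates, keeping overhead low; with enough users the statistics converge anyway.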
This review of the Sony LIBRIé e-Book reader sounds typical of what I've heard — thumbs up for the new E Ink screen, the interface could be improved but isn't bad, and as usual the content side of the Sony house is willing to make the whole package next-to-useless by throwing enough DRM on the device to ensure no one will want it. I can see the advertisements now:
Read for hundreds of hours without changing batteries — just like paperbacks!
Great resolution — just like paperbacks!
Magically disappears after 60 days — just like paperbacks!
Didn't they learn anything from MiniDisc?
Of course, this time their system is Linux-based, and Sony is making at least some of their software available online, so people might be able to write their own content for what sounds like decent and certainly interesting hardware technology. Wonder if that'll happen fast enough for the LIBRIé to get its legs before Philips & others make a version that will actually play the eBooks already out there?
Declan McCullagh echoes something I've heard several places about Google's Gmail service:
The objections lodged against Gmail are telling, because they illuminate two different views about how to respond to new technologies. The protechnology view says customers of a company should be allowed to make up their own mind and that government regulation should be a last resort. Privacy fundamentalists, on the other hand, insist that new services they believe to be harmful should be banned, even if consumers are clamoring for them.
I'm not one of the people clamoring to ban Gmail (see previous post for my own take) but the above argument does miss the important point that email is a two-way street. Maybe you're happy to sign away your privacy to a third-party company, but I've signed no such agreement. When I send email to you or to a closed mailing list you're on I have the expectation that, at the very least, you will first read the email before deciding to share it with a third party. I trust Google, but I want that expectation of privacy to continue after all the other email-providers follow suit with their own arrangements.
Chris Pratley has an interesting Microsoft Perspective on the history of Word, in particular talking about how Microsoft beat out WordPerfect as the word processor of choice when platforms shifted from DOS to Windows. Pratley joined Microsoft in 1995, but what interests me most is his version of the Microsoft story prior to his arrival — it gives a great insight into the Redmond Kool-Aid served to new Microsoft employees:
In case you're too young to remember, Windows development started back in 1983, and it was a joke in the industry. Windows 1.0 (released in 1984 I think) was sort of a demo. Windows 2.0 (1987 or so) was much better, but it was limited in memory (286 processor had a max of 1MB addressable RAM), and ran too slowly for practical usage. It is also hard to believe now, but all the pundits in the industry thought GUI interfaces with windows and dialog boxes and menus and mice (the Mac, Windows 2.0, etc.) were for novices and were basically toys, since they lacked the power of a command line interface. Lotus 1-2-3 and WordPerfect ruled the desktop, with arcane command sequences that a professional user could work magic with, but which new users found impenetrable. Especially interesting was the discussion that came up around the impending release of Windows 3.0 around 1990. In 1989, all the editorials talked about whether application makers should bother with a Windows-version of their DOS apps. WordPerfect was pretty clear - they saw Microsoft as a competitor, Windows as a lame horse, and they felt pretty strongly that they would best serve their customers by sticking with DOS. Their customers knew the WP-DOS interface, it was faster and more professional than the goofy toy-like Windows interface. It became a point of pride that WP would not do a Windows version.
PC-Word, on the other hand, tired of losing reviews and not being able to shake the stranglehold that WP had on the DOS word processor market, had nothing to lose by making a Windows version. Fortunately, that also coincided with the direction that Microsoft was taking: bet the company on Windows. In retrospect, this seems like a no-brainer, but remember that at the time Windows was still considered a joke. Betting the company on it was a big, big bet.
Now I love a "techie bets the company on a radical idea" fable as much as the next geek, but this version leaves out the most important part of the story: WordPerfect wasn't sticking with DOS — just like the other category-leaders Lotus 1-2-3, dBase and Harvard Graphics, they were spending their resources developing for OS/2, the new windowing OS being developed jointly by IBM and Microsoft. And the reason they bet on OS/2 is that both IBM and Microsoft were endorsing OS/2 as the platform for the 1990s: check out this quote from Bill Gates at the Fall 1989 Comdex. At the time, Windows was seen as essentially an extension of DOS, and was touted as being for low-end computers (a 386 with 4MB of RAM, also known as next-year's trash). Which is to say, Windows was touted as being "for novices and... basically toys," but the GUI and OS/2 were taken quite seriously. Now cut to Spring 1992, when Microsoft ships Windows 3.1 and signs a "divorce" document from the deal with IBM to develop OS/2 (much of the technology was later licensed for Windows-NT). Betting the company on Windows wasn't just a big, big bet, it was also arguably the biggest bait-and-switch of the decade.
Oddly enough, Pratley doesn't mention OS/2 even in his follow-up post, though he does make the claim that Microsoft built the first office suite:
Some of the posters noted that Word was helped to success by the Office bundle. That is certainly true - that move was a truly inspired marketing decision to use our strength of having enough apps to build a "suite" - something which hadn't existed up to that point. At first it was just a bundle of three apps for the price of 1.5 apps or so. People said it was crazy - too much of a giveaway.
That's another impressive claim, considering that when Microsoft Works came out in August 1986 (for the Mac; the DOS version was 1987) there were already Innovative Software's SmartWare Suite (1983), Electric Company's Electric Desk (1984 or earlier, later reborn as AlphaWorks and LotusWorks), Ashton-Tate's Framework (1984), Migent's Agility (1985) and Lotus Symphony (1985).
Pratley mentions that a few readers suspect his blog is just a "marketing ploy," but I figure his admiration for Microsoft's history is genuine and his posts are from the heart — he just needs to get out of Redmond a little more. Perhaps his blog will be just the thing to cure the memory gaps that are so often caused by years of Kool-Aid abuse...
Edit: changed typoed August 1996 to August 1986.
The AGNULA Project (A GNU Linux Audio distribution, originally funded by the European Commission) has just released DeMuDi 1.1.1 Live, a full music-recording, editing and playback suite all on a bootable CD-ROM. It's all made from existing Open Source components, built on top of a Linux low-latency kernel, the Advanced Linux Sound Architecture, and the JACK audio-connection kit, with applications ranging from soundfile editors & MIDI sequencers to sound-synthesis software, sheet-music preparation software and media players. Pop it into your Intel box, regardless of native OS, boot from the CD-ROM and you've got your editing suite! (A Mac-G4 version is in the works.)
(Thanks to Steve for the pointer...)
I've wanted something like iMix ever since I burned my first Lindy-Hop CD mix. Absolutely brilliant.
Now if we could just shift from $0.99/track to something better suited to radio-style "let's see what's on" listening, we could give ClearChannel the heave-ho once and for all.
(Some of the other new iTunes features look nice, too.)
Cute security technology from Beepcard:
The Comdot solution is easy and convenient: users simply hold the card in front of their PC, phone or other networked microphone and squeeze the Comdot — a flat button on the card — and the card uses sound, carrying a one-time 3DES-encrypted code, to identify the user to the destination server.
Bruce Schneier's comments:
This is perhaps the coolest security idea I've seen in a long time. They have a demo application where you go to a website and purchase something with a credit card. To authenticate the transaction, you have to put the card up to your computer's microphone and press the button. The sound is captured using a Java or ActiveX control — no plug-in required — and acts as an authenticator. It proves that the person making the transaction has the card in his hands, and doesn't just know the number. In credit-card language, it changes the transaction from "card not present" to "card present."
Even cooler, they are making an enhancement to the system that also includes a microphone on the card. This system will require the user to speak a password into the card before pressing the button.
Why do I like this? It's a physical authentication system that doesn't require any special reader hardware. You can use it on a random computer at an Internet cafe. You can use it on a telephone. I can think of all sorts of really easy, really cool applications. If the price is cheap enough, BeepCard has a winner here.
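Beepcard hasn't published its protocol, but the "one-time code" idea presumably resembles counter-based one-time-password schemes: card and server share a secret and a press counter, each squeeze emits a code derived from both, and the server refuses anything it has already accepted. A hedged sketch (HMAC-SHA1 here standing in for whatever 3DES construction they actually use; all names are mine):

```python
import hmac, hashlib

def one_time_code(secret, counter):
    """The code the card would 'play' over audio for a given press count."""
    msg = counter.to_bytes(8, "big")
    return hmac.new(secret, msg, hashlib.sha1).hexdigest()[:8]

class Server:
    """Accepts each counter value at most once, so a recording of an
    earlier transaction (a replay) is rejected."""
    def __init__(self, secret):
        self.secret = secret
        self.last_counter = -1

    def authenticate(self, code, counter):
        if counter <= self.last_counter:
            return False                      # replayed or stale code
        expected = one_time_code(self.secret, counter)
        if not hmac.compare_digest(code, expected):
            return False                      # wrong secret / garbled audio
        self.last_counter = counter
        return True

server = Server(b"card-secret")
assert server.authenticate(one_time_code(b"card-secret", 1), 1)
assert not server.authenticate(one_time_code(b"card-secret", 1), 1)  # replay fails
```

Whatever the cipher, the point Schneier makes still holds: possession of the card (the secret plus its counter) is what's being proven, not knowledge of a static number.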
Hideaki Kawai, Managing Director, Head of Corporate R&D Division, TOPPAN CO., LTD commented: "Using printing technology on paper allows a high level of artistic label printing on the optical disc. Since a paper disc can be cut by scissors easily, it is simple to preserve data security when disposing of the disc".
Masanobu Yamamoto, Senior General Manager of Optical System Development Gp., Optical Disc Development Div., Sony Corporation said: "Since the Blu-ray Disc does not require laser light to travel through the substrate, we were able to develop this paper disc. By increasing the capacity of the disc we can decrease the amount of raw material used per unit of information."
Details will be announced at the SPIE Optical Data Storage 2004 Conference next week.
There's been a lot of hubbub over Gmail, Google's new free (advertising-based) Not An April Fool's Joke email service with 1Gig of disk space. The biggest issue is that Google hasn't properly communicated where they stand on protecting email privacy, especially in relation to their plan to automatically scan email and present relevant advertisements as a sidebar. In response, a host of privacy organizations have written an open letter demanding that the service be suspended until privacy issues are addressed. The EFF has also been asking some important questions, and Google says they're "batting about a number of options".
My first reaction is "it's about damn time someone's doing this." Since 1995 I've kept every email I've received or sent (yes, even spam), for a total of over 1.6 Gig and almost 200,000 non-spam email messages. I index it all with the Remembrance Agent (my PhD thesis project) so whenever I get email on, say, some hot new technology I also get links to what other friends, colleagues and mailing lists have said on the subject. (On a different note, when I write love letters I see what I've written to previous girlfriends, which is sometimes quite educational.) I'd love to have this kind of thing hooked up not only to my own email but also, say, my favorite 1000 RSS feeds that I'd like to read but don't have time for. That's clearly the direction Google is heading (they even cite me — I love it when that happens!)
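For the curious, the flavor of what the Remembrance Agent does can be sketched in a few lines (a toy bag-of-words version, nothing like the real implementation; the sample archive is made up): index the archive, then rank stored messages by similarity to whatever you're writing right now.

```python
import math, re
from collections import Counter

def bag(text):
    """Crude tokenizer: lowercase word counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def related(current_text, archive, top=3):
    """Rank archived messages by similarity to what's being written now."""
    query = bag(current_text)
    ranked = sorted(archive, key=lambda doc: cosine(query, bag(doc)),
                    reverse=True)
    return ranked[:top]

archive = [
    "notes on E Ink display prototypes and e-reader interfaces",
    "recipe for grandma's lasagna",
    "thread about touchscreen voting machines and paper trails",
]
print(related("drafting a post on e-reader display technology", archive, top=1))
```

A real system adds TF-IDF weighting, an inverted index so the archive scales to 200,000 messages, and continuous unobtrusive updates as you type, but the retrieval core is this simple.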
Systems like Gmail face two problems, both of which are also strengths. The first is that my personal and work email archives contain some of the most sensitive information in my life. They include email confirmation of purchases, trips I've taken and investments I've made. They include love letters I've sent and later regretted, discussions of medical issues, and drunken emails complaining about people with whom I've lived and worked. They include research ideas not yet patented and drafts of papers not yet published. Often these emails are sensitive precisely because they are powerful and useful, but more often than not information that empowers me can also empower my enemies, competitors and parasites.
The second problem is Google's centralized architecture, which is easier to maintain and deploy but requires me to trust them with my most sensitive assets. This is a general problem with indexing the Deep Web of proprietary data, and I suspect it was the main failure point for Autonomy's short-lived Kenjin system and the main reason they moved to an inside-the-firewall search system. This is not to say a centralized approach is untenable; we already have institutions that are trusted with sensitive data, namely doctors, lawyers, and financial institutions. But what these three have in common are a combination of legal and institutional guarantees of privacy, security and longevity of the data they keep. By improving on the usual web-mail model Google plans to join these institutions in terms of trust required, but so far they haven't improved on the old and inadequate web-mail privacy guarantee. It may not even be possible for Google to make the necessary guarantees without Congressional support, an unlikely prospect given the Justice Department's current lust for total information awareness.
If Google manages to innovate new trust models as well as they do technology, I suspect Gmail will be a good stop-gap, though it will never be as trustable as a combination of my personal local data cache, an encrypted backup service, and trusted friends or services who keep backup keys. Call me picky, but I'm still holding out for my personal server. How much longer before I can have the Web in my pocket?
I hate getting my cords tangled, so I just picked up some retractables:
Reason Magazine is personalizing some 40,000 covers for their June print version, according to a write-up in the NY Times. Subscribers will get a satellite photo of their own neighborhood with their house circled and the cover story: Bradley Rhodes... They Know Where You Are!
Cute stunt — the Times article talks about both the privacy and personalized-marketing issues.
...and I'm one of them. Turns out the Wal*Mart purchase of online communities was an elaborate April fool's hoax (dang it, these things are happening earlier every year!). From the owner of the board:
...And after reading thru the discussions the past week and all the frustration, concern, heated discussion and heartfelt conversation I came to realize one important thing.
THIS WAS THE BEST APRIL FOOL'S GAG YET!!!!! (Well it *could* have been)
Yeah, you got it. It was a scam, April Fool's all completely bogus.
This was a carefully thought out and orchestrated prank from a group of truly demented geniuses, your moderators. Probably would have played out better had not a few people taken it as some declaration of war. We really had no idea some would be as hateful as to treat it that way.
Makes me wonder what else is a gag. Anyone taking bets on whether Richard Clarke is going to jump up and yell April Fools?
UPDATE 4/1/04: This was, in fact, all an elaborate April Fool's hoax perpetrated by the moderators of the board. And I fell for it hook, line and sinker!
The Chainmaille Board is a niche web community for both professional and amateur artisans who make chainmaille jewelry and armor, one of the three big discussion boards for this community (the other two being the Maille Artisans International League and The Ring Lord Chainmail Forum). The board is run by "Lord" Charles DeCordene, who, like The Ring Lord, also sells his own supplies and jewelry, both to and in competition with other members of the community. The balance between fostering a community and competing with members of that community is a universal issue, found everywhere from niche hobbies to global industries, but that balance was shifted last week when Lord Charles announced that the discussion board was being purchased lock, stock and barrel by Wal-Mart.
Now the site's new banner sports a "Provided by Wal*Mart, Always Low Prices" logo, and the splash page explains what the purchase will mean to the community:
First, here is what it doesn't mean:
- We will under no circumstances sell your email addresses to anyone.
- We will under no circumstances send you promotional e-mail (also known as SPAM). On rare occasions we may send members a PM or an email should an urgent matter arise (i.e., if your posts contain inappropriate language or images).
- TCB will continue to have no pop-up ads. We find these annoying, and believe it would drive members away. So quite simply, we're not going to do it.
- We will not censor your political statements. We believe in free speech. However posts that contain profanity or statements and images that we believe are offensive to the family-nature of the board will be deleted.
And what it does mean:
- Increased tech support: We will soon set up a 24-hour chat forum where members can ask any technical questions.
- Easily accessible archives: Building on previous TCB efforts, we will compile a list of articles and gallery photos to make the board the best resource on Chain Mail available on the Internet.
- Connections to other board members: Because Wal-Mart is sponsoring multiple boards, we will offer members on all boards the option of registering with our General Community Board. This board will provide you the opportunity to find members in your area with similar interests. We are considering hosting monthly shopping days at certain Wal-Mart locations where members can gather together for a day of fun! It is up to you how involved you choose to be.
- Opportunity to sell your chain mail: Our General Community Board will have an online store that has not only Wal-Mart products, but also products of interest to our board members. In the By Our Members area, members can post items they would like to sell. Think of it as a larger version of the Trading Post currently on TCB. Unlike many other online stores and auction sites, it will be absolutely free to post up to 15 items per month.
- Store Discounts: Beginning in June 2004, Members of Wal-Mart boards will be able to apply online for our new CyberCustomer Discount Card (CCDC). There is no annual membership fee and owners of a CCDC card will save 5% on all Wal-Mart purchases over $20.
We hope that you are as pleased as we are about this exciting venture. We look forward to building a successful relationship with every member here.
On the one hand, Wal-Mart's sponsorship is adding clear value to this community: Lord Charles was having trouble running the discussion board with his own time and money, and could never offer the kind of technical and developmental support the board will now enjoy. They also will likely expand exposure and thus membership in the community, which in spite of the necessary growing pains will likely help the community in the long run. Wal-Mart, of course, now has the opportunity both to become identified as an insider in a close-knit community and to put their own online auction sites in a premium position. That's vital for something like auctions, where customers and sellers alike will want to settle on a single marketplace. That marketplace is currently eBay — it's clear that Wal-Mart hopes to change that default by getting a hold in certain communities and then leveraging that hold through their General Community Board and CyberCustomer Discount Cards.
On the other hand, there have also been concerns expressed in the community, ranging from "Wal-Mart is evil" to "how can a small wholesaler/retailer like myself ever hope to compete against this?" And the latter is a very good question, especially for people who don't have the volume, Wal-Mart compatible style/branding, or just the desire to sell in the new landscape. These people might be in trouble down the road, forced to change their ways or quietly fade away (to the detriment of the community at large). On the third hand, small sellers who can make the shift to the new model might find the pie getting bigger and whole new marketplaces opened up, just as we're already seeing with eBay and Amazon Store cottage industries.
For anyone doing research with audio or video recording devices (or just budding voyeurs), the Reporters Committee for Freedom of the Press has a nice practical guide to wiretapping and video recording laws for the 50 states.
I love Joshua Kinberg's idea for a thesis in multimedia: a chalk-spraying bike that can print protest messages via the Web and SMS during the Republican National Convention in NYC. Perhaps the best part is his proposal for how to evaluate his work:
I will consider this project successful if I can print at least 100 messages in strategic locations during the week of the RNC. The amount of media coverage, unique visits on the project's website, the website's Google ranking, and the amount of online participation are other methods by which to gauge the effectiveness of the project.
It reminds me of conversations my officemate and I used to have about teaching an undergrad class in the media and manipulation, with the final exam being to plant a false story in as public a news source as possible.
E Ink just announced the first consumer device to use their Electronic Paper technology: the Sony LIBRIé e-Book reader. Six-inch diagonal display, 170 ppi, four-shade grayscale reflective screen with an almost 180-degree viewing angle, weighing around 300g with four AAA batteries. Even better than the viewing angle is the battery life: unlike an LCD, electronic paper draws power to change the display but none to maintain a static image. Sony's tests show you can read around 10,000 pages on four AAA alkaline batteries.
Borrowing a page from Dean, the Bush/Cheney campaign's webpage provided a create your own poster link to produce a PDF poster, complete with your own campaign slogan like "Negative 2.6 Mill. Jobs Created and Counting." Oops. A few weeks after Ana Marie Cox at wonkette.com suggested the idea, the B/C campaign staff caught on and crippled their poster-maker (which I note doesn't work in Safari at all now), but some of the best posters have now been turned into a Bush/Cheney Sloganator slide show.
Given the trend towards underground resampling and "media consumer reuse," I expect this to be the watershed year for underground campaign hacks. Anyone seen a re-edited TV ad yet, complete with inserted subtitles and footnotes? Where's the Phantom Editor when you need him?
Doug Adams of Doug's Scripts for iTunes has just posted about a nice little bit of Applescript he's coded to make any AAC file bookmarkable on your iPod, just like Audible.com's audio books. Apparently all it takes is to change the file type (not the extension) to the four-character string "M4B " (note the space). Apple posted the method in their Knowledge Base (article #93731), but the article was then quickly removed.
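For the curious, on Mac OS X the classic type code lives in the first four bytes of a file's 32-byte com.apple.FinderInfo record, with the creator code in the next four. Here's a minimal sketch in Python rather than Doug's AppleScript, building such a record by hand; the zeroed remaining fields and the "hook" creator code (which I believe iTunes uses) are my assumptions:

```python
def finder_info(file_type: bytes, creator: bytes) -> bytes:
    """Build a 32-byte com.apple.FinderInfo blob: type code in bytes 0-3,
    creator code in bytes 4-7, remaining Finder fields zeroed."""
    assert len(file_type) == 4 and len(creator) == 4
    return file_type + creator + bytes(24)

# The bookmarkable-AAC trick: file type "M4B " (note the trailing space).
info = finder_info(b"M4B ", b"hook")
print(info[:4])   # b'M4B '
print(len(info))  # 32
```

On an actual Mac you'd write this blob back to the file with something like `xattr -wx com.apple.FinderInfo`, or skip all this and just use Doug's script.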
I have to wonder why Apple felt the need to pull this info (I also wonder how/if they thought pulling it would stop it from being used now that it's out, but that's another question). My best guess is they have some sort of exclusive deal with Audible.com for bookmarking capability, and somebody blew it by revealing the hack they used. I'd love to hear if someone knows more about the politics behind this though.
(Thanks to Rawhide for the link, and of course Doug Adams for the script!)
About a month ago I started downloading audio lectures and listening to them on my iPod. There's something absolutely wonderful about being able to browse through lectures by statesmen, Nobel laureates and other top minds of our era — here're a dozen that I've especially liked:
It costs $499 to buy a new 40G iPod.
It costs $10,730 to fill it with songs purchased online at 99 cents each.
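A quick back-of-envelope check on those figures (my arithmetic and my assumed encoding rate, not Apple's):

```python
cost_to_fill = 10_730      # dollars, figure quoted above
price_per_song = 0.99      # dollars per track
capacity_bytes = 40e9      # 40 GB iPod

songs = cost_to_fill / price_per_song
avg_song_bytes = capacity_bytes / songs

print(round(songs))                     # 10838 songs
print(round(avg_song_bytes / 1e6, 1))   # 3.7 MB per song

# Sanity check: at 128 kbps (16,000 bytes/sec), that's a plausible track length.
print(round(avg_song_bytes / 16_000 / 60, 1))  # 3.8 minutes per song
```

So the $10,730 figure implicitly assumes around 10,800 songs averaging under four minutes each, which is in the right ballpark for 128 kbps AAC.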
I got my first Astroturf political spam comment today on my post on California mental illness legislation. The brief comment links to a press release signed by IPRWire founder and staunch Edwards Supporter Hans Schnauber, better known as The Butterfly Guy. Schnauber made the news in 1996 for registering domain names of big companies and then posting Web pages about how awful those companies have been for butterfly habitat.
The Kerry screed itself takes the well-known story of how Kerry discovered only last year that his grandfather was actually Jewish, and how he had taken his own life in 1921, probably due to financial difficulties. It then goes on to make the completely unfounded assertion that "According to sources, including The Boston Globe, Chicago Sun-Times, and Fox News, Senator John Kerry of Massachusetts has a family history of severe mental illness" and asks the ominous question "Will the American people vote for a candidate with a family history of mental illness and clinical depression?" Of course, the release doesn't actually cite the news stories to which it refers, but given a NetNews post by the author we can guess it refers to the original Boston Globe article and the Sun-Times and Fox News pick-ups, none of which ever mention the possibility of mental illness.
A little web-searching reveals that Hans is hyping the press release on NetNews, posting under the name "Day Bird Loft (email@example.com)" (see this post where "Loft" signs his post as "Hans", and note that pigeons.ws points to the same base scripted website as iprwire.net). But in spite of hyping his story in numerous news groups (sometimes even replying to his own message), I've yet to see a response taking him to task for his self-promotion. Given that NetNews is usually quite aware of spammers, I have to assume he's gotten away with it for four days (a lifetime on the Net) because his posts are mostly hand-crafted, point to an official-sounding press account (most people don't know PRWeb is a for-hire press-release wire service), and because he actually defends himself in the threads he posts to. I probably wouldn't have investigated it either had his comment not been so clearly generated by a spam-bot that got tripped by keywords out of context.
What's the moral of this story? Just another warning of what we already knew:
Update: Hans comments that he didn't use a spam-bot, just "plain old fashion tech creativity." It's that kind of personal touch that's missing so often from spam in this day of automation — I'm glad to see some craftsmen still put a little of themselves in their work.
From a purely strategic standpoint though, I have to wonder about the choice of mental illness as the hook for this smear campaign. The best whisper campaigns say out loud what people are already wondering. It doesn't have to be true: Gore was an honest man but could be painted as dishonest because of his association with Clinton. (Of course, it helps even more if the rumor has truth to it, as was the case with Clinton.) But I haven't seen anything in Kerry to make me think insanity; it just doesn't connect emotionally. The story would have stuck much better to Dean I expect — people were much more willing to think he was unhinged, and there were already a lot more forces trying to spin him that way. Perhaps when the nomination is over Hans will explain his reasoning and we'll be able to do a post-mortem on his one-man campaign.
The security risks from this code appear to be low. Microsoft do appear to be checking for buffer overruns in the obvious places. The amount of networking code here is small enough for Microsoft to easily check for any vulnerabilities that might be revealed: it's the big applications that pose more of a risk. This code is also nearly four years old: any obvious problems should be patched by now.
Microsoft's fears that this code will be pirated by its competitors also seem largely unfounded. With application code this would be a risk, but it's hard to see Microsoft's operating system competitors taking advantage of it. Neither Apple nor Linux is in much of a position to steal code and get away with it, even if it was useful to them.
In short, there is nothing really surprising in this leak. Microsoft does not steal open-source code. Their older code is flaky, their modern code excellent. Their programmers are skilled and enthusiastic. Problems are generally due to a trade-off of current quality against vast hardware, software and backward compatibility.
I was also gratified to see this comment, based on a book I loved as a kid:
// TERRIBLE HORRIBLE NO GOOD VERY BAD HACK
Even in Australia...
Seen at the PalmSource Developer Conference: A "virtual sheep" to teach sheep-shearing (from New Zealand, natch). Run the Palm-based barcode-scanner "shears" in the right order and you gain points. Run it wrong and blood graphics splash on the screen. I want one!
When I was a young MIT grad student back in 1994, I attended a big Media Lab symposium on the new Digital Information Superhighway. Mosaic had been out a little over a year, Netscape had been founded six months earlier, and I was listening to Mickey Schulhof, President & CEO of Sony America, give us his vision of the future. The world he described was the standard pre-Web story: every home in America would have a set-top box (made by Sony) that decoded content for all us consumers. At the other end of the wire was a Sony office that handled billing and content delivery. The content was, of course, also produced by Sony, though they'd happily broker for non-Sony customers as well. He also made a strong point that they had no interest in managing the wires themselves, kindly ceding this part of the vision to competition.
Being a young grad student and having religiously read Wired Magazine for over a year, when it came time for Q&A I asked the obvious question: "In this world you describe, how will people get access to non-professionally produced content that can't afford the pricing structure Sony will require?" His answer: "I don't think people care about non-professional content."
As we all know, he was soon proven horribly wrong, but every time there's a new seismic shift in technology all the current monopolies scurry to try to put the genie back in the bottle. The latest shift for content is toward portable and home-entertainment boxes, and it's in this context that I read the announcement that Disney has finally agreed to license Microsoft's Digital Rights Management software to "bring about a vibrant market for legitimate, high-quality entertainment delivered to new categories of end-user devices, such as personal media players and home media center PCs." In other words, the game is shifting again, and this time the Content Cartel isn't going to be caught with their pants down.
Now things get bloody, as if they weren't before. I suspect the only thing that frightens Disney more than P2P-traded Mickey Mouse fan-art is the idea of Microsoft stepping into the Sony role of Mickey Schulhof's vision. Microsoft, along with Apple and RealNetworks, has to walk the fine line between appeasing the Content Cartel and offering consumers enough control that they don't blow off DRM and proprietary standards entirely for systems with simple embedded Linux & MPEG. (See Jeffrey O'Brien's recent Wired article for a nice discussion.) I'm not sure who's gonna win this one, but as one of those people producing non-professional content, I sure hope Schulhof's vision wasn't just late in coming.
The past few days I've been downloading streaming audio of lectures and talks given by interesting and intelligent people™, converting them to MP3 format and putting them on my iPod. The process is still a little slow — usually I stream the audio using RealPlayer and use Applescript and Wiretap to automatically capture to disk, then trim using Quicktime Pro and convert to MP3 using iTunes. However, I'm pleased with the end result.
I'm still looking for good sources of audio talks, and welcome suggestions & links. Here are the three I've most enjoyed so far:
FusedSpace is hosting a design contest for "innovative ideas that, by means of existing technology, can change or improve our current relationship with physical public space or that can otherwise bring about innovations in the public domain." (Where by public domain they mean in the sense of common space, not in the intellectual property sense.) Props to Corante and Eric Nehrlich for the link.
Years ago, Steve Mann made Cool Site of the Day with his Wearable Wireless Webcam. Now, almost a decade later, you can order the DejaView CamWear Model 100 hat or glasses-mounted camera, which continually records a 30-second buffer of video so you can push a button and start recording from before you even know you wanted to. Price will be $399, available "in January" (so they'd better hurry!)
I don't expect this first-generation product to make a big splash, but I do believe in the vision of always-ready wearable cameras and microphones with this sort of record-30-seconds-into-the-past kind of feature. The story is somewhat compelling for consumers ("when your baby makes that great smile, you can capture it and grab the best frame as a picture") but even more so for industry and inspection, where you're more concerned with documenting an event than with the artistry of the video.
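The record-into-the-past trick is at heart just a rolling buffer: keep the last 30 seconds of frames at all times, and let the button press promote the buffer's contents into the start of a clip. A minimal sketch (the frame rate and buffer length here are my invented stand-ins, not DejaView's actual specs):

```python
from collections import deque

FPS = 15                # assumed frame rate
BUFFER_SECONDS = 30     # the look-behind window

# Old frames silently fall off the front once the buffer is full.
buffer = deque(maxlen=FPS * BUFFER_SECONDS)

def on_new_frame(frame):
    """Runs continuously, whether or not anyone is recording."""
    buffer.append(frame)

def start_recording():
    """The button press: the clip begins with the buffered past 30 seconds."""
    return list(buffer)

for i in range(1000):   # simulate ~67 seconds of incoming frames
    on_new_frame(i)

clip = start_recording()
print(len(clip))   # 450 frames, i.e. 30 seconds of look-behind
print(clip[0])     # 550: the clip "starts" 30 seconds before the button press
```

The only real engineering cost is keeping 30 seconds of video in memory at all times, which is exactly why this feature showed up on a dedicated device rather than in software.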
Digital had many of the advantages I'd expect: less equipment to get lost, easy backups, ability to review pictures on-site, and easier remote collaborative editing. The disadvantages were more surprising to me, and included having to deal with brightness differences on different screens, inability to edit on a large horizontal surface like a light-table, and poor contrast compared to slides when showing photos to a large group.
I recently came across two programs for helping transfer large files via instant messenger or email. I see both these systems as gap-bridgers — they bridge between the spontaneity of email/IM and the robust and recipient-controlled download you get with Web browsers. Since the Internet abhors a gap, I've no doubt this difference in functionality will go away in the near future, especially as Web-based protocols are further integrated into the OS and file systems.
I got a new cellphone back on December 5th, swapping out my T-Mobile Sidekick for an AT&T Treo 600 (both good phones, but AT&T has much better coverage in my area). I also signed up to transfer my T-mobile number over to my new phone.
Twenty-six days and about 8 hours on hold with technical support later and I'm still waiting for my number to be transferred. The problem is a classic multi-system gridlock. AT&T sent a request for number transfer to T-mobile through Telcordia, an intermediary that handles number portability communication between the various telcos. They then sent a follow-up with more information, but the follow-up arrived at T-mobile before the main request arrived. This wedged T-mobile's system and caused both requests to be dropped. Now T-mobile is asking AT&T to cancel and resubmit the request, because they can't get their side unwedged. Unfortunately, AT&T's system can't cancel requests that are awaiting a response. Gridlock.
There's no one person to blame here. T-mobile's system clearly shouldn't have gotten wedged so easily, Telcordia shouldn't have delivered messages out of order, and AT&T shouldn't have sat on the request for three weeks when they thought the ball wasn't in their court. Most importantly, both telcos need more staff to cut through the hour+ hold times.
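The wedge described above is easy to reproduce in miniature: a receiver that assumes in-order delivery treats an orphan follow-up as an error, poisoning the transfer so the real request is ignored, while the sender can't cancel a request it believes is still in flight. Everything in this toy model (names, states, the transfer ID) is invented for illustration; the real systems are vastly more complex:

```python
tmobile = {}   # transfer_id -> state at the receiving carrier
att = {}       # transfer_id -> state at the requesting carrier

def tmobile_receive(msg_type, tid):
    """Naive handler that assumes messages arrive in order."""
    if msg_type == "follow-up" and tid not in tmobile:
        tmobile[tid] = "wedged"       # follow-up with no request: poison it
    elif msg_type == "request":
        if tmobile.get(tid) == "wedged":
            return                    # transfer already poisoned; ignored
        tmobile[tid] = "pending"

def att_cancel(tid):
    """AT&T's side can't cancel a request still awaiting a response."""
    if att.get(tid) == "awaiting-response":
        raise RuntimeError("can't cancel a request awaiting a response")
    att.pop(tid, None)

att["314"] = "awaiting-response"
tmobile_receive("follow-up", "314")   # intermediary delivers out of order
tmobile_receive("request", "314")     # the real request is now ignored
print(tmobile["314"])                 # wedged -- and neither side can reset
```

Neither side is individually broken in an obvious way; the deadlock only appears once the intermediary reorders the two messages.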
At long last I've gotten the problem escalated at AT&T, thanks to a dedicated number mobility group member named Andrea who was willing to wait through T-mobile's hold time and patch me into the call. They now say it'll be another 48-72 hours, which will bring them just under the 30-day return policy on my new phone. Here's hoping...
Update: And 29 days after purchase, my new phone finally takes calls! (And there was much rejoicing.) FYI, you can cut to the head of AT&T's customer support queue by dialing 1-888-799-1305 and selecting 3G and English. This is the priority queue used by AT&T stores, though customers can also use it. (Thanks to Nelson and Vyruz Reaper for the number.)
Last night I finally got around to watching Microsoft's Comdex presentation, specifically the section where Susan Dumais shows off her new search technology "Stuff I've Seen." (Search for "switch gears" at the bottom of the transcript or go to 1:07:50 on the video.)
Most of Stuff I've Seen is concentrating on the problem of quickly indexing and searching your entire hard drive, regardless of media format. (I sometimes jokingly refer to projects like this as YAPIM, or Yet Another Project Invoking Memex, my own thesis work fitting that description as well.) However, the part that interests me most is what they're calling implicit query. As CNET describes the Comdex demo:
In demonstrating Implicit Query, Dumais began to type an e-mail asking a colleague about a set of slides for an upcoming conference. Before the message was complete, the program — which appears in a window on the side of the screen — pulled up e-mails, slide decks and Word documents containing the name of the conference and the future recipient. Each hit came with a brief summary of the internal content, date, the type of software the file was written in, and its potential relevance, among other information.
This is the same functionality that in my PhD I call Just-In-Time Information Retrieval, and is the main focus of the Remembrance Agent software I developed. It can be incredibly powerful (I use it regularly to suggest email discussions related to my blog entries, for example) and I hope Dumais pursues it. It looks like she's still in the early stages with the concept though, and more importantly the current interface is still designed for explicit query — far too intrusive for something that runs all the time in the background. By contrast, Autonomy has had an actual product in this area for over three years, though I'd say the interface is still the really tricky part of this kind of application. Still, as is often the case, one of the more interesting aspects of Microsoft doing something is that it's Microsoft doing it. If implicit query makes it into a future version of the OS (and if MS doesn't screw it up the way they did with that annoying paperclip) that'll be quite interesting.
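The core loop of an implicit-query system is small, even if the hard parts (indexing, ranking, and an unobtrusive interface) are anything but. A toy sketch, with an invented three-document archive and simple bag-of-words overlap standing in for real retrieval:

```python
# Hypothetical archive: document id -> text. In a real system this would be
# an index over your entire email and file store.
archive = {
    "mail-1023": "slides for the CHI conference keynote are attached",
    "mail-2407": "lunch order for friday",
    "note-88":   "draft: CHI conference travel plans and slide outline",
}

def implicit_query(typed_text, k=2):
    """Treat whatever has been typed so far as a query; return the top-k
    matching documents. Fired in the background, never by explicit search."""
    query = set(typed_text.lower().split())
    scores = {
        doc_id: len(query & set(text.lower().split()))
        for doc_id, text in archive.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [d for d in ranked[:k] if scores[d] > 0]

# Called every few keystrokes as the email above is being composed:
print(implicit_query("about the slides for the CHI conference"))
# ['mail-1023', 'note-88']
```

The interesting design problems all live outside this loop: when to fire, how to summarize the hits, and how to surface them without stealing the user's attention.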
Back in July there was a big scandal over DARPA's funding of a futures market where people bet on things like whether Arafat will be assassinated or when the US will pull out of Iraq. The project was canceled, and also became the straw that forced John Poindexter's resignation. Now the Guardian reports that San Diego-based Net Exchange, the company that was implementing the project, is going ahead and launching it without government support or involvement. Given the previous uproar, Net Exchange is being understandably quiet about the whole thing.
Personally I'd be happy to see them try this out. As I said before, the U.S. Government shouldn't be involved in something as shady as gallows gambling, but as a private experiment the whole thing intrigues me and I don't have a problem with seeing where it goes. My guess is it will wind up being an interesting pastime for armchair analysts, but like most markets will fluctuate far too much to provide any real security data. The only real danger I see is if the stakes get high (unlikely) and attract corruption — unlike sports gambling or its Wall-Street counterpart, Middle-East politics has neither conflict-of-interest nor insider-trading laws. The more likely danger is simple lack of interest, the risk all seemed-like-a-good-idea-at-the-time Internet projects face.
Just in case anyone was still in doubt that Apple's iPod is going to slowly grow into a universal portable media server, Apple has just announced several new iPod accessories, including a voice recorder (microphone to turn the iPod into a dictaphone) and media reader (accepts various media cards and slurps the data onto the hard drive for later retrieval). The iPod isn't the first hard-drive based MP3 player to offer these extras (Archos has had one for a while), but Apple goes one step further with automatic synchronization of recorded audio and stored pictures with iTunes and iPhoto respectively. Now if they can just add Bluetooth, the iPod will be well on its way to becoming the personal server it's destined to be.
I just got my first automated blog-comment spam, attached to my post about artificial diamonds (I've since deleted it). Interestingly enough, the spam wasn't meant for me or my readers but for Google — it was just random snatches of English peppered with the word "jewelry" and links to http://jewelry.lstor.com/, which produces more random phrases. No doubt the idea is to raise the pagerank of some real page that will go there later.
Wonder if this is what they mean when all those spammers keep telling me they can raise my Google ranking?
The Program on International Policy Attitudes at the University of Maryland and Knowledge Networks have just released a report that sheds a lot of light on the much-reported polls that show Americans have serious misconceptions about the facts surrounding the Iraq War. (PIPA's press release and questionnaire are also available).
At the heart of the PIPA study are three questions:
The answers, by the way, are "no clear evidence has been found," "no weapons of mass destruction have been found," and "the majority of people in the world do not favor the US having gone to war." If you got at least one wrong don't feel too bad: only 30% of people surveyed in three polls (June, July, and August-September) got all three correct.
The report is well worth reading, but here's a brief summary of their findings:
Misperceptions correlate strongly with media source. People who watch Fox News as their primary news source were much more likely to be incorrect on the questions of links to al Qaeda, WMD and world opinion than those who watched any other source. People who got their primary news from television were more likely to have misperceptions than people who got their news from print media, and NPR/PBS viewers were the best informed on these subjects.
| Number of misperceptions per respondent | Fox | CBS | ABC | CNN | NBC | Print media | NPR/PBS |
|---|---|---|---|---|---|---|---|
| None of the 3 | 20% | 30% | 38% | 45% | 45% | 53% | 77% |
| 1 or more misperceptions | 80% | 71% | 61% | 55% | 55% | 47% | 23% |
| 2 or more misperceptions | 69% | 51% | 41% | 38% | 34% | 26% | 13% |
| 3 or more misperceptions | 45% | 15% | 16% | 13% | 12% | 9% | 4% |
The data also show that these differences aren't explained by different viewer demographics. For example, the average incorrect answer rate was 54% for Republican Fox viewers, but only 32% for Republicans who get their news from PBS-NPR. Viewer education levels also don't account for the differences between the media sources. The amount of attention people pay to the news has little effect on the results, except in the case of print media and to some extent CNN, where more attention results in being better informed, and Fox News, where paying more attention to the news actually increases the likelihood of being misinformed.
I don't have high hopes that this report will directly change public opinion or make people better informed: the people who think we've already found WMD aren't going to be reading scientific reports. What I do hope is that this report, along with the poll data that led up to it, will be a wake-up call to the mainstream press to do their job. (Fox News, of course, is a lost cause and no doubt sees this report as evidence they are doing their job.) Paul Waldman recently wrote a column in the Washington Post that calls for exactly that:
Once misconceptions are known, journalists have an obligation to highlight the facts in a prominent way, writing stories specifically about where people have misunderstood or been misled, and correcting the misimpressions. The average citizen can't be expected to wade through the euphemisms and competing claims, research the evidence, and come to a conclusion about who's telling the truth and who isn't.
That's what reporters are for.
Let's hope the press wakes up soon — we need our fourth estate more than ever.
NeoMedia has just announced a service where you can take a picture of an ISBN code (the barcode printed on every book jacket) with a cellphone camera and be automatically brought to the Amazon.com page for that book. From their press release:
"Now, shoppers can take out their Nokia(R) 3650 camera phone at Barnes & Noble, Border's, or just about any other book store, and just take a picture of the ISBN on the book to comparison shop at Amazon.com right on the screen of their wireless Web browser," Jensen said. "It's kind of a high-tech version of the Santa Claus at Macy's(R) sending Christmas shoppers to Gimbels in the classic movie, 'Miracle on 34th Street'," he mused.
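Barcode recognition aside, the lookup itself is mechanical: strip the hyphens, validate the ISBN-10 check digit, and construct a product URL. A minimal sketch; the URL pattern is an assumption for illustration, not NeoMedia's actual scheme:

```python
def isbn10_ok(isbn: str) -> bool:
    """Validate an ISBN-10: the weighted digit sum must be divisible by 11."""
    s = isbn.replace("-", "").replace(" ", "")
    if len(s) != 10:
        return False
    total = 0
    for weight, ch in zip(range(10, 0, -1), s):
        if ch in "Xx":
            digit = 10          # 'X' stands for the value 10
        elif ch.isdigit():
            digit = int(ch)
        else:
            return False
        total += weight * digit
    return total % 11 == 0

def amazon_url(isbn: str) -> str:
    # Hypothetical URL pattern, invented for this sketch
    return "http://www.amazon.com/exec/obidos/ASIN/" + isbn.replace("-", "")
```

With a valid code like `0-306-40615-2`, `isbn10_ok` accepts it and `amazon_url` yields a direct product link; a single mistyped digit fails the checksum, which is exactly what makes barcode-based lookup more reliable than typing.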
Gizmodo suggests this is Barnes & Noble's worst nightmare, but I expect it won't hurt the large chains much, since their volume keeps prices fairly close to Amazon's as it is. It'll be harder on independent bookstores, but even then there's a premium people are willing to pay for a book that's already in their hot little hands. And since the customer is already standing in the store, I expect that premium to be even larger than the usual amount people pay for bricks-and-mortar convenience.
The biggest question for me is whether "now is the time." I first saw this kind of technology about 6 years ago, both in a class project at MIT and in Anderson Consulting's (now Accenture's) Shopper's Eye project, and even briefly looked at doing a startup in this area just before the crash. It never quite felt like the time was right for this to go mainstream because the technology wasn't in the hands of enough consumers. Clearly NeoMedia thinks we're getting close.
The CIA's Counter-Terrorism Center (CTC) is working to develop training simulations with the help of the Institute for Creative Technologies, a center within the University of Southern California that specializes in combining artificial intelligence, virtual reality and techniques from the videogame and movie industries to create interactive training simulations. The institute recently received accolades for its "Full Spectrum Warrior" project, which was designed as a training aid for the US Army but has also led to a commercial videogame for the Xbox. The Army project uses material developed with the Army Infantry School at Fort Benning and a rich AI engine to run trainees through both military and peacekeeping scenarios. For example, in one scenario the trainee plays an officer in charge of a unit that has just been involved in a traffic accident between a tank and a non-English-speaking civilian. If approved, the CIA's simulation would allow analyst trainees to play themselves or the part of terrorist cell leaders, cell members, money-movers and facilitators.
The Washington Times, who broke the story, is highly critical of the project, comparing it to Vice Adm. John Poindexter's ill-received Idea Futures project and quoting unnamed military officials and other critics who call it "a ridiculous and absurd scheme that makes Poindexter's project look good in comparison" and suggest that "the key issue here is the CTC misspending funds on silly, low-priority projects, exactly the kind of thing that forced Admiral Poindexter to resign." A follow-up article, also in the Washington Times, quotes former Georgia congressman Bob Barr (R-GA) as saying "Perhaps this is the reason we were surprised by September 11. If it weren't so serious, it would be comical... What we ought to be doing is focusing our money and attention in identifying terrorists and their associates so we can be on the watch for these characters, not playing video games." The Sydney Morning Herald was slightly less critical, but also linked the project with Poindexter's projects.
It's entirely possible that this project is too expensive (the CIA has not revealed the price tag) or that the simulation is in some way teaching the wrong lessons. However, the main criticism seems to be of the form "the CIA is wasting time playing video games," which is patently absurd. Simulation role-playing has been an effective training tool in both the military and business for decades, and in fact much of the technology now seen in video games was originally developed for training U.S. Army officers. To suggest that the CIA should be out catching terrorists instead of playing video games is like suggesting the U.S. Army should be out fighting wars instead of wasting their time doing training exercises consisting of "running around with toy guns playing capture the flag."
It's pretty clear that there's a thicket of political wrangling going on behind the scenes, and the Times story is a salvo fired by people who want this CIA project canceled. I've no idea whether this is a case of fighting over scarce funding, vengeance against the CTC, or an honest attempt to scuttle a project that won't provide good training, and I won't even begin to speculate. Hopefully someone with a better understanding of the ins and outs of intelligence and military politics (like Phil Carter at Intel Dump) will weigh in on this before long.
Intel's Personal Server project, led by Ubiquitous Computing long-timer Roy Want, got some press this past week after it was shown at the Intel Developer Forum. The prototype is a 400MHz computer with Bluetooth, battery and storage, all about the size of a deck of cards. No screen and no keyboard — I/O is handled by whatever devices happen to be around, be they the display and keyboard on your desk, the large-screen projector in the conference room or your portable touch-screen. This concept isn't new; it's something that researchers in Ubiquitous Computing and Wearable Computing (including Roy) have been talking about for over a decade. But it is the right concept, and Moore's Law is finally bringing it almost within reach.
There are three main reasons why this is the Right Thing(tm):
Your hands aren't getting smaller. Handheld computers are now small enough that the limiting factor is screen and button size. Since our hands aren't getting any smaller, we're pretty much at the limit for everything-in-a-single-brick handhelds, at least for current applications. One of the ways out of that box is the wearable computing approach, where interfaces are spread around the body like clothing or jewelry. Displays are shrunk by embedding them directly into the glasses, tiny microphones are used for speech recognition, micro cameras and accelerometers are used for gesture and context recognition, and specialty input devices such as medical monitors get used instead of more generic input devices. One of the big difficulties with wearables is all the wires leading from the CPU/Disk/Battery unit to the I/O devices, and in fact this problem was a big motivating force behind the IEEE 802.15 short-range wireless standards, which include Bluetooth. Wireless isn't a complete solution (you still have to worry about powering your I/O devices) but it's a start.
The other way to break the hand-size limit is the UbiComp approach: use whatever interfaces are in your surrounding area. When I'm at my desk I want to use my nice flat-panel display and ergonomic keyboard, not my black-and-white cellphone LCD. When I give a presentation I want to use the conference hall's projector. I hardly need a keyboard at all, just enough controls to launch my Keynote presentation and change slides. Roy naturally leans towards this second approach, but as I've argued before, the UbiComp and Wearables approaches work well together; there's no need to choose.
Always the right tool for the job. Another advantage to breaking the CPU from the I/O is it gets around an inherent conflict in interface design. On the one hand, designers will tell you that you always want the interface to fit the task. Use a hammer to drive nails and a screwdriver to turn screws, and all that. But in the mobile world you don't want to carry around your cellphone, PDA, MP3 player, two-way pager, camera and laptop everywhere you go. When it comes to mobility, most people choose to carry a Swiss Army knife instead of a full toolchest, even though the one-size-fits-all interface won't ever be quite right for the task. (That's why I still carry my Danger Hiptop, which is great for text messaging but feels like I'm holding a bar of soap to my ear when I use it for voice.) When you break the brick, as it were, you can use one CPU, main battery, network connection and storage for all your devices. Then just bring whatever interfaces you need for whatever tasks you expect that day, and use interfaces in your environment when they're available.
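To make the borrow-the-room's-interfaces idea concrete, here's a toy sketch of the step after discovery: given a list of nearby devices, the screenless server picks the best display in range. The Device class and its fields are invented for illustration; real discovery would ride on something like Bluetooth's SDP:

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    kind: str        # "display", "keyboard", ...
    pixels: int      # resolution, as a crude quality measure for displays

def pick_display(nearby: list) -> Device:
    """The personal server has no screen of its own, so it borrows
    the highest-resolution display currently discovered nearby."""
    displays = [d for d in nearby if d.kind == "display"]
    if not displays:
        raise RuntimeError("no display discovered in range")
    return max(displays, key=lambda d: d.pixels)
```

At a desk the flat panel wins; walk into the conference hall and the projector takes over, with no change to the application.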
It's not clear when Intel (or Apple or Sony for that matter) will finally come out with a successful Personal Server style product. The hardware is just one necessary piece to the puzzle, with resource discovery, communication standards, good interface design and of course the all-important "killer app" to bring it all together. But in spite of the hurdles yet to come, this is the right approach. I'm glad to see Intel is giving it the support it deserves.
As a reminder for those who are interested in wearable computers, the early registration deadline for the 7th IEEE International Symposium on Wearable Computers is this Friday, September 26th. You can check out the advance program here.
I've been reading up on IBM's recently announced WebFountain project. The system, which has been dubbed Google on steroids, spiders the Net and other databases and applies various data-mining, natural-language processing and pattern recognition techniques to the data. The current system uses 500 parallel-processing Linux boxes, all accessing about half a terabyte of storage in the basement of the IBM Almaden Research Center. IBM's infrastructure allows clients to customize their searches and standing queries using a library that will "tokenize the data to identify people and companies, and discover patterns, trends and relationships in the data." The technology is being offered as a service, and is already being sold through a partnership with Factiva. It is being marketed mainly for trend identification and for "reputation management," where a company watches chat rooms, bulletin boards, newspapers and other sources to see what people are saying about it.
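IBM hasn't published the pipeline, but the "standing query" idea behind reputation management is easy to sketch: scan each incoming document, sentence by sentence, for mentions of a watched name. The company name, sources and regex here are all invented for illustration; WebFountain's actual tokenizer is far more sophisticated:

```python
import re

WATCHED = re.compile(r"\bAcme Corp\b")  # hypothetical client being monitored

def reputation_hits(docs, pattern=WATCHED):
    """docs: (source, text) pairs, e.g. chat-room or bulletin-board scrapes.
    Returns (source, sentence) pairs that mention the watched name."""
    hits = []
    for source, text in docs:
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if pattern.search(sentence):
                hits.append((source, sentence.strip()))
    return hits
```

A standing query would simply rerun this over each day's crawl and diff the results against yesterday's.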
I'm quite interested in the technology, and even have a friend from grad school who has been working on it (Hi Dan!). But the thing that got me thinking was a comment about privacy by Robert Morris, the director of IBM Almaden. As reported in the San Jose Mercury News:
The technology could potentially raise privacy concerns if companies turned its power on analyzing individuals. But Hart and Morris said both companies would protect user privacy.
"Anything we mine is public data on the Web," Morris said.
But it isn't yet clear how the company would restrict users trying to use the tool to invade someone's privacy.
The quote is in line with the comment by The Economist: "No doubt some people will say it sounds a little intrusive. But all WebFountain does is reveal information that is hidden in plain sight."
Unfortunately, the idea that anything findable on the Net is "public" is a dodge — "public data" is a simplification of what is a much more complex set of social rules. Counter-intuitive as it may sound, privacy rules are not primarily about restricting information access to particular people. The primary purpose of privacy rules is to keep people from using the information in ways that would harm the person who is keeping it secret. This is why companies wink at sharing trade secrets with your wife or husband but are adamant about not revealing them to potential competitors, unless they've first signed a non-disclosure agreement. The NDA explicitly restricts harmful uses of the data, making the privacy rules unnecessary.
The idea that privacy is a restriction on power was brought home to me a few years ago by an old fraternity brother of mine. Back when he was still finishing his PhD at MIT he got a call from an MIT campus policeman, who somewhat sheepishly explained that he was calling on behalf of an irate member of the Massachusetts Maritime Police Department. Apparently this maritime policeman had been surfing the Web and had come across a picture from my friend's undergraduate fraternity days, showing him firing water balloons from a giant funnelator. The campus policeman said he was calling to inform my friend that slingshots are illegal in Massachusetts, and that he wanted to make sure that the device had been destroyed.
So here was a picture that was clearly "public" in that it had been published for anyone to see. The intended audience was anyone who was interested in our fraternity's annual Water War, plus anyone else who might get a chuckle out of it. You could even say the intended audience was everyone in the world except for particularly humor-impaired members of the Massachusetts Maritime Police Department. If webservers had provided such vaguely-defined access rules, we certainly would have used them.
A more realistic idea of public vs. private spaces is one of intended use, with restrictions on access as a proxy for limiting that use. When I write an article for an academic journal or even a blog entry I expect to be called upon to defend my position. When I write a LiveJournal post I expect much less criticism, and I expect that people who read my postings will be the sort of people who generally agree with me and will be accepting of whatever personal thoughts I write. Both are published on the Web, both are "public," but different social rules are implied by the relative ease of access, ease of discovery, and the different communities that are most likely to come across my posts. Difficult access provides a kind of "soft wall" that restricts access to certain communities, and the social rules of those communities provide a soft wall that limit how my information will be used. I expect most LiveJournal users would feel violated if information from their posts wound up being used in targeted marketing literature, even though most posts aren't password protected.
I don't intend to slam WebFountain with this argument — WebFountain is just the latest technology that is moving soft walls around by changing the ground rules. It was also only a matter of time before such a service would be offered. As a coworker of mine has pointed out, it is almost a certainty that the NSA has already developed such technology. (The argument goes: (a) The NSA would have to be really incompetent not to have done this, and (b) the NSA is not incompetent.) Given this is likely, it seems better for society that such technology be out in the open so people can adjust their expectations about how soft those soft walls really are.
For those who don't know, John Zuccarini is the most notorious of the so-called "typo-squatters," people who register domain names that are common typos of popular websites and then flood the poor fat-fingering visitor with advertisements. Zuccarini had at least 5,500 copycat Web addresses, and the FTC estimated he was earning between $800,000 and $1 million annually from the mostly porn-based banner ads he displayed, in spite of numerous lawsuits against him for trademark violations. Zuccarini was arrested last week under the new Truth in Domain Names provision in the PROTECT Act of 2003, which makes it illegal to use misleading domain names to lure children to sexually explicit material.
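Typo-squatting at Zuccarini's scale is easy to automate. Here's a sketch that generates one-character typos of a target name using only deletions and adjacent transpositions; real squatters also register adjacent-key substitutions, doubled letters, and missing-dot variants:

```python
def one_char_typos(name: str):
    """Return plausible single-slip misspellings of a domain label."""
    typos = set()
    for i in range(len(name)):
        typos.add(name[:i] + name[i + 1:])                         # deletion
        if i + 1 < len(name):
            typos.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])  # swap
    typos.discard(name)  # a "typo" identical to the original isn't one
    return sorted(typos)
```

A six-letter name yields roughly a dozen candidates; run this over the few hundred most popular sites and 5,500 registrations doesn't sound like much work at all.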
But to add insult to injury, no sooner has Zuccarini been arrested than he has been toppled as the typo-squatting king by a new upstart: the domain-name register VeriSign. Trumping Zuccarini's 5,500 copycat domain names, VeriSign has used their position as keeper of the keys to redirect ALL unregistered typos to their site. Try going to http://whattheheckisverisignsmoking.com/ and see for yourself. VeriSign has posted a white paper on their new move, which creates a top-level "wildcard" registry for every domain-name request in the .com or .net domains. The change redirects any entry without DNS service to VeriSign's own SiteFinder search engine, including reserved domain names such as a.com and domain names that are registered to other people but don't have an active name server.
The main problem is that VeriSign is abusing their position as gatekeeper of the com and net domains, which are a public trust and not VeriSign's commercial property. Network types have also been quick to point out other ways this move breaks things on the Net. Most important to everyday users, Web browsers are no longer able to gracefully handle bad links or mistyped URLs. Most browsers pop up a small dialog box for a bad URL, leaving the user on the old page. With the new changes, browsers cannot give this functionality. (Of course, for people who use versions of IE that redirect to Microsoft's search-page, the only difference will be a change of masters.) Furthermore, debugging scripts often use domain-not-found errors to check for routing problems; these are no longer returned. And finally, anti-spam software also often uses domain-not-found errors to detect mail from invalid email addresses. (There was also concern that email sent to a typoed domain name would not bounce properly, but it seems this was either not the case or has been fixed.)
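The debugging and anti-spam breakage all comes down to one change: a lookup that used to fail now succeeds with SiteFinder's address. Any tool that relied on domain-not-found errors now needs a special case, along these lines (a sketch of the logic only, not a real resolver hook):

```python
def classify_lookup(resolved_ip, sitefinder_ip):
    """Classify a DNS lookup result under the wildcard regime.
    resolved_ip: the address returned, or None if resolution failed.
    sitefinder_ip: VeriSign's wildcard address, passed in by the caller."""
    if resolved_ip is None:
        return "NXDOMAIN"      # the old, pre-wildcard behavior
    if resolved_ip == sitefinder_ip:
        return "wildcard"      # resolves, but the name is really unregistered
    return "registered"
```

The ugly part is that every anti-spam filter and diagnostic script in the world now has to hard-code one company's IP address to recover behavior the DNS spec used to give them for free.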
As one might expect, the flameage has been fast and furious on this one. Of particular note is the discussion on the North American Network Operators Group mailing list, where members have already contributed several patches to routing software that would essentially ignore VeriSign's wildcard lookup, restoring the Internet (or at least the portions that apply the patch) to the old way things operated. Many are also simply dropping the IP address for sitefinder.verisign.com (64.94.110.11) on the floor. If widely adopted such actions would essentially neutralize VeriSign's change, but I expect the adoption levels will only be enough to be a statement of protest, not an actual revolution. However, Computer Business Review notes that the Internet Corporation for Assigned Names and Numbers (ICANN), which manages aspects of the DNS for the US government, has yet to weigh in on whether VeriSign's changes are actually valid according to agreed-upon specs.
UPDATE: It seems VeriSign is only half-handling email correctly. What they've done is hooked up their own special mail-handler (which they call the Snubby Mail Rejector Daemon v1.3) that returns a fixed set of responses to SMTP transactions. Currently, VeriSign reads the From and To headers and then returns an error code. This means all misaddressed email relies on VeriSign's server to bounce mail, and should the server not be available bounces might be delayed by several days. It also means that all addresses of typoed email are actually sent to VeriSign before being bounced, rather than stopped locally. Of course, I'm sure no VeriSign employee would be so criminal as to actually use this information for industrial espionage, nor would he change the Snubby Mail Daemon to actually collect the contents of said messages.
Friends of mine have also pointed out that ISPs and businesses "cache" DNS addresses on their local DNS servers. By claiming that all DNS requests are legitimate, VeriSign is clogging these caches with bad requests.
Science fiction author Larry Niven once described a world where people would instantly teleport to places where something interesting was happening, causing what he called "Flash Crowds." Now the LA Times reports that movie makers are seeing the opposite problem: instant communication means that if the audience doesn't like your movie on opening-night Friday, by Saturday you'll have yourself a flash void:
"Today, there is just no hope of recovering your marketing costs if the film doesn't connect with the audience, because the reaction is so quick — you are dead immediately," said Bob Berney, head of Newmarket Films, which distributed "Whale Rider," a well-received, low-budget New Zealand picture that grossed $12.8 million and has endured through the summer. "Conversely, if the film is there, then the business is there."
Two things are going on here. The first is just that word-of-mouth is getting faster, which we already knew. That means the old strategy of hyping a bad movie so everyone sees it before the reviews come out won't work much longer. The more important point, though, is that movie companies are seeing their carefully crafted ad campaigns overwhelmed by the buzz created by everyone's texting, emailing and blogging. The shift in power cuts both ways: audience-pleasers like Bend It Like Beckham thrive almost entirely on buzz, while The Hulk was killed by buzz based partially on pirated pre-release copies, in spite of a huge marketing campaign.
Studios (and producers in general) will learn one of two lessons from this trend. Either they'll decide they need to manipulate buzz by wooing mavens and carefully controlling how information is released, or, just possibly, they'll follow the advice of Oren Aviv, Disney's marketing chief: "Make a good movie and you win. Make a crappy movie and you lose."
Many AI researchers believe that the biggest barrier to creating human-like intelligence is that humans know millions of simple everyday facts. This ordinary knowledge ranges from knowing what a horse looks like to a simple fact like "people buy food in restaurants." In the past, AI researchers would spend years painstakingly entering such information into huge databases, but now a new crop of researchers are leveraging the millions of Netizens who have nothing better to do than answer stupid questions all day to build these databases quickly and for free. One such site is the OpenMind Initiative (hosted by my own Ricoh Innovations), which is primarily being used by the MIT Media Lab to collect Common Sense Knowledge.
The latest foray into this space is the ESP Game. When you log into the game you are paired randomly with another player on the Net. Both you and your partner are shown the same 15 random images from the Web, one at a time. Your job is to type in as many words to describe the image as possible, with the goal of matching a word your partner has entered. When you agree on a word, you both get points and move on to the next image. Usually I don't care for Web-based games, but I have to admit the game is compelling.
The real goal of the system is to generate a huge database of human-quality keywords for all the images on the Net. The task is huge: Google's Image Search has already indexed over 425 million images using the text that surrounds each image's hyperlink. But numbers are on the side of Luis von Ahn, the game's creator: if only 5,000 people played the game throughout the day, all 425 million images would receive at least one label within a month. Given that many game sites get over 10,000 players in a day, a few months is probably all he needs to fill out the whole database.
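The core game mechanic is just agreement between the two players' word streams, in arrival order. A sketch (the real game also keeps per-image "taboo" lists of already-agreed labels, which is my understanding of how it keeps extracting new keywords; the function names are mine):

```python
def first_agreement(player_a, player_b, taboo=()):
    """Return the first word player A typed that player B also typed,
    skipping taboo words already collected for this image, else None."""
    banned = {w.lower() for w in taboo}
    typed_b = {w.lower() for w in player_b} - banned
    for word in player_a:
        w = word.lower()
        if w in typed_b:
            return w
    return None
```

Because both strangers must independently produce the same word with no way to communicate, the agreed label is very likely an accurate description of the image, which is what makes the output database "human-quality".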
I'm probably the last on the block to have heard about this, but Scott McCloud, the author of Understanding Comics, has finally come out with an online comic available for a micropayment of 25 cents. Or rather, he came out with it over a month ago, but I just found out about it today. As you might expect from Scott, he's put the new medium (Macromedia Flash in this case) to good use without losing the fundamental comic-book feel. It was a quarter well-spent, especially since I could download the content to my computer and feel like I actually got something I can call "my copy."
Payments are made through BitPass, a new startup out of Stanford that allows you to open an account with as little as three dollars and a credit card or PayPal account. The whole process was quick and painless, as is the payment process itself. There's not too much content you can purchase through BitPass yet, but it looks like they're building up a solid content base as they go through their beta-testing. Content providers seem to still be figuring out how the market will play out for different kinds of media: models range from the donation cups that are already common with PayPal, to purchase-and-download, to a "30 reads in 90 days" pay-per-view kind of model.
And now just in case I wasn't quite the last person on the Internet to have heard about this, you know too.
Here are a few free Mac programs I've recently come across that make it easy to exercise your rights to fair use. Which is to say, these are programs that allow you to backup, timeshift, spaceshift, or quote digital media that you have bought and paid for but that the Content Cartel would rather you not be able to manipulate. Windows users will have to find their own equivalents (they're bound to be out there) or just break down and buy a Mac.
There are some interesting rumors floating around about Kaltix, a stealth start-up out of the Stanford WebBase Project. This is the same group that created the PageRank algorithm that was later spun out as a little start-up called Google. As you might expect with a company in stealth mode we're still long on speculation and short on facts, but it looks like their main technology is a faster way to compute PageRank, the algorithm used by Google to rank hits from a search based on the Web's link structure.
This is interesting because it would allow Google (or any other search engine) to quickly recalculate personalized indexes for each and every user. After seeding a personal index with my bookmarks file, Google would know that when I search for "Jaguar" I'm probably interested in the latest version of Apple's OS, not the car or the cat. The CNET article has a good overview, but Jeffrey Heer's blog has a nice perspective as a researcher who happens to be housemates with one of the Kaltix founders.
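Personalized PageRank itself is a small tweak to the standard algorithm: the random surfer teleports according to a per-user preference vector (seeded from, say, my bookmarks) instead of uniformly. A toy power-iteration sketch; this is the textbook formulation, not Kaltix's undisclosed speedup:

```python
def personalized_pagerank(links, personalization, damping=0.85, iters=50):
    """links: {page: [outlinked pages]}.
    personalization: {page: teleport weight}, summing to 1 over all pages."""
    pages = list(links)
    rank = {p: personalization.get(p, 0.0) for p in pages}
    for _ in range(iters):
        # Teleportation mass goes to the user's preferred pages
        new = {p: (1 - damping) * personalization.get(p, 0.0) for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page: redistribute its mass via the preference vector
                for q in pages:
                    new[q] += damping * rank[p] * personalization.get(q, 0.0)
        rank = new
    return rank
```

With a uniform personalization vector this reduces to ordinary PageRank; the expensive part, and presumably Kaltix's contribution, is computing a separate vector of these scores for every user fast enough to matter.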
There are still a lot of question marks, and I'm not yet convinced that Kaltix's technology is the crown jewel that Heer and the CNET article make it out to be. Speedy indexing is necessary for large-scale personalized search, but you still need to create a profile from something. The real question will be whether a search engine can generate a personal profile that helps disambiguate the searches people make in actual use. Add to this the need to keep personal information like browser history from being transmitted to outside companies and you have a tall order. I'm not saying these problems can't be solved, but as far as I know they haven't been solved yet. I expect Kaltix will get bought by one of the big search companies, but it will still be several years before we see personalized search running on any large (non-intranet) scale.
A couple of stories have come up in the last two days that highlight how the ways law and business determine identity aren't keeping up with technology. One story is about identity theft and the other about computer security violations, but both have a common thread: technology has made it so that our common-sense assumptions about how to tell someone's identity no longer work.
The first is a lengthy Washington Post article about identity theft. The driving story is about Michael Berry, whose identity was stolen by an ex-con who proceeded to rack up debt and eventually commit murder all while living under Berry's name. Around this driving story the article gives a good analysis of just how incredibly easy and common this kind of identity theft is today.
It used to be that identifying someone was a long-term and high-touch operation. You'd get paychecks from a local business, deposit checks at the local bank branch, and write checks to the local grocery store. Over time all these entities would get to know you and your identity would become firmly entrenched in the system. Now that society is more mobile that system doesn't work, and we're finding that the replacement system of asking for social security numbers or mother's maiden name doesn't work too well either. Banks currently have to eat any monetary losses that come from identity-theft fraud, but they do not have to take responsibility for damage caused to a person's credit rating or reputation (as recently upheld by the South Carolina Supreme Court). That means that, as the law stands now, the economic incentives encourage more convenience and less security than would be the case if banks had to take the total cost of identity theft into account.
The second story is from yesterday's New York Times, which reported that a British man was exonerated of child pornography charges after his computer was found to have been infected by nearly a dozen Trojan-horse programs. Mr. Green, who lost custody of his daughter and spent nine days in prison and three months in a "bail hostel" because of the charges, has claimed all along that his computer was infected and that it even dialed into the Internet when no one was home.
In this case the question is whether Green is responsible for the material on his own computer. Not long ago if a crime was committed in a particular house then the perpetrator could only be one of a handful of people. For these data crimes, the person actually downloading porn onto Green's computer could have been literally anyone in the world. Similar arguments have been made about open Wi-Fi access points and "zombie" computers that are used as launching pads for attacks on other sites on the Net. As the Times article points out, there are two issues here. One is that bad guys could use such security problems as a defense, the other is that it really is a valid defense:
"The scary thing is not that the defense might work," said Mark Rasch, a former federal computer crime prosecutor. "The scary thing is that the defense might be right," and that hijacked computers could be turned to an evil purpose without an owner's knowledge or consent.
The general problem is that our old common-sense ideas of identity no longer hold, or can't be applied in our hyper-convenient and mobile society. I'm not necessarily in control of my own networked computer. I'm not the only person who knows the last four digits of my SSN. And the person handling my application has almost certainly never seen me before, and that's no cause for alarm. Perhaps technology will come to the rescue in the form of biometrics that can prevent identity theft while still preventing governmental abuses. Perhaps regulation will come to the rescue in the form of systems to challenge faulty information, and by ensuring that those who are responsible for security have the incentive to maintain it. Probably a combination of these will be required, but in the meantime I expect the problem to get worse before it gets better.
Guided voting already exists in basic form. I'm knowledgeable about a few political issues, but when it comes to local candidates or ballot initiatives outside my area of expertise I rely on party affiliation or endorsements from friends or organizations I trust to "tell" me how to vote.
Prof. Volokh's point is that, like it or not, Internet voting will lead to a much greater role for guided voting. Today's ballots have a candidate's party affiliation printed on the ballot, but if I want to know how, say, the National Organization of Women feels about a candidate I need to do my homework in advance and bring a cheat sheet. Volokh paints a future where I could go to a trusted third-party site, say suggestedvote.com, and check off the organizations I would like to guide my vote. The website would then produce a suggested ballot that aggregates all the recommendations of the organizations I picked, possibly weighing organizations differently in case they conflict on a particular issue. Then with a single keystroke my suggested ballot could be filed. The advantage of such a system, so the argument goes, is that the influence currently held by our two main political parties would be diluted and the political process would become more diverse.
While I like the idea in principle, I think there are two improvements that could be made to Prof. Volokh's scenario:
First, there is no reason to have a third-party gatekeeper such as suggestedvote.com. More general and egalitarian would be for election boards to publish a standard XML ballot and then any interested party could publish their own itemized recommendations. I would be able to subscribe to recommendations from now.org, aclu.org, or even volokh.com just like I currently subscribe to RSS feeds to read several blogs at once. Of course, a site like suggestedvote.com could still offer to host RSS or similar recommendation feeds for anyone who doesn't have their own website.
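The aggregation step could be quite simple. Here's a minimal sketch, assuming a hypothetical election-board ballot format and hypothetical per-organization recommendation feeds (the XML schemas, feed contents, and the optional weighting scheme are all my own illustration, not anything published):

```python
# Sketch: aggregating endorsement feeds into a suggested ballot.
# BALLOT_XML and FEEDS use made-up schemas for illustration only.
import xml.etree.ElementTree as ET
from collections import Counter

BALLOT_XML = """
<ballot election="2004-general">
  <measure id="prop-1" title="School bonds"/>
  <measure id="prop-2" title="Transit tax"/>
</ballot>
"""

# Each organization would publish its own recommendation feed.
FEEDS = {
    "now.org": '<recommendations><rec measure="prop-1" vote="yes"/></recommendations>',
    "aclu.org": ('<recommendations><rec measure="prop-1" vote="yes"/>'
                 '<rec measure="prop-2" vote="no"/></recommendations>'),
}

def suggested_ballot(ballot_xml, feeds, weights=None):
    """Tally each subscribed feed's recommendation per measure.

    `weights` lets a voter trust one organization more than another
    when their recommendations conflict on an issue."""
    weights = weights or {}
    ballot = ET.fromstring(ballot_xml)
    tallies = {m.get("id"): Counter() for m in ballot.iter("measure")}
    for org, feed_xml in feeds.items():
        w = weights.get(org, 1.0)
        for rec in ET.fromstring(feed_xml).iter("rec"):
            mid = rec.get("measure")
            if mid in tallies:
                tallies[mid][rec.get("vote")] += w
    # Suggest the highest-weighted vote; leave unaddressed measures blank.
    return {mid: (c.most_common(1)[0][0] if c else None)
            for mid, c in tallies.items()}

print(suggested_ballot(BALLOT_XML, FEEDS))
# → {'prop-1': 'yes', 'prop-2': 'no'}
```

The design point is that the election board only has to publish the ballot schema; anyone can publish a feed, and the aggregation logic lives with the voter (or any site the voter chooses), not with a single gatekeeper.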
Second, I am quite frightened by the concept of one-click voting. Behavioral psychologists have repeatedly shown that people will tend to do what an interface makes easy to do (see The Adaptive Decision Maker for a nice analysis). This is why there are heated debates about things like motor-voter registration and whether voting booths should allow a single lever to cast all votes for a single party, policies that would be no-brainers if changing the convenience of voting didn't also change who votes and for what. Given that any change we make will affect how people act, I want the system to encourage thoughtful individual contributions to our democracy, not a constituency of sheep.
This is not to say there should be no voting guides at all, but rather that people should still be forced to actually see and touch every ballot measure, even if only to find and check their favorite party's nominee. Each ballot measure and candidate would be accompanied by labels representing endorsements by each guide the voter has chosen, possibly with links from the endorsement to a short argument explaining the group's reasoning. Rather than follow an automatically aggregated recommendation, voters would judge for themselves whom to follow on each individual issue. Voters might even choose guides from organizations with whom they explicitly disagree, either to vote against their measures or to see opposing viewpoints. This system would not be much more inconvenient than the one-click voting Prof. Volokh suggests, but would ensure individual voter involvement while still giving the main advantages of voting guides.
Mark Glaser at Online Journalism Review has an interesting look at Howard Dean's Blog For America campaign blog. Glaser's main point: Dean's blog is building support and a sense of connection to his campaign, even though almost all the entries are from his campaign staff rather than Dean himself. As Dan Gillmor puts it, the official Dean blog is a campaign document, not a candidate document.
The article raises the question of how blogs (and by extension, the Web) are best used in political campaigns. For Dean, blogforamerica.com is a tool for organizing grassroots support. It lets supporters know what they can do to help, and more importantly it keeps them informed about the bigger picture of how the campaign is moving. Dick Morris even goes so far as to declare grass-roots Internet organization the new replacement for television ads. But as Glaser points out, you don't get the feeling of being in Dean's head like you would if he were writing his own daily entries. In fact, you get a better sense of Dean's thought process from the posts he made as a guest blogger at Lawrence Lessig's site than from his own blog.
Certainly there's nothing wrong with how Dean is using his blog, and his success so far has shown (yet again) just how powerful the Net can be for grass-roots organization. But I can also see why people would wish for more personal contact through his blog as well. Like email, blogs are an informal and even intimate medium, better suited to throwing out ideas that are from the heart, or at least from the hip, than to well-rehearsed campaign speeches. They give everyday voters a seat on the campaign bus, where they can discuss the issues in detail and watch as positions become fully formed. One of the problems with politics, especially around campaign season, is that everything is so well crafted that you can never hear the doubts and alternatives that had to be considered in crafting the final message. This was brought home to me after 9/11 when, for a period of about three months, it seemed like the curtains had been lifted and politicians were all thinking out loud.
The next question in my mind is how this sort of medium can be used once a candidate is elected. Dean has commented that he might have a White House blog if he's elected, and of course already the White House publishes Press Secretary briefings on the Net. Perhaps the White House blog could become the 21st century's fireside chat?
Our Chief Scientist, David Stork, has been doing some side research the past few years in art history. In particular he's been assessing a theory that artist David Hockney presents in his book "Secret Knowledge": that artists as early as 1430 secretly used optical devices such as mirrors and lenses to help them create their almost photo-realistic paintings.
The theory is fascinating. Art historians know that some masters used optical devices in the 1600s, but Hockney and his collaborator, physicist Charles Falco, claim that as early as 1430 the masters of the day used concave mirrors to project the image of a subject onto their canvas. The artist would then trace the inverted image. This alone, Hockney and his supporters claim, can account for the perfect perspective and "opticality" of paintings that suddenly appear in this time period.
If the theory itself is fascinating, I find Stork's refutation even more interesting. Stork's argument is based on several points. First, he argues, there is no textual evidence that artists ever used such devices. Hockney and his supporters counter that the information was of course kept as a closely guarded trade secret, and that is why there was no description of it. It isn't clear how these masters also kept the powerful patrons whose portraits they were painting from discussing their secret. Stork's second argument is that, quite simply, the paintings' perspective isn't all that perfect after all. They look quite good, obviously, but if you actually do the geometry on the paintings Hockney presents as perfect you see that supposedly parallel lines don't meet at a vanishing point as they would in a photograph. And third, Stork points out that the methods Hockney suggests would require huge mirrors to achieve the focal lengths seen in the suspected paintings: mirrors far, far larger than the technology of the time could create.
My analysis is a little unfair to Hockney as I've only seen Stork's presentation, but I must say I'm impressed with his argument. Hockney's theory is quite media-pathic. It's a mystery story that wraps history, secrecy, geniuses, modern science and great visuals all in one -- no wonder it's captured people's attention! Unfortunately, I expect Stork is right about one of the theory's less fun aspects: it's probably dead wrong.
For those interested, a CBS documentary on Hockney's theory will be rebroadcast this Sunday, August 3rd, on 60 Minutes.
A couple weeks ago I attended the New Paradigms in Using Computers workshop at IBM Almaden. It's always a small, friendly one-day gathering of Human-Computer Interaction researchers and practitioners, with invited talks from both academia and industry. This year's focus was on the state of knowledge in our field: what we know about users, how we know it and how we learn it.
The CHI community has a good camaraderie, especially among the industry researchers. I suspect that's because we're all used to being the one designer, artist or sociologist surrounded by a company of computer scientists and engineers. Nothing brings together a professional community like commiseration, especially when it's mixed with techniques for how to convince your management that what you do really is valuable to the company.
One of the interesting questions of the workshop was how to share knowledge within the interface-design community. Certainly we all benefit by sharing knowledge, standards and techniques, but for the industry researchers much of that information is a potential competitive advantage and therefore kept confidential. Especially here in Silicon Valley, that kind of institutional knowledge gets out into the community as a whole through employment churn, as researchers change labs throughout their careers.
Here are my notes from several of the talks. Standard disclaimers apply: these are just my notes of the event, subject to my own filters and memory lapses. If you want the real story, get it from the respective horses' mouths.
Mike Kuniavsky (Adaptive Path): Reverse the polarity
Kuniavsky's first point was that the dot-com era brought new awareness of the need for usability testing and design. What was once an afterthought is now recognized as a primary driver for getting and keeping customers. He described it as a crisis of purpose that has brought many new players together, including people involved in design, information science, HCI, ethnography and marketing. Techniques have also shifted from those that study single users (psychology, cognitive science) to those that study groups (marketing).
He then described the methodology used at his usability consulting company, which starts with an analysis of all stakeholder needs. These stakeholders include users of the system or Web site, as well as management, tech support, system maintainers, etc. Through this more holistic approach their solutions are more likely to have long-term benefit for their client, because they can be more confident the right problem is being solved.
Colin Johnson (EyeTools): Understanding Users Through Their Eyes
EyeTools is a spin-off company from Stanford that provides user testing of websites and software using their eye-tracking equipment. Their final reports show exactly where visitors are looking when they read a site, displayed either as the paths their eyes travel or as heat maps overlaid on top of the page. They've shown some clients that not only are users not seeing the important marketing blurb on a page, but that they don't even notice when the blurb is changed to complete gibberish.
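The heat-map idea is easy to illustrate: bin fixation points into a grid over the page and accumulate dwell time per cell. The fixation data, page size, and function below are my own made-up sketch, not anything from EyeTools:

```python
# Sketch: building a gaze heat map by binning fixations into a grid.
# Fixations are (x, y, dwell-time-in-ms) tuples; data is illustrative.
from collections import Counter

def heat_map(fixations, page_w, page_h, cell=100):
    """Accumulate dwell time per `cell`-pixel grid square of the page."""
    grid = Counter()
    for x, y, duration_ms in fixations:
        if 0 <= x < page_w and 0 <= y < page_h:
            grid[(x // cell, y // cell)] += duration_ms
    return grid

# Three fixations: two dwell on the headline area, one on the sidebar.
fixations = [(120, 80, 400), (150, 90, 250), (760, 300, 180)]
hm = heat_map(fixations, page_w=1024, page_h=768)
print(hm.most_common(1))  # hottest cell and its total dwell time, in ms
```

A real report would render these cell totals as a color overlay on a screenshot of the page; the interesting analysis (like the gibberish-blurb finding above) comes from which cells stay cold.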
One preliminary but interesting effect they've seen is that people have very different viewing patterns when asked to "say out loud what you're looking at" than when they simply read naturally. If confirmed in a full study, that would imply the typical stream-of-consciousness commentary method of evaluating an interface may not get at natural usage patterns.
Bonnie John (HCI Institute, CMU): Data collection in support of US: Moving HCI to Level 5
Many engineering fields have at least flirted with the Capability Maturity Model, which describes a process as being in one of five stages of maturity: Initial (chaotic), Repeatable, Defined, Managed, and Optimizing. The idea is to standardize the entire process such that methods are repeatable and testable, and the results of any change are predictable. Dr. John's argument was that we, as a field, should strive towards levels 4 and 5.
Arguments against this idea came fast and furious. The first was that our work would become controlled by the "tyranny of the quantifiable." Some research cannot be easily quantified, but this does not make the research any less important. It was also pointed out that in HCI it's very hard to tell what to measure, and doubly hard for politically-charged applications such as education. The compromise position was that especially critical niches might still benefit, such as usability of interfaces for NASA's Martian lander. It was also pointed out that the field knows a great many rules about ergonomics, but such information has not yet been collected into a single volume.
Genevieve Bell (Intel): Lessons from the field: or, how not to do ethnography in 5 easy lessons
Dr. Bell had just gotten back from six months in Asia, studying daily interactions with technology in urban centers in India, Malaysia, Singapore, Indonesia, China and South Korea. The talk had lots of interesting insights about both Asian and Intel culture; here are a few tidbits:
Marissa Mayer (Google): The science behind Google's UI
Google has six basic steps to designing and testing their interfaces:
As an interesting side-note, she says the purchase of Blogger wasn't as strategic as it is seen to be outside of Google. They were purchased mainly because they were good engineers who needed resources, and it seemed to be a good match. There might be a move in the future to have them help with a service where blog owners can resell targeted ads on their site, should they want to monetize their publications.
Electronic voting is getting slammed this week. First, Dan Gillmor's Sunday column took election officials to task for not insisting on providing physical paper trails that can be followed should the results of an election be in doubt. Then on Wednesday several computer security experts at Johns Hopkins University and Rice University published a scathing analysis of the design of the Diebold AccuVote-TS, one of the more commonly used electronic voting systems, based on source code that the company accidentally leaked to the Internet back in January. Exploits include the ability to make home-grown smart-cards to allow multiple voting, the ability to tamper with ballot texts, denial of service attacks, the potential to connect an individual voter to how he voted, and potentially the ability to modify votes after they have been cast. The New York Times and Gillmor's own blog have since picked up the report. Diebold has responded to the analysis, but at least so far they haven't addressed the most damning criticisms.
There are several lessons to be learned from all this:
The solution lauded by both the Johns Hopkins team and Gillmor is to have a "voter-verifiable audit trail" as a backup for the electronic system. Whenever a vote is cast, a paper ballot is printed and checked by the voter for accuracy. If the print-out reads correctly, the ballot is stored as a record of the vote. If it is incorrect, the vote is invalidated and the paper ballot destroyed. Should the electronic record be questioned, the paper audit can be counted to confirm the results.
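The cast-and-verify flow described above can be sketched in a few lines. Everything here (function names, record structure, the simulated printer) is my own illustration of the procedure, not any real voting-system API:

```python
# Sketch of the voter-verifiable audit-trail flow: the electronic
# record and paper ballot are kept only if the voter confirms the
# printout matches; otherwise the vote is invalidated.
def cast_vote(choice, printer, voter_confirms):
    """Record a vote electronically, print it, and store both records
    only when the voter verifies the printout reads correctly."""
    electronic_record = {"choice": choice}
    printed_ballot = printer(electronic_record)   # the paper copy
    if voter_confirms(printed_ballot):
        return electronic_record, printed_ballot  # both records stored
    return None, None  # vote invalidated, paper ballot destroyed

# Simulated hardware: the printer renders the record as text, and the
# voter checks that the printout shows what they actually selected.
printout = lambda rec: f"VOTE: {rec['choice']}"
e_rec, paper = cast_vote("prop-1: yes", printout,
                         voter_confirms=lambda p: "prop-1: yes" in p)
print(e_rec, "|", paper)
```

The key property is that the paper record is verified by the voter, not by the machine: a compromised machine can lie about what it stored electronically, but it can't stop the voter from reading the printout, so a hand count of the paper can always settle a disputed electronic tally.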
Diebold is in an interesting situation now. The Johns Hopkins analysis found security holes big enough to fly a starship through, but they had to make a lot of assumptions due to not having the complete specification. Diebold is defending their software in part because of those holes in the team's knowledge, but unless the whole system is brought out into the light for a full and informed debate to occur there's no way it can be trusted.
Luckily, if you live in California there's something you can do. Secretary of State Kevin Shelley is soliciting public comments on a recent task force report on touch-screen voting machines, until Aug. 1. Comments can be sent to Secretary of State Kevin Shelley, attn: Touch Screen Report, 1500 11th St., Sacramento, CA 95814. E-mail comments to firstname.lastname@example.org or fax at 916-653-9675.
Frank Moss, US deputy assistant secretary for Passport Services, announced at the recent Smart Card Alliance meeting that production of new smart-card enabled passports will begin by October 26, 2004. Current plans call for a contactless smart chip based on the ISO 14443 standard, which was originally designed for the payments industry. The 14443 standard supports a data exchange rate of about 106 kilobits per second, much higher than that of the widely-deployed Speedpass system.
The Enhanced Border Security Act and Visa Entry Reform Act of 2002 requires that countries in the US visa waiver program include machine-readable biometric data. This new passport project would update US passports to meet that standard as well. Moss says the goal is global interoperability, and plans to adopt standards from the International Civil Aviation Organization (ICAO). The EU recently made a similar announcement.
The new passports would encode cryptographically signed copies of the passport photo and passport information, making such information more difficult to forge. Moss did not mention the use of biometric data not currently included on US passports, such as fingerprints or retinal scans, though The Register suggests that the EU is looking into these systems as well. Given that the ICAO has not yet ruled out such additional biometrics it is probably too soon to tell, though the current ICAO blueprint does favor face images over fingerprint or retinal-scan technology.
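The forgery-resistance argument is worth making concrete: because the photo and data are signed by the issuing authority, any alteration to either invalidates the signature. Here's a minimal runnable sketch; note it uses an HMAC as a stand-in so it runs with only the standard library, whereas real e-passports use public-key signatures under the ICAO scheme, and the key and payload format below are entirely hypothetical:

```python
# Sketch: why signing the passport payload makes tampering detectable.
# HMAC stands in for a real public-key signature here; the key and
# payload format are made up for illustration.
import hashlib
import hmac

ISSUER_KEY = b"issuing-authority-secret"  # hypothetical signing key

def sign(payload: bytes) -> bytes:
    """Issuing authority signs the photo + passport data."""
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, signature: bytes) -> bool:
    """Border control re-checks the signature against the chip's data."""
    return hmac.compare_digest(sign(payload), signature)

passport = b"name=J. Doe;photo=<jpeg bytes>"
sig = sign(passport)

print(verify(passport, sig))                           # True
print(verify(b"name=J. Roe;photo=<jpeg bytes>", sig))  # False: tampered
```

With a true public-key scheme the verifier needs only the authority's public key, so border stations can check passports without holding any secret that a forger could steal; that asymmetry is the point of signing rather than merely storing the data.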
Nearly six years to the day after the process was started, it looks like the IEEE is homing in on a single standard for fast (around 100 Mbit/s), short-range (< 10m), low-power, low-cost wireless communications. The standard, which will be IEEE 802.15.3a, comes out of the IEEE Wireless Personal Area Network (WPAN) working group. Unlike cellular or Wi-Fi networks, the point of a personal area network is to communicate with other devices that are there in the room with you. For example, a high-speed WPAN would allow your PDA to stream video directly to a large-screen TV. Alternatively, your core CPU could wirelessly communicate with medical sensors, control buttons, displays and ear-pieces, all distributed around the body. The standard fills much the same niche as Bluetooth (the first standard adopted by the working group, also known as 802.15.1), but the new technology is significantly faster than Bluetooth (up to 100 times faster, according to champions of the technology).
Trade news columnists who know more than I do about this are picking Texas Instruments' proposal for OFDM UWB (that's Orthogonal Frequency Division Multiplexing Ultra Wide Band, thank you for asking) as the likely technology to win. Assuming it does, TI's UWB business development manager says we can expect to see the first UWB products hitting the marketplace in 2005.
Update: The standard did not receive enough votes to pass, and will be voted on again in mid-September.