Our Chief Scientist, David Stork, has been doing some side research in art history for the past few years. In particular, he's been assessing a theory that artist David Hockney presents in his book "Secret Knowledge": that artists as early as 1430 secretly used optical devices such as mirrors and lenses to help them create their almost photo-realistic paintings.
The theory is fascinating. Art historians know that some masters used optical devices in the 1600s, but Hockney and his collaborator, physicist Charles Falco, claim that as early as 1430 the masters of the day used concave mirrors to project the image of a subject onto their canvas. The artist would then trace the inverted image. This alone, Hockney and his supporters claim, can account for the perfect perspective and "opticality" of paintings that suddenly appear in this time period.
If the theory itself is fascinating, I find Stork's refutation even more interesting. Stork's argument rests on several points. First, he argues, there is no textual evidence that artists ever used such devices. Hockney and his supporters counter that the information was of course kept as a closely guarded trade secret, and that is why there was no description of it. (It isn't clear how these masters also kept the powerful patrons whose portraits they were painting from discussing the secret.) Stork's second argument is that, quite simply, the paintings' perspective isn't all that perfect after all. They look quite good, obviously, but if you actually do the geometry on the paintings Hockney presents as perfect, you see that lines which should converge at a single vanishing point, as they would in a photograph, don't. And third, Stork points out that the methods Hockney suggests would require huge mirrors to get the focal lengths seen in the suspected paintings: mirrors far larger than the technology of the time could create.
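Stork's geometric check is easy to sketch in code. Assuming we've measured the endpoints of a few painted lines that should share a single vanishing point (the coordinates below are invented for illustration, not taken from any painting), we can intersect the lines pairwise in homogeneous coordinates and see how far apart the intersections fall:

```python
from itertools import combinations

def line_through(p, q):
    """Homogeneous line through two 2D points (cross product of points)."""
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersect(l1, l2):
    """Intersection of two homogeneous lines; None if (nearly) parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    w = a1 * b2 - a2 * b1
    if abs(w) < 1e-12:
        return None
    return ((b1 * c2 - b2 * c1) / w, (a2 * c1 - a1 * c2) / w)

def vanishing_point_spread(segments):
    """All pairwise intersections of the segments' lines, plus the spread
    (max distance between any two intersections). A true single-point
    perspective would give a spread near zero."""
    lines = [line_through(p, q) for p, q in segments]
    pts = [intersect(l1, l2) for l1, l2 in combinations(lines, 2)]
    pts = [p for p in pts if p is not None]
    spread = max(
        ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
        for (xa, ya), (xb, yb) in combinations(pts, 2)
    )
    return pts, spread

# Hypothetical measurements: three "floor lines" traced off a painting.
# The first two converge at (0, 100); the third misses that point.
segments = [((-50, 0), (-25, 50)), ((50, 0), (25, 50)), ((-60, 0), (-31, 50))]
pts, spread = vanishing_point_spread(segments)
print(spread)  # well above zero: the lines miss a common vanishing point
```

A photograph's parallel lines would drive the spread toward zero (up to measurement error); Stork's point is that the suspect paintings don't pass this kind of test.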
My analysis is a little unfair to Hockney, as I've only seen Stork's presentation, but I must say I'm impressed with his argument. Hockney's theory is quite media-pathic. It's a mystery story that wraps history, secrecy, geniuses, modern science and great visuals all in one -- no wonder it's captured people's attention! Unfortunately, I expect Stork is right about one of the theory's less fun aspects: it's probably dead wrong.
For those interested, a CBS documentary on Hockney's theory will be rebroadcast this Sunday, August 3rd, on 60 Minutes.
The story sounds like something out of The Onion, or maybe a dystopian science fiction short story. As reported widely in the news yesterday, the Pentagon has been planning an electronic futures market for analysis of foreign affairs. The idea is to create a market where people can anonymously bet on things like whether the US will reduce troop deployment in Iraq by year's end, or whether Arafat will be assassinated. The current odds on a bet, so the argument goes, best reflect the actual probability given everything the collected thinkers know. Policy-makers could then use that probability to decide where to focus their attention.
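Hanson has in fact proposed market mechanisms designed for exactly this setting. A minimal sketch of his logarithmic market scoring rule (LMSR) shows how a standing price on a yes/no contract can be read directly as a probability; the numbers below are illustrative, not from any actual DARPA design:

```python
import math

# Logarithmic market scoring rule (Hanson). b controls liquidity;
# q_yes / q_no are the outstanding quantities of each outcome share.
def lmsr_cost(q_yes, q_no, b=100.0):
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes, q_no, b=100.0):
    """Instantaneous price of a 'yes' share, interpretable as the
    market's consensus probability of the event."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

# A trader who believes the event is likely buys 50 'yes' shares,
# paying the cost difference; the quoted probability rises.
q_yes, q_no = 0.0, 0.0
print(lmsr_price(q_yes, q_no))  # 0.5: no information yet
cost = lmsr_cost(q_yes + 50, q_no) - lmsr_cost(q_yes, q_no)
q_yes += 50
print(round(lmsr_price(q_yes, q_no), 3))  # ~0.622: price moved up
```

The appeal for policy-makers is that the quoted price aggregates every trader's private information into a single number without anyone having to reveal who they are or what they know.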
By today the firestorm had swept Washington, and the Pentagon announced the project has been canceled. Apparently congressmen were not completely aware of what had been planned, even though the general plan had been posted on DARPA's web site for many months and the project was mentioned in a March New Yorker article.
I can't help but feel sympathy for Robin Hanson, the George Mason University economics professor who has been spearheading the project. Critics were quick to describe the project as a marketplace where terrorists and mercenaries could make money by betting that some horrific event would happen and then causing it. But as Hanson describes in interviews and on his Web site, the idea is more that professors, armchair analysts, and frequent travelers from all walks of life would combine their on-the-ground expertise to reach conclusions even the most expert intelligence worker in Washington couldn't. But interested as I am in the concept, I just can't see it working, for a number of reasons:
Update: According to futures sales on Tradesports.com, John Poindexter's chances of keeping his job after this uproar are around 70%.
A couple weeks ago I attended the New Paradigms in Using Computers workshop at IBM Almaden. It's always a small, friendly one-day gathering of Human-Computer Interaction researchers and practitioners, with invited talks from both academia and industry. This year's focus was on the state of knowledge in our field: what we know about users, how we know it and how we learn it.
The CHI community has a good camaraderie, especially among the industry researchers. I suspect that's because we're all used to being the one designer, artist or sociologist surrounded by a company of computer scientists and engineers. Nothing brings together a professional community like commiseration, especially when it's mixed with techniques for how to convince your management that what you do really is valuable to the company.
One of the interesting questions of the workshop was how to share knowledge within the interface-design community. Certainly we all benefit by sharing knowledge, standards and techniques, but for the industry researchers much of that information is a potential competitive advantage and therefore kept confidential. Especially here in Silicon Valley, that kind of institutional knowledge gets out into the community as a whole through employment churn, as researchers change labs throughout their careers.
Here are my notes from several of the talks. Standard disclaimers in place: these are just my notes of the event, subject to my own filters and memory lapses. If you want the real story, get it from the respective horses' mouths.
Mike Kuniavsky (Adaptive Path): Reverse the polarity
Kuniavsky's first point was that the dot-com era brought new awareness of the need for usability testing and design. What was once an afterthought is now recognized as a primary driver for getting and keeping customers. He describes it as a crisis of purpose that has brought many new players together, including people involved in design, information science, HCI, ethnography and marketing. Techniques have also shifted from those that study single users (psychology, cognitive science) to those that study groups (marketing).
He then described the methodology used at his usability consulting company, which starts with an analysis of all stake-holder needs. These stake-holders include users of the system or Web site, as well as management, tech support, system maintainers, etc. Through this more holistic approach their solutions are more likely to have long-term benefit for their client because they can be more confident the right problem is being solved.
Colin Johnson (EyeTools): Understanding Users Through Their Eyes
EyeTools is a spin-off company from Stanford that provides user testing of websites and software using their eye-tracking equipment. Their final reports show exactly where visitors are looking when they read a site, displayed either as the paths their eyes travel or as heat maps overlaid on top of the main page. They've shown some clients that not only are users not seeing the important marketing blurb on a page, but that they don't even notice when the blurb is changed to complete gibberish.
One preliminary but interesting effect they've seen is that people have very different viewing patterns when asked to "say out loud what you're looking at" than when they simply read naturally. If confirmed in a full study, that would imply the typical stream-of-consciousness commentary method of evaluating an interface may not get at natural usage patterns.
Bonnie John (HCI Institute, CMU): Data collection in support of US: Moving HCI to Level 5
Many Engineering fields have at least flirted with the Capability Maturity Model, which describes a process as being in one of five stages of maturity: Initial (chaotic), Repeatable, Defined, Managed, and Optimizing. The idea is to standardize the entire process such that methods are repeatable and testable, and the results of any change are predictable. Dr. John's argument was that we, as a field, should strive towards levels 4 and 5.
Arguments against this idea came fast and furious. The first was that our work would become controlled by the "tyranny of the quantifiable." Some research cannot be easily quantified, but that does not make it any less important. It was also pointed out that in HCI it's very hard to tell what to measure, and doubly hard for politically charged applications such as education. The compromise position was that especially critical niches might still benefit, such as usability of interfaces for NASA's Martian lander. It was also pointed out that the field knows a great many rules about ergonomics, but such information has not yet been collected into a single volume.
Genevieve Bell (Intel): Lessons from the field: or, how not to do ethnography in 5 easy lessons
Dr. Bell had just gotten back from six months in Asia, studying daily interactions with technology in urban centers in India, Malaysia, Singapore, Indonesia, China and South Korea. The talk had lots of interesting insights about both Asian and Intel culture; here are a few tidbits:
Marissa Mayer (Google): The science behind Google's UI
Google has six basic steps to designing and testing their interfaces:
As an interesting side-note, she says the purchase of Blogger wasn't as strategic as it is seen to be outside of Google. They were purchased mainly because they were good engineers who needed resources, and it seemed to be a good match. There might be a move in the future to have them help with a service where blog owners can resell targeted ads on their site, should they want to monetize their publications.
Electronic voting is getting slammed this week. First, Dan Gillmor's Sunday Column took election officials to task for not insisting on providing physical paper trails that can be followed should the results of an election be in doubt. Then on Wednesday several computer security experts at Johns Hopkins University and Rice University published a scathing analysis of the design of the Diebold AccuVote-TS, one of the more commonly used electronic voting systems, based on source code that the company accidentally leaked to the Internet back in January. Exploits include the ability to make home-grown smart-cards to allow multiple voting, the ability to tamper with ballot texts, denial of service attacks, the potential to connect an individual voter to how he voted, and potentially the ability to modify votes after they have been cast. The New York Times and Gillmor's own blog have since picked up the report. Diebold has since responded to the analysis, but at least so far they haven't addressed the most damning criticisms.
There are several lessons to be learned from all this:
The solution lauded by both the Johns Hopkins team and Gillmor is to have a "voter-verifiable audit trail" as a backup for the electronic system. Whenever a vote is cast, a paper ballot is printed and checked by the voter for accuracy. If the print-out reads correctly, the ballot is stored as a record of the vote. If it is incorrect, the vote is invalidated and the paper ballot destroyed. Should the electronic record be questioned, the paper audit can be counted to confirm the results.
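The flow can be sketched in a few lines of code; the function and variable names here are mine for illustration, not taken from any actual voting system:

```python
from collections import Counter

# Sketch of a voter-verifiable audit trail: every accepted vote exists
# both as an electronic record and as a voter-checked paper ballot.
electronic_record = []
paper_ballots = []

def cast_vote(choice, voter_confirms_printout):
    """Record the vote only if the voter verifies the printed ballot.
    voter_confirms_printout stands in for the voter reading the paper."""
    if not voter_confirms_printout:
        return False  # misprint: vote invalidated, paper destroyed
    electronic_record.append(choice)
    paper_ballots.append(choice)
    return True

def audit():
    """Recount the paper trail and compare against the electronic tally."""
    return Counter(electronic_record) == Counter(paper_ballots)

cast_vote("Alice", True)
cast_vote("Bob", True)
cast_vote("Alice", False)  # voter rejects a bad printout; not counted
print(len(electronic_record), audit())  # 2 True
```

The key property is that a compromised electronic tally can't silently diverge from the paper trail, because each paper ballot was confirmed by a human before being stored.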
Diebold is in an interesting situation now. The Johns Hopkins analysis found security holes big enough to fly a starship through, but the team had to make a lot of assumptions because they didn't have the complete specification. Diebold is defending its software in part by pointing to those gaps in the team's knowledge, but unless the whole system is brought out into the light for a full and informed debate, there's no way it can be trusted.
Luckily, if you live in California there's something you can do. Secretary of State Kevin Shelley is soliciting public comments on a recent task force report on touch-screen voting machines until Aug. 1. Comments can be sent to Secretary of State Kevin Shelley, attn: Touch Screen Report, 1500 11th St., Sacramento, CA 95814, e-mailed to email@example.com, or faxed to 916-653-9675.
Frank Moss, US deputy assistant secretary for Passport Services, announced at the recent Smart Card Alliance meeting that production of new smart-card-enabled passports will begin by October 26, 2004. Current plans call for a contactless smart chip based on the ISO 14443 standard, which was originally designed for the payments industry. The 14443 standard supports a data exchange rate of about 106 kilobits per second, much higher than that of the widely deployed Speedpass system.
The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that countries in the US visa-waiver program include machine-readable biometric data in their passports. This new passport project would update US passports to meet that standard as well. Moss says the goal is global interoperability, with plans to adopt standards from the International Civil Aviation Organization (ICAO). The EU recently made a similar announcement.
The new passports would encode cryptographically signed copies of the passport photo and passport information, making such information more difficult to forge. Moss did not mention the use of biometric data not currently included on US passports, such as fingerprints or retinal scans, though The Register suggests that the EU is looking into these systems as well. Given that the ICAO has not yet ruled out such additional biometrics it is probably too soon to tell, though the current ICAO blueprint does favor face images over fingerprint or retinal-scan technology.
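To illustrate why signing makes forgery harder, here's a toy sketch of hash-then-sign using textbook RSA. The parameters are far too small for real use, and the actual passport data formats and algorithms (per the ICAO blueprint) will certainly differ; this only demonstrates the verification property:

```python
import hashlib

# Toy RSA parameters: two known primes (the 10,000th and 100,000th).
# Real signatures use keys thousands of bits long.
p, q = 104729, 1299709
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)  # modular inverse (Python 3.8+)

def digest_to_int(data: bytes) -> int:
    """SHA-256 digest reduced mod n (toy-sized message representative)."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    return pow(digest_to_int(data), d, n)

def verify(data: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest_to_int(data)

# Hypothetical passport payload: photo digest plus identity fields.
passport = b"name=DOE,JOHN;dob=1970-01-01;photo=<hash of photo bytes>"
sig = sign(passport)
print(verify(passport, sig))         # True
print(verify(passport + b"X", sig))  # False: any tampering breaks it
```

A forger can copy the signed data wholesale, but can't alter the photo or the name without invalidating the signature, since only the issuing authority holds the signing key `d`.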
Nearly six years to the day after the process was started, it looks like the IEEE is homing in on a single standard for fast (around 100 Mbit/s), short-range (< 10 m), low-power, low-cost wireless communication. The standard, to be named IEEE 802.15.3a, comes out of the IEEE Wireless Personal Area Network (WPAN) working group. Unlike cellular or Wi-Fi networks, the point of a personal area network is to communicate with other devices in the room with you. For example, a high-speed WPAN would allow your PDA to stream video directly to a large-screen TV. Alternatively, your core CPU could wirelessly communicate with medical sensors, control buttons, displays and ear-pieces, all distributed around the body. The standard fills much the same niche as Bluetooth (the first standard adopted by the working group, also known as 802.15.1), but the new technology is significantly faster than Bluetooth (up to 100 times faster, according to champions of the technology).
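Some back-of-the-envelope arithmetic shows why the speed jump matters for the streaming scenario. The rates below are my own rough figures (Bluetooth 1.x at about 1 Mbit/s raw; DVD-quality MPEG-2 at roughly 6 Mbit/s), not from the draft standard:

```python
# Rough feasibility check: which links can carry which streams?
# All rates in Mbit/s; approximate and illustrative only.
link_mbps = {"Bluetooth 1.x": 1, "802.15.3a (proposed)": 100}
stream_mbps = {"voice": 0.064, "MP3 audio": 0.128, "DVD-quality video": 6}

for link, capacity in link_mbps.items():
    for stream, needed in stream_mbps.items():
        verdict = "yes" if capacity >= needed else "no"
        print(f"{link} carries {stream}: {verdict}")
```

Bluetooth handles voice and compressed audio but falls an order of magnitude short of video, which is exactly the gap the PDA-to-TV scenario needs closed.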
Trade news columnists who know more than I do about this are picking Texas Instruments' proposal for OFDM UWB (that's Orthogonal Frequency Division Multiplexing Ultra Wide Band, thank you for asking) as the likely technology. Assuming it is picked, TI's UWB business development manager says we can expect to see the first UWB products hitting the marketplace in 2005.
Update: The standard did not receive enough votes to pass, and will be voted on again in mid-September.
I can hear it now:
Exec #1: "Members of the Word Media Cartel, we are against the ropes. We've tried imposing draconian penalties for even trivial piracy. We performed a perfect end-run around the fair use doctrine with the Digital Millennium Copyright Act. We've sued into bankruptcy anyone who might have a business model more survivable than our own. We've even sued down-and-out college students for $97.8 trillion dollars each, as an example to others who would stand in our way. And yet the peer-to-peer networks continue to thrive."
Exec #2: "If only our industry had a way to convince people that piracy was wrong. You know, change how people think about copying music and movies."
Exec #1: "Yes, yes, but there's no point in wishing for... hey wait, say that again!"
And so it came to pass: the Motion Picture Association of America launched an unprecedented media blitz to convince the American public that by using Gnutella you hurt not just Disney stock-holders, but also Jerry, the man who fetches coffee for George Lucas every morning at 5am.
The sheer power of this blitz is daunting. The kickoff this Thursday will have thirty-five network and cable outlets all showing the same 30-second spot in the first prime-time break (a "roadblock" in ad-biz terms). Then trailers will play daily on all screens of more than 5,000 theaters across the country. Whew. And all that air time is donated, which would be incredibly impressive if the spots weren't essentially being donated to themselves.
And now the $97.8 trillion-dollar question: are the American people so pliable that their morality can be changed by a media blitz? (Could that be the manic laughter of thousands of ad executives I hear in the distance?)
The Associated Press has an overview piece on how the makers of digital video recorders are capitulating to (excuse me, "voluntarily cooperating with") Hollywood and other members of the Content Cartel. Not too surprising given the shots fired through the bow of Sonicblue (makers of ReplayTV), forcing them into bankruptcy after paying millions in legal fees. In line with the rest of the industry, ReplayTV's new owners say they will be good little boys and remove the ability to auto-skip commercials or send recorded programs over the Internet to other Replay users.
In the short run this means consumers get fewer features, but in the long run it's just more sand thrown against the tide. DVRs are just commodity hardware, some standard drivers and a little bit of interface software. If UltimateTV, TiVo or ReplayTV doesn't provide the features people want then a whole host of small manufacturers, kit makers and do-it-yourself modification kits are all more than willing to fill the gap.