A couple weeks ago The Economist had an article discussing how economist and New York Times columnist Paul Krugman is becoming increasingly partisan in his writings. The article relies primarily on analysis done by Ken Waight over at Lying In Ponds, a site dedicated to rating columnists and other pundits on partisanship. I like the site's philosophy, particularly because it ignores the whole question of "bias" and goes straight to the more important issue of partisanship: blind, prejudiced, and unreasoning allegiance to one of the two main political parties.
I don't read Krugman often and don't have a personal opinion on his partisanship, though I do find Waight's arguments compelling. What's gotten me thinking is the follow-up question: should we care?
As Waight is quick to point out, there is nothing wrong with an editorial columnist having and expressing a bias — that's what we pay them for. He also points out that some biases will naturally align with the biases of one political party or another. Waight's beef is when a pundit crosses over from bias for similar ideals to bias for a political party itself. When this happens, Waight argues, "The views of pundits who are excessively partisan cannot be taken seriously (like advertising), because their ulterior motives or uncontrolled biases are certain to frequently contaminate their judgments."
It is here that I break ranks with Waight. Clearly partisanship can blind pundits, but there are levels of blindness that might occur. The worst partisans deliberately lie and dissemble to argue their case — these pundits should certainly not be taken seriously. However, less egregious partisans give factual, rational arguments, but either omit arguments that would support their opponents or only choose to talk about topics that put their side in the best light. These partisans can still provide a valuable service so long as (a) they make their partisanship clear and (b) they are only one part of a diverse and balanced opinion diet. I'd say most politicians of either party fall into this second, less egregious level of partisanship. While I certainly won't trust a politician without question, I will still take their arguments seriously. I would say the same for anyone with a strong prejudice, whether that prejudice is towards a particular party, methodology, world-view or value judgment.
All that said, I do believe that a prejudice towards a political party is qualitatively different than, say, a prejudice for well-run scientific studies or small government or Christian values. The difference is not that allegiance to a party produces worse decisions than allegiance to a world-view, method or value system, but rather that adherence to a party line is one of a few easy shortcuts that we non-pundits already use. As a good citizen I would love to become an expert on every political issue that comes up, but I just don't have the time. Instead, I learn about a few issues that are important to me and for the rest I rely on the opinion of the politicians and political parties that I elect to represent me. As Dr. Robert Cialdini puts it in Influence: Science and Practice:
It's instructive that even though we often don't take a complex approach to personally important topics, we wish our advisors — our physicians, accountants, lawyers, and brokers — to do precisely that for us (Kahn & Baron, 1995). When feeling overwhelmed by a complicated and consequential choice, we still want a fully considered, point-by-point analysis of it — an analysis we may not be able to achieve except, ironically enough, through a shortcut: reliance on an expert.
The problem with professional pundits who are partisan is that they use party positions as a shortcut for deciding what is right and wrong — just like we non-professionals do. That means we can't use their arguments as a shortcut validation of the opinions we get using our own partisanship shortcut. Independent validation, I would argue, is the primary purpose of an opinion columnist.
Eugene Volokh once opined that we shouldn't hold non-professional pundits (like most bloggers) to the high standard of even-handedness. However, it is perfectly reasonable to hold professional columnists to this standard. When I read Krugman (or any other professional pundit) I don't expect him to disagree with the Democrats often, but I want to know that he could. Otherwise I haven't checked my initial shortcut at all; I've just gotten two copies of the same shortcut. As Waight put it, "When two people agree on everything, it's pretty certain that only one is doing the thinking." First and foremost, we should expect our professional pundits to think.
Last night I finally got around to watching Microsoft's Comdex presentation, specifically the section where Susan Dumais shows off her new search technology "Stuff I've Seen." (Search for "switch gears" at the bottom of the transcript or go to 1:07:50 on the video.)
Most of Stuff I've Seen concentrates on the problem of quickly indexing and searching your entire hard drive, regardless of media format. (I sometimes jokingly refer to projects like this as YAPIM, or Yet Another Project Invoking Memex, my own thesis work fitting that description as well.) However, the part that interests me most is what they're calling implicit query. As CNET describes the Comdex demo:
In demonstrating Implicit Query, Dumais began to type an e-mail asking a colleague about a set of slides for an upcoming conference. Before the message was complete, the program — which appears in a window on the side of the screen — pulled up e-mails, slide decks and Word documents containing the name of the conference and the future recipient. Each hit came with a brief summary of the internal content, date, the type of software the file was written in, and its potential relevance, among other information.
This is the same functionality that in my PhD I call Just-In-Time Information Retrieval, and is the main focus of the Remembrance Agent software I developed. It can be incredibly powerful (I use it regularly to suggest email discussions related to my blog entries, for example) and I hope Dumais pursues it. It looks like she's still in the early stages with the concept though, and, more importantly, the current interface is still designed for explicit query — far too intrusive for something that runs all the time in the background. By contrast, Autonomy has had an actual product in this area for over three years, though I'd say the interface is still the tricky part for this kind of application. Still, as is often the case, one of the more interesting aspects of Microsoft doing something is that it's Microsoft doing it. If implicit query makes it into a future version of the OS (and if MS doesn't screw it up the way they did with that annoying paperclip) that'll be quite interesting.
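The core mechanism behind implicit query is simple to sketch: a background loop treats whatever you're currently writing as a query against a local index of your documents, and surfaces the best matches without being asked. Here's a minimal toy illustration using TF-IDF-style scoring over an in-memory corpus — the function names, the scoring details, and the sample documents are all my own invention for illustration, not how Dumais's system or my Remembrance Agent actually work:

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens; a real system would also stem and drop stopwords."""
    return re.findall(r"[a-z']+", text.lower())

def implicit_query(draft, corpus, top_n=3):
    """Score every indexed document against the text currently being typed.

    Toy TF-IDF scoring over a dict of {name: text}. A production system
    would use an inverted index so this could rerun on every keystroke.
    """
    n_docs = len(corpus)
    df = Counter()          # document frequency, for IDF weighting
    doc_tokens = {}
    for name, text in corpus.items():
        toks = Counter(tokenize(text))
        doc_tokens[name] = toks
        df.update(toks.keys())

    query = Counter(tokenize(draft))
    scores = {}
    for name, toks in doc_tokens.items():
        score = 0.0
        for term, qtf in query.items():
            if term in toks:
                idf = math.log(n_docs / df[term]) + 1.0
                score += qtf * toks[term] * idf * idf
        if score > 0:
            scores[name] = score
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A hypothetical local index, like the Comdex demo's email/slides scenario:
corpus = {
    "slides.ppt": "draft slides for the comdex conference keynote",
    "budget.xls": "quarterly budget spreadsheet totals",
    "email-jan.txt": "notes about conference travel and slides",
}
print(implicit_query("can you send me the conference slides", corpus))
```

As the user types the email, the conference slides and the related email thread float to the top while the unrelated spreadsheet never appears — the interface problem the entry discusses is deciding how (and how insistently) to show those suggestions.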
Back in July there was a big scandal over DARPA's funding of a futures market where people bet on things like whether Arafat will be assassinated or when the US will pull out of Iraq. The project was canceled, and also became the straw that forced John Poindexter's resignation. Now the Guardian reports that San Diego-based Net Exchange, the company that was implementing the project, is going ahead and launching it without government support or involvement. Given the previous uproar, Net Exchange is being understandably quiet about the whole thing.
Personally I'd be happy to see them try this out. As I said before, the U.S. Government shouldn't be involved in something as shady as gallows gambling, but as a private experiment the whole thing intrigues me and I don't have a problem with seeing where it goes. My guess is it will wind up being an interesting pastime for armchair analysts, but like most markets will fluctuate far too much to provide any real security data. The only real danger I see is if the stakes get high (unlikely) and attract corruption — unlike sports gambling or its Wall Street counterpart, Middle East politics has neither conflict-of-interest nor insider-trading laws. The more likely danger is simple lack of interest, the risk all seemed-like-a-good-idea-at-the-time Internet projects face.
I've finally gotten around to reading up on Trusted Computing (a process that, ironically enough, was interrupted by my being rootkitted a couple of weeks ago). I'd heard some pretty unsettling things about trusted computing, but now that I've done some digging... well it's still pretty disturbing.
Trusted Computing (TC) is one of several names for a set of changes to server, PC, PDA and mobile phone operating systems, software and hardware that will make these computers "more trustworthy." Microsoft has one version, known as Palladium or Next Generation Secure Computing Base (NGSCB), and an alliance of Intel, Microsoft, IBM, HP and AMD known as the Trusted Computing Group has a slightly different one called either trusted computing, trustworthy computing, or "safer computing." Some parts of Trusted Computing are already in Windows XP, Windows Server 2003, and in the hardware of the IBM Thinkpad, and many more will be in Microsoft's new Longhorn version of Windows scheduled for 2006.
The EFF has a nice introduction to trusted computing systems, written by Seth Schoen, and Ross Anderson has a more detailed and critical analysis. A brief summary of the summary is that a trusted computer includes tamper-resistant hardware that can cryptographically verify the identity and integrity of the programs you run, verify that identity to online "policy servers," encrypt keyboard and screen communications, and keep an unauthorized program from reading another program's memory or saved data. The center of this is the so-called "Fritz" chip, named after Senator Fritz Hollings of South Carolina, who tried to make digital rights management a mandatory part of all consumer electronics. (He failed and is retiring in 2004, but I've no doubt there will be attempts to pass similar laws in the future.)
When most people think about computer security they think about virus detectors, firewalls and encrypted network traffic — the computer analogs to burglar alarms, padlocks and opaque envelopes. The Fritz chip is a different kind of security, more like the "political officer" that the Soviet Union would put on every submarine to make sure the captain stayed loyal. The whole purpose of the Fritz chip is to make sure that you, the computer user, can't do anything that goes against the policies set by the people who wrote your software and/or provide you with web services.
There are many people who would like such a feature. Content providers such as Disney could verify that your version of Windows Media Player hasn't had digital rights management disabled before sending you a decryption key for a movie. Your employer could prevent email from being printed or read on non-company machines, and could automatically delete it from your inbox after six months. Governments could prevent leaks by doing the same with sensitive documents. Microsoft and AOL could prevent third-party instant-message software from working with the MSN or AIM networks, or lock in customers by making it difficult to switch to other products without losing access to years' worth of saved documents. Game designers could keep you from cheating in networked games. Distributed computing and mobile agent programs could be sure their code isn't being subverted or leaked when running on third-party systems. Software designers could verify that a program is registered and only running on a single computer (as Windows XP does already), and could even prevent all legitimate trusted computers from reading files encrypted by pirated software. Trusted computing is all about their trust, and the person they don't trust is you.
End users do get a little bit of "trust" out of trusted computing, but not as much as you might think. TC won't stop hackers from gaining access to a system, but it could be used to detect rootkits that have been installed. TC also won't prevent viruses, worms or Trojans, but it can prevent them from accessing data or keys owned by other applications. That means a program you download from the Internet won't be able to email itself to everyone in your (encrypted) address book. However, TC won't stop worms that exploit security holes in MS Outlook's scripting language from accessing your address book, because Outlook already has that permission. In spite of what the Trusted Computing Group's backgrounder and Microsoft's Palladium overview imply, TC won't help with identity theft or computer thieves physically accessing your data any more than current public key cryptography and encrypted file systems do.
As long as you agree with the goals of the people who write your software and provide your web services, TC isn't a bad deal. After all, most people don't want others to cheat at online games and can see the value of company email deletion policies. The same can be said of the political officer on Soviet submarines — they were great as long as you believed in what the Communist Party stood for. And unlike Soviet submarine commanders, you won't get shot for refusing to use TC on your computer. Your programs will still run as always; you just won't be able to read encrypted email from your customers, watch downloaded movies, or purchase items through your TC-enabled cellphone. Some have claimed that this is how it should be, and that the market will try out all sorts of agreements and those that are acceptable to both consumers and service providers will survive. That sounds nice in theory, but doesn't work when the market is dominated by a few players (e.g. Microsoft for software, wireless providers for mobile services, and the content cartel for music and movies) or when there are network externalities that make it easy to lock in a customer base (e.g. email, web, web services and electronic commerce). What choice will you have in wordprocessors if the only way you can read memos from your boss is by using MS Word? What choice will you have in stereo systems when the five big record companies announce that new recordings will only be released in a secure-media format?
Of course, even monopolies respond to strong enough consumer push-back, but as Ross Anderson points out there are subtle tricks software and service providers can pull to lock in unwary consumers. For example, a law firm might discover that migrating years of encrypted documents from Microsoft to OpenOffice requires sign-off for the change by every client that has ever sent an encrypted email attachment. That's a nasty barrel to be over, and the firm would probably grudgingly pay Microsoft large continuing license fees to avoid that pain. These kinds of barriers to change can be subtle, and you can bet they won't be a part of the original sales pitch from Microsoft. But then what do you expect when you invite a political officer into your computer?