Last week California State Assembly member Michael Duvall (R-Orange County) was caught bragging to a colleague about having an extra-marital affair — next to a live mike. Along with his rather graphic descriptions he happened to mention his paramour's age and birthday, and from this information OC Weekly was able to identify the woman:
"And so her birthday was Monday," he said at the Wednesday, July 8 committee hearing. "I was 54 on June 14, so for a month, she was 19 years younger than me...
According to voter-registration records reviewed by the Weekly, veteran Sacramento-based lobbyist Heidi DeJong Barsuglia turned 36 years old on Monday, July 6.
In this case there were other sources who also identified Ms. Barsuglia, and it's not clear from the story whether OC Weekly actually arrived at her name through voter-registration records or simply used them for corroboration. However, EFF's Deep Links reports that it's actually not that hard to identify someone based on a few pieces of seemingly innocuous information like birthday, gender and zip-code:
Gender, ZIP code, and birth date feel anonymous, but Prof. Sweeney was able to identify Governor Weld through them for two reasons. First, each of these facts about an individual (or other kinds of facts we might not usually think of as identifying) independently narrows down the population, so much so that the combination of (gender, ZIP code, birthdate) was unique for about 87% of the U.S. population.
The linked-to abstract also mentions that about half the U.S. population is likely to be uniquely identifiable by place, gender and date of birth alone, where place is basically the city, town or municipality where the person resides. And even if a search in a city as big as Sacramento turned up several potential matches, the hit that also happens to be a lobbyist working in an industry under Duvall's committee would be easy to spot.
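A back-of-envelope calculation shows why the combination narrows things down so quickly. The numbers below are round assumptions of mine (a typical ZIP code population, uniform birthdays over a 79-year range), not figures from the paper:

```python
# Rough sketch: how many people share one (gender, ZIP, birth date)?
# All numbers are assumed round figures, not from Sweeney's paper.

ZIP_POPULATION = 10_000      # assumed residents in a typical ZIP code
BIRTH_DATES = 365 * 79       # plausible birth dates over a lifetime
GENDERS = 2

buckets = BIRTH_DATES * GENDERS
per_bucket = ZIP_POPULATION / buckets
print(f"{buckets:,} (gender, birth date) buckets per ZIP code")
print(f"~{per_bucket:.2f} people expected per bucket")
# ~0.17 people per bucket: most combinations pick out at most
# one person, which is why the triple is so often unique.
```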
The picture on the left is cropped from an image of a man sexually abusing children in Vietnam and Cambodia in 2002 or 2003. The face was digitally scrambled, and the image posted to the Net along with about 200 others. The picture on the right is the same picture, digitally unscrambled by Germany's federal police. Interpol has posted four unscrambled images of the man's face on its website and has asked the public at large for help in identifying him. They've already reportedly received hundreds of tips.
That part's pretty cool, and I hope they catch the guy, but I have some trepidation at the idea of casting such a wide net using just a few photographs. Say you know somebody who looks remarkably like this guy -- maybe that creepy guy you see on the subway every morning. How likely is it that he really is the guy they're looking for?
If you were looking for a local criminal, say someone who robbed the neighborhood 7-11, it would probably be pretty likely you'd found the right guy. After all, it's pretty darned rare for two unrelated people to look so similar that even after close inspection you mistake one for another. The trouble is, even a very rare event becomes extremely likely when you're sampling the entire world: if there's only a one-in-a-hundred-million chance that two randomly-chosen people look really similar, then every person on the planet has approximately 67 doppelgangers running around. It's not that we can't distinguish between those one-in-a-hundred-million pairs, it's just that our brains only specialize our ability to recognize things as far as necessary. That's why people from another part of the world "all look alike" until you actually start to live with them, and why it becomes trivial to distinguish between 'identical' twins once you've known them for a couple months. But nobody's brain is specialized enough to distinguish between one-in-a-hundred-million chance similarity, because it never comes up in our lives.
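The arithmetic is worth making explicit. A couple of lines, using the post's own illustrative numbers, shows how a one-in-a-hundred-million coincidence becomes a sure thing at planetary scale:

```python
# Expected number of look-alikes per person, using the post's
# illustrative one-in-a-hundred-million figure (an assumption).
p_similar = 1e-8      # chance two random people look near-identical
world_pop = 6.7e9     # rough world population

print(f"~{world_pop * p_similar:.0f} doppelgangers per person")  # ~67
```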
Interpol seems to recognize they're taking a risk in publishing these photos, and they caution that law enforcement would have to positively identify any suspects (with additional photos and corroborating data at their disposal). Still, I see two risks where innocent look-alikes could get caught up in this. The first is that, regardless of the advice to wait for positive ID, people are naturally going to be suspicious of any look-alike, and may take action. Though fear of terrorism now tops the list, fear of child molesters in our midst will always be right up there in terms of emotion-stirring boogiemen. The second, even more dangerous risk, is that Interpol itself will try to apply their usual "one-in-a-million" criteria for reasonable doubt to a one-in-a-hundred-million situation. That, I'd argue, would repeat the fiasco the FBI created when they arrested a Portland lawyer for the Madrid bombing, based on a close partial-fingerprint match and (presumably) the fact that he was Muslim.
Bruce Schneier's Crypto-Gram points to some impressive work done by researchers at the University of Washington showing how Apple's Nike + iPod kit can be used to track people. The kit consists of a transmitter that you put in your shoe and a receiver you plug into your iPod. The transmitter wakes up whenever it gets shaken and sends out pedometer info every second, and the receiver then uses that info to give voice and visual feedback on your pace and how far you've run. The UW team discovered that each transmitter sends out a unique ID so the receivers can distinguish among several in the area, and then built several PDA-sized units to listen for IDs and either log the data to flash memory or retransmit it over Wi-Fi or SMS. They also built software that would trigger a USB camera whenever a particular ID went by, and wrote a visualization tool that shows either historical or real-time overlays of sensor IDs and/or pictures taken on top of Google Maps. Details are in their paper, and they also have a video.
The threat models they lay out aren't government surveillance so much as jealous/ex-boyfriends and stalkers, and to some extent professional thieves and muggers, unethical organizations tracking their members (or their competition's members), and stores tracking their customers. Except for muggers (which just involves detecting whether a passing jogger is likely to have an iPod or other cool gadgets on them), all the scenarios they discuss involve the use of a network of their relatively cheap sensors, each one adding a single location to the overall surveillance network. A stalker would place trackers at strategic locations, then wait for them to phone home with the unique IDs they see. To link a unique ID with a particular person he just has to get close to his target (or for that matter just watch her jog by) and then note the ID that's being broadcast. Or he can leave one tracker in the bushes by his target's front door and note what ID it picks up (he learns when she comes and goes that way, too). And since consumers are encouraged to "just drop the sensor in their Nike+ shoes and forget about it" the trackers will work even when the target isn't actually jogging or using the device.
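It doesn't take much code to build the tracker half of that scenario. Here's a hypothetical sketch (the names, fields and radio interface are all invented, not the UW team's actual software) of a sensor node that logs every transmitter ID it hears, stamped with the time and the node's fixed location:

```python
# Hypothetical sensor-node sketch: invented names, not the UW code.
import time
from dataclasses import dataclass

@dataclass
class Sighting:
    sensor_id: str      # unique ID broadcast by a Nike+iPod transmitter
    timestamp: float    # when it was heard
    location: str       # where this sensor node is planted

class FakeRadio:
    """Stand-in for real receiver hardware; yields canned IDs."""
    def receive(self):
        yield from ["0x1A2B3C", "0x4D5E6F"]

def listen(radio, location, log):
    """Append a Sighting for every transmitter ID the radio decodes."""
    for sensor_id in radio.receive():
        log.append(Sighting(sensor_id, time.time(), location))

log: list[Sighting] = []
listen(FakeRadio(), "park entrance", log)
print(log)
```

Once the stalker has watched his target jog by a single time and noted her ID, every entry in a log like this becomes a (time, place) fix on her.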
The work is impressive, but I feel like by focusing on the Nike + iPod design it's pointing to the smoke instead of the fire. Yes, Apple probably could have designed their system to make this sort of tracking more difficult. Ditto the RFID chips in smart cards, passports, highway toll-payment boxes, quick-payment key fobs and consumer products, not to mention Bluetooth devices and cellphones. But the main technology trend that's making this sort of tracking possible, I would argue, is not the plethora of remotely-readable unique IDs we carry everywhere we go so much as the small, cheap hardware that even a moderately technical attacker can turn into his very own sensor network. RFID and transmitters are a ready-made "fingerprint" that such sensor networks can read easily, but as machine vision and pattern recognition technology improves there will be an increasing number of features that uniquely identify you to a sensor network, including minor differences in hardware you carry, how you walk or what you look like. This is not to say we shouldn't encourage companies to make tracking by RFID harder to do, but I think it's at best going to buy us 5-10 years before you'll be able to buy your own automatic person-tracking sensor network at any online spy-shop. We'd better be thinking now about what kind of social and legal systems we'll want once that day comes.
Psiphon is a new anti-censorship web proxy just released by U. Toronto. People outside of a censoring country run a Psiphon server, and people inside a censoring country (<cough>China</cough>) just go to the server's URL and enter whatever URL they want to visit in the page's own virtual toolbar. The server handles encryption and proxying of the web pages automatically, and gets around URL-based and content-based filters.
One interesting aspect is that they're not doing anything to help people find a particular proxy. Instead they're relying on social networks, which is to say word-of-mouth:
A social network is a structure of nodes - usually individuals or organizations - that have ties between them, such as families or groups of friends or colleagues. psiphon leverages social networks as the discovery mechanism. The psiphonode administrator and the psiphonite(s) have a trust relationship and the web address is known only to these trusted people. Each network of psiphonode/psiphonites chooses how to grow the network. It can be small and extremely private or large and relatively semi-private. It depends on the specific context and needs of the psiphonites.
The nice thing about this setup is that it doesn't need any new routing or discovery infrastructure (since it relies on people to set up and share servers themselves), and it makes it harder for governments to find Psiphon servers and block their ports.
(Props to Infothought for the link.)
Today's LATimes reports that Brandon Mayfield just won his $2 million lawsuit against the FBI for his wrongful detention in 2004. Brandon is the Oregon lawyer whom the FBI pinched in connection with the 2004 Madrid train bombings because a partial fingerprint found in Madrid was a "close enough" match to his own. One quote from the article:
Michael Cherry, president of Cherry Biometrics, an identification-technology company, said misidentification problems could grow worse as the U.S. and other governments add more fingerprints to their databases.
The problem is emphasized in the March report from the Office of the Inspector General on the case, which reads much like a Risks Digest post and has a lot of take-home lessons. The initial problem was that the FBI threw an extremely wide net by running the fingerprints found in Madrid through the Integrated Automated Fingerprint Identification System (IAFIS), a database that contains the fingerprints of more than 47 million people who have either been arrested or submitted fingerprints for background checks. With so many people in the database the system always spits out a number of (innocent) near-matches, so the FBI then goes over the results. The trouble is that in this case (a) Mayfield's fingerprints were especially close, and (b) the FBI examiner got stuck in a pattern of circular reasoning, where once he found many points of similarity between the prints he began to "find" additional features that weren't really in the lifted print but were suggested by features in Mayfield's own prints.
People tend to forget that even extremely rare events are almost guaranteed to happen if you check often enough. For example, even if there were only a one-in-a-billion chance of an innocent person being an extremely close match for a given fingerprint, that leaves about a 5% chance of getting such a false positive for each fingerprint checked against the database. If we were to double the size of the database, that would rise to almost 10%. This kind of problem is inevitable when looking for extremely rare events, and applies even more broadly to fuzzy-matching systems like the TSA's no-fly list and Total Information Awareness (in all its newly renamed forms), which try to identify terrorists from their credit card purchases, where they've traveled or how they spell their name.
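Here's that calculation spelled out. The one-in-a-billion figure is the post's hypothetical, and the database size comes from the IAFIS number above:

```python
# Chance that a single query against a big database produces at
# least one innocent "extremely close" match.
p_false = 1e-9          # hypothetical per-person false-match rate
db_size = 47_000_000    # approximate IAFIS size cited above

p_any = 1 - (1 - p_false) ** db_size
print(f"{p_any:.1%} chance of at least one innocent match per query")
# ~4.6% now; doubling the database pushes it toward 9%.
```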
The Guardian has a "gotcha" piece about how easy it is to crack the security on the RFID tags in the new UK passports. Bruce Schneier and Bruce Sterling have both commented favorably on the piece, but personally I don't see what all the fuss is about. The RFID chip contains a cryptographically signed digital copy of the main page of your passport, including a digital copy of your photograph. The idea is that this way you can't modify the name or paste your own photo into a stolen passport because the digital data won't match, and you can't modify the digital data because it has to be signed by the issuing country. After people expressed concerns that someone nearby could eavesdrop on the conversation between the passport and the RFID reader, they decided to encrypt the passport using your passport number, expiration date and date of birth, which is encoded using a barcode (or maybe a magnetic stripe). That way the customs official swiping your card can read the photo but someone eavesdropping on the RFID conversation can't.
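For the curious, the key-derivation step works roughly like this under the ICAO spec as I understand it. This is a simplified sketch, and the sample inputs follow the format of the spec's worked example as I recall it, not real passport data:

```python
# Sketch of ICAO Basic Access Control key derivation: the chip's
# access key is derived entirely from fields printed in the passport.
import hashlib

def check_digit(field: str) -> str:
    """MRZ check digit: weights 7,3,1 repeating over character values, mod 10."""
    alphabet = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    value = lambda c: 0 if c == "<" else alphabet.index(c)
    total = sum(value(c) * (7, 3, 1)[i % 3] for i, c in enumerate(field))
    return str(total % 10)

def bac_key_seed(doc_number: str, birth: str, expiry: str) -> bytes:
    """Key seed = first 16 bytes of SHA-1 over the fields plus check digits."""
    mrz_info = (doc_number + check_digit(doc_number)
                + birth + check_digit(birth)
                + expiry + check_digit(expiry))
    return hashlib.sha1(mrz_info.encode("ascii")).digest()[:16]

# Dates are YYMMDD; the document number is padded to nine characters.
print(bac_key_seed("L898902C<", "690806", "940623").hex())
```

The point the Guardian piece turns on is that all three inputs are printed in the passport or guessable (birth and expiry dates have very little entropy), so anyone who has seen the data page, or can make good guesses about it, can derive the key.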
There's only one concern the story mentions that makes even vague sense to me:
This means that each time you hand over your passport at, say, a hotel reception or car-rental office abroad to be "photocopied", it could be cloned with equipment like ours. This could have been done with an old passport, but since the new biometric passports are supposed to be secure they are more likely to be accepted without question at borders.
Certainly people trust computers a little too much, but this sounds like something proper training would solve. The idea that the RFID chip can be cloned doesn't seem like that difficult a concept to teach.
So what am I missing here?
Here is the list of 65 US Senators that voted to grant the president the right to lock non-citizens up indefinitely without the right to trial or to challenge the legality of their detention, that declared that if they ever are given a trial then hearsay and evidence obtained through coercion may be used against them, and that gave amnesty to those who authorized or committed illegal torture and abuse.
I find it horrific that so many of those we've elected to protect our fragile democracy are so quick to grant powers that belong only to kings, dictators and despots.
The front page of yesterday's SJ Merc includes a great graphic showing how almost all of San Jose would be off limits to all registered sex offenders if California's Proposition 83 is enacted by voters this November. The proposition would make it illegal for a registered sex offender to live within 2,000 feet of a park or school (regardless of whether his or her crime involved children) and would require him or her to wear a GPS ankle bracelet for life.
From my brief read of the law defining sex-offender registration (IANAL!) it looks like convicted criminals are forced to register if they're found guilty of rape, or by order of the court for any other crime if the court finds that "the person committed the offense as a result of sexual compulsion or for purposes of sexual gratification." That's not a sympathetic bunch of people, and though I'm disturbed by the idea of treating people as guilty of FutureCrime (punishing people for what they might do in the future) I can understand the motivation. But as the Merc story points out, banishing registered sex offenders from most parts of the city will just lead to more sex offenders becoming homeless, cut off from the support groups and social networks that help keep them from committing crimes again.
From today's district court ruling that the NSA warrantless wiretapping program is illegal:
The Government appears to argue here that, pursuant to the penumbra of Constitutional language in Article II, and particularly because the President is designated Commander in Chief of the Army and Navy, he has been granted the inherent power to violate not only the laws of the Congress but the First and Fourth Amendments of the Constitution, itself.
We must first note that the Office of the Chief Executive has itself been created, with its powers, by the Constitution. There are no hereditary Kings in America and no powers not created by the Constitution. So all “inherent powers” must derive from that Constitution.
The President of the United States, a creature of the same Constitution which gave us these [the First and Fourth] Amendments, has undisputedly violated the Fourth in failing to procure judicial orders as required by FISA, and accordingly has violated the First Amendment Rights of these Plaintiffs as well.
Today's WSJ has a story on how the FBI threatened to take away Moroccan immigrant Yassine Ouassif's green card if he didn't become an informant (behind a pay wall, sorry, a summary is here). Down at the bottom of the story is this bit:
Ms. Aklaghi [Ouassif's lawyer] says she learned more at that point about why federal authorities were so interested in him. Mr. Ouassif had been secretly recorded by an FBI informant talking to friends in a San Francisco mosque. A Homeland Security lawyer, she says, did not specify what Mr. Ouassif had said, but told her that his statements did not indicate criminal intent and were fully protected by the First Amendment. Nevertheless, his statements had landed him on the no-fly list, Ms. Aklaghi says, and led to all his subsequent travails.
So, if her information is correct, what this says is that Homeland Security is taking the position that though the First Amendment stops the government from "abridging the freedom of speech," it doesn't say anything about taking away someone's ability to board an airplane if he says something we don't like.
Homeland Security, of course, is not commenting at all, which points to the other big problem with all this nonsense: the people currently running the show are so secretive (and our congress so complicit) that it's almost impossible to find out what's actually being done in our name. Where's the transparency? Where's the freedom to be left alone when you're doing nothing wrong? This is not how the America I learned about in civics class works. We deserve better — a lot better.
Update 7/12/06: corrected spelling of Aklaghi's name.
In case you haven't been keeping score, the Bush administration claims they don't need a warrant to:
So far the administration's response to criticism that such warrantless surveillance is illegal has been to threaten the people who leaked evidence of its criminal activities with prosecution, no doubt trying to ferret out the whistleblowers by trolling through the phone logs of every reporter who's mentioned the subject.
Update: fixed typo.
Declining to answer questions about revelations that Vice President Dick Cheney argued for allowing the NSA to intercept entirely domestic telephone calls and e-mail without warrants, his spokeswoman simply responded:
"As the administration, including the vice president, has said, this is terrorist surveillance, not domestic surveillance."
The response follows last week's revelation in USA Today that the NSA has secretly collected the phone records of tens of millions of terrorists currently living in the United States.
For those of you in the Boston area, there's a day-long Workshop on Data Surveillance and Privacy Protection at Harvard on Saturday, June 3rd. Registration is free (though you need to register to attend), and it looks like a good set of speakers. (Link via Simson Garfinkel.)
The ACLU is hosting an online national town-hall meeting tonight (6pm PDT / 9pm EDT) called Our Freedom at Risk: Spying, Secrecy and Presidential Power. The ACLU has a strong opinion on the matter, obviously, but hopefully it'll still provide more light than heat. Questions are being taken via the Web, and archives will show up within 24 hours at the ACLU town hall site.
I saw this sign on the MBTA a few days ago, a part of their whole post-9-11-world Transit Watch Program.
Is it just me, or did this whole "Traitors Are Everywhere"
EFF is sounding a warning about Google Desktop's new Search Across Computers feature. The feature itself sounds nice: one search command to find all your documents and viewed webpages regardless of which computer they're on. Trouble is, Google does it by uploading all those sensitive documents to its own servers so they're available even when your laptop or other computers are offline.
I think Google has a pretty good moral compass, but (as I mentioned when GMail came out) there are fundamental risks with this sort of centralized system regardless of the trustworthiness of the company running it. As EFF's alert points out, many legal protections enjoyed by information stored on your own home computer are lost when it is stored with an online service provider:
I can imagine other legal and practical questions as well. For example, if Google Desktop wound up uploading a researcher's company-confidential tech reports, would that count as "disclosure" and thus prevent him from filing for a patent on his work? And if a laptop running the software is opened in a foreign airport (e.g. China), can the local Google office be subject to a subpoena under that country's own laws?
Saturday's Washington Post article on the NSA's domestic eavesdropping program has a short aside I find rather chilling (emphasis mine):
Even with 38,000 employees, the NSA is incapable of translating, transcribing and analyzing more than a fraction of the conversations it intercepts. For years, including in public testimony by Hayden, the agency has acknowledged use of automated equipment to analyze the contents and guide analysts to the most important ones.
According to one knowledgeable source, the warrantless program also uses those methods. That is significant to the public debate because this kind of filtering intrudes into content, and machines "listen" to more Americans than humans do. NSA rules since the late 1970s, when machine filtering was far less capable, have said "acquisition" of content does not take place until a conversation is intercepted and processed "into an intelligible form intended for human inspection."
When I was in the Software Agents Group at MIT in the late '90s, we had lots of discussion about whether people would be legally responsible for the actions of automated software programs (agents) they use. If I tell eBay's software to bid up to a given price, can I be held to that agreement even though the "agent" did the bidding and not me? If I knowingly write and unleash an intelligent virus, am I responsible for the damage it causes? The answer to these questions has to be yes if responsibility means anything in our increasingly automated society, and the question would be completely ludicrous were it not for the complexity of what software can now do without our direct intervention. Imagine the murder defense "I didn't kill those people, my gun did!" And yet, this is the logic being used by the NSA when they claim eavesdropping only counts if the interception is shown to a human. "I didn't spy on innocent Americans, my software did it!"
There are times where being watched by electronic eyes is preferable to being watched by humans. For example, I trust that Google's automated system will only use my email to generate relevant advertisements (and nothing else) more than I would if they had humans reading and tagging every email by hand. However, in the NSA's case their software is doing exactly what they themselves are prohibited from doing both by statute and the Fourth Amendment, namely looking for illegal activity by trolling through mountains of private domestic communications without probable cause. Even if the software only produced a human-readable summary or a ranked list of suspicious people, that output would be tainted just as surely as if an NSA analyst had produced it.
Boing Boing reports that Apple's iTunes 6.0.2 has a new "feature" where clicking on a song in your playlist pops up related albums on sale at the iTunes Music Store in a little window at the bottom. iTunes does this by sending the song, artist, album, genre and ID to Apple (presumably — the IP addresses are in the 69.144.123.xx range, which belongs to Akamai).
GET /WebObjects/MZSearch.woa/wa/ministoreMatch?an=Music+From+The+Motion+Picture&gn=soundtrack&kind=song&pn=Austin+Powers+-+The+Spy+Who+Shagged+Me HTTP/1.1
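Decoding that query string takes one call to the Python standard library and shows exactly what goes over the wire:

```python
# Decode the MiniStore request's query string (captured above).
from urllib.parse import parse_qs

query = ("an=Music+From+The+Motion+Picture&gn=soundtrack&kind=song"
         "&pn=Austin+Powers+-+The+Spy+Who+Shagged+Me")
for key, values in parse_qs(query).items():
    print(key, "=", values[0])
# an = Music From The Motion Picture
# gn = soundtrack
# kind = song
# pn = Austin Powers - The Spy Who Shagged Me
```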
This is rightly being decried as spyware (really, how could it not be?) though at least iTunes will stop announcing what you're listening to if you close the mini-store window (using the new "box with up-arrow" button in the lower-right corner).
My PhD thesis was all about designing software that provides information based on what you're doing, and I have a soft spot for applications like this, but I see three fundamental problems with what Apple has done here. First and most importantly, the mini-store is for their benefit rather than mine — they're taking advantage of the impulse buyer in all of us, hoping we'll make purchases we wouldn't make if we had time to think about it. Second, their application requires that personal (if not personally identifiable) information be sent over the net rather than processed locally, with no indication of how long the info is kept or how it might be used. Music collections are personal things, and even if I liked the mini-store application I'd think twice about clicking on a lewd song for fear of how that info might be used or eventually tied back to me. Finally and most obviously wrong, they're snooping without asking, which is just plain rude and makes me distrust the company and the software.
Update 1/12/05: As Charles points out in the comments, MacOSXHints reports that Apple has told them that absolutely no information is (currently) being collected from the MiniStore. I'm glad to hear it (and would have been a little surprised if it was otherwise), but it doesn't change my not liking such data going beyond the bounds of my own domain. If the Mini-store was actually useful to me I might be willing to make that sacrifice, but as it is it's just annoying.
The thing that scares me about data mining is not that super-secret information about me is revealed — my Amazon wish-list doesn't contain anything I'd be embarrassed or concerned about if it were seen by any of my friends or, for that matter, 99% of the other people in the world. And odds are good that anyone bothering to look me up by name or go to my website will fall into that category. The trouble is that if I pop up in a trolling expedition at all, it's much more likely the troller is among the 1% of people I would be upset about reading my wish-list. Ed McMahon doesn't mine the Internet to pick winners of the Publishers Sweepstakes, but over-zealous FBI agents do look for people promoting the wrong politics, companies look for suckers to blast with seemingly perfect-for-you product announcements, con artists look for rich recently-widowed women above a certain age, and pedophiles look for young latch-key kids with their own webcams.
Bush's eavesdropping program was explicitly anticipated in 1978, and made illegal by FISA. There might not have been fax machines, or e-mail, or the Internet, but the NSA did the exact same thing with telegrams.
We can decide as a society that we need to revisit FISA. We can debate the relative merits of police-state surveillance tactics and counterterrorism. We can discuss the prohibitions against spying on American citizens without a warrant, crossing over that abyss that Church warned us about twenty years ago. But the president can't simply decide that the law doesn't apply to him.
This issue is not about terrorism. It's not about intelligence gathering. It's about the executive branch of the United States ignoring a law, passed by the legislative branch and signed by President Jimmy Carter: a law that directs the judicial branch to monitor eavesdropping on Americans in national security investigations.
It's not the spying, it's the illegality.
Personally, I think it's the illegality and the spying, but in the name of keeping the debate clear I'm happy to keep the two arguments separate.
From the Sydney Morning Herald:
Jane, from Coogee, was surprised to find three police on her bus asking to inspect mobile phones. Each took a phone at random and scrolled through messages for five or ten minutes. Everyone obeyed. "The people were perfectly friendly about it," she said. "I thought it was a bit weird and a breach of privacy. But I didn't say anything. Nobody did."
No, it's not about terrorism, it's about potential racial violence, but it's still that nasty abuse-of-rights-in-the-name-of-safety-from-unknown-boogeymen vibe. Of course, such flagrant violations of our rights without a court order could never happen in the US. In the US, we'd never even know they'd read our text messages without a court order until we read about it in the New York Times.
(Thanks to Omri for the link.)
Wondering why your daughter, wife or girlfriend stays out so late? Wonder no more with new forget-me-not panties, the underwear that gives her comfort and you peace of mind:
These panties will monitor the location of your daughter, wife or girlfriend 24 hours a day, and can even monitor their heart rate and body temperature...
These "panties" can trace the exact location of your woman and send the information, via satellite, to your cell phone, PDA, and PC simultaneously! Use our patented mapping system, pantyMap®, to find the exact location of your loved one 24 hours a day.
(Thanks to Dan on the wearables list for the link.)
Did you fly in June 2004? If so (or if you have a similar name to somebody who did) then the Transportation Security Administration may have secretly collected information on you from airline reservation systems and credit bureaus. Wouldn't it be nice to find out what they know?
Luckily, we still live in a free country — you can just ask! EFF is making it easy to do just that, and you can help them reverse-engineer exactly what the TSA has been up to at the same time.
I took this photo last week at the entrance to MARTA, Atlanta's subway system.
It wasn't so long ago that I would have taken a picture like this one in a foreign country as a reminder of how different life would be without our Bill of Rights. It's amazing how quickly we're letting it slip away from us...
For years science-fiction author David Brin has been preaching that privacy as we know it is essentially dead, and rather than mourn our loss of shadow we should embrace the light — and make sure it shines in the bedrooms of power as much as it shines in our own. The Cameras Are Coming! has been his battle cry.
I remember hearing Brin speak at the Media Lab sometime in the late 90s and thinking he was completely off the mark if he thought ubiquitous lack of privacy was anything but trouble — I saw it as giving an expert marksman (powerful individuals, companies and governments) and someone who has never held a gun before (us peons) the same high-end rifle and saying "there you go, now you're both equal."
I've not gone completely over to Brin's position, but events in the intervening years have brought me a little closer. First, I've seen no sign of privacy erosion even slowing down, and every sign that information wants to be free and unfettered is becoming a new physical law for the 21st century. (In the spirit of Free as in beer and Free as in freedom, this would be the Free as in virus point of view.) The same forces that erode top-down power and barriers to free expression are the forces that erode our privacy — I can't think one is inevitable without accepting the other as well. Second, things like the Abu Ghraib scandal give me at least a little hope that light will occasionally leak into even the more protected dens, and that we peons are slowly learning how to shoot. I'm not totally convinced by any stretch (Abu Ghraib, I'll point out, has so far only led to punishment of low-level participants), but it's something.
I came in halfway through Brin's talk in the opening debate at CFP, but I did note one quote I especially liked (slightly paraphrased here):
Give the watchdog better glasses and more freedom, then yank the choke chain to make sure it remembers that it's a dog and not a wolf.
The fundamental question for every free society is how to ensure we keep hold of that choke chain. Shining light in the bedrooms of power is one part of the answer, I think, but it's not enough.
I've some thoughts of what else is needed, but they involve questions about free will — and anyone who's heard me rant in person on the topic knows I'd never get to sleep if I started down that path tonight...
Personally I think identity theft is one of the biggest boons to privacy advocates in the past decade, because it finally answers the question "why should I care about privacy if I don't have anything to hide?" There are several other examples and classes of threat that I think are equally important though:
One disappointment I have about CFP is how privacy (step two of the privacy chain I talked about last post) is overshadowing discussion about freedom. I think privacy is important and worth fighting to protect, but I mostly see privacy as a way to keep others from gaining power over me (and thus becoming able to harm me) rather than as an end in itself. Sure I'd rather not have people posting nude pictures of me on the net, but I'm a lot more concerned that information collected about me isn't used to steal my identity or deny me a loan, employment or insurance. The debate between privacy-as-means-to-an-end folk like me and privacy-as-intrinsically-valuable folk has played itself out several times over the past few days.
Just had a panel on Privacy Risks of New Passport Technologies, discussing among other things the new RFID tag the US is rolling out for passports in the coming months. The tags will contain a digitally signed copy of your photo plus all the information on your data page except the signature, and will be readable at a distance. The readers are designed to read chips from about ten centimeters away, but the danger is that it's possible to design devices that read the tag from longer distances. The exact distances possible aren't clear to me, but a speaker from the ACLU demonstrated reading a passport with the type of RFID being used from three to four feet away. The State Department is now promising the passport cover will include a Faraday cage to prevent reading when the passport is closed, but that won't help when the passport is opened.
The dangers really boil down to someone snooping or stealing one's identity at a distance without one's knowledge or consent:
Sounds like pretty big flaws in something that is, in theory, designed to make us safer, all of which would be solved by simply making the data communicate only through physical contact. The lone proponent on the panel was Deputy Assistant Secretary of State for Passport Services Frank Moss. I was rather unimpressed with his answers — many parts sounded like a song and dance surrounded by apologies for not really understanding the technology (and thus not being able to explain any details). However, he did answer the one main question I had: why the heck did the US push so hard for passports that could be read at a distance? His answer seems to boil down to this: it was cheaper and a little more flexible. Specifically:
I'm sympathetic to the difficulties in standardizing over a hundred national documents, but that's a piss-poor excuse given the potential security holes it opens up. The follow-up argument of "we were stupid when we pushed for it, but it's too late now so tough" is equally unacceptable in my mind.
Update 4/14/05: Ed Felten at Freedom to Tinker was at the same panel and has posted his own summary. His conclusion about the reason we're getting stuck with a contactless system is in line with my own: "In short, this looks like another flawed technology procurement program."
After a couple days soaking in privacy issues I'm starting to break everything into a three-part chain: identification, information and actions. (Appropriately enough for this conference, these are fairly well associated with computers, privacy and freedom respectively.)
Many people have just a visceral negative reaction to someone knowing too much about them, but the consequences are mostly in part 3 — that's where you get stung. That said, sometimes the best way to stop something bad happening in step 3 is to stop steps 1 or 2 from happening, and often you never even find out that you didn't get a loan or a job due to a privacy violation.
Veronica Pinero's presentation, Panopticism vis-a-vis criminal records, had an interesting graphic which I've reproduced on the right. It's a map of all the sex offenders living within a 10-block radius of the CFP conference hotel.
The thing that strikes me is how fear-inducing this list is, both because of what it says and what it leaves out. It includes a map, showing that we're surrounded by no less than 39 sex offenders, and gives their names, mean-looking photos, and the name of the crime they were convicted of. What it leaves out is exactly where they are (addresses are given only to the nearest hundred block) and any sort of detail about the crime that might help people figure out whether they or their children are actually at risk. I expect most of these guys did horrible things (is there any way "child molestation" can be better than it sounds?). Some I have no idea about, like "indecent liberties," or even whether "child rape" includes a 19-year-old having sex with his 17-year-old girlfriend. More importantly, I don't have any way to tell how frightened I should be or what I should do about it. Avoid downtown? Lock myself in my house? Buy duct tape? What good is this information to us, beyond making us even more afraid than we already are?
I'll be blogging from Computers, Freedom and Privacy 2005 the next few days, so expect a bunch of posts under the "Big Brother" category.
It's nice to see that censorware continues to protect public school children from being exposed to things like porn, terrorism, and criticism of the No Child Left Behind Act. Jamie McKenzie (creator of the blocked site and editor of the educational technology webzine From Now On) has the details. Link by way of Seth Finkelstein, who diagrams the all-too-common circle of finger-pointing.
The Union of Concerned Scientists just released the results of a survey of US Fish and Wildlife Service field scientists that reveals serious political pressure to self-censor and even to exclude or alter technical information that might lead to species being protected. (It's telling that there was a 30% response rate even after a directive was sent out instructing scientists not to respond, even from home on their own time.)
From the executive summary:
- Large numbers of agency scientists reported political interference in scientific determinations. Nearly half of all respondents whose work is related to endangered species (44 percent) report that they have been directed for non-scientific reasons to refrain from making findings that protect species. One in five have been instructed to compromise their scientific integrity, reporting that they have been “directed to inappropriately exclude or alter technical information from a USFWS scientific document.” In the Southwest region, that number was even higher—closer to one in three.
- Agency scientists reported being afraid to speak frankly about issues and felt constrained in their role as scientists. 42 percent said they could not publicly express “concerns about the biological needs of species and habitats without fear of retaliation,” while 30 percent were afraid to do so even within the agency. A third felt they are not allowed to do their jobs as scientists.
- There has been a significant strain on staff morale. Half of all scientists reported that morale is poor to extremely poor; only 12 percent believed morale to be good or excellent. And 64 percent did not feel the agency is moving in the right direction.
- Political intrusion has undermined the USFWS’s ability to fulfill its mission. Three out of four staff scientists felt that the USFWS is not “acting effectively to maintain or enhance species and their habitats.”
So often the system gives us a choice between acquiescing to a little erosion of liberty or taking it on the chin and fighting for the liberty of us all. Salute to John Perry Barlow, the latest hero in the good fight.
(I'm going to skip my armchair legal reasoning for why it's important that the government not have the right to use the excuse of "we're looking for terrorist threats" to search someone's ibuprofen bottle for drugs without a warrant, and why it's important that evidence found during such illegally-conducted searches not be admissible — if you don't know the arguments, check out some legal discussion on the Exclusionary Rule.)
Jumping briefly to media technology, when I cross this and my previous post in my head, I can't help but add a new tech toy to my Christmas wish list: a suitcase that automatically starts recording video and audio whenever it's opened, so when I recover my bag I can see just how intimate bag-searchers are getting with my personal effects. Think of it as a cross between a radar-detector and an automatic Rodney King video camera for privacy advocates.
Amtrak is now starting to perform random ID checks on their trains, "as part of a broader program to improve security." As Bruce Schneier points out, "this works because, somehow, terrorists don't have IDs."
From the article: The security program is the result of a federal directive, issued in May, to protect rail passengers from terrorism. I wonder if this is an expansion of the same secret, need-to-know-basis directives that John Gilmore is suing over.
John Gilmore just filed his case before the 9th Circuit Court of Appeals in his ongoing struggle to answer two basic questions:
Given the post-wardrobe-malfunction furor over at the FCC, it's nice to see things fall on the side of free speech every now and then:
The November 20, 2001 episode involves a scene depicting Buffy kissing and straddling Spike shortly after fighting with him. Based upon our review of the scene, we did not find that it is sufficiently graphic or explicit to be deemed indecent. Given the non-explicit nature of the scene, we cannot conclude that it was calculated to pander to, titillate or shock the audience. Consequently, we conclude that the material is not patently offensive as measured by contemporary community standards for the broadcast medium.
(Props to Declan McCullagh's Politech mailing list for the link.)
Tom Ridge has declared that CAPPS-II is dead:
Asked Wednesday whether the program could be considered dead, Ridge jokingly gestured as if he were driving a stake through its heart and said, ''Yes.''
He cited the privacy concerns, particularly those arising from recently proposed regulations that would have required airlines to hand over information about passengers as part of a test of the program. Critics in Congress also complained that terrorists using fake identities could easily evade the system.
...but besides the fact that it was horribly susceptible to abuse, wouldn't do anything to make our skies more secure and made even the most government-trusting citizen start looking for the jack-booted thugs, what wasn't to like?
A month ago it was reported that the Transportation Security Administration was trying to expunge a contractor's congressional testimony from the public record and all web copies. The contractor, James McNeil of McNeil Technologies, testified about how his red-team of undercover testers were able to smuggle guns through airport security at the Rochester, NY airport by hiding them under bandages.
According to today's Wall Street Journal, they're at it again, now asking that McNeil's comments that the TSA is screening for drugs and kiddie porn also be removed from testimony:
CENSORED: Transportation Security Administration asks a House panel to redact from a hearing record a contractor's remarks that TSA has airport screeners also looking for drugs and child pornography. It "softens the focus on security," testified CEO James McNeil of McNeil Technologies, of Springfield, Va. TSA says screeners simply are told to alert police to such items. McNeil says TSA hasn't complained to him.
Now you can calculate the retail-value of your privacy to the penny with Swipe's privacy calculator. (They also have a tool for seeing what info is being stored on the barcode on the back of your driver's license.)
Jane Black's Privacy Matters column in BusinessWeek this week takes a look at the privacy backlash against the MATRIX statewide database and similar programs:
BIRTH OF BIG BROTHER. There's no doubt that MATRIX raises privacy red flags, though after an extensive briefing by the Florida Law Enforcement Dept., which is spearheading the project, I believe that it's little more than an efficient way to query multiple databases.
The real furor over MATRIX demonstrates something much more important — and surprising: Privacy advocates have gained a lot of ground in the two years since September 11. And the pendulum is swinging back in their favor.
"The MATRIX is not whirring away at night to create a list of suspects that is placed on my desk every morning," says Zadra [chief of investigation at the Florida Law Enforcement]. "All it does is dynamically combine commercially available public data with state-owned data [such as driver's license information, sexual-predator records, and Corrections Dept. information] when queried. I can't imagine any citizen getting angry that we're using the best tools available to efficiently and effectively solve crimes."
Nobody has a problem with law enforcement using the best tools to solve crimes. Everybody has a problem with law enforcement using those tools to harass innocent citizens and suppress free expression. It's because of this potential for abuse that we have things like the Fourth Amendment and laws preventing the CIA from spying on US citizens. The trouble with all these combined public/commercial database plans like MATRIX, CAPPS-II and TIA is that commercial databases have no such protections — companies can and will do just about anything to gather information about us, and it's all perfectly legal. Why should I care whether it's the CIA or MasterCard that is telling the government what breakfast cereal I eat?
I've finally gotten around to reading up on Trusted Computing (a process that, ironically enough, was interrupted by my being rootkitted a couple of weeks ago). I'd heard some pretty unsettling things about trusted computing, but now that I've done some digging... well it's still pretty disturbing.
Trusted Computing (TC) is one of several names for a set of changes to server, PC, PDA and mobile phone operating systems, software and hardware that will make these computers "more trustworthy." Microsoft has one version, known as Palladium or Next Generation Secure Computing Base (NGSCB), and an alliance of Intel, Microsoft, IBM, HP and AMD known as the Trusted Computing Group has a slightly different one called either trusted computing, trustworthy computing, or "safer computing." Some parts of Trusted Computing are already in Windows XP, Windows Server 2003, and in the hardware for the IBM Thinkpad, and many more will be in Microsoft's new Longhorn version of Windows, scheduled for 2006.
The EFF has a nice introduction to trusted computing systems, written by Seth Schoen, and Ross Anderson has a more detailed and critical analysis. A brief summary of the summary is that a trusted computer includes tamper-resistant hardware that can cryptographically verify the identity and integrity of the programs you run, verify that identity to online "policy servers," encrypt keyboard and screen communications, and keep an unauthorized program from reading another program's memory or saved data. The center of this is the so-called "Fritz" chip, named after Senator Fritz Hollings of South Carolina, who tried to make digital rights management a mandatory part of all consumer electronics. (He failed and is retiring in 2004, but I've no doubt there will be attempts to pass similar laws in the future.)
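The integrity-checking piece rests on a simple idea. As the TCG design is usually described, each boot stage hashes the next stage into a tamper-resistant register before handing over control, so the final register value commits to the entire software stack. Here's a simplified sketch of that "extend" operation (details and digest sizes vary by version, and the stage names are invented):

```python
# Simplified sketch of a TPM-style measured-boot chain.
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """PCR-style extend: new = SHA-1(old register || SHA-1(component))."""
    return hashlib.sha1(register + hashlib.sha1(component).digest()).digest()

pcr = b"\x00" * 20  # the register starts zeroed at power-on
for stage in (b"firmware image", b"bootloader image", b"kernel image"):
    pcr = extend(pcr, stage)

print(pcr.hex())
# A policy server that knows the approved images can verify this value
# (signed by the chip) and refuse service if anything was changed.
```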
When most people think about computer security they think about virus detectors, firewalls and encrypted network traffic — the computer analogs to burglar alarms, padlocks and opaque envelopes. The Fritz chip is a different kind of security, more like the "political officer" that the Soviet Union would put on every submarine to make sure the captain stayed loyal. The whole purpose of the Fritz chip is to make sure that you, the computer user, can't do anything that goes against the policies set by the people who wrote your software and/or provide you with web services.
There are many people who would like such a feature. Content providers such as Disney could verify that your version of Windows Media Player hasn't had digital rights management disabled before sending you a decryption key for a movie. Your employer could prevent email from being printed or read on non-company machines, and could automatically delete it from your inbox after six months. Governments could prevent leaks by doing the same with sensitive documents. Microsoft and AOL could prevent third-party instant-message software from working with the MSN or AIM networks, or lock-in customers by making it difficult to switch to other products without losing access to years worth of saved documents. Game designers could keep you from cheating in networked games. Distributed computing and mobile agents programs could be sure their code isn't being subverted or leaked when running on third-party systems. Software designers could verify that a program is registered and only running on a single computer (as Windows XP does already), and could even prevent all legitimate trusted computers from reading files encrypted by pirated software. Trusted computing is all about their trust, and the person they don't trust is you.
End users do get a little bit of "trust" out of trusted computing, but not as much as you might think. TC won't stop hackers from gaining access to a system, but it could be used to detect rootkits that have been installed. TC also won't prevent viruses, worms or Trojans, but it can prevent them from accessing data or keys owned by other applications. That means a program you download from the Internet won't be able to email itself to everyone in your (encrypted) address book. However, TC won't stop worms that exploit security holes in MS Outlook's scripting language from accessing your address book, because Outlook already has that permission. In spite of what the Trusted Computing Group's backgrounder and Microsoft's Palladium overview imply, TC won't help with identity theft or computer thieves physically accessing your data any more than current public key cryptography and encrypted file systems do.
As long as you agree with the goals of the people who write your software and provide your web services, TC isn't a bad deal. After all, most people don't want people to cheat at online games and can see the value of company email deletion policies. The same can be said of the political officer on Soviet submarines — they were great as long as you believed in what the Communist Party stood for. And unlike Soviet submarine commanders, you won't get shot for refusing to use TC on your computer. Your programs will still run as always, you just won't be able to read encrypted email from your customers, watch downloaded movies, or purchase items through your TC-enabled cellphone. Some have claimed that this is how it should be, and that the market will try out all sorts of agreements and those that are acceptable to both consumers and service providers will survive. That sounds nice in theory, but doesn't work when the market is dominated by a few players (e.g. Microsoft for software, wireless providers for mobile services, and the content cartel for music and movies) or when there are network externalities that make it easy to lock in a customer base (e.g. email, web, web services and electronic commerce). What choice will you have in wordprocessors if the only way you can read memos from your boss is by using MS Word? What choice will you have in stereo systems when the five big record companies announce that new recordings will only be released in a secure-media format?
Of course, even monopolies respond to strong enough consumer push-back, but as Ross Anderson points out there are subtle tricks software and service providers can pull to lock in unwary consumers. For example, a law firm might discover that migrating years of encrypted documents from Microsoft to OpenOffice requires sign-off for the change by every client that has ever sent an encrypted email attachment. That's a nasty barrel to be over, and the firm would probably grudgingly pay Microsoft large continuing license fees to avoid that pain. These kinds of barriers to change can be subtle, and you can bet they won't be a part of the original sales pitch from Microsoft. But then what do you expect when you invite a political officer into your computer?
It seems the Transportation Security Administration is still determined to go forward with their test of the Computer Assisted Passenger Prescreening System (CAPPS II) with live data, even if it means forcing airlines to cooperate. Airlines are understandably hesitant, since Delta Airlines withdrew support after facing a passenger boycott and JetBlue is now facing potential legal action for handing over passengers' data to a defense contractor without passenger knowledge or consent.
For those who haven't heard about CAPPS-II, the idea is to replace the current airline security system, where passengers' names are checked against a no-fly list and people with "suspicious" itineraries like one-way flights are flagged for extra search. The TSA has released a disclosure under the Privacy Act of 1974, and Salon published a nice overview on the whole debate a few weeks ago. The ACLU also has a detailed analysis. Extremely briefly, the new system would work like this:
Number 6 is the part that really scares people, because the TSA refuses to say anything about how the (classified) black-box computer system will identify terrorists. It could be based on racial profiling, political ideology, or the I Ching, and no one would ever know.
There's a lot of speculation that the whole "airline security" story is just an excuse to collect travel information from everyday citizens for use in something akin to the Total Information Awareness project that was just killed (or at least mostly just killed) by Congress last week. I'm of two minds on that theory. On the one hand, I can't believe the people at the TSA would really be so stupid as to think something like CAPPS-II would work for the stated purpose, so they must have ulterior motives. On the other hand, maybe I'm being too generous and they really are that stupid, or at least have been deceived by people a little too high on their own technology hype. Of course, there might be a bit of both going on here.
Too many details are left out of the TSA's description of CAPPS-II to do a full evaluation, but even with what they've disclosed there are some huge technological issues:
Given that Congress has just moved to delay CAPPS II until the General Accounting Office makes an assessment, I can only hope they'll have similar questions and concerns. This system is either lunacy or a boondoggle to keep a database on the travel habits of every single American — neither is a comforting option.
Since March 2002, the Reporters Committee for Freedom of the Press has released a semiannual report on how the War on Terrorism is affecting "access to information and the public's right to know." The fourth edition of this report, Homefront Confidential, has just been released.
The 89-page report ranks threats to a free press on the same color-code used by the Department of Homeland Security:
Homefront Confidential is a stark contrast to the kind of "information wants to be free" rhetoric I so usually find (and, I'll admit, often speak) here in Silicon Valley. In my techno-optimistic world, information naturally flows straight from bloggers in the field to a public eager for news, with no gatekeepers between us. There is some truth to this notion, and blogs have been credited with breaking the Monica Lewinsky story and keeping Trent Lott's racist remarks about Strom Thurmond in the public eye, as well as many other successes.
But while blogs and other Internet reporting can both accelerate a story's propagation and occasionally magnify the voice of an eyewitness or whistleblower, most important news starts in the hands of a few important decision-makers. Without cooperation from the Justice Department, information about closed terrorism and immigration proceedings (including the detainees' names) is simply not available. Without access to battlefields and military officers, details about our progress in war are not available. The government also has extensive powers to keep information bottled up, from criminal prosecution of whistleblowers under the Homeland Security Act, to legal restrictions on commercial satellite imaging companies, to the use of subpoenas to force reporters to reveal their sources. These are all effective restrictions on the flow of information that aren't deterred by the blogger's nimble RSS feed.
Information wants to be free in this networked age, but the information that is most important for keeping our government in check is still behind several gatekeepers. In deciding the laws and policies of our land it's important to remember the converse of this techie creed: Yes, information wants to be free, but freedom also requires information.
With tomorrow's anniversary of 9/11, John Ashcroft wrapping up his national tour promoting the USA Patriot Act, and President Bush asking for more authority under what is being called the first of several Patriot-II laws, I highly recommend people go read Dahlia Lithwick and Julia Turner's four-part series, A Guide to the Patriot Act, published in Slate. Lithwick and Turner manage to cut through the spin-doctoring on both sides of the debate, presenting the more controversial parts of the Act without shilling for one side or the other, while still offering their own analysis and thoughtful interpretation. It's a breath of fresh air, cutting between punditry and objective-to-a-fault reporting-without-analysis:
How bad is Patriot, really? Hard to tell. The ACLU, in a new fact sheet challenging the DOJ Web site, wants you to believe that the act threatens our most basic civil liberties. Ashcroft and his roadies call the changes in law "modest and incremental." Since almost nobody has read the legislation, much of what we think we know about it comes third-hand and spun. Both advocates and opponents are guilty of fear-mongering and distortion in some instances.
The truth of the matter seems to be that while some portions of the Patriot Act are truly radical, others are benign. Parts of the act formalize and regulate government conduct that was unregulated — and potentially even more terrifying — before. Other parts clearly expand government powers and allow it to spy on ordinary citizens in new ways. But what is most frightening about the act is exacerbated by the lack of government candor in describing its implementation. FOIA requests have been half-answered, queries from the judiciary committee are blown off or classified. In the absence of any knowledge about how the act has been used, one isn't wrong to fear it in the abstract — to worry about its potential, since that is all we can know.
Ashcroft and his supporters on the stump cite a July 31 Fox News/Opinion Dynamics Poll showing that 91 percent of registered voters say the act had not affected their civil liberties. One follow-up question for them: How could they know?
If you haven't read all 300-plus pages of the legislation by now, you should.
Since I haven't read all 300-plus pages of the legislation myself, I won't tell you to do so. But I will tell you to go and read Lithwick and Turner's guide.
Sherman Austin headed to jail on Wednesday to start his one-year prison sentence, guilty of hosting plans for the manufacture of explosives on his anarchist website, RaiseTheFist.com. Austin did not write the plans himself, but he provided free hosting for anarchists and political protesters. In January of 2002, the FBI raided the home where Austin lived with his parents and confiscated all his computers and backup disks, including the server for RaiseTheFist. Agents also found components to make a Molotov cocktail. Austin was 18 years old at the time. (Austin details the entire story in an interview with CounterPunch.)
A few days later Austin went to the World Economic Forum protest in New York, where he was arrested and held without bail. He was eventually charged with possession of an unregistered firearm (the Molotov cocktail components) and with violating the controversial 1997 federal law that makes it illegal to distribute information about the manufacture of explosives "with the intent that the... information be used for, or in furtherance of, an activity that constitutes a Federal crime of violence." The law, championed by Sen. Dianne Feinstein (D-Calif.), raised serious First Amendment issues when it was proposed. According to a CNET interview with Austin shortly before he went to prison, he is the first person to be convicted under the law.
In a statement on his web site, Austin said he originally planned to contest the charges. He decided to plead guilty to the information dissemination crime in return for the dropping of the firearms charge, because "after my lawyer consulted the USPO working on the case, she found out that a 'terrorism enhancement' is applicable to my charge, which could get me an additional 20 years." According to the LA Times, Austin was offered a plea bargain of four months in prison followed by four months in a halfway house, but U.S. District Judge Stephen V. Wilson rejected the plea and sentenced Austin to a full year in prison. After completing his term, he will be placed on three years' probation and will be barred from associating with any groups that espouse violence to achieve political, economic or social change. He will also need permission from the probation office to operate a computer. The EFF has protested that the sentence is too severe for the alleged crime.
Several things bother me about this case.
First are the obvious First Amendment issues with the anti-information law under which he was convicted. Two things are necessary for this law to apply. The first is the distribution of information about explosives, which is clearly pure speech that is protected under the First Amendment. The second is the intent that the information be used for a violent crime, which is inherently difficult to prove or to disprove. It seems quite reasonable that Austin was all bluster and no action, an angry 18-year-old boy who liked to play political terrorist on his website and in his back yard but was not violent in real life. It is telling that the only previous charges brought against Austin were for refusal to disperse, conspiracy to commit a refusal to disperse, unlawful assembly, and disorderly conduct for blocking pedestrian traffic. In other words, for committing peaceful civil disobedience.
It's not surprising that the FBI thought they were dealing with a dangerous terrorist psychopath when they went to RaiseTheFist.com and saw pictures of George W. Bush with a gun sight on his head, or read posts saying "Yeah, motherfucker, I'm a terrorist to the United States Government. I'm a terrorist to capitalism." and "We don't gather weapons, plan extreme operation, and risk our lives for nothing. This is real." But that's just speech, not action. It's like the old Saturday Night Live running gag where someone says "Well, it's not like I said I was going to kill the president..." and gets jumped by Secret Service agents who come out of nowhere. It's also not clear to me whether Austin was the author of any of these more violent postings, or whether he merely hosted them.
The second bothersome point is that this smacks of selective enforcement. Information on how to make bombs is everywhere, from libraries to web sites to bookstores. This includes the infamous Anarchist's Cookbook, published in 1970, about which the author admits that the "central idea to the book was that violence is an acceptable means to bring about political change." And yet the FBI has yet to raid Amazon.com to stop them from distributing this information. Of course, Amazon was not the author of the book, and it would be unfair to assume that Amazon intends violence just because they sell a violent book. But then, Austin did not write the explosives guide either, and it is just as unfair to assume he intends violence merely because he offers web hosting for a violent page. Clearly, the crackdown was at least in part due to RaiseTheFist's message, and the fact that this message was in alignment with the growing anti-globalization movement.
The final point is most troubling: Austin was never able to argue his case. Plea bargains are meant to be an incentive to surrender when guilt is obvious. In cases like Austin's, where the plea is for a four-month sentence and the risk is 20+ years, there is a huge incentive for a suspect to plead guilty even when he knows he is innocent. Sadly, this is often the rule rather than the exception, especially for the poor. It is only because this case involves mediapathic issues such as First Amendment rights, the Internet, and terrorism that we have heard about it at all, unlike the hundreds of cases every day in which innocent men and women cop a plea to go free based on time served rather than risk further jail time to clear their names.
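To make that risk calculus concrete, here's a quick back-of-the-envelope sketch in Python. The four-month offer and the roughly twenty-year exposure come from the case as described above; the 90% chance of acquittal is purely my own assumption for illustration.

```python
# Rough plea-bargain arithmetic. The 4-month offer and ~20-year exposure are
# taken from the case described above; the 90% acquittal odds are an
# assumption chosen only to illustrate the incentive, not a fact of the case.
plea_months       = 4
trial_risk_months = 20 * 12   # rough exposure with the terrorism enhancement
p_acquittal       = 0.90      # assume the defendant is quite sure he'd win

expected_trial_months = (1 - p_acquittal) * trial_risk_months
print(f"Expected sentence if he goes to trial: {expected_trial_months:.0f} months")  # 24
print(f"Guaranteed sentence if he takes the plea: {plea_months} months")
```

Even a defendant who is ninety percent sure of acquittal comes out worse, in expectation, by going to trial. That is exactly the pressure that makes the plea almost impossible to refuse, guilty or not.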
Austin's lawyer describes his client as "a very peaceful person" who got carried away "in a very heated political environment." A clinical psychologist who specializes in threat assessments wrote for the defense that Austin "does not appear to have seriously considered the ramifications" of his actions "and would have been horrified had someone been injured." Let us hope that his year in prison, and his apparent abuse by the system, does not turn this peaceful-but-angry young man into the very terror the FBI fears.
Tampa Police have decided to scrap their much-criticized face-recognition system, admitting that during a two-year trial the system did not correctly identify a single suspect. Similar face-recognition systems are still in use in Pinellas County, Florida, and Virginia Beach, Virginia, though neither of those systems has ever resulted in an arrest either.
Face-recognition technology evokes images of automatic cameras scanning bustling crowds, automatically picking out terrorists from the millions of faces that pass by. One day the technology may be able to deliver on this, but currently it is still necessary for a human controller to zoom in on individual faces using a joystick. A 2001 St. Petersburg Times article describes a Tampa police officer scanning the weekend crowd in Ybor City, checking 457 faces out of some 125,000 tourists and revelers in an evening.
Let's do some quick math. The police are scanning only 457 of the 125,000 people out on a given night, or about 0.4%. That means even if ten known bad guys from the watch-list are in the crowd, there's still only about a 4% chance that any of them will be looked at by the system. That number drops to 0.4% if there's only one bad guy in the crowd that night.
Then there's the chance that the face-recognition system doesn't sound an alarm even when it does scan someone on the watch-list. A recently published evaluation of the Identix system used in Tampa gives a base hit rate of 77% (that is, 77% of people on a watch-list were correctly identified). However, that was with a watch-list of only 25 faces. The hit rate goes down as watch-list size goes up, down to 56% with a watch-list of 3,000 faces. According to the Associated Press, the Tampa database had over 24,000 mug shots on its watch-list. Then there's the problem that the mug shots were taken indoors while the surveillance cameras were outdoors; according to the evaluation, mixing indoor and outdoor images can reduce hit rates by around 40%. (The 40% reduction was seen on identity verification tasks; the watch-list task is actually more difficult.) Finally, these results all assume a 1% false-positive rate, which with 457 faces scanned works out to about five false alarms per night. Given all these (well-known) problems, it's amazing anyone ever thought this was a good idea.
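Putting those numbers together gives a sense of how little room the system had to succeed. Here's a rough sketch in Python using the figures above; treating the indoor/outdoor penalty as a simple multiplier on the hit rate, and using the 3,000-face hit rate as a stand-in for Tampa's 24,000-face watch-list, are my own simplifications, so take the exact numbers with a grain of salt.

```python
# Back-of-the-envelope math with the figures cited above. How they combine
# here is my simplification, not the evaluation's own methodology.
crowd_size      = 125_000   # people out in Ybor City on a weekend evening
faces_scanned   = 457       # faces the operator checked in one evening
hit_rate        = 0.56      # watch-list hit rate with 3,000 faces enrolled
outdoor_penalty = 0.40      # rough hit-rate loss from mixing indoor/outdoor images
false_pos_rate  = 0.01      # assumed false-positive rate per face scanned

# Chance a single watch-listed person in the crowd even gets looked at.
p_scanned = faces_scanned / crowd_size                 # about 0.4%

# Chance at least one of ten watch-listed people gets looked at.
p_any_of_ten = 1 - (1 - p_scanned) ** 10               # about 4%

# Chance a scanned watch-listed face actually trips an alarm.
p_alarm_if_scanned = hit_rate * (1 - outdoor_penalty)  # about 34%

# End-to-end chance of flagging one particular bad guy on a given night.
p_catch = p_scanned * p_alarm_if_scanned               # about 0.1%

# Expected false alarms per night at the assumed false-positive rate.
false_alarms = faces_scanned * false_pos_rate          # about 4.6

print(f"P(one bad guy is scanned)     = {p_scanned:.2%}")
print(f"P(any of ten is scanned)      = {p_any_of_ten:.2%}")
print(f"P(one bad guy triggers alarm) = {p_catch:.3%}")
print(f"Expected false alarms / night = {false_alarms:.1f}")
```

In other words, on any given night the system had roughly a tenth-of-a-percent chance of flagging a particular person on the watch-list, while generating several false alarms.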
There are several reasons I hope this failure dissuades similar attempts by other law-enforcement communities. First, as a 2001 ACLU report on the Tampa system points out, our resources could be better spent, and face recognition can give us a false sense of security. Second, a face-recognition system in a public space gives the impression that everyone is a suspect, regardless of whether the system actually works. And finally, face-recognition technology continues to improve. It won't happen in the next few years, but at some point the technology is going to reach the point where recognition is completely automated, highly accurate, and robust. When that happens, it will be possible to track large numbers of people as they go about their daily lives, and even to track people retroactively from recorded video. Hopefully by then our society will be so inoculated against this kind of privacy violation that such uses will be inconceivable.
The story sounds like something out of The Onion, or maybe a dystopian science fiction short story. As reported widely in the news yesterday, the Pentagon has been planning an electronic futures market for analysis of foreign affairs. The idea is to create a market where people can anonymously bet on things like whether the US will reduce troop deployment in Iraq by year's end, or whether Arafat will be assassinated. The current odds on a bet, so the argument goes, best reflect the actual probability given everything the collected thinkers know. Policy-makers could then use the probability to know where to focus their attention.
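For readers who haven't run into the idea before, the mechanism behind reading a market price as a probability can be sketched in a few lines of code. What follows is a minimal logarithmic market scoring rule, the general kind of mechanism Hanson has written about; I don't know exactly what mechanism the Pentagon's market would have used, and the liquidity parameter and trade sizes here are invented purely for illustration.

```python
import math

# A minimal logarithmic market scoring rule (LMSR) for a single yes/no
# question, e.g. "Will X happen by year's end?"  The instantaneous price
# of the "yes" share can be read directly as an implied probability.
class BinaryLMSR:
    def __init__(self, b=100.0):
        self.b = b              # liquidity: larger b means prices move more slowly
        self.q = [0.0, 0.0]     # outstanding shares for [yes, no]

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Current price (in [0, 1]) of one share of outcome 0 (yes) or 1 (no)."""
        exps = [math.exp(x / self.b) for x in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, shares):
        """Buy `shares` of `outcome`; returns what the trader pays."""
        new_q = list(self.q)
        new_q[outcome] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

market = BinaryLMSR(b=100.0)
print(f"Opening implied probability: {market.price(0):.0%}")    # 50%
paid = market.buy(0, 80)   # traders who think "yes" is likely buy yes-shares
print(f"After buying, implied probability: {market.price(0):.0%}  (cost {paid:.2f})")
```

The point is simply that as people who believe an event is likely buy "yes" shares, the price climbs, and the current price doubles as the market's collective probability estimate, which is what policy-makers would supposedly read off the screen.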
By today the firestorm had swept Washington, and the Pentagon announced that the project had been canceled. Apparently congressmen were not completely aware of what had been planned, despite the general plan having been posted on DARPA's web site for many months and the project having been mentioned in a March New Yorker article.
I can't help but feel sympathy for Robin Hanson, the George Mason University economics professor who has been spearheading the project. Critics were quick to describe the project as a marketplace where terrorists and mercenaries could make money by betting that some horrific event would happen and then causing it. But as Hanson describes in interviews and on his Web site, the idea is more that professors, armchair analysts, and frequent travelers from all walks of life would combine their on-the-ground expertise to come to conclusions even the most expert intelligence worker in Washington wouldn't be able to reach. But interested as I am in the concept, I just can't see it working, for a number of reasons:
Update: According to futures trading on Tradesports.com, John Poindexter's chances of keeping his job after this uproar are around 70%.