Yesterday's InformationWeek had an article about how cellphone pictures sent via MMS (Multimedia Messaging Service) by customers of U.K. mobile network operator O2 are winding up available via Google search pages. The article, titled Picture Leak: O2's Security Through Obscurity Can't Stop Google, explains that O2 provides a fallback for customers who try to send photos from their cellphone to cellphones that don't support MMS: it posts the photos online and then sends the recipient a URL to the picture via email. For security, each URL includes a 16-hex-digit (64-bit) message ID. The "problem", as they breathlessly explain it, is that some of these URLs are getting indexed by Google, and can be discovered by performing a search with the inurl: search operator.
The whole thing is much ado about nothing — further investigation shows that the reason a handful of these "secret" URLs wound up in Google is that people were using MMS to post photos directly to their public photo-blogs. While it's not the case here, I do have to wonder at the charge that secret URLs are somehow just security through obscurity, which usually refers to a system that is secure only as long as its design or implementation details remain secret. That's not the case here — even a modest 16-hex-digit ID is about as difficult to guess as a random ten-character password containing numbers and upper & lowercase letters. What can be a risk is that people and programs are used to URLs being public knowledge, and so sometimes they aren't safeguarded as well as one might safeguard, say, a bankcard PIN. On the plus side, unguessable URLs can easily be made public when it's appropriate, for example when posting to your photo blog from your O2 cellphone. Now if only we could selectively prevent clueless reporters trying to write scare-stories from finding them...
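Footnote for the skeptical: here's the back-of-the-envelope math behind that password comparison (my own arithmetic, not anything from the article).

```python
import math

# A 16-hex-digit ID carries 4 bits per digit.
id_bits = 16 * math.log2(16)        # 64.0 bits

# A random 10-character password over A-Z, a-z, 0-9 (62 symbols).
password_bits = 10 * math.log2(62)  # ~59.5 bits

print(f"hex ID:   {id_bits:.1f} bits")
print(f"password: {password_bits:.1f} bits")
```

If anything, the 64-bit message ID comes out a few bits ahead of the password.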
From the "what could possibly go wrong" department, Microsoft just announced that they've developed a simple one-button tool to break into a computer and suck down an entire hard drive's contents onto a thumb drive:
COFEE, a preconfigured, automated tool fits on a USB thumb drive. Prior to COFEE the equivalent work would require a computer forensics expert to enter 150 complex commands manually through a process that could take three to four hours. With COFEE, you simply plug into a running computer to extract the data with the click of one button — completing the work in about 20 minutes.
It basically bundles a whole bunch of existing password guessers and other cracking software into a single one-touch device — and since it works on the live computer it can bypass encrypted disks like Vista's BitLocker so long as the user is still logged in.
Apparently Microsoft isn't concerned that they're building tools that can turn any two-bit felon into a highly-skilled data thief, or that they're providing products that exploit their very own security holes. After all, they're only supplying these devices to law enforcement — so what could possibly go wrong?
For the last couple years I've been working on a program that generates a large number of essentially random ID strings (it's actually a replicated document storage system that uses the hash of a file's content as its ID, but the details don't matter). Since IDs are independently generated there will always be some chance that two different files will just happen to have the same ID assigned — so how long do I need to make my ID string before that probability is small enough that I can sleep at night?
This is essentially the Birthday Paradox, just with bigger numbers and in a different form. For those who haven't heard of it, the canonical form of the Birthday Paradox asks what the probability is that, out of a random group of 23 people, at least two of them share the same birthday. (The "paradoxical" part is that the answer is just over 50%, much higher than most people's intuition would suggest.) My question just turns that around and asks "how many random N-bit IDs have to be generated before there is a one in a million chance of any two of them being identical?"
Rejiggering the formulas given in Wikipedia, here's the approximation, where S is the number of possible values (365 for birthdays), P is the collision probability you're worried about, and n is the number of random draws:
n ≅ √(-2 · S · ln(1 - P))
For example, the number of people you would need for a 50% chance that at least two of them have the same birthday is √(-2 · 365 · ln(1/2)), or between 22 and 23 people. As a more practical example, you would only need to generate 77,163 PGP keys before having a 50% chance of a collision between their 8-character short-form fingerprints.
As for my one-in-a-million chance, you'd need to generate roughly 2^((N - 19)/2) random N-bit strings before having a one-in-a-million chance of a collision, which means around 2^70 of my 160-bit ID strings. I think I can sleep at night.
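For anyone who wants to check my numbers, here's the approximation in runnable form (a quick sanity-check sketch, nothing more):

```python
import math

def collision_headcount(space, p):
    """Approximate number of independent random draws from `space`
    possible values before the chance of a collision reaches p."""
    return math.sqrt(-2 * space * math.log(1 - p))

print(collision_headcount(365, 0.5))    # birthdays: ~22.5 people
print(collision_headcount(2**32, 0.5))  # 32-bit PGP short fingerprints: ~77,163
print(math.log2(collision_headcount(2**160, 1e-6)))  # my 160-bit IDs: ~70.5, i.e. ~2^70
```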
Russian hacker magazine Xakep Online has posted an interesting analysis of all the measures Skype takes to avoid reverse-engineering of its protocol and code. If you can't read the original Russian you can get the gist (as I did) from the Google translation. The article walks through a number of the specific techniques.
The article also goes into all the ways Skype routes around firewalls by looking for open ports, and suggests that along with encrypted traffic and peer-to-peer distribution it's the perfect tool to deliver a worm, trojan or virus payload under the radar of virus checkers and firewalls... if only you can find a way to get the target client to run your code. Essentially you're left with just one level of protection, namely Skype itself. I'm not convinced this is any more problematic than the Swiss-cheese that is Windows security already, but it's something to think about as we go forward.
(Thanks to Sergey for the link and summary of the Russian!)
Today's L.A. Times reports that Brandon Mayfield just won his $2 million lawsuit against the FBI for his wrongful detention in 2004. Mayfield is the Oregon lawyer who the FBI pinched in connection with the 2004 Madrid train bombings because a partial fingerprint found in Madrid was a "close enough" match to his own. One quote from the article:
Michael Cherry, president of Cherry Biometrics, an identification-technology company, said misidentification problems could grow worse as the U.S. and other governments add more fingerprints to their databases.
The problem is emphasized in the March report from the Office of the Inspector General on the case, which reads much like a Risks Digest post and has a lot of take-home lessons. The initial problem was that the FBI threw an extremely wide net by running the fingerprints found in Madrid through the Integrated Automated Fingerprint Identification System (IAFIS), a database that contains the fingerprints of more than 47 million people who have either been arrested or submitted fingerprints for background checks. With so many people in the database the system always spits out a number of (innocent) near-matches, so the FBI then goes over the results. The trouble is that in this case (a) Mayfield's fingerprints were especially close, and (b) the FBI examiner got stuck in a pattern of circular reasoning, where once he found many points of similarity between the prints he began to "find" additional features that weren't really in the lifted print but were suggested by features in Mayfield's own prints.
People tend to forget that even extremely rare events are almost guaranteed to happen if you check often enough. For example, even if there were only a one in a billion chance of an innocent person being an extremely close match for a given fingerprint, searching a database of 47 million people leaves about a 5% chance, for each fingerprint checked, of getting such a false positive. If we were to double the size of the database, that would rise to almost 10%. This kind of problem is inevitable when looking for extremely rare events, and applies even more broadly to fuzzy-matching systems like the TSA's no-fly list and Total Information Awareness (in all its newly renamed forms), which try to identify terrorists from their credit card purchases, where they've traveled or how they spell their name.
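The arithmetic, for anyone who wants to play with the numbers (the one-in-a-billion match rate is my hypothetical, not a measured figure):

```python
p = 1e-9        # hypothetical chance that any one innocent person matches
n = 47_000_000  # approximate size of the IAFIS database

# Chance that at least one of the n people is a false positive:
print(f"{1 - (1 - p) ** n:.1%}")        # ~4.6%
print(f"{1 - (1 - p) ** (2 * n):.1%}")  # ~9.0% with a doubled database
```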
SPI Dynamics has an interesting proof-of-concept page that can snoop your browser's cache of visited URLs and figure out whether you've searched for specific terms on Google. Or rather, I assume it can on some people's computers... for some reason it always returns "yup, you searched for that" on both Firefox and Safari on my Mac.
Anders Sandberg has posted some fabulous Warning Signs For Tomorrow over at his blog. And in a similar vein, check out how Dow Chemical designed the biohazard symbol. (By way of Schneier on Security.)
Bruce Schneier answers the question "why do we bother making people with security clearances go through airport security?" with the obvious answer "how would an airport screener know if you have a security clearance?"
Heck, as long as we're living in fantasy land, why don't they let non-terrorists bypass security and just focus on The Terrorists? After all, it must not be too hard to tell who's a Terrorist and who isn't, since we already single them out for torture, rendition to Syria and indefinite detention without review. What's forcing them to spend extra time in line at the airport compared to that?
You've probably already heard about the cell phone that screams after it's reported as stolen. My friend GirlPurple has suggested the perfect add-on market: Custom Scream Tones.
Kevin Drum has posted an email exchange between convicted lobbyist Jack Abramoff and Karl Rove's assistant, Susan Ralston, part of a larger set released in a bipartisan report by The House Government Reform Committee. Apparently Abramoff sent an email asking for favors to Ralston's personal(?) pager, and that email was forwarded to the Deputy Assistant to the President and then on to a White House aide. That aide in turn warned a colleague of Abramoff's that "it is better not to put this stuff in writing in their email system because it might actually limit what they can do to help us, especially since there could be lawsuits, etc." Abramoff's response to his colleague's warning: "Dammit. It was sent to Susan on her mc pager and was not supposed to go into the WH system."
Political scandal aside, this illustrates a fundamental security issue with email. I have no idea whether Ralston's pager was set to automatically forward email while she was on vacation or whether (more likely) she forwarded it on to the Deputy Assistant herself as a way to keep him in the loop. Regardless, it's clear that Abramoff recognized that having such emails in the official White House system would be a liability, but he had no control over whether its recipients (either Ralston or possibly her automatic forwarder) would be as prudent.
People who want to speak "off the record" usually think about whether a communication channel is likely to be archived, is subject to subpoena, is secure and so forth. But as it becomes easier to transfer between channels that becomes harder to predict. You might not expect me to archive my voicemail, but if I automatically forward my messages to my email as audio attachments then it probably will be. Similarly, you might expect email sent within a company to stay protected inside the firewall, but if just one recipient forwards his email to his GMail account then that security is blown wide open. The folks involved in the Abramoff scandal deserve to be outed, but the next person to be tripped up by this kind of error might not be so deserving.
A few days ago Ed Felten announced that he and his students had released a detailed security analysis of the Diebold AccuVote-TS voting machine. The executive summary and/or demonstration video is well worth a look, and the full research paper is a must-read for anyone interested in computer security.
By later that day, the president of Diebold Election Systems had issued a rebuttal. I'm a security dabbler, not an expert, but to my semi-trained eye the rebuttal looks like a bunch of smoke. I'm looking forward to hearing the Princeton authors' response [Update 9/22: posted here], but while I'm waiting for that here's my own take on it:
September 13, 2006 – “Three people from the Center for Information Technology Policy and Department of Computer Science at Princeton University today released a study of a Diebold Election Systems AccuVote-TS unit they received from an undisclosed source. The unit has security software that was two generations old, and to our knowledge, is not used anywhere in the country.”
If they really believe their current systems are secure, they should put their machines up for independent external review so groups like Felten's wouldn't have to rely on leaked code and old machines for testing. I also notice this rebuttal nowhere says "yes, those were security flaws in the machines we distributed back in 2002, but we've fixed them since then." And many (though not all) of the security problems cited in the report are inherent in the system's basic architecture — it'll take more than a software update to fix them.
“Normal security procedures were ignored. Numbered security tape, 18 enclosure screws and numbered security tags were destroyed or missing so that the researchers could get inside the unit.”
The main attack the Princeton paper talks about is one where a criminal (possibly working as a poll worker) infects the memory card used to set the list of races and candidates with a virus, or alternatively substitutes an infected memory card for the real one. The virus is then automatically loaded onto the machine when the election parameters are set — before any numbered security tape is placed. Security tape only protects against a voter trying to infect a machine on election day, not against a substitution made while the machines are still being configured for the next day's election.
“A virus was introduced to a machine that is never attached to a network.”
Remember before the Internet, when viruses were things that were transmitted from floppy to floppy? That's what this virus does. And the election definition files and software upgrades are both transmitted via these memory cards, so if an infected machine gets a definition update then any machine that gets the update from that same card afterwards will also get the virus.
“By any standard - academic or common sense - the study is unrealistic and inaccurate.”
I suppose that could be, but he's said nothing to make me think it's the case.
“The current generation AccuVote-TS software – software that is used today on AccuVote-TS units in the United States - features the most advanced security features, including Advanced Encryption Standard 128 bit data encryption, Digitally Signed memory card data, Secure Socket Layer (SSL) data encryption for transmitted results, dynamic passwords, and more.”
None of these security measures matter for the attacks described in the report. To give an analogy, the Princeton report was all about how Diebold keeps leaving the windows wide open, and this paragraph is bragging about how strong the deadbolt is on the front door.
“These touch screen voting stations are stand-alone units that are never networked together and contain their own individual digitally signed memory cards.”
This is the first statement that addresses anything in the Princeton report, as the machine they studied did not have digitally-signed memory cards. Assuming he's not just, well, lying out his ass, that'd help against one of the attacks they mention. However, as the report points out in section 5.1, there are still many other attacks that are inherent in the design of the Diebold system's basic architecture that can't be fixed with simple software modifications like this. For example, a criminal could replace the EPROM chip on the motherboard directly.
“In addition to this extensive security, the report all but ignores physical security and election procedures. Every local jurisdiction secures its voting machines - every voting machine, not just electronic machines. Electronic machines are secured with security tape and numbered security seals that would reveal any sign of tampering.”
Actually, they talk about the physical security and election procedures at length in their report. For example, they point to a recent study of the AccuVote DRE election processes showing that more than 15% of polling places reported at least one problem with seals (see Figure III-16, p. 67). They also point out the difficulty in dealing with an attack where a voter simply unlocks the machine and does nothing more than break the seal, in an attempt to invalidate votes cast in a district that tends to favor his opponent. And, just to add insult to injury, Felten today posted that the lock on the Diebold voting machines can be opened by a hotel minibar key. Some physical security.
“Diebold strongly disagrees with the conclusion of the Princeton report. Secure voting equipment, proper procedures and adequate testing assure an accurate voting process that has been confirmed through numerous, stringent accuracy tests and third party security analysis.”
"Third party" in this case still means companies hired by the manufacturer, and reporting directly to the manufacturer. The system isn't made available for truly independent security analysis.
“Every voter in every local jurisdiction that uses the AccuVote-TS should feel secure knowing that their vote will count on Election Day.”
Translation: nothing to see here, please pay no attention to the huge gaping hole in our security and our reputation.
Bruce Schneier has a nice piece echoing the idea that the goal of terrorism isn't to blow up planes and kill people, it's terror itself.
The video shows Ellch and Maynor targeting a specific security flaw in the Macbook's wireless "device driver," the software that allows the internal wireless card to communicate with the underlying OS X operating system. While those device driver flaws are particular to the Macbook — and presently not publicly disclosed — Maynor said the two have found at least two similar flaws in device drivers for wireless cards either designed for or embedded in machines running the Windows OS. Still, the presenters said they ultimately decided to run the demo against a Mac due to what Maynor called the "Mac user base aura of smugness on security."
Yet another huge loss of names and Social Security numbers:
The information was prepared by the loan company in January for use by Hummingbird. The data was encrypted and password-protected, but subsequently decrypted and stored on the now-lost hardware by the Hummingbird employee, Texas Guaranteed Student Loan said.
And this, boys and girls, is perhaps the truest meaning of "information wants to be free." Not Free as in beer, not Free as in speech, but free as in free-flowing water streaming through even the smallest of holes in a dike.
An inexcusable number of security flaws have been found in Diebold voting machines over the past few years, but a new report from BlackBoxVoting documents what Avi Rubin and Ed Felten at Freedom to Tinker say is the worst one yet:
A report by Harri Hursti, released today at BlackBoxVoting, describes some very serious security flaws in Diebold voting machines. These are easily the most serious voting machine flaws we have seen to date — so serious that Hursti and BlackBoxVoting decided to redact some of the details in the reports...
The attacks described in Hursti’s report would allow anyone who had physical access to a voting machine for a few minutes to install malicious software code on that machine, using simple, widely available tools. The malicious code, once installed, would control all of the functions of the voting machine, including the counting of votes.
From a short article in Left Lane News about how car thieves are using laptops to circumvent keyless-entry locks:
The expert gang suspected of stealing two of David Beckham’s BMW X5 SUVs in the last six months did so by using software programs on a laptop to wirelessly break into the car’s computer, open the doors, and start the engine...
While automakers and locksmiths are supposed to be the only groups that know where and how security information is stored in a car, the information eventually falls into the wrong hands.
This should come as a surprise to no one. What concerns me more is that such software is no doubt available not just to "expert gangs" but also to the equivalent of script-kiddies, who normally wouldn't even be able to figure out how to hot-wire a '69 Buick.
(Thanks to Regis for the link...)
...now two Enigma machines — that'd be another thing altogether!
For folks local to the Bay Area, Prof. Matt Blaze is speaking next week at Stanford on vulnerabilities in the systems currently being used by law enforcement for wiretapping. The talk is at 4:15PM next Wednesday, 3/8/06 at Stanford University's HP Auditorium, Gates Computer Science Building B01.
Signaling Vulnerabilities in Law-Enforcement Wiretap Systems
Matt Blaze, University of Pennsylvania
Telephone wiretap and dialed number recording systems are used by law enforcement and national security agencies to collect investigative intelligence and legal evidence. This talk will show how many of these systems are vulnerable to simple, unilateral countermeasures that allow wiretap targets to prevent their call audio from being recorded and/or cause false or inaccurate dialed digits and call activity to be logged. The countermeasures exploit the unprotected in-band signals passed between the telephone network and the collection system and are effective against many of the wiretapping technologies currently used by US law enforcement, including at least some "CALEA" systems. Possible remedies and workarounds will be proposed, and the broader implications of the security properties of these systems will be discussed.
A recent paper, as well as audio examples of several wiretapping countermeasures, can be found at http://www.crypto.com/papers/wiretapping/.
This is joint work with Micah Sherr, Eric Cronin, and Sandy Clark.
(Thanks to Mort for the link!)
I'm sure you can think of your own scenarios where this would be a Bad Thing™, but the case that brought it to my attention was from a supposedly-anonymous reviewer of an academic paper who discovered Remote's website in his firewall logs.
The simple moral of the story is that content formats should not be able to run arbitrary code, but the more general point is one of setting limits and expectations. End-users need to be able to limit what's run on their own computers, and when the actual limits are broader than what a naive user might expect (such as when their supposedly-static PDF document can actually access the network) it's extra important for the system to alert the user what's happening and get permission first.
To their credit, Adobe seems to have heeded the moral: the current version of Acrobat Reader (at least on the Mac) gives a pop-up warning saying the PDF is trying to access a remote URL, and allows you to save your security settings on a site-by-site basis. I don't know when they added this alert or whether it was in response to problems like those I mentioned, but regardless it's nice to see the feature.
(Thanks to Dirk for the link.)
Researchers at Pennsylvania State University have determined that it's possible to launch an effective denial of service attack on cellphone networks, either in a localized area or nationwide, by flooding known cellphones in the area with SMS messages (see summary, paper and NYTimes article). The attack relies on using web and Internet-based SMS portals to overwhelm the wireless control channels, which are also used for setting up voice calls. Since only messages that are actually delivered over-the-air contribute to the network congestion, attackers would first need to generate a "hit-list" of known-valid cellphones (for example, by scraping websites for cellphone numbers in a given prefix and then slowly testing those for SMS capability before starting the attack).
One snippet from the paper I found interesting was how different cellphone providers deal with a backlog of SMS messages awaiting delivery to a single user (e.g. when the cellphone is turned off): AT&T buffered all 400 test SMS messages, Verizon only kept the last 100 messages sent (FIFO eviction), and Sprint only kept the first 30 (LIFO eviction).
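In code terms (my own illustration of the two eviction policies, not anything from the paper):

```python
from collections import deque

# FIFO eviction (Verizon): a full buffer drops the *oldest* message.
fifo = deque(maxlen=100)

# LIFO eviction (Sprint): a full buffer drops the *newest* message,
# i.e. new arrivals are simply rejected once 30 are queued.
lifo = []
def lifo_offer(buf, msg, cap=30):
    if len(buf) < cap:
        buf.append(msg)

for i in range(1, 401):  # 400 messages sent to a powered-off phone
    fifo.append(i)       # (AT&T simply buffered all 400)
    lifo_offer(lifo, i)

print(fifo[0], fifo[-1])  # 301 400 -- only the last 100 survive
print(lifo[0], lifo[-1])  # 1 30    -- only the first 30 survive
```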
(by way of the Mercury News)
Researchers at RUB and the University of Mannheim have a nice demonstration of how the recently discovered attack on the MD5 hash function can be used to fool someone into signing one document when they think it's another:
Recently, the world of cryptographic hash functions has turned into a mess. A lot of researchers announced algorithms ("attacks") to find collisions for common hash functions such as MD5 and SHA-1 (see [B+, WFLY, WY, WYY-a, WYY-b]). For cryptographers, these results are exciting - but many so-called "practitioners" turned them down as "practically irrelevant". The point is that while it is possible to find colliding messages M and M', these messages appear to be more or less random - or rather, contain a random string of some fixed length (e.g., 1024 bit in the case of MD5). If you cannot exercise control over colliding messages, these collisions are theoretically interesting but harmless, right? In the past few weeks, we have met quite a few people who thought so.
With this page, we want to demonstrate how badly wrong this kind of reasoning is! We hope to provide convincing evidence even for people without much technical or cryptographical background.
Their method is simple and clever. They use the newly discovered attack to generate two random strings that have the same hash value (call them R1 and R2). Then they put those at the start of a "high-level" document description language like PostScript and tack on something along the lines of "if the previous value was R1, print an innocuous message I can get signed, otherwise print the real message I want signed." A well-known property of MD5 is that if equal-length R1 and R2 have the same hash value then R1 + any suffix will have the same hash value as R2 + the same suffix, so depending on whether they use R1 or R2 as their preamble they get two very different messages with the same hash value.
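To make the extension property concrete, here's a quick sketch using Python's hashlib. It assumes you already have two colliding inputs on disk (the filenames r1.bin and r2.bin are mine; producing the collision itself requires one of the published attack tools):

```python
import hashlib

# Two distinct, equal-length inputs with the same MD5 hash, as produced
# by a collision-finding tool (the known attacks emit collisions that
# are a whole number of 64-byte MD5 blocks long).
r1 = open("r1.bin", "rb").read()
r2 = open("r2.bin", "rb").read()
assert r1 != r2 and len(r1) == len(r2)
assert hashlib.md5(r1).digest() == hashlib.md5(r2).digest()

# MD5 processes its input block by block, so if the internal state
# after r1 matches the state after r2, any shared suffix keeps the
# two hashes identical.
suffix = b"if preamble == R1 show the innocuous letter, else the contract"
assert hashlib.md5(r1 + suffix).digest() == hashlib.md5(r2 + suffix).digest()
print("still colliding after appending the same suffix")
```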
Mr. Luviano, 39, obtained legal residence in the United States almost 20 years ago. But these days, back in Mexico, teaching beekeeping at the local high school in this hot, dusty town in the southwestern part of the country, Mr. Luviano is not using his Social Security number. So he is looking for an illegal immigrant in the United States to use it for him — providing a little cash along the way.
According to an article in today's Wired, the discussions with Frank Moss at this year's CFP conference actually had an impact. The State Department is now moving towards embracing the Basic Access Control security scheme, which essentially encrypts communication with the RFID chip using a key obtained by physically scanning a page on the passport itself. Definitely a step in the right direction.
One bit of the Wired article is wrong (or at least misleading) though:
Moss said the German government and other members of the European Union had embraced BAC because they planned to write more data to the chip than just the written data that appears on the passport photo page. Many countries plan to include at least two fingerprints, digitized, in their passport chips.
At CFP, Moss said the US passport RFID chip would include not only the written data on the passport's main page but also a digital photograph, which presumably isn't significantly fewer bits than a couple of fingerprints (not that I've looked up the specs to check sizes).
Just had a panel on Privacy Risks of New Passport Technologies, discussing among other things the new RFID tag the US is rolling out for passports in the coming months. The tags will contain a digitally signed copy of your photo plus all the information on your data page except the signature, and will be readable at a distance. The readers are designed to read chips from about ten centimeters away, but the danger is that it's possible to design devices that read the tag from longer distances. The exact distances possible aren't clear to me, but a speaker from the ACLU demonstrated reading a passport with the type of RFID being used from three to four feet away. The State Department is now promising the passport cover will include a Faraday cage to prevent reading when the passport is closed, but that won't help when the passport is opened.
The dangers really boil down to someone snooping on or stealing one's identity at a distance, without one's knowledge or consent.
Sounds like pretty big flaws in something that is, in theory, designed to make us safer, all of which would be solved by simply requiring physical contact to read the data. The lone proponent on the panel was Deputy Assistant Secretary of State for Passport Services Frank Moss. I was rather unimpressed with his answers — many parts sounded like a song and dance surrounded by apologies for not really understanding the technology (and thus not being able to explain any details). However, he did answer the one main question I had: why the heck did the US push so hard for passports that could be read at a distance? His answer seems to boil down to: it was cheaper and a little more flexible.
I'm sympathetic to the difficulties in standardizing over a hundred national documents, but that's a piss-poor excuse given the potential security holes it opens up. The follow-up argument of "we were stupid when we pushed for it, but it's too late now so tough" is equally unacceptable in my mind.
Update 4/14/05: Ed Felten at Freedom to Tinker was at the same panel and has posted his own summary. His conclusion about the reason we're getting stuck with a contactless system is in line with my own: "In short, this looks like another flawed technology procurement program."
Interesting comment by Edward Hasbrouck about the collection of data on where everyone travels, especially the collection of air-travel data. He sees the US, and especially people living in New York City (media) and Washington D.C. (government), as collectively suffering from post-traumatic stress disorder after 9/11, with the Travel Panopticon at the core of that response. Our first response was panic, leading to investigation: integrated databases, etc. Now we're entering the second phase of PTSD: trauma, leading us to move from investigation into surveillance. The main thrust is the explicit prohibition of anonymous travel, and by that act the enforced non-transportation of undesirables.
This sort of panic explains why we require all sorts of inconvenient and sometimes dangerous privacy-violations when it comes to travel, even though it doesn't make us more secure. As Bruce Schneier points out, asking for ID before you get on a plane not only doesn't stop terrorists (unless we can convince them to put "terrorist" on their cards) but it doesn't even keep people from passing tickets on to someone else. When you're in a state of panic, it doesn't matter if something is sensible — you just want to be doing something, anything.
I'd not heard the term "two-factor authentication" before, but it turns out it's just using two passwords, one you make yourself and one you get from somewhere else. The little key-fobs that give you a new password every 60 seconds are an example, as are the less technological printed lists of one-use passwords that have been around for years. In the latest Crypto-Gram, Bruce Schneier argues that two-factor authentication "solves the security problems we had ten years ago, not the security problems we have today." In particular, it does nothing to stop phishing (man-in-the-middle) attacks or trojan horses.
I suppose solving security problems from ten years ago is better than not solving those problems, but at best it should be viewed as a stop-gap (and the cost of rolling out such measures should be weighed with that in mind).
Update 3/18/05: as a commenter pointed out, two-factor authentication isn't really the use of two passwords so much as two authentication methods. I was basically paraphrasing the PC World article, and I should really know better.
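Out of curiosity, here's a sketch of how a time-based token of the key-fob variety works underneath: an HMAC over the current time interval, truncated to six digits. This is just the general idea, not any particular vendor's algorithm:

```python
import hashlib
import hmac
import struct
import time

def token_code(secret, step=60):
    """Six-digit one-time code that changes every `step` seconds.
    The fob and the server share `secret` and compute the code
    independently; roughly synchronized clocks keep them in agreement."""
    interval = int(time.time() // step)
    mac = hmac.new(secret, struct.pack(">Q", interval), hashlib.sha1).digest()
    return "%06d" % (int.from_bytes(mac[-4:], "big") % 1_000_000)

print(token_code(b"secret-shared-at-enrollment"))
```

Note that nothing here stops the man-in-the-middle attack Schneier describes: a phisher who captures the code can replay it to the real site within the same time window.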
Bruce Schneier links to a paper on refinements to bumping, a lockpicking technique for pin-and-tumbler locks where you insert a specially filed-down key and give it a quick whack to bounce the top pins out of the way. The principle is the same as a lockpick gun, though the authors claim it works better.
I haven't played with lockpicks since my undergrad days, but I'll probably play around with their method and see how well it works. The biggest question I have is how much wear and tear this method causes to the lock vs. other methods — the paper suggests some ways to limit damage to the lock but it still seems like it'd be worse than the lockpick gun since the driving force is side-long (into the lock) rather than straight up. Still, it's got to be better than raking the lock. (I remember back when I was an undergrad at MIT there was one door in particular that needed its locks replaced every couple years due to the number of people raking it — most of the better pickers didn't rake for just that reason.)
This is a cute hack — these guys are able to "fingerprint" a networked device just by looking at how quickly its clock loses or gains time compared to the true time (its clock skew).
Example applications include: computer forensics; tracking, with some probability, a physical device as it connects to the Internet from different public access points; counting the number of devices behind a NAT even when the devices use constant or random IP IDs; remotely probing a block of addresses to determine if the addresses correspond to virtual hosts, e.g., as part of a virtual honeynet; and unanonymizing anonymized network traces.
Link by way of Mitch Kapor, who, unlike me, isn't so enamored of the elegance of their technique as to ignore the obvious security and privacy implications.
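The underlying estimate is easy to sketch: collect (local time, remote timestamp) pairs, e.g. from the TCP timestamp option on packets sent by the target, fit a line, and read the skew off the slope. A rough illustration (the paper actually uses a more robust linear-programming fit than the least squares below):

```python
def clock_skew_ppm(samples):
    """Least-squares slope of the remote clock against the local clock,
    minus one, in parts per million. `samples` is a list of
    (local_seconds, remote_seconds) pairs for one target device."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    num = sum((x - mx) * (y - my) for x, y in samples)
    den = sum((x - mx) ** 2 for x, _ in samples)
    return (num / den - 1.0) * 1e6

# A simulated device whose clock gains 50 microseconds every second:
samples = [(t, t * 1.000050) for t in range(0, 3600, 10)]
print(round(clock_skew_ppm(samples)))  # ~50 ppm
```

Because the skew is stable per-device, that one number acts as the fingerprint.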
Sounds like the security violation that led to the posting of Paris Hilton's private list of celebrity phone numbers was pretty straightforward: they Googled the answer to her secret question (what's your favorite pet's name?) to "recover" her password on T-Mobile's online web account. Ironically enough, Bruce Schneier blogged about this very problem just last week.
There's a nasty phishing exploit that was made public yesterday that lets anyone fake almost any domain, SSL certificates included. The problem comes out of international domain name support and the fact that the Latin letter a and the Cyrillic letter а look almost identical. It affects pretty much every web browser except IE and Lynx, which don't support international domain names yet. (If you installed the IE plugin for IDN support, you're still vulnerable.)
The phishing attack is really simple. Domain names can now include non-Latin characters, which are mapped back into a "common name" so the scheme stays backwards-compatible. So, for example, the Latvian domain name in http://tūdaliņ.lv translates into the common name http://xn--tdali-d8a8w.lv/. So all you have to do is register something like the domain www.xn--pypal-4ve.com and then send people to the innocuous-looking www.pаypal.com. (Course, if you've already fixed your browser you won't be able to follow the link anymore....) If you look carefully or if your browser isn't displaying this page as Unicode you can see the letter а is in a different font (in fact, it's a Cyrillic "a").
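You can watch the mapping happen with Python's built-in IDNA codec (my example, using the same Cyrillic а):

```python
real    = "paypal.com"
spoofed = "p\u0430ypal.com"   # U+0430 is the Cyrillic small letter a

print(real == spoofed)         # False, despite looking identical
print(spoofed.encode("idna"))  # b'xn--pypal-4ve.com'
print(real.encode("idna"))     # b'paypal.com'
```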
Temporary fix for Firefox: enter about:config in the address bar and set the network.enableIDN preference to false.
You can check to see if you're vulnerable by going to the website http://www.shmoo.com/idn/.
Update: It turns out the fix I listed does not work in at least some versions of Firefox (sigh). The user preference gets set all right, but for some reason Firefox ignores it. Tech.Life.Blogged has posted both a somewhat kludgy workaround that at least disables IDN support until you install a new plug-in, and a nicer fix that just involves installing the AdBlocker extension and configuring it to block URLs that contain characters outside of the normal ASCII.
Longer term we really need a preference that paints the address-bar or otherwise warns us when a domain contains characters from more than one language set — that'd solve both the problem of pаypal and the equivalent domain that's all Cyrillic except for the Latin character a.
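Here's a rough sketch of the kind of check I have in mind (my own sketch; a real implementation would use Unicode script properties rather than this character-name shortcut):

```python
import unicodedata

def scripts(label):
    """Bucket each letter by the first word of its Unicode name
    (LATIN, CYRILLIC, GREEK, ...) as a crude stand-in for proper
    Unicode script detection."""
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

for domain in ("paypal.com", "p\u0430ypal.com"):
    for label in domain.split("."):
        if len(scripts(label)) > 1:
            print(f"warning: {domain} mixes {sorted(scripts(label))}")
```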
Update 2/15/05: Sounds like one of the original authors of IDN, Paul Hoffman, has proposed something that goes one better than what I was proposing: highlight characters from different languages in different colors. That way it's not a "warning" (and constant false alarm for languages that routinely mix character-sets) but still stands out if you weren't expecting it. (Thanks to Boing Boing for the link.)
Update 2/26/05: Firefox 1.0.1 has been released with a fix — now punycode appears on the URL line as the encoded www.xn--pypal-4ve.com (it can be changed back to the old display in the configuration). While not as pretty as Hoffman's solution, it'll work. Note also that Shmoo has stopped hosting https://www.pаypal.com, though they still have a test link up at http://www.shmoo.com/idn/.