Moving On..
August 27, 2007
I’ve moved my blog to my own website, here, so I can take control of the formatting of my posts, and finally stop posting broken code! I’ve also put up a new article, after a long delay due to my not having Internet access all summer.
Breaking The Running Key Cipher: Part 1
June 20, 2007
I love challenges of all kinds, both algorithmic and cryptographic. There is just something inherently compelling about pitting your wits alone against an intellectual challenge, free to apply any and all means to beat it. I recently took part in a crypto challenge where we were meant to break a text encoded with the running key cipher, which I hadn’t encountered before, and I came up with a rather satisfying solution that I thought might be good to share here.
For those of you who aren’t familiar with the running key cipher, it’s very simple. Say you have some text you would love to be kept secret:
IENJOYEATINGWHILESUSPENDEDUPSIDEDOWN
But still feel the need to communicate it to someone, with whom you have agreed some (usually well known) key beforehand, say:
GOBOILYOURBOTTOMSYOUSONSOFSILLYPERSONS
We just “add” the two texts together, with the letters being used as numbers in the obvious way: A = 0, B = 1, …, Z = 25. This means that e.g. C + B = 2 + 1 = 3 = D. Furthermore, we do this modulo 26, so that e.g. Z + B = A. Additionally, since in our case our key is a bit longer than the secret message, we will just not use the last couple of letters of the key. This means that our encoded message is:
OSOXWJCONZOUPAWXWQIMHSAVSIMXDTBTHFOB
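If you want to play along at home, here’s a tiny sketch of the enciphering and deciphering steps in Python (just an illustration for this post, not code from the challenge):

# A minimal sketch of the running key cipher described above (my own
# illustration, not code from the challenge).
def to_num(c):
    return ord(c) - ord('A')       # A = 0, B = 1, ..., Z = 25

def to_chr(n):
    return chr(n % 26 + ord('A'))  # reduce modulo 26 and map back to a letter

def encipher(plain, key):
    # "Add" corresponding letters; zip simply drops the unused tail of the key.
    return ''.join(to_chr(to_num(p) + to_num(k)) for p, k in zip(plain, key))

def decipher(cipher, key):
    # The receiver just subtracts the key again.
    return ''.join(to_chr(to_num(c) - to_num(k)) for c, k in zip(cipher, key))

plain = "IENJOYEATINGWHILESUSPENDEDUPSIDEDOWN"
key = "GOBOILYOURBOTTOMSYOUSONSOFSILLYPERSONS"
assert encipher(plain, key) == "OSOXWJCONZOUPAWXWQIMHSAVSIMXDTBTHFOB"
assert decipher(encipher(plain, key), key) == plain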
The receiver can just subtract their known key to retrieve the original text. Wow, great! Since we never reuse the key at any point in the message, and assuming we never use the key again, it must constitute a one time pad! This means that we must have perfect security, right? Ah, grasshopper, if only it were so simple! Actually, this is not very secure at all, because both our key and our message exhibit distinctly non-random distributions of letters. A simple example of this can be seen if you imagine that we subtract just the letter E (the most commonly used in English) from every letter in the intercepted ciphertext. We would expect considerably more than 1/26 of the resulting letters to be genuine letters from either the key or the plaintext. If we do this with our example, we obtain:
KOKTSFYKJVKQLWSTSMEIDOWROEITZPXPDBKX
 K    K         K    K  K      KP
I’ve labelled a resulting letter with a K or a P wherever it really is a letter of the key text or the plain text, which happens exactly where the other stream had an E at that position. As you can see, most of the revealed letters come from the key: the plaintext is full of Es (each one exposing a key letter), whereas the key contains only a single E.
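Continuing the little Python sketch from above (again, just my own illustration), that crude frequency trick looks like this:

# Subtract a single guessed letter (E, the most common in English) from every
# ciphertext position; wherever the *other* stream really had an E, a genuine
# key or plaintext letter drops out. Reuses to_num/to_chr from the sketch above.
def subtract_letter(cipher, letter='E'):
    return ''.join(to_chr(to_num(c) - to_num(letter)) for c in cipher)

print(subtract_letter("OSOXWJCONZOUPAWXWQIMHSAVSIMXDTBTHFOB"))
# prints KOKTSFYKJVKQLWSTSMEIDOWROEITZPXPDBKX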
This is all very well, but it’s clearly a fairly crude attack. If we want to have a chance of actually breaking the cipher we need some more heavyweight methods. As far as I know, there is a choice of three major techniques:
- Known plaintext attacks
- Crude N-gram analysis
- Hidden Markov model N-gram analysis
The last one is what I actually used to break the challenge, and the one that I think is just plain coolest, but we need to build up to it as it’s pretty scary :-). So, let’s take a look at the other techniques first:
Known Plaintext Attack
This is definitely the easiest attack to employ if you know something about the message being sent. For instance, if you, as the evil attacker, happen to know that the sender is a passionate gourmand, you might guess that EATING will appear somewhere in the key or plaintext. Knowing this, you can exhaustively subtract EATING from the ciphertext at every offset. If we do this for our example text, we obtain:
Text offset | Text revealed by subtracting EATING |
---|---|
0 | KSVPJD |
1 | OOEOWW |
2 | KXDBPI |
3 | TWQUBH |
4 | SJJGAT |
5 | FCVFMI |
6 | YOURBO |
7 | KNGGHJ |
8 | JZVMCU |
9 | VOBHNQ |
10 | KUWSJR |
11 | QPHOKQ |
12 | LADPJK |
13 | WWEODC |
14 | SXDIVG |
15 | TWXAZB |
16 | SQPEUM |
17 | MITZFU |
18 | EMOKNP |
19 | IHZSIM |
20 | DSHNFC |
21 | OACKVG |
22 | WVZAZR |
23 | RSPEKX |
24 | OITPQN |
25 | EMEVGV |
26 | IXKLON |
27 | TDATGB |
28 | ZTILUZ |
29 | PBAZSI |
30 | XTOXBV |
Most of those look like gibberish, but one stands out as eminently readable English text: YOURBO. This is, of course, part of the key text. The diligent cryptanalyst will then typically be able to use further guesses (perhaps inspired by the discovered piece of text) to reveal more of the message and key. For instance, he might try every word beginning with BO to try and extend the known text rightwards (though this is a rather difficult proposition, since there are at least 368 possibilities according to my dictionary). A small aside here about my methodology: I found the best experience was had by combining a suitable dictionary (I use 5desk) with grep in a Cygwin Bash shell. You can then make queries such as the one below, which counts the number of words in the dictionary beginning with “bo”.
$ grep '^bo' 5desk.txt | wc -l
368
Note that it is not strictly possible to tell whether the English words you recover are part of the key or the plain text, because of the symmetry of the modular addition we used to perform the ciphering step. However, it is usually possible to work it out from your knowledge of the message’s probable contents. Once you know which text stream is part of the key, a Google search will usually allow you to recover the entire thing from your fragmentary knowledge, assuming that the key is a public text (as is typical: Bible extracts are popular). This will let you break the code!
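In case you’d rather not do the subtraction by hand, the crib-dragging step itself is easy to automate. Here’s a rough Python sketch (mine, reusing to_num and to_chr from the earlier snippet, and not the code I actually used in the challenge):

# Drag a guessed word (a "crib") across the ciphertext, subtracting it at each
# offset: readable output suggests the crib really occurs in the key or the
# plaintext at that position.
def drag_crib(cipher, crib):
    for offset in range(len(cipher) - len(crib) + 1):
        window = cipher[offset:offset + len(crib)]
        candidate = ''.join(to_chr(to_num(c) - to_num(k))
                            for c, k in zip(window, crib))
        print(offset, candidate)

drag_crib("OSOXWJCONZOUPAWXWQIMHSAVSIMXDTBTHFOB", "EATING")
# offset 6 prints YOURBO, exactly as in the table above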
However, what if you don’t know anything about the message, or, like me, you’re just a bit rubbish at making good guesses as to how to extend your known fragment of text? We probably need to employ something a little more sophisticated, like one of the N-gram analyses I mentioned earlier. However, as this blog entry is already getting overlong I shall leave exposition of these rather more clever techniques for a future entry: watch this space, especially those of you whom I have just bored senseless by explaining the absolute basics.
Uninformed Journal Volume 7 Released
May 14, 2007
For those of you who are unaware, the latest edition of the very entertaining Uninformed Journal came out recently. It’s a pretty interesting edition, though not as bulky as some of their previous releases: I hope the project isn’t losing momentum. In particular, I enjoyed the article by skape (one half of the duo who originally broke Windows Patchguard [pdf]) about dynamic analysis of memory access in arbitrary software: he outlines a variety of tricks for doing this, using page tables, segment registers and runtime instrumentation, and some research areas where this would be useful, like taint propagation. There’s probably a product or two waiting in there for some enterprising company to create :-).
For those of you who enjoy Uninformed, you can also read skywing’s (the other half of the aforementioned duo) blog, Nynaeve, for regular fixes of in-depth technical information.
Fixing Experts Exchange With Greasemonkey
May 11, 2007
Well, I’m a bit behind the curve here. I hadn’t until now felt the need to install GreaseMonkey, but Experts Exchange (which frequently shows up in my Google search results) has started to blur the comments people make on questions. You have to sign up to view them, and since I have a pathological aversion to such inconveniences as 30 seconds of registration, I did what any good programmer would and spent 10 minutes writing a script to solve the problem.
The script plugs into GreaseMonkey and first removes the blurring, which is just done by an overlay, and then replaces the answer text with its ROT13ed equivalent, since that’s the “encryption” they have opted to use! Anyway, the whole thing is basically a mashup of the scripts here and here, but I include the full source below for your convenience: do with it as you will.
// ==UserScript==
// @name ExpertsExchangeFilter 2
// @namespace All
// @description Remove Experts Exchange Stuff
// @include http://experts-exchange.com/*
// @include http://www.experts-exchange.com/*
// ==/UserScript==
function rot13(src) {
    // ROT13 the text in src, leaving anything between '<' and '>' (HTML tags)
    // or '[' and ']' untouched.
    var dst = '';
    var b;
    var t = '';
    var clear = 0;
    for (var ctr = 0; ctr < src.length; ctr++) {
        b = src.charCodeAt(ctr);
        // '<' (60) or '[' (91) opens a region we must not mangle
        if (60 == b || 91 == b) {
            clear = 1;
        }
        if (!clear) {
            // Rotate A-M/a-m forwards by 13 places, and N-Z/n-z backwards
            if (((b > 64) && (b < 78)) || ((b > 96) && (b < 110))) {
                b = b + 13;
            } else if (((b > 77) && (b < 91)) || ((b > 109) && (b < 123))) {
                b = b - 13;
            }
        }
        t = String.fromCharCode(b);
        dst = dst.concat(t);
        // '>' (62) or ']' (93) closes the protected region
        if (b == 62 || b == 93) {
            clear = 0;
        }
    }
    return dst;
}

var a = document.getElementsByTagName('div');
for (var i = 0; i < a.length; i++) {
    if (a[i].className == 'infoBody') {
        // Remove the overlay node responsible for the blurring
        a[i].removeChild(a[i].childNodes[1]);
    } else if (a[i].className == 'answerBody quoted') {
        // The "hidden" answer text is just ROT13ed, so rotate it back
        a[i].innerHTML = rot13(a[i].innerHTML);
    }
}
Cute Haskell Code
May 9, 2007
For some recent supervision work on my Security lectures, I was given the task of decoding a string encrypted with a simple shift cipher. This cipher, given a key, simply moves each letter on in the alphabet by an amount given by the key, wrapping at the end. Being a good functional programmer, I decided to implement a brute force solver for this in Haskell, and I’m rather pleased by the terseness of the result, so I felt compelled to share it.
import List(elemIndex)
cipherText = "LUXDZNUAMNDODJUDTUZDGYQDLUXDGOJDCKDTKKJDOZ"

shift i x = (cycle ['A'..'Z']) !! (maybe (error ("Bad character " ++ show x)) (+i) (elemIndex x ['A'..'Z']))
main = sequence_ (map (\i -> putStrLn $ map (shift i) cipherText) [0..25])
If you ignore the boilerplate, there are only two lines of actual code there :-). I happen to think it’s rather understandable, though I’d be interested to get the opinion of someone who isn’t a Haskell hacker on the truth of that statement. A hint you may need to understand it: “cycle a_list” just returns “a_list” concatenated with itself infinitely many times, and “!!” is Haskell’s list indexing operator.
On this note, I saw an interesting blog post at retrospections the other day which compared the LOC count for a number of open source source control programs: Darcs (Haskell) and Mercurial (Python) were tied at 20KLOC, with the next best being Bazaar-NG (Python) at 47KLOC. Obviously this isn’t a strictly apples-to-apples comparison, though, so take it with a grain of salt!
Frustration + Lazyweb = Results
May 7, 2007
OK, to follow up on my last post about the quirks of XMLHTTPRequest, fuzzyman very kindly provided most of the solution I needed.
What I was trying to accomplish is optional HTTP authentication: that is, a resource logs you in if your credentials are correct, but if they aren’t present then it just lets you go on as an anonymous user. This is essential if you are developing, say, a web shop: if you want to offer personalized item selections you need to request login, but if you require authentication just to browse the site you’ve lost a good LARGE_NUMBER% of your customers right there.
However, as fuzzyman rightly pointed out, most browsers do not even bother to send the Authorization header unless they actually get a 401 on a page, even if credentials are explicitly provided (as in my use of XMLHTTPRequest). The solution is to “fool” the browser into thinking your site requires authentication by creating a dummy action that does require it. The slight complication is that this action must be in the root directory! If you create the dummy action in a subdirectory, the browser may only send the authentication information thus forced into it when the paths being accessed appear to be in that directory. This means, for example, that if you have the authenticated action “/sessions/secret” then authorization info will be sent for “/sessions/foo”, but not for “/products/list”. Making an action like “/secret” works around this, although it is slightly ugly.
class SecretController < ApplicationController
  def index
    if authenticated?
      render :nothing => true
    else
      response.headers["WWW-Authenticate"] = 'Basic realm="Controllr"'
      render :nothing => true, :status => 401
    end
  end
end
So then, in your user creation view and login view, you will have a script block which forces the logged-in / newly created user to log in. This example is for user creation (hence @user.password) and uses a modified version of Prototype (the de facto Rails JS library) to perform the request. I had to modify Prototype to add support for the username and password parameters of the underlying XMLHTTPRequest object, despite the fact that they are widely supported in practice.
new Ajax.Request("<%= url_for :controller => 'sessions', :action => 'secret' %>", {
  username: "<%= @user.username %>",
  password: "<%= @user.password %>",
  method: "get",
  asynchronous: true,
  onComplete: function() {
    window.location = "<%= url_for :action => 'index' %>";
  }
});
Yes, this does send the username and password in plaintext over the wire :-). However, this is OK, since they are sent in the clear during signup anyway. I’m also currently using Basic authentication, which means that upon ANY login the username/password are vulnerable (albeit after Base64 decoding): I will probably change this at some point, but the Digest scheme I should be using requires a bit of server-side state to prevent replay attacks, so it is a little trickier to implement.
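To make the “albeit after Base64 decoding” point concrete, this is all the protection the Basic scheme gives your credentials on the wire (a Python illustration, with made-up credentials):

# Basic authentication just Base64-encodes "username:password"; anyone who can
# see the Authorization header can trivially reverse it. The credentials here
# are made up for illustration.
import base64

header = 'Basic ' + base64.b64encode(b'alice:letmein').decode('ascii')
print(header)                                     # Basic YWxpY2U6bGV0bWVpbg==
print(base64.b64decode('YWxpY2U6bGV0bWVpbg=='))   # b'alice:letmein'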
Right, I should probably try and get some revision done now :-). If only I could answer on Ruby on Rails instead of complexity theory…
XMLHTTPRequest + Authentication = Frustration
May 6, 2007
So I just spent the last 2 hours or so of my life buggering around with Ruby on Rails, trying to get it to do a RESTful login (i.e. one using HTTP Authorization headers, as opposed to the normal cookie stuff). There are some nice articles about pulling this feat off, such as here and here: the basic trick is to use XMLHTTPRequest to force the username/password from form fields into the browser’s authentication cache. However, it seems that if the resource your XMLHTTPRequest is trying to talk to never returns a 401 (Unauthorized) then XMLHTTPRequest never feels the need to send the Authorization header at all, even if you specify a username and password for it. I’m really at a loss as to why it has this bizarre behaviour, so I’m really hoping I’ve misdiagnosed it, but it’s looking unlikely.
This afternoon has been my first serious attempt to play with Rails, and the whole thing has been nothing but frustration! As well as the usual the-web-is-crap issues like the above, I’ve had to contend with documentation that is scattered over the Ruby and Rails websites, when it exists at all! Some of the stuff I’ve had to use (like the base.send :helper_method call to expose some things neatly to my views) seems vital but appears nowhere except as cursory mentions in changelogs. Furthermore, their habit of introducing breaking changes means some code examples I find don’t work without some obscure patching, and when things go wrong there is so much framework magic going on that I have a hard time debugging it! Hopefully this feeling will fade with time, as lots of other people seem to praise Rails to the heavens, but I can’t remember ever being this frustrated with a new technology :-).
Bitesize Functional Programming: Active Patterns
May 3, 2007
Welcome to the first in what I hope will be an ongoing series of blog posts on research in functional programming. I see a lot of really neat papers come and go that show the great things that are happening in this design space, and I’m hoping to open their discoveries to a slightly wider audience by providing what I hope are more readable descriptions of them for people who don’t have the time to trawl through 20 pages or so. By using examples liberally I hope I can provide an alternative to the sometimes rather dry language of the abstracts. Anyway, with that explained, we can get on with the show!
A really cute paper just came up on Lambda the Ultimate, about the implementation of a new feature in the .NET functional programming language F# called Active Patterns. The basic idea is to let you pattern match not just structurally, like this:
> data Animal = Armadillo String
>             | Mink String
>             | FunctionalProgrammer String Integer
>
> show (Armadillo name) = name
> show (Mink name) = name
> show (FunctionalProgrammer name iq) = name ++ ", IQ of " ++ (show iq)
But also FUNCTIONALLY, by letting you define your own match functions:
> (|Text|Element|) xmlNode = if xmlNode.ContentType = Text then Text (xmlNode.Text) else Element (xmlNode)
>
> match myXMLNodeInstance of
> Text s -> print ("Looks like a text node containing " ++ (show s))
> Element e -> print ("Hmm, it's something else!")
And actually it goes beyond this because your pattern matching functions can receive arguments before the data you actually want to pattern match on, or can be partial functions (i.e. fail to pattern match on some inputs). Their “curried” nature lets the authors of the paper define a function which lets them write a pattern matching on the Nth bit of an integer, and on that integer rotated N times, where N is potentially supplied at runtime. This clearly has great potential for expressiveness!
I think that’s quite enough about active patterns, but I must say that I’m quite tempted to go and add F# to my repertoire now :-). Coming up next time, possibly: data parallel Haskell (concurrency for free!) or constructor tagging (a cute optimisation for lazy functional languages).
Global Hotkeys With .NET
May 1, 2007
So, last year while working at Resolver Systems I worked with the author of Movable Python, which is a fairly neat application that lets you carry Python around on a USB stick. As an intellectual exercise I reimplemented the core functions in C# with a slightly less eccentric interface (no offence meant, Michael!), but was surprised to find I had to roll my own code to set up global hotkeys to run scripts, which was a nice feature of Movable I wanted to try and add.
The Win32 APIs for this are themselves pretty grungy (all sorts of nastiness with bit flags and message loops), but I think I’ve been fairly successful in abstracting that away in this simple class, which you can get from here. With this, you can set up a hotkey like this:
Hotkey hk = new Hotkey();
hk.KeyCode = Keys.D1;   // the '1' key
hk.Windows = true;
hk.Pressed += delegate { Console.WriteLine("Windows+1 pressed!"); };

if (!hk.GetCanRegister(myForm))
{ Console.WriteLine("Whoops, looks like attempts to register will fail or throw an exception, show an error/visual user feedback"); }
else
{ hk.Register(myForm); }

// .. later, at some point
if (hk.Registered)
{ hk.Unregister(); }
Obviously this is a bit of a kitchen-sink example to try and show every feature, but even then it’s a damn sight easier than what you would have to write to set up even basic usage manually with P/Invoke. If you are wondering about the requirement for a form parameter to Register and GetCanRegister, this is simply a requirement of the underlying APIs, probably due to the fact that they signal hotkey presses via the application message loop rather than via a callback.
I hope someone finds this useful, and to facilitate that process I hereby place all the code into the public domain to be messed around with to your heart’s content. Enjoy!
So, Does Anyone Even Use All These Darn CPU Instructions?
April 27, 2007
That’s the question I found myself asking earlier this month when I was writing a simple compiler for an OCaml dialect called MinCaml. I don’t know if you’ve ever taken a look at the Intel IA32 instruction references, but there are two volumes of detailed descriptions of all the instructions one of these CPUs provides: about 600 in total!
As this compiler backend was my first real go at writing out assembly (though I’d read quite a lot of it before) I found the task of potentially learning all these instructions extremely intimidating, and wished I had an assembly language “cheat sheet” where I could go to find out the most common (and hence probably the most useful) instructions. So, today I got around to writing a little program to use UDis86 to scan all the binaries in C:\Windows\System32 and my Cygwin /bin directory and spit out some instruction counts. Now, before we see the results let me first say that my sample has a bias in that it’s all precompiled stuff that will be designed to take advantage of only the lowest common denominator of the target CPU instruction sets. This means that the distribution below will probably be quite different than if you scan, say, your machine-specific Gentoo installation. Right, onto the big monstrous Excel pie chart:
I’ve also uploaded my raw data to my personal site, along with the program I used to get the results for anyone who is interested.
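If you fancy reproducing something similar without UDis86, here’s a rough sketch of the same counting idea in Python, driving the objdump that Cygwin ships. This isn’t the program linked above, just an illustration: the directory, file extensions and rather crude output parsing are all assumptions you’d want to adjust.

# A rough sketch of the instruction counting idea using objdump instead of
# UDis86; the paths, extensions and parsing here are assumptions.
import subprocess
from collections import Counter
from pathlib import Path

def count_mnemonics(directory, patterns=('*.exe', '*.dll')):
    counts = Counter()
    for pattern in patterns:
        for binary in Path(directory).glob(pattern):
            try:
                dump = subprocess.run(['objdump', '-d', str(binary)],
                                      capture_output=True, text=True,
                                      errors='replace')
            except OSError:
                continue  # objdump isn't on the PATH
            for line in dump.stdout.splitlines():
                # Disassembly lines look like "address:<TAB>bytes<TAB>mnemonic operands"
                parts = line.split('\t')
                if len(parts) >= 3 and parts[2].strip():
                    counts[parts[2].split()[0].lower()] += 1
    return counts

if __name__ == '__main__':
    totals = count_mnemonics('C:/Windows/System32')
    for mnemonic, count in totals.most_common(10):
        print(mnemonic, count)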
In the finest traditions of top 10 lists everywhere, let’s give a quick rundown of the most popular instructions:
- add: is anyone really surprised by this?
- mov: again, it’s fairly hard to get work done without moving data around
- push/pop: these instructions are commonly used for setting up/destroying function stack frames, hence their popularity despite IA32’s register orientation
- invalid: d’oh! These are from areas of code where udis86 just barfs :-). Their high ranking could mean a lot of instructions are in use that it can’t deal with, or (more likely) it is trying to disassemble executable regions of the binary that just don’t have anything useful in them
- cmp: the absolute bedrock of branching on IA32, but what is surprising is that it’s so much higher than jz. Are there really that many pure numeric comparisons in programs?
- dec/inc: I’m fairly surprised that dec beats inc here, especially since sub is so much lower than add in my data: does anyone have a theory?
- xor/or: xor no doubt clocks in so high because of the “xor reg, reg” idiom used to zero out registers, but or is quite popular too for something which is really only good for bit flags and the odd if expression…
Well that was fun, in a geeky sort of way! I’d be quite interested in knowing the reasons behind some of the oddities above: maybe another blog entry is in order…?
Edit: an experienced reverse engineer, t_w, commented on Reddit that my count for add was quite high, and that it might be caused by counting junk code. It turns out that the byte sequence 0x0000 disassembles to an add instruction, which skews my results since the null byte is common padding for executable .text sections. However, after correcting for this (the charts above have taken this into account) it still leaves add in first place (it had 83355055 counted occurrences before excluding this sequence, but still 45303805 after exclusion).