Register article on using non-randomness of encrypted file content to reduce time needed to decrypt by brute force

Peter Fairbrother zenadsl6186 at zen.co.uk
Fri Aug 16 19:29:07 BST 2013


On 16/08/13 09:37, Charles Lindsey wrote:
> On Thu, 15 Aug 2013 11:00:34 +0100, Brian Morrison <bdm at fenrir.org.uk>
> wrote:
>
>> Not seen this mentioned anywhere else yet:
>>
>> http://www.theregister.co.uk/2013/08/14/research_shakes_crypto_foundations/
>>
>>
>> Any opinions from those with direct knowledge of such techniques?
>>
> Isn't that more or less how Colossus worked?
>

Yes, and no, and yes.

Yes because Colossus used non-randomness of plaintext to help decrypt.

The paper, however, is about calculating how many guesses it takes to find
a solution *_when you order the guesses according to the probability that
they are correct_*.

So also no, because the Colossus machine mostly used non-randomness of
plaintext to detect when a (partial) solution had been found; but in, for
example, Tutte's "1+2" break-in, the individual tests were not ordered by
how likely they were to be correct - the machine just ran a batch of
tests in no significant order.

And also yes, sort of, because the Colossus process, as opposed to the
machine, also used non-randomness to order the choice of what to test
next - most likely first, then next most likely, and so on.



The authors don't introduce any new techniques in this paper, just some
new thoughts on estimating the number of guesses needed when the optimum
ordering strategy above - try the most likely guess first, then the next
most likely, and so on - is applied to non-uniformly distributed guess
spaces, where some guesses are more probable than others.

It seems that some approximations which may have been used to calculate
how many guesses are needed in those situations are "ill-advised".
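
For concreteness, here is the quantity being estimated, in a rough Python
sketch - my own made-up toy numbers, not the paper's data or methods. It
compares the expected number of trials when you test candidates
most-likely-first against the expectation for an arbitrary trial order, and
against a naive entropy-based "effective keyspace" figure (whether that is
one of the approximations the authors object to, I'm only guessing):

import math

def expected_guesses_ordered(probs):
    # Expected number of trials when candidates are tried most-likely-first.
    ordered = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

def shannon_entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Made-up toy distribution: one very popular choice plus 999 rare ones.
probs = [0.9] + [0.1 / 999] * 999
n = len(probs)

H = shannon_entropy_bits(probs)
print("candidates:                    %d" % n)
print("Shannon entropy:               %.2f bits" % H)
print("2^(H-1) 'effective' estimate:  %.2f trials" % 2 ** (H - 1))
print("random-order expectation:      %.1f trials" % ((n + 1) / 2))
print("most-likely-first expectation: %.1f trials" % expected_guesses_ordered(probs))

With numbers like these the entropy-based figure and the ordered-trials
figure are nowhere near each other, which is the sort of gap the estimates
are all about.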


This isn't significant when considering, e.g., a brute-force
ciphertext-only attack on a modern cipher with randomly-chosen keys -
there is no way to use the low entropy of the plaintext to order the
trial keys by the probability that they are correct, so the test keys
are effectively tried in random order and the old estimates hold.
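
A quick way to see why ordering buys nothing here: with N equally likely
keys the expected number of trials is (N+1)/2 whatever order you test them
in. A trivial sanity check (toy N, nothing assumed beyond uniform keys):

import random

# With N equally likely keys the correct key is equally likely to sit at
# any position in the trial order, so the expected trial count is (N+1)/2
# no matter how the trials are ordered.
N = 8
keys = list(range(N))
for order in (keys, list(reversed(keys)), random.sample(keys, N)):
    expected = sum(order.index(k) + 1 for k in keys) / N
    print(order, "-> expected trials:", expected)   # always (N+1)/2 = 4.5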


However, if you are brute-forcing, e.g., a user-chosen password drawn
from a highly non-uniform space, where some choices are much more likely
than others, then you can order the trials by likelihood; and if you do,
their result for the number of trials needed is lower than that given by
some other methods, implying that some theoretical password-type searches
are easier than we previously thought.
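
As a toy illustration - entirely made-up frequencies, nothing to do with
the paper's data - take a small "leaked" password frequency list, attack a
population of users who choose with those frequencies, and compare
most-likely-first ordering against an arbitrary ordering of the same
candidate list:

import random

# Made-up password frequency list: (password, number of users picking it).
freq = [("123456", 500), ("password", 300), ("qwerty", 120),
        ("letmein", 50), ("dragon", 20), ("zxcvbn", 10)]

population = [pw for pw, count in freq for _ in range(count)]

by_likelihood = [pw for pw, _ in sorted(freq, key=lambda x: x[1], reverse=True)]
arbitrary = by_likelihood[:]
random.shuffle(arbitrary)

def average_trials(order):
    # Mean number of guesses to hit each user's password, trying 'order' in turn.
    return sum(order.index(pw) + 1 for pw in population) / len(population)

print("most-likely-first: %.2f trials on average" % average_trials(by_likelihood))
print("arbitrary order:   %.2f trials on average" % average_trials(arbitrary))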


This has no significance for today's actual practical attacks on
passwords, because for real attacks on passwords we already know roughly
how hard it is to find a match from practical experience - but some
attacks which aren't done at present might be easier than we thought they
were, and could be more productive than we thought.


However, I doubt there will be many of those new situations, as the
approximations they have found to be ill-advised aren't widely used by
cryptographers.

Moreover, experienced cryptographers would likely take any results based
on them with a big pinch of salt anyway.

I know I would, for other reasons - how do you accurately find the
probability of each of a list of passwords being correct? the entropy of
a non-uniform password space? average or minimum case? the distribution
of guesswork? what's the entropy of English letters? - long before I
wondered whether some mathematical approximations were right or not.
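
To put a number on the last question: using one commonly quoted table of
approximate English single-letter frequencies (the exact figures vary by
corpus, so take them with the same pinch of salt), the entropy of a letter
and the expected number of guesses for a letter are already quite different
things:

import math

# One commonly quoted table of approximate English letter frequencies (%);
# figures vary by corpus, this is purely illustrative.
freq_percent = {
    'e': 12.70, 't': 9.06, 'a': 8.17, 'o': 7.51, 'i': 6.97, 'n': 6.75,
    's': 6.33, 'h': 6.09, 'r': 5.99, 'd': 4.25, 'l': 4.03, 'c': 2.78,
    'u': 2.76, 'm': 2.41, 'w': 2.36, 'f': 2.23, 'g': 2.02, 'y': 1.97,
    'p': 1.93, 'b': 1.49, 'v': 0.98, 'k': 0.77, 'j': 0.15, 'x': 0.15,
    'q': 0.10, 'z': 0.07,
}
total = sum(freq_percent.values())
probs = sorted((v / total for v in freq_percent.values()), reverse=True)

entropy_bits = -sum(p * math.log2(p) for p in probs)
guesswork = sum(i * p for i, p in enumerate(probs, start=1))

print("entropy of one letter:                 %.2f bits" % entropy_bits)
print("expected guesses, most-likely-first:   %.2f" % guesswork)
print("expected guesses, uniform 26 letters:  %.1f" % ((26 + 1) / 2))

And single letters are the easy case - getting comparable probability
estimates for whole passwords, which is what you would actually need, is
the hard part.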




It is a bit interesting - but it would have been much more so if they 
had introduced a methodology for actually calculating the guesses 
needed, rather than just showing this or that approximation is wrong.

And it doesn't shake crypto foundations much, if at all.


-- Peter Fairbrother


