A Simple Introduction to Public Key Cryptography

April 30th, 2012

I found this video today featuring Dr Yan Wong from the BBC. Whilst it is very short, the video does provide a nice, simple introduction to some of the ideas behind public key cryptography, which secures most e-commerce on the web. Definitely worth a watch if you don’t want to get into the messy details.

However, if you do want to get into the messy details of public key cryptography, I suggest perusing the Wikipedia article on the subject.

Infosecurity Europe 2012

April 26th, 2012

Yesterday I attended Infosecurity Europe 2012 and had a brilliant time. This was my first time going, but it will hopefully be the first of many. During the day I saw quite a few exhibits, talked with (and grilled) a few people about security, and of course grabbed as many freebies as possible.

The highlight of the day for me was meeting Bruce Schneier and getting a signed copy of his new book Liars & Outliers, which I look forward to reading and possibly reviewing on this blog.

I also met up with a couple of people from WhiteHat Security to talk about their business and what new things they were doing in the industry. They were very interested in my blog, and hopefully in the future we may share content, as they are looking to include guest writers on their own blog (which I highly recommend).

Infosecurity Europe is definitely a great place to go if you want to meet up with interesting people and stay on top of advancements in the industry, so be sure to mark it in your diaries for next year!

Blog Updates

April 13th, 2012

These are just a few quick updates to explain what I am doing with the blog.

Firstly, regular readers may notice that I have a new blog banner image, which was kindly designed for me by a friend who wishes to remain anonymous for now. I think it looks a lot better than the old-style header, and it makes the template I’m using a bit more unique.

Secondly, I’m going to change the format of the blog somewhat. Until now, I’ve mostly focused on long detailed articles, from explaining security concepts, to creating lists of recommended browser add-ons, to even attempting to refute academics and professionals. The problem is that these articles take a lot of time to research and write, and my free time for doing them is limited by both my work and my university studies.

On the other hand, I subscribe to a lot of security feeds and mailing lists, and will occasionally tweet about various things that I come across. So I’ve decided to adapt this habit, and instead of just tweeting about something, I will give it a short write-up on this blog. Sometimes I may just put a link and a short comment, other times I may write a couple of paragraphs. Whatever I do, you’ll still get to read some good content that you may have missed elsewhere on the web. I tried a similar thing back in January and February with “Cryptogasm Quickies”, but instead of doing a single post with multiple items, you’ll get a post per item.

This isn’t to say that I will never write in-depth articles again; on the contrary, I have a few that I am working on, but instead of the blog feeling inactive for days (and sometimes weeks) on end whilst I work on them, I will provide small amounts of content to keep you all up to date with various pieces of security news and views.

Thirdly and finally (and nothing to do with the blog), I am attending the Infosecurity Europe 2012 convention on Wednesday 25th April. If anyone else is going, let me know via Twitter and perhaps we can meet up for a drink.

On Password Strength

March 28th, 2012

If you haven’t already subscribed to the WhiteHat Security Blog then you should. They produce a steady stream of articles that are easy to understand, and often provide interesting insights into the security industry. However, with such a wide range of topics, mistakes can be made (or concepts overlooked), and it is one particular error that I want to discuss in a bit more detail here.

Founder and CTO of WhiteHat Security, Jeremiah Grossman, wrote an article about how to keep yourself safe online, and whilst 99% of the article is accurate and good advice, there is one section on making passwords hard to guess where I think Grossman has entirely the wrong idea:

Pick passwords that are hard to guess, not found in the dictionary, six characters or more in length, and sprinkle in a number or special character for good measure. Something like: y77Vj6t or JX0r21b

Whilst having a password that is not found in the dictionary is sound advice, I disagree with both the minimum length suggested and Grossman’s apparent meaning of “hard to guess”. From the examples given, and the suggested requirements for passwords, it seems that Grossman is trying to protect against a scenario where a malicious user performs a dictionary attack against some kind of login form for a specific user account.

This type of attack does not require the attacker to know any prior information about the target’s password; it simply tries various common passwords hoping for a match. The problem is, this type of attack is one of the least common, usually because it targets only one account at a time, and can easily be thwarted by a system that detects multiple bad login attempts and locks the account for a certain period of time.1
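
To illustrate the kind of defence I mean, here is a minimal sketch in Python (my own illustration, not taken from any particular product) of temporarily locking an account after a handful of failed logins:

    import time

    MAX_ATTEMPTS = 5              # failures allowed before locking
    LOCKOUT_SECONDS = 15 * 60     # how long the lock lasts

    failed = {}  # username -> (consecutive failures, time of last failure)

    def login_allowed(username):
        failures, last = failed.get(username, (0, 0.0))
        if failures >= MAX_ATTEMPTS and time.time() - last < LOCKOUT_SECONDS:
            return False          # account temporarily locked
        return True

    def record_result(username, success):
        if success:
            failed.pop(username, None)  # reset the counter on success
        else:
            failures, _ = failed.get(username, (0, 0.0))
            failed[username] = (failures + 1, time.time())

Even something this crude forces an online dictionary attack to crawl along at a handful of guesses per lockout window, which is why attackers prefer the offline approaches described below.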

Password Attacks 101

If you really want to get a user’s password (or multiple users’ passwords), your best bet is to sniff the network, exploit any trust the target user might have in you (or someone you know), compromise the user’s own system with malware that records their keystrokes, or breach the password database and crack the hashes (assuming the target system uses hashes). Out of these four attacks, by far the most common (and most well publicised) is cracking a list of stolen hashes.

The advantage of cracking such a list is that all the actual effort can be done on the attacker’s system, where there are no defences that can stall or thwart the attempt(s). Cracking a hash can be achieved either by employing the same dictionary attack I described above, or by a method known as brute-forcing. Whilst dictionary attacks are never guaranteed to work, brute-force attacks are (given enough time). This is because instead of relying on a pre-generated list of passwords, the brute-force attack goes further, actively generating all possible passwords and checking them against the given hash(es).

Since a lot of people are still terrible at choosing secure passwords,2 it is probably best to employ both these types of attack; first using a dictionary to weed out the weak choices, and then brute-forcing the rest.
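
As a toy illustration of that two-step approach (a deliberately simplified sketch: unsalted SHA-256 hashes, a tiny wordlist, and a short maximum length just to keep it readable; real crackers are enormously faster and smarter):

    import hashlib
    from itertools import product
    from string import ascii_letters, digits

    def crack(target_hash, wordlist, max_length=3):
        """Dictionary pass first, then brute-force short candidates."""
        # Dictionary attack: hash each word and compare.
        for word in wordlist:
            if hashlib.sha256(word.encode()).hexdigest() == target_hash:
                return word
        # Brute force: generate every alphanumeric string up to max_length.
        alphabet = ascii_letters + digits  # 62 characters
        for length in range(1, max_length + 1):
            for combo in product(alphabet, repeat=length):
                candidate = "".join(combo)
                if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                    return candidate
        return None

    # A stolen, unsalted hash of a (very) weak password:
    stolen = hashlib.sha256(b"ab1").hexdigest()
    print(crack(stolen, ["password", "letmein", "123456"]))  # -> ab1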

Password Haystacks

So how long would it take to crack the passwords suggested by Grossman? Well, according to Steve Gibson’s search space calculator, around 35.79 seconds for an offline cracking rig making one hundred billion guesses per second. Depending on the hashing algorithm being used, that time could be longer or shorter, but the point is, it’s not very long at all. This is where I disagree with Grossman’s meaning of “hard to guess”. For a human, a password like “y77Vj6t” would indeed be hard to guess, but for a computer, it is simple. There are only 7 characters involved, and each character is drawn from a very small set (62 possibilities, since only letters and numbers are used). That means that in the worst case, the attacker has to generate and check 3,579,345,993,194 (roughly 3.5 trillion) possible passwords. That may sound like a lot, but modern hardware can make tens or hundreds of billions of guesses a second, resulting in the roughly 35-second figure.
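
The arithmetic behind those figures is easy to reproduce. A quick Python sketch, assuming the one-hundred-billion-guesses-per-second rate that the 35.79-second figure implies:

    # A 7-character password drawn from letters and digits (62 choices
    # per character), checked at one hundred billion guesses per second.
    alphabet_size = 62
    max_length = 7
    guesses_per_second = 100_000_000_000

    # Count every candidate of length 1 up to 7.
    search_space = sum(alphabet_size ** n for n in range(1, max_length + 1))
    print(f"{search_space:,}")                                  # 3,579,345,993,194
    print(f"{search_space / guesses_per_second:.2f} seconds")   # 35.79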

Of course, not all attackers will have access to such hardware, but all that means is that the time required to crack a hash is a little longer, and as I explained before, time is not a major concern when the hashes are already stolen. To really create a password that is hard to guess (by humans and computers), you need to increase the amount of search space that a brute-forcing algorithm has to use. Steve Gibson uses the analogy of looking for a needle in a haystack: given enough time, you will find the needle, but the bigger the haystack, the less chance you will have of succeeding in your search within a certain time-frame. So it is with brute-force attacks. The more types of characters you use in your password, and the longer it is, the less chance that a brute-force attempt will find it in a reasonable period of time.

Creating a Strong Password

There are many different opinions on what strong passwords should look like, and there is obviously a lot of disagreement over various different “methods” for creating them. For some the issue is one of security vs. memorability, and there is a general belief that any password that is secure enough not to be brute-forced cannot be remembered easily either. I think this is patently and demonstrably false, and I shall share with you my method of creating extremely strong and easy-to-remember passwords. Firstly, let me define a new set of requirements that all strong passwords should comply with (a rough automated check of these is sketched just after the list):

  1. At least one of every type of character (lowercase and uppercase letters, numbers, and symbols).
  2. At least 12 characters in length.
  3. It should not be found in any dictionary.
  4. It should be unique. In other words, it should be something that nobody (not even yourself) has used before.
  5. It should not be based on, nor contain, any personal details.
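
Here is that rough automated check (an illustrative sketch only: it can verify the character mix and length, but dictionary words, uniqueness, and personal details still need human judgement):

    import string

    def looks_strong(passphrase):
        """Check requirements 1 and 2 from the list above."""
        has_lower = any(c in string.ascii_lowercase for c in passphrase)
        has_upper = any(c in string.ascii_uppercase for c in passphrase)
        has_digit = any(c in string.digits for c in passphrase)
        has_symbol = any(not c.isalnum() for c in passphrase)
        long_enough = len(passphrase) >= 12
        return all([has_lower, has_upper, has_digit, has_symbol, long_enough])

    print(looks_strong("y77Vj6t"))                 # False
    print(looks_strong("My cat owns 3 red hats!")) # True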

If you think those requirements would result in passwords like “Aj18!d#B6]0W”, then you have been taught to think about passwords in entirely the wrong way. Allow me to correct your thinking, with the following easy to remember and highly secure password:

I’m bathing in 34 fish, crikey!

This password (more accurately, a passphrase) has 1 uppercase letter, 20 lowercase letters, 2 numbers, and 8 symbols (counting spaces as such). It is 31 characters long, and although the individual words of the passphrase are found in dictionaries, the entire password is not. Finally, since it is a nonsense sentence, the chances of someone else having used it in the past are very slim indeed, and it does not contain any personal details. According to Steve Gibson’s search space calculator, a brute-force attempt that makes one hundred trillion guesses per second would take 65.53 trillion trillion trillion centuries to crack this passphrase.
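
Running the same sort of calculation as before, over the full 95-character printable set (26 + 26 + 10 + 33) and the hundred-trillion-guesses-per-second rate quoted above, lands in the same ballpark (a sketch; the exact figure depends on the calculator’s rounding and assumptions):

    alphabet_size = 95                        # upper + lower + digits + symbols
    max_length = 31                           # length of the passphrase
    guesses_per_second = 100_000_000_000_000  # one hundred trillion
    seconds_per_century = 100 * 365.25 * 24 * 3600

    search_space = sum(alphabet_size ** n for n in range(1, max_length + 1))
    centuries = search_space / guesses_per_second / seconds_per_century
    print(f"{centuries:.2e} centuries")       # on the order of 10**37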

The Science Bit

So this type of passphrase is very strong, and I hold that it is also very memorable, because it relies on the same techniques for improving memorability as mnemonics do. Remembering the 5 musical notes represented by the lines on a treble clef stave (EGBDF) is hard for most people, but almost everyone who has studied music will remember the helpful mnemonic “Every Good Boy Deserves Favour”. Science suggests that mnemonics are effective,3 and further research concluded that we remember humorous sentences better than non-humorous ones.4 Thus, in my opinion, a humorous nonsensical sentence is ideal as a secure and memorable passphrase.

I have proposed this before in various discussion groups, and the feedback has more often than not been positive. However, I am of course open to criticisms, and I shall go into more detail about possible objections to this method in a separate article. The real test would be for you to start using these sorts of passphrases in your daily life and report back your findings. Were they easy to remember? How long did you make them on average? How many could you recall before you started running into trouble? Any feedback will be interesting to hear.

My own prediction is that even hours after you have finished reading this article, you will still be able to remember the passphrase I generated a few paragraphs ago.

Update (30/3/2012): Added a 5th requirement for strong passwords, concerning the inclusion of personal details.

Prof. Alan Woodward is Wrong; The Internet is Fine

March 14th, 2012

I was reading the BBC News website the other day, and I stumbled on an opinion piece entitled “The internet is broken – we need to start over”. The author, Professor Alan Woodward (Department of Computing, University of Surrey), argues that we need to totally rethink the Internet, because it wasn’t designed with security in mind and will always be prone to vulnerabilities. I respectfully disagree with the professor; despite claiming to understand the root of the problem, he seems to have gravely misunderstood the technologies involved.

The Blame Game

Firstly, I am not denying that there are vulnerabilities on the Internet; the examples of exploits that Prof. Woodward gives in his article (identity fraud and cyber-attacks from hacktivists) are both valid and important. No, my main point of contention is with where Prof. Woodward places the blame for these vulnerabilities: IP. In the article, Prof. Woodward asserts:

We need to understand the root of the problem.

Those who designed the Internet Protocol (IP) did not expect that someone might try to intercept or manipulate information sent across it.

Whilst this is true, Prof. Woodward is effectively doing the digital equivalent of “shooting the messenger”. IP is not to blame for identity fraud, and could hardly be blamed for cyber-attacks either. Identity fraud is often committed with the help of phishing websites, which, whilst being transmitted to the victim over IP, are only effective once loaded into a web browser (and interacted with by the victim). Replacing or changing IP will not stop these websites, or reduce their effectiveness.

In regard to cyber-attacks, Denial of Service (DoS) attacks such as ICMP flooding are carried out over IP, but are not reliant on it, and could be easily adapted to almost any other protocol aimed at replacing IP. For instance, a real-world analogy to a typical DoS attack would be to repeatedly phone up a pizza company and order hundreds of pizzas to false addresses, wasting the pizza company’s time and money. Is the answer to scrap the phone system? Hardly. A better solution would be for the pizza company to have more checks in place to reduce the attack vectors that are available.

Secure Protocols Do Not Imply Secure Applications

Prof. Woodward continues his article by mentioning some of the protocols that are built on top of IP and add security to the Internet. However, he misunderstands the relationship between those protocols and the applications that are served across them.

And, yes, many of these technologies included the ability to secure the data that is being transmitted over the internet. All will have used one of these ‘secure’ technologies, most usually when buying something over the internet.

But, stop and ask yourself this, if it is ‘secure’, why are there so many successful attacks?

One of the technologies that I presume he was talking about is TLS/SSL, which is capable of both encrypting data sent across the Internet and detecting if that data has been altered along the way. In the case of the web, these protocols are in use when you visit a site using HTTPS (i.e. most, if not all, banks and online shops). The cryptographic constructions underpinning these protocols have been subjected to rigorous mathematical analysis.

So, to answer the professor’s question, there are so many successful attacks because the attacks aren’t on the protocols, but on the applications that communicate over them. ICMP floods don’t attack IP itself; they use IP as a transport mechanism and attack the server they are sent to, just as one might use the phone system to transmit one’s voice and “attack” a pizza company with fake orders. Cyber-attacks like Cross-Site Scripting (XSS) and SQL Injection rely on vulnerabilities in the web application, and have absolutely nothing to do with IP, TCP, or even HTTP.
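
To make that distinction concrete, here is a toy Python/SQLite sketch (my own illustration): both the flaw and the fix live entirely in the application’s code, far above IP, TCP, or HTTP:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    name = "' OR '1'='1"  # attacker-supplied input

    # Vulnerable: the input is pasted straight into the SQL string,
    # so the attacker's quote characters rewrite the query itself.
    leaked = conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()
    print(leaked)  # [('hunter2',)] -- every row matches

    # Fixed: a parameterised query treats the input purely as data.
    safe = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
    print(safe)    # [] -- nobody is literally named "' OR '1'='1"

No change to IP (or any replacement for it) would make the vulnerable version safe; only the application developer can.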

Who Builds the House?

Prof. Woodward further argues that users shouldn’t be expected to change their behaviour to “fix” the Internet, since this wouldn’t fix the problem.

It is unreasonable to expect users in general to understand complex technologies to the degree necessary to ensure they operate securely over the internet.

It’s analogous to a house. By default a house should be built to allow it to be occupied safely.

If you chose to start knocking down walls then it is your fault if the house collapses. But if the foundations of any structure are unsound, no matter how strong or unmodified the building on top, there is always a significant risk of safety being undermined through no fault of yours.

I take issue with this analogy. IP isn’t some big static thing that surrounds you and protects you, and it never should be. IP is a simple protocol that lets you send your data over the Internet, and is more analogous to an envelope than a house. We don’t (or at least, shouldn’t) put sensitive data inside envelopes, since we know that envelopes are easily intercepted and opened.

A house, however, is a perfect analogy for an application on the Internet, and in this regard I completely agree with the rest of the professor’s example. My question is, who builds the house? Not the user, but the builder. It isn’t up to the user to secure the applications on the Internet, but it is up to those who create them. If IP really were to blame for the vulnerabilities and attacks on the Internet, then why are only certain companies / applications affected? IP may be responsible for transporting the attacks, but I hold that any replacement protocol would suffer the same fate.

The Internet Lives

The last part of Prof. Woodward’s article focuses on whether governments should regulate all or part of the Internet. That is a separate discussion for another article, and I won’t comment on it here. I think I have shown that the professor is wrong in many aspects of his argument, from his blaming of IP to his confusion of protocols and applications. IP isn’t perfect at security, but it was never designed to be, and never needs to be. We have protocols that can add security where and when it is needed, but we should not rely on them alone. Application developers need to be held accountable for the security vulnerabilities that they let into their code, as they, ultimately, are the ones responsible for security on the Internet.