
Archive for the ‘Web Security’ Category

Exposed Webcam Viewer Removal

May 28th, 2013

TL;DR: This is a large wall of text that explains why I removed the exposed webcam viewer from my site. The short version is: excessive workloads and shoddy journalism. The long version is below.

A little over a month ago, I removed the Exposed Webcam Viewer from this site. Not only that, but I removed all the articles I’d written about it, and effectively scrubbed the cache of it from Google. I did all this pretty much silently, apart from a comment I made from my reddit account in response to concern that I’d either been arrested or otherwise forced to take down the site.

However, it seems the Exposed Webcam Viewer had far more fans than I originally thought, and I’m still getting emails from people asking why it was taken down. Hopefully this short post will bring some closure to the whole thing.

There were a multitude of reasons why I took the viewer down. None of them were related to law enforcement or other government agencies forcing my hand (in fact, I haven’t had any contact from a single one). The main reason the site came down was that it was simply too much work to maintain effectively. I know a lot of people appreciated the easy-to-use interface (even if it wasn’t the most aesthetically pleasing), but what most people were unaware of is how much work went on behind the scenes.

Firstly, there was finding new content. Although I initially used automated scripts to discover webcams online, these became less successful over time. As a result, a lot of the cameras I added were found through a fair amount of manual effort.

Secondly, there was the matter of keeping content up to date. Cameras would quite often go offline when owners turned them off or they were otherwise disconnected. Although around 9,000 cameras were shown as online in the viewer, there were almost 40,000 URLs in the database that had to be checked on a regular basis. Again, this was achieved via automated scripts (initially via curl, and later with my own code), but it was still a pain to ensure that these had all run correctly. In addition, at times when the site was under heavy load, almost all the cameras would be knocked offline due to the sheer number of people trying to connect to them. When this happened, I had to wait until the site was less active before running the scripts (otherwise a number of webcams that were only temporarily down would be erroneously marked as offline). As you might imagine, this scenario happened quite often as the site became more and more popular.
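For the curious, there was nothing exotic about these checks: one HTTP request per URL, with a timeout. The sketch below is purely illustrative (it is not the original script, and the function name is made up); it uses PHP’s curl bindings, and a real run would loop it over the whole URL table and skip checks while the site itself was under heavy load.

<?php
// Illustrative liveness check for a single camera URL, using PHP's curl
// bindings. Not the original script; just the general idea.

function camera_is_online($url, $timeout = 10)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);          // we only need the status code
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // don't echo the response body
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    // Anything in the 2xx/3xx range (or a 401 from a camera demanding a
    // login) counts as "reachable"; everything else is treated as offline.
    return ($status >= 200 && $status < 400) || $status === 401;
}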

Thirdly, whilst a lot of people wanted new or improved features for the interface, development was a very slow process, given my standards for secure programming (on the plus side, the application was hardened against pretty much all web-based attacks).

Finally, and possibly most importantly, there was the general level of abuse I saw. Granted, quite a lot of people were very good about emailing me when they found a camera pointed at a child, or in a bedroom somewhere, and those feeds were of course removed as quickly as possible. However, a number of feeds seemingly slipped through, and the final straw was when a foreign news channel (who shall remain nameless) decided to do a story on the viewer, featuring a feed of someone’s bedroom and linking to the feed on their website. All this was done without any attempt to contact me (either for an interview, or to let me know about the feed). It was a pathetic attempt at journalism; it missed the entire point of the viewer (which was to warn people about insecure cameras, not to be some kind of tool for voyeurism), and it broadcast an innocent couple in bed. The webcam page was viewed over 75,000 times before I spotted it and took it down.

Suffice it to say, after that incident I was left with a bad taste in my mouth, and I decided to shut down the viewer before someone else could do any more damage. No, there aren’t any immediate plans to bring it back up again; I’m quite enjoying all the free time I now have to work on other projects. I won’t say anything definite though; times change, and maybe my mind will too. So, watch this space I guess.

Ten Tips For Securing Your Web Applications

September 30th, 2012

Web applications are notoriously insecure. With more of us migrating to web-based technologies, ridding the web of these insecurities becomes a top priority. Here are ten tips that should help you secure your web applications.

1. Send all confidential data over a secure connection.

At the very least, send user credentials (i.e. username and password) over HTTPS. Ideally, send all data over HTTPS, especially when your apps are dealing with large amounts of personal information. There are almost no excuses for not using HTTPS these days, especially when buying an SSL certificate is so cheap. Be aware that if you choose to send only the credentials over HTTPS, your web application will still be susceptible to session hijacking attacks, since the session cookie can be stolen from the unencrypted traffic.
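As a rough illustration, here is one way to push every request onto HTTPS and keep the session cookie off plain HTTP in PHP. It is a minimal sketch that assumes a typical setup where $_SERVER['HTTPS'] is populated on secure connections; a server-level redirect plus an HSTS header achieves the same thing more robustly.

<?php
// Minimal sketch: redirect plain-HTTP requests to HTTPS and mark the session
// cookie as secure, so it is never sent over an unencrypted connection.

if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}

// Secure (HTTPS-only) and HttpOnly (no JavaScript access) session cookie,
// configured before the session starts.
session_set_cookie_params(0, '/', '', true, true);
session_start();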

Never send any confidential data in an email, especially password confirmation emails. Email is not a secure method of communication, and it probably never will be (PGP is not widely used at all). When dealing with passwords, always let users set their own, rather than generating one for them. That way, you never need to send the password in an email, since the user already knows what it is.

2. Encrypt confidential data before storing it.

If your web application stores users’ credit card numbers or other confidential data, make sure this data is encrypted in whatever storage medium you are using. If your web application needs to access the data, it should be decrypted into memory and the plaintext copy discarded as soon as it is no longer needed. At no point should the unencrypted data be written to any permanent location.

Additionally, the key(s) used for encryption / decryption should not be stored in the same location as the encrypted data. This is to minimize damage if the storage medium is compromised (for instance, if hackers gain access to a database containing encrypted data, the decryption key should not also be compromised).
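A hedged sketch of that kind of approach follows: symmetric encryption via PHP’s OpenSSL bindings, with the key read from a file outside both the web root and the database. The path and helper names are made up for illustration, and real key management (rotation, access control, hardware modules) is a much larger topic.

<?php
// Illustrative only: AES-256-CBC encryption of a single field, with the key
// kept in a file rather than alongside the ciphertext in the database.

$key = file_get_contents('/etc/myapp/secret.key');   // hypothetical path, NOT stored in the database

function encrypt_field($plaintext, $key)
{
    $iv = openssl_random_pseudo_bytes(16);            // fresh IV for every value
    $ciphertext = openssl_encrypt($plaintext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
    return base64_encode($iv . $ciphertext);          // store the IV alongside the ciphertext
}

function decrypt_field($stored, $key)
{
    $raw = base64_decode($stored);
    $iv = substr($raw, 0, 16);
    return openssl_decrypt(substr($raw, 16), 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
}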

3. Salt and hash all passwords in the database.

This is possibly one of the most important things a web application designer can do for user security, yet again and again we see large organizations and companies either ignoring or misunderstanding the importance of salting and hashing passwords.

There are absolutely no excuses for not salting and hashing passwords. Your web application should never be able to retrieve a user’s password, either for a comparison or for sending to the user in case they forget it. When the user first registers, their password should be concatenated with a salt (some unique random string of characters) and then hashed with a strong hashing algorithm (SHA-256 for example). PHP has a built-in function called crypt() that supports numerous hashing methods.
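As a minimal sketch of the scheme described above, the snippet below uses crypt() in its SHA-256 mode with a random per-user salt; verification simply re-hashes the supplied password and compares. The helper names are hypothetical, and dedicated password-hashing schemes such as bcrypt (also available through crypt()) are generally a better fit for passwords.

<?php
// Illustrative salt-and-hash using PHP's crypt() in SHA-256 mode (the '$5$'
// prefix). crypt() embeds the salt in its output, so only the hash is stored.

function hash_password($password)
{
    $salt = bin2hex(openssl_random_pseudo_bytes(8));  // 16 characters of random salt
    return crypt($password, '$5$rounds=10000$' . $salt . '$');
}

function verify_password($password, $stored_hash)
{
    // Re-hash with the salt embedded in the stored hash and compare;
    // the original password is never recoverable from what is stored.
    return crypt($password, $stored_hash) === $stored_hash;
}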

Blocking The Pirate Bay

May 9th, 2012

Just over a week ago, the High Court in the UK ruled that ISPs in the country must block access to the notorious file-sharing site The Pirate Bay. Since that ruling, only Virgin Media has complied with the demand, and that resulted in their website being taken offline after Anonymous targeted it with a DDoS attack. There are more serious problems for ISPs enforcing this ruling than hacktivist retaliation, though; blocking access to something on the Internet is very hard indeed.

DNS Filtering

When you type in the web address for The Pirate Bay (https://thepiratebay.se), the first thing your browser does is send a query to a DNS server to translate the domain name (thepiratebay.se) into an IP address (194.71.107.15). Without this vital step, your browser is unable to make any connections to The Pirate Bay at all, so one way to block access is to have the DNS server respond with a fake or invalid IP address. All ISPs run their own DNS servers, and these are usually set as the default in home routers, so this is easy to do. However, that default can be overridden, sometimes on the router itself, but also on your home computer. To get around this type of block, you simply have to tell your computer to get The Pirate Bay’s IP address from somewhere else, such as a third-party DNS server.
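To make the resolution step concrete, the snippet below (purely illustrative) shows the name-to-address lookup the browser depends on; the filter only works for as long as the resolver your machine asks is the one telling the lie.

<?php
// Illustrative: the name-to-address step described above. gethostbyname()
// asks whatever resolver the operating system is configured to use
// (by default, the one handed out by the ISP's router).
$ip = gethostbyname('thepiratebay.se');
echo "thepiratebay.se resolves to $ip\n";

// If the ISP's resolver returns a fake answer, pointing the OS (or the
// router) at a third-party resolver, or adding a static hosts-file entry,
// restores the real address; nothing else about the connection changes.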

IP Filtering

The other popular method for blocking content is to block connections to the IP addresses themselves. The ISP will collect all IP addresses that correspond to the site that needs to be blocked, and when connections to that IP are detected, they are either dropped, or routed somewhere else. In the case of Virgin Media’s block of The Pirate Bay, it seems that this is the method they are using, with all traffic destined for The Pirate Bay’s IPs being routed to Virgin Media’s servers instead.

The main problem with this type of filtering is that it also blocks any other websites that are hosted at the same IP address. This isn’t an issue with The Pirate Bay, who own and operate their own IP addresses and have dedicated servers, but could be if this type of blocking is widely used in the future. For instance, this blog is hosted on a dedicated server along with several other websites, one of which is my personal site (adrianhayter.com). If some ISP were to decide that cryptogasm.com needed to be blocked, and they blocked its IP address, then access to adrianhayter.com would also be blocked. That’s not good at all.
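A quick way to see this collateral-damage problem in practice is to compare the addresses that two co-hosted sites resolve to (illustrative only; the output naturally depends on where the sites happen to be hosted when you run it):

<?php
// Illustrative: two sites hosted on the same server resolve to the same IP,
// so an IP-level block aimed at one silently takes down the other as well.
$a = gethostbyname('cryptogasm.com');
$b = gethostbyname('adrianhayter.com');

if ($a === $b) {
    echo "Both sites answer at $a; blocking that IP blocks them both.\n";
} else {
    echo "These sites currently resolve to different addresses ($a vs $b).\n";
}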

Although IP filtering is harder to get around than DNS filtering, it is still possible by using proxies.

Proxies

Proxy servers (proxies) are machines dotted around the Internet that forward your requests and relay the responses back to you. As long as the proxy’s ISP isn’t blocking the content you seek, you will be able to access it. There are many proxy servers out there on the Internet, including ones that have been set up specifically to counter the blocks on The Pirate Bay.

There are some problems with using proxies though; the main one being that they tend to be much slower than accessing the site directly. However, this is a small price to pay to avoid censorship. The good thing about proxies is that they can be based anywhere, so blocking access to them becomes almost impossible, as new ones emerge all the time. The one thing ISPs can do to counter the use of proxies is deep packet inspection.
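As a rough sketch of what “forwarding requests through a proxy” means in practice, here is a curl-based request routed via a proxy; the proxy host and port below are placeholders, not a recommendation of any particular service.

<?php
// Illustrative sketch: fetching a page through a proxy with PHP's curl
// bindings. The proxy address is a placeholder, not a real service.

$ch = curl_init('https://thepiratebay.se/');
curl_setopt($ch, CURLOPT_PROXY, 'proxy.example.com:8080');  // hypothetical proxy
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

$page = curl_exec($ch);
if ($page === false) {
    echo 'Request failed: ' . curl_error($ch) . "\n";
}
curl_close($ch);

// The only connection the ISP sees is the one to the proxy; if that
// connection uses HTTPS, deep packet inspection cannot read the contents.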

Deep Packet Inspection

When handling the packets of data that you send or request, your ISP normally only inspects the headers in order to route them to the correct destination. However, ISPs can inspect the bodies of the packets as well, and with the right analysis could detect whether the content inside came from The Pirate Bay. Luckily, this technique is easily mitigated by using HTTPS, which means that all data transmitted between you and The Pirate Bay (or the proxy) is encrypted.

So, if you are using a proxy to gain access to The Pirate Bay, or another blocked website, make sure that the proxy itself supports HTTPS (usually denoted by a padlock or green tick in your address bar). The two proxies I listed above both support it, so they should be fine to use.

Solving Piracy

Don’t get me wrong, I don’t condone piracy, but I also don’t think the solution to it lies in blocking good websites (The Pirate Bay has a lot of legal content, as do other torrenting sites). In my opinion, the main reason people pirate things is that it is so easy to do. People do not mind paying for things, but they want to pay for them on their own terms, which is why services like Spotify and Netflix are so popular.

The solution to piracy is for the copyright owners to embrace change, and to start services of their own, which allow their customers to buy a single song rather than the entire album, or a few episodes of their favourite TV show and not the box set. This popular cartoon by The Oatmeal lays the argument out quite neatly.

Update (25/5/2012): The Pirate Bay recently announced a new IP address which they can be reached at: 194.71.107.80.

HTTPS Everywhere Updated (Now Available for Google Chrome)

February 29th, 2012


I’ve mentioned the HTTPS Everywhere add-on for Firefox a couple of times on this blog. The add-on, developed by the Electronic Frontier Foundation (EFF), attempts to boost your security whilst browsing by sending as many requests as possible over HTTPS. Yesterday, version 2.0 of the add-on was released, with a new feature that “detects encryption weaknesses and notifies users when they are visiting a website with a security vulnerability”. Using data obtained through EFF’s SSL Observatory project, combined with new research into weak public keys, the add-on can now warn people about potential security problems with their connection to various websites.

Until recently, it was impossible for Google Chrome extensions to intercept an HTTP request, and so a Chrome extension was not developed (although there are alternatives). With the new WebRequest API, however, a beta version of HTTPS Everywhere has now been released! Both the updated Firefox add-on and the new Google Chrome extension are available here.

Top Ten Web Hacking Techniques 2011

February 15th, 2012

WhiteHat Security is holding a competition to find the top ten web hacking techniques of 2011. There are 50 entries in the competition, and there is an open ballot to vote for the top 15. Those 15 finalists will then be judged by a panel of security experts, who will announce the winners.

Voting is open to anyone until 26th February.