Heartbleed Is Not A Big Deal?

Posted on April 17, 2014 by in Cyber Attacks, Data Analysis, Threats and Attacks, Vulnerability Management

As of this morning we have observed 224 breaches related to CVE-2014-0160, the Heartbleed vulnerability. More than enough has been said about the technical details of the vulnerability, and our own Ryan Huber covered the details a few days ago. I want to talk about the vulnerability management implications of Heartbleed, because they are both terrifying and telling.

The Common Vulnerability Scoring System ranks CVE-2014-0160 as a 5.0/10.0. An astute observer will note that the National Vulnerability Database is not all that comfortable ranking the vulnerability that broke the Internet a 5/10. In fact, unlike any other vulnerability we’ve seen in the system, there is an “addendum” in red text:

 “CVSS V2 scoring evaluates the impact of the vulnerability on the host where the vulnerability is located. When evaluating the impact of this vulnerability to your organization, take into account the nature of the data that is being protected and act according to your organization’s risk acceptance. While CVE-2014-0160 does not allow unrestricted access to memory on the targeted host, a successful exploit does leak information from memory locations which have the potential to contain particularly sensitive information, e.g., cryptographic keys and passwords. Theft of this information could enable other attacks on the information system, the impact of which would depend on the sensitivity of the data and functions of that system.”

So what does this mean for your organization? How should you prioritize the remediation of Heartbleed vs other vulnerabilities? NVD’s answer is “think about what can be stolen.” The problem here is that the CVSS environmental metric, which is used to account for an organization’s particular environment, can only reduce the score. So we’re still stuck at a 5. Why?

CVSS is failing to take into account quite a few factors:

1. It’s a target of opportunity for attackers:

The number of sites affected by the vulnerability is unfathomable, with broad estimates ranging from 30% to 70% of the Internet.

2. It’s being actively and successfully exploited on the Internet:

We are logging about 20 breaches every few hours. The rate of incoming breaches is also increasing; on April 10th we were seeing only 1-2 breaches an hour. Keep in mind this is just from the 30,000 businesses that we monitor - not 70% of the Internet.

3. It’s easy to exploit:

A Metasploit module exists, as does exploit code on Exploit-DB.

We already knew Heartbleed was a big deal – this data isn’t changing anyone’s mind. The interesting bit is that Heartbleed is not the only vulnerability to follow such a pattern. Of all the breached vulnerabilities in our database with a CVSS score of 5 or less, Heartbleed is the fifth most breached (that is, fifth by number of breach instances recorded).

The others that CVSS is missing the boat on, in order of descending breach volume, are:

1. CVE-2001-0540 - Score: 5.0

2. CVE-2012-0152 - Score: 4.3

3. CVE-2006-0003 - Score: 5.1

4. CVE-2013-2423 - Score: 4.3

Two of these are terminal denial-of-service vulnerabilities, and two are remote code executions. The common thread is that all of them have a network access vector, require no authentication, have exploits available, affect a large number of systems, and are currently being breached.
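To make that concrete, here is a minimal, hypothetical sketch of a prioritization that weighs these attacker-relevant signals (exploit availability, unauthenticated network access, observed breach volume) instead of relying on the CVSS score alone. The field names and weights are illustrative assumptions, not any particular scoring scheme:

```python
# Hypothetical sketch: rank vulnerabilities by attacker-relevant signals
# rather than by CVSS score alone. Field names and weights are illustrative.

def remediation_priority(vuln):
    """Return a sort key; higher means fix sooner."""
    score = 0
    if vuln.get("has_exploit"):  # e.g. a Metasploit module or Exploit-DB entry
        score += 3
    if vuln.get("access_vector") == "network" and not vuln.get("auth_required"):
        score += 2
    score += min(vuln.get("observed_breaches", 0), 5)  # cap the breach signal
    return score

vulns = [
    {"cve": "CVE-2014-0160", "cvss": 5.0, "has_exploit": True,
     "access_vector": "network", "auth_required": False, "observed_breaches": 224},
    {"cve": "EXAMPLE-LOCAL-ONLY", "cvss": 9.3, "has_exploit": False,
     "access_vector": "local", "auth_required": True, "observed_breaches": 0},
]

for v in sorted(vulns, key=remediation_priority, reverse=True):
    print(v["cve"], "cvss:", v["cvss"], "priority:", remediation_priority(v))
```

On this toy data, Heartbleed sorts above a higher-CVSS but local, unexploited vulnerability, which is exactly the reordering the breach data argues for.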

Heartbleed IS a big deal. But it’s not the only one – there are plenty of vulnerabilities that have received far less press, are buried deep within PCI requirements or CVSS-based prioritization strategies, and are causing breaches today. It’s important to check threat intelligence feeds for what’s being actively exploited, to think like an attacker, and to have the same information an attacker has.

It’s also important to learn a lesson from this past week: while the press took care of this one, it won’t take care of a remote code execution on a specific version of Windows that your organization happens to be running. Just don’t say it’s not a big deal when a breach occurs via a CVSS 4.3. You’ve been warned.

The More You Know… (Heartbleed Edition)

Posted on April 9, 2014 by in Cyber Attacks, Industry, Risk I/O, Threats and Attacks, Vulnerability Management

Yesterday, the information security community was made aware of a critical vulnerability in some versions of OpenSSL, one of the most commonly used software “libraries” for secure Internet communications. When your web browser is connected via HTTPS (your less tech-savvy friends might refer to it as the “lock icon”), there is a high probability that OpenSSL is involved in your communication with that website. It is the job of software like OpenSSL to ensure that your communications are unreadable and unmodifiable by anyone who might be listening in, which is especially important for the communication of sensitive data.

It is important to note that this vulnerability affects nearly everyone, from small businesses to internet giants. I won’t go deep into the technical details of how the vulnerability works, which can be found at http://heartbleed.com/, but will instead talk about its impact and the steps Risk I/O is taking to keep your data safe.

In simplified terms, TLS/SSL secure communication requires a server to have a certificate and a private key. When your web browser connects to a server, it is given the certificate, which includes a special one-way key. The certificate is used to verify that the server is who it claims to be. The embedded one-way key is used to send messages that can only be read by someone with the server’s private key. An important point here is that even a legitimate client/user of the website should not be able to access the private key. For communication to remain secure, the private key must NEVER be readable by anyone except the server.
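As a rough illustration of that asymmetry (not the actual TLS handshake, which layers much more machinery on top), here is a small sketch using Python’s cryptography package: anyone holding the public key embedded in the certificate can encrypt, but only the private key can decrypt.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Illustration only: the certificate's embedded "one-way" (public) key lets
# anyone encrypt, but only the holder of the private key can decrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"session secret", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"session secret"
```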

When a webserver is started, it loads the private key into memory, allowing it to reference the key whenever it is needed to send or receive a message. The vulnerability, CVE-2014-0160 or “Heartbleed,” allows an attacker to read (somewhat random) portions of the server’s memory in chunks of up to 64KB. See where this is going? The private key is the most important thing protecting communication, and the vulnerability allows an attacker to read random bits of data right out of the server’s memory, which means that with enough tries they can assemble a complete map of everything, including the prized private key.
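For the curious, here is a rough sketch of the malformed heartbeat request at the core of the bug, following the framing in RFC 6520: a heartbeat message carries a type byte, a two-byte payload length, and the payload itself. A vulnerable server trusts the claimed length and echoes back that many bytes, even when almost no payload was actually sent. This is an illustration of the message structure only, not a working exploit.

```python
import struct

# Sketch of a TLS heartbeat request (RFC 6520). A heartbeat message is a
# type byte (1 = request), a two-byte payload length, and the payload.
# A vulnerable server trusts the claimed length and echoes back that many
# bytes from memory; a patched server checks it against what was received.

def heartbeat_request(claimed_length, payload=b""):
    hb = struct.pack(">BH", 1, claimed_length) + payload
    # TLS record header: content type 24 (heartbeat), version TLS 1.1, length.
    return struct.pack(">BHH", 24, 0x0302, len(hb)) + hb

benign = heartbeat_request(4, b"ping")   # honest: claimed length matches payload
leaky = heartbeat_request(0x4000)        # claims 16KB of payload but sends none
```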

As security practitioners ourselves, we have been working to mitigate the impact of this vulnerability. Our external load balancer software has been upgraded with the latest version of OpenSSL, which has fixed the bug. Unfortunately, because there is no way to detect whether the vulnerability has been used to steal private key information, we have also taken the step of revoking our old certificates and creating new private keys for https://www.risk.io. This change was made without user impact, and ensures that if a third party did gain access to our private key, they cannot use it to intercept or modify communications between Risk I/O and our users.

We monitor all communication in and out of our infrastructure, and we have no reason to believe any user data was intercepted during the short window in which we were vulnerable. That said, and in the interest of being proactive, we will begin requiring our users to change their passwords when they next log into Risk I/O.

In summary, this vulnerability sucked. Seriously sucked. The long term impact of CVE-2014-0160 remains unclear, but the prompt response from security-focused organizations has likely done a lot to mitigate what could have been a much more serious issue.

On Physical Security

Posted on March 31, 2014 by in Industry, Open Source, Remediation, Security Management, Threats and Attacks

Our mission at Risk I/O is to help businesses understand threats to their infrastructure, but as security practitioners we are interested in many forms of security, including physical. This blog post concerns something of particular interest to me: securing my office, and a nearly successful theft that was thwarted by a bit of hobbyist tech.

Risk I/O is an emerging tech company, and some of us work from home from time to time. I don’t have a car, so the garage is where I decided to set up my office. Because there would be some potentially valuable equipment (monitors/etc) in the garage, and because of my infosec background, physical security was an early consideration.

A quick YouTube search will show you how easy it is to open most automatic garage door systems with just a coat hanger. The technique involves making a hook on one end of the hanger, pushing it into the gap between the door and the top of the frame, and grabbing the emergency release. Bam, they’re in. The fix (née remediation) here turns out to be pretty simple: wrap a zip tie around the emergency release, which the hanger won’t have enough leverage to break. The emergency release still works as intended, just requiring a firmer pull.

The door opener itself is a relatively modern LiftMaster, which utilizes a rolling code system. This rolling code prevents potential thieves from monitoring the radio signal and replaying it to open the door. This is a good first step, but considering garage theft is relatively common, I became interested in thwarting more types of attack.

Thanks to a retired KegBot, I had a few Arduinos and Raspberry Pis at my disposal. These made a great platform to throw some tech at the problem. I ordered some simple door sensors, a PIR motion sensor, and a relay that could be used to open or close the door. Total cost (including the Arduino/Pi/misc) was about $75. After a few hours of coding, I had a mostly functional system that could detect whether the door was open, whether something was moving inside the garage, and open the door. The project code and some basic info is freely available on GitHub: https://github.com/rawdigits/garage-io.
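The real code lives in the repository above; purely as an illustration of the idea, a minimal sketch of the sensor-and-relay loop might look something like this (pin numbers, wiring, and behavior here are assumptions, not the project’s actual configuration):

```python
import time
import RPi.GPIO as GPIO

# Illustrative sketch of the core garage-io idea; the actual project is at
# https://github.com/rawdigits/garage-io. Pin assignments are arbitrary.
DOOR_SENSOR_PIN = 17   # magnetic reed switch: circuit closed when door is shut
MOTION_PIN = 27        # PIR motion sensor output
RELAY_PIN = 22         # relay wired in parallel with the opener's wall button

GPIO.setmode(GPIO.BCM)
GPIO.setup(DOOR_SENSOR_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(MOTION_PIN, GPIO.IN)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

def door_is_open():
    return GPIO.input(DOOR_SENSOR_PIN) == GPIO.HIGH

def motion_detected():
    return GPIO.input(MOTION_PIN) == GPIO.HIGH

def toggle_door():
    # Momentarily close the relay, the same as pressing the wall button.
    GPIO.output(RELAY_PIN, GPIO.HIGH)
    time.sleep(0.5)
    GPIO.output(RELAY_PIN, GPIO.LOW)

if __name__ == "__main__":
    try:
        while True:
            if door_is_open() or motion_detected():
                print("activity detected")  # a real system would alert a phone
            time.sleep(1)
    finally:
        GPIO.cleanup()
```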

Fast forward a year and I have been using this homemade garage system daily. My iPhone acts as the primary method of opening or closing the door. The security features seemed like an interesting bit of learning, but I assumed they would never be put to the test. A few weeks ago, they were!

At 4am on February 14th someone was able to activate the automatic door, which, by design, sets my iPhone into a frenzy. My first thought was “wow, there is some bug in my code and having it wake me up at 4am sucks.” I opened the URL for the garage camera on my phone and sure enough the door was wide open! There was no one visible, so I immediately ran out to see what might have happened. San Francisco was asleep and there was no one around. Maybe it was a bug after all? I did a quick inventory and determined nothing was missing, but decided to check the still images captured by the camera.

Here are a few of those images:

I never actually saw the thieves, but I think they must have been lurking around a corner, waiting for the automatic light to turn off before pilfering the garage.

The next step was incident response. How the hell did they get in? In my initial assessment, I hadn’t noticed the wires that split off from the physical button inside the garage and ended at a “key switch”. This 40-year-old key switch uses a 3-tumbler lock that, when turned, is the same as pressing the button. A closer look revealed that it was so worn out that you didn’t even need the proper key to turn it. Facepalm.

The moral of this story is that you should play with Arduinos and Raspberry Pis, because it will pay off in not having some valuable items stolen. (Ok, perhaps that’s a bit far-fetched, but if you have the time, they are really fun.)

The real takeaway here is that security is hard, and there is no such thing as perfect security. Despite your best efforts, there are often a number of variables at play which might be overlooked. Monitoring is sometimes viewed as a low priority, but as cases like this show, it may just save you from a devastating breach.

P.S. I later learned that these thieves were successful in stealing from over 20 garages in the neighborhood over a one week period. Hopefully mine will continue to elude them and any future attackers.

A Simplified Interface, Perimeter Scanning & A Free Risk Profile (Oh My!)

Posted on March 11, 2014 by in Feature Release, Launch, Remediation, Risk I/O, RiskDB, Threats and Attacks, Vulnerability Assessment, Vulnerability Intelligence, Vulnerability Management

The Risk I/O Team is excited to announce the latest release of our vulnerability threat management platform. In this release, we’ve updated the user interface, and made vulnerability scanning available for perimeters too. You can also now create a free risk profile on any technology.

The latest release of our platform includes:
Simplified User Interface - As you may have noticed, we recently announced our new and completely streamlined interface. In this updated interface, you now have all of your assets, vulnerabilities and patches in a single searchable and filterable view. This makes it dead simple to identify the issues that are most likely to cause a breach and to see how to quickly address them. Each patch is listed in a “bang for your buck” order based on risk reduction (a rough sketch of this kind of ordering appears after the feature list below).

Bundled Perimeter Scan – Need to understand your likelihood of a breach in your perimeter, but lack a vulnerability scanner? Risk I/O now bundles a perimeter scan with the service, allowing you to understand your vulnerability and threat risks in real-time. Vulnerability data from the perimeter scan is also synced with Risk I/O’s threat processing engine. Powered by Qualys, the perimeter scan can be up-and-running within minutes, so you can start gaining visibility immediately.
Free Technology Risk Profile – Leveraging scoring technology from our Risk Meter and threat processing, we now offer a free risk profile on any technology. Simply search for a technology and find out its risk score and known vulnerabilities. This is available in our recently updated RiskDB, a free, centralized, and open repository of security vulnerabilities sourced from vulnerability databases. We’ll be continuing to expand RiskDB in order to offer even more insight, so check back often!
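As promised above, here is a rough, hypothetical sketch of the “bang for your buck” patch ordering: rank each patch by the total risk it removes across the open vulnerabilities it fixes. The patch names, groupings, and risk scores below are illustrative assumptions, not Risk I/O’s actual data model or algorithm.

```python
# Hypothetical "bang for your buck" ordering: rank each patch by the total
# risk it removes across the open vulnerabilities it fixes. Patch names,
# groupings, and risk scores below are illustrative only.

patches = {
    "example-openssl-update": {"fixes": ["CVE-2014-0160"]},
    "example-java-update":    {"fixes": ["CVE-2013-2423"]},
}

vuln_risk = {  # per-vulnerability risk scores, however they are derived
    "CVE-2014-0160": 9.4,
    "CVE-2013-2423": 7.8,
}

def risk_reduction(patch):
    return sum(vuln_risk.get(cve, 0.0) for cve in patch["fixes"])

for name, patch in sorted(patches.items(),
                          key=lambda item: risk_reduction(item[1]),
                          reverse=True):
    print(f"{name}: removes {risk_reduction(patch):.1f} risk")
```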

Take the new-and-improved Risk I/O for a spin today to better understand your security risk and prioritize what’s important. We think you’ll appreciate the time it saves you in vulnerability assessment and remediation. If you don’t already have a Risk I/O account, you can create one for free.

“Threat Intelligence” By Any Other Name: RSA 2014 Recap

Posted on March 4, 2014 by in Big Data, Data Science, Metrics, Open Source, Threats and Attacks, Vulnerability Database, Vulnerability Intelligence, Vulnerability Management

I’m told that every year RSA has a theme, and that this theme is predictive of the year to come for the information security industry. Sometimes that theme is hidden. Other times (such as last year), that theme is a race car engine with the words “Big Data” splattered all over it, jumping out at you on every corner.

At the Information Security Media Group RSA themes discussion during Day 3, Executive Editor of BankInfoSecurity Tracy Kitten remarked that the challenges she hears about today are decades old: “the challenge of siloed channels.” That is, institutions are still maintaining legacy infrastructure while investing in new technology. Couple her interpretation with the showroom floor buzz around threats, as well as the conference’s stated theme of sharing knowledge, and you have my take. This year’s theme was undoubtedly the proliferation and sharing of “Threat Intelligence.”

This means different things for different practitioners and vendors, so since the gathering and dissemination of “threat intelligence” runs through our company’s veins, I’ll share Risk I/O’s views.

Ryan Huber, our security architect, kicked things off with a talk at BSidesSF about his latest open source project, Bouncer. Sharing this tool with the community under an open-source license is an excellent example of the kind of sharing of methodology and data that information security professionals need in order to stay ahead of adversaries.

I gave a talk the following day at BSides about the fruits of correlating data from disparate data sources and the kinds of insight that this generates. My model of proper information security threat intelligence is a game-theoretic one, where information about attackers’ potential actions, in-progress attacks, successful attacks, and near misses informs our decision-making about vulnerability assessment and remediation. While still not a complete picture of the threat landscape, I used data from public vulnerability databases, the Metasploit Project, Exploit Database, and aggregated metadata from Risk I/O’s 2,000 enterprises, 1,500,000 live assets and over 70 million live vulnerabilities to assess the effectiveness of CVSS as a remediation policy. Add to the mix data coming in every hour from the Open Threat Exchange (OTX), which looks for indicators of compromise across 20,000 enterprises, and the results were less than stellar. The best-case scenario (remediating CVSS 10 only) yielded a 3.5% predictive value positive for breaches ongoing in the last two months.

A better strategy would be to take a look at Metasploit and Exploit Database. Remediating vulnerabilities with entries there yields almost a 30% predictive value positive while retaining over 40% sensitivity.
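For readers who want the metrics spelled out, here is a small sketch of how a remediation policy can be scored on those two axes: predictive value positive (of everything the policy tells you to fix, how much was actually breached) and sensitivity (of everything breached, how much the policy would have had you fix). The toy data below is purely illustrative; the percentages above came from the real breach dataset.

```python
# Sketch of the two evaluation metrics above, applied to a remediation policy.
# "policy" marks which vulnerabilities we would fix; "breached" marks which
# ones were actually involved in observed breaches. Data is illustrative.

def evaluate_policy(vulns, policy):
    remediated = {v["id"] for v in vulns if policy(v)}
    breached = {v["id"] for v in vulns if v["breached"]}
    hits = remediated & breached
    pvp = len(hits) / len(remediated) if remediated else 0.0        # precision
    sensitivity = len(hits) / len(breached) if breached else 0.0    # recall
    return pvp, sensitivity

cvss_10_only = lambda v: v["cvss"] >= 10.0
has_exploit = lambda v: v["in_metasploit"] or v["in_exploitdb"]

vulns = [
    {"id": 1, "cvss": 10.0, "in_metasploit": False, "in_exploitdb": False, "breached": False},
    {"id": 2, "cvss": 5.0,  "in_metasploit": True,  "in_exploitdb": True,  "breached": True},
    {"id": 3, "cvss": 7.5,  "in_metasploit": True,  "in_exploitdb": False, "breached": False},
]

print(evaluate_policy(vulns, cvss_10_only))  # (0.0, 0.0) on this toy data
print(evaluate_policy(vulns, has_exploit))   # (0.5, 1.0)
```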

Of note was the talk right before mine, by Trey Ford of Rapid7, which focused on the legislative realities of information security, and why it is so difficult for vendors and businesses to come together in information-sharing efforts. He called for a cultural change wherein vendors, businesses, and federal agencies alike share both data and research in an effort to stay ahead of attackers. The kinds of partnerships we have forged are the beginning of such efforts taking shape, and staying ahead of attackers with such data-driven insights is precisely what my talk followed up on.

Kymberlee Price from Blackberry shared a similar story at Metricon 9, where she spoke about attempting to correlate public vulnerability data in order to baseline her incident response practice on the vendor side. In the discussion that followed, folks from academia and business alike debated the propensity of institutions to share data about vulnerabilities or malware – and we saw some great examples throughout Metricon.

My talk at Metricon was largely centered on how we do data science operations here at Risk I/O, and how we intentionally simplify our process in order to allow for better transparency. We limit the complexity of the tools we use for data analysis so that the results can be reproduced back home. We use everyone on the team, from the CEO to marketing to dev, in order to generate data-driven insights. Everyone who works here is a data scientist.

You can get familiar with our data discovery and development process after the jump, as the transcript of the talk is written down. As always, reach out with any questions or comments right here or on Twitter.

Vulnerability Threat Management 2.0

Posted on February 20, 2014 by in Feature Release, Launch, Risk I/O, Security Management, Threats and Attacks, Vulnerability Intelligence


When it comes to managing your IT environment, there is often just too much to look at. As our Data Scientist Michael Roytman mentioned in his recent research paper, the biggest challenge isn’t finding security defects, but rather managing the mountain of data produced by security tools in order to fix what’s most important first. Well, our latest version of Risk I/O does just that, making the fixing of vulnerabilities much more efficient.

Risk I/O now offers a complete threat analysis showing the most likely entry points with a prioritized remediation list to quickly get to a lower risk of a breach. We indicate which of your assets are most at risk, so you know exactly where to start, saving you time and helping reduce your risk exposure. And all of this information is available to access and manage in one place.

With this latest release of Risk I/O you can now:

1. See associated assets, vulnerabilities, and a Quick List with your most critical remediations, all in a single click.

2. View real-time asset, vulnerability and patch data determined by your filter and search criteria.

3. Create groups based on asset or vulnerability filters and search criteria.

4. Bulk edit your vulnerabilities and assets.

5. Set priority to the most critical vulnerabilities so your team can take action.

6. Manage remediation directly within Risk I/O or through our integration with Jira.

7. Summarize any view of assets and vulnerabilities with a Risk Meter score and dashboard.

If you’re a Risk I/O user, you can now take the new version for a spin in your Risk I/O instance. Not a Risk I/O user? You can sign up for a 30-day trial of our complete vulnerability threat management platform now.

Measuring vs. Modeling

Posted on December 10, 2013 by in Data Science, Industry, Static Analysis, Vulnerability Intelligence

This month our data scientist Michael Roytman is featured in the USENIX Association’s journal alongside Dan Geer. Their article harkens back to our long-running theme of focusing on remediating the vulnerabilities which actually generate risk for your environment. Michael and Dan argue that using CVSS as a guide for remediation is not only ineffective at identifying vulnerabilities likely to be exploited, it is also a less cost-efficient way to run a security practice.

To quote from the article…

“Using CVSS to steer remediation is nuts, ineffective, deeply diseconomic, and knee jerk; given the availability of data it is also passé, which we will now demonstrate.”

Take a look at the article for yourself: https://www.usenix.org/system/files/login/articles/14_geer-online_0.pdf

What I Learned at BayThreat 2013

Posted on December 9, 2013 by in Big Data, Event, Industry, Open Source, Security Management, Static Analysis

BayThreat, an annual Bay Area information security conference, was this past weekend. As in years past, it was top notch and well organized. The conference returned to its old home, the Hacker Dojo, for this fourth incarnation.

Some highlights (in no particular order):

  • Nick Sullivan spoke on white box cryptography, and the lack of a current open source implementation. White box cryptography attempts to address situations where the attacker has already compromised a host, but you want to prevent them from making use of encryption keys. Nick outlined some techniques, caveats and examples of current implementations. He then announced the Open WhiteBox project, which aims to release an open source implementation of this style of crypto.
  • Allison Miller discussed using operations management paradigms to create risk models. Using (don’t call it big) data to find leading risk indicators allows you to focus on the variables that matter. She also covered using feedback loops to improve and adjust your model over time, keeping you responsive to new threats.
  • Scott Roberts explained how GitHub uses Hubot to manage many aspects of operations, including security. Having the company exist in a series of chatrooms allows everyone to be involved in responding to security incidents, something Scott compared to pair programming. GitHub has given Hubot a central role in management, and the bot is easily extensible, allowing others to customize it for their needs.
  • Finally, Nathan McCauley from Square presented on the challenges of deploying hardware cryptographic devices on the cheap. Square allows merchants to accept payments via a small hardware device that plugs into a smartphone or tablet. Creating such a device brought interesting challenges: no random number generator, only 256 bytes of memory, low power, and overseas production. The talk covered how Square addressed these during the design of their solution.

I also presented on surviving an application DoS attack. BayThreat did not disappoint, and I’ll definitely be returning next year. If you would like to know more about BayThreat and these subjects, check out their website at http://www.baythreat.org/.

Introducing Nessus Auto-Close with Risk I/O

Posted on November 13, 2013 by in Network Scanners, Remediation, Risk I/O, Vulnerability Assessment

Our Latest Nessus Connector

Our latest Nessus connector auto-closes remediated vulnerabilities and tracks state.

One of the common issues with running multiple siloed scanners is tracking the state of vulnerabilities over time. Which vulnerabilities should be closed based on my subsequent findings (or lack thereof)? This problem can be exacerbated when consolidating these point scanners into a central repository such as Risk I/O. Our Nessus connector now tracks the state of all reported vulnerabilities and auto-closes any that have been remediated.

With the latest updates to our Nessus connectors we address this problem, making state management much simpler. Now when you run your Nessus connector, we analyze all of the plug-ins and scan policies used, as well as which assets were scanned, in order to determine which vulnerabilities are no longer present compared to previous scans. This works with both our Nessus API connector and our Nessus XML connector. When using the Nessus XML connector, just load the files in chronological order to ensure Risk I/O auto-closes correctly; for the Nessus API connector, we’ll handle all of those details for you.
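Conceptually, the auto-close decision boils down to a set comparison: a finding is closed only when a later scan covered the same asset with the same plugin and no longer reports it. The sketch below is a simplified illustration of that logic under assumed data shapes, not the connector’s actual implementation.

```python
# Hypothetical sketch of auto-close logic: a finding from an earlier scan is
# closed when a later scan covered the same asset and plugin but no longer
# reports it. Keys and data shapes here are illustrative only.

def auto_close(open_findings, new_scan):
    """open_findings: set of (asset, plugin_id) currently open.
    new_scan: dict with 'scanned_assets', 'plugins_used', and 'findings'
    (the last a set of (asset, plugin_id) pairs reported this run)."""
    still_open, closed = set(), set()
    for asset, plugin in open_findings:
        covered = (asset in new_scan["scanned_assets"]
                   and plugin in new_scan["plugins_used"])
        if covered and (asset, plugin) not in new_scan["findings"]:
            closed.add((asset, plugin))      # remediated: not seen this time
        else:
            still_open.add((asset, plugin))  # still present, or not rescanned
    return still_open | new_scan["findings"], closed
```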

To fully automate the management of these Nessus findings, you can use the Risk I/O Virtual Tunnel to connect to your on-premise scanner and schedule and import findings automatically. From there, Risk I/O will analyze your findings via our processing engine, matching them against any threats, including exploits and breaches, that we observe across the Internet.

We’re big believers in automation in order to scale security programs, allowing your team to focus on fixing what matters. If you already have a Risk I/O account, give our new Nessus connector functionality a try. You’ll find it in the Connectors tab. If you don’t yet have an account, you can sign up and give it a whirl.

SIRAcon Attendees, Start Your Engines

Posted on October 25, 2013 by in Data Analysis, Data Science, Event, Industry, Metrics, Remediation, Security Management, Vulnerability Intelligence, Vulnerability Management

“Information is the oil of the 21st century, and analytics is the combustion engine.” -  Peter Sondergaard, SVP Gartner


This week I attended SIRAcon in Seattle, a conference hosted by the Society of Information Risk Analysts. I spoke about the methodology behind Risk I/O’s “fix what matters” approach to vulnerability management, how we use live vulnerability and real-time breach data to build the model, and why such a model performs better than existing CVSS-based risk rankings. There were also a few persistent themes among the many excellent and well-qualified speakers at the conference. Combining and implementing these practices is not a simple matter, but organizations should take note, and as an industry, information security can evolve.

1. This is not our first rodeo.

Risks are everywhere – and other industries not that different from ours have caught on. Ally Miller’s morning keynote discussed the structured, quantified way in which fraud detection teams are built: real-time data collection, continual updating of large global models that guide decisions about fraud, and the ability to make those decisions in real time. This requires clever interfacing with business processes and excellent infrastructure, but it’s been done before, and it needs to be done for vulnerability management as well. Alex Hutton used Nate Silver’s latest book on Bayesian modeling to raise some parallel questions about infosec. He drew analogues to seismology and counter-terrorism; the maturity of those fields, and their similarity to ours (large risks that are often hard to quantify or observe), is something for us to explore as well. Lastly, his talk raised a healthy discussion on the differences between forecasting and prediction. A prediction describes the expectation of a specific event (“it will rain tomorrow”), whereas a forecast is more general and describes the probability of a number of events over time (“there will be 2 inches of rain in December”, “there is a 20% chance of rain over the next day”). Largely, the discussion focused on how management perceives the differences between the two. In seismology, we fail at prediction because the mechanics are hidden from us, and so we can only forecast. The same seems to be largely true of infosec.

2. Good models need good data.

Adam Shostack from Microsoft gave a very convincing closing keynote on the value of data-driven security programs. Running experiments targeted at collecting data will generate scientific tools and take the qualitative (read: fuzzy) decision-making out of risk management. The alternative is the status quo – reliance on policies or measuring organizational performance against standards – which is tantamount to stagnation, something no one can say about our adversaries. He stated that although almost all organizations have been breached, it is incredibly difficult to develop models of breaches, largely because global breach datasets are hard to come by. Not so! We’re hard at work incorporating new sources of breach data into Risk I/O – but he’s most certainly correct that this is a hard project for any single company to undertake. Adam concluded with a call for organizations to encourage better sharing of data (hear, hear), and this mirrored the sentiment of other talks (particularly Jeff Lowder’s discussion of why we need to collect data to establish base-rate probabilities) about the need for a centralized, CDC-like body for infosec data.

So let’s get some data. We’re already off to a pretty good start.