What You Miss When You Rely on CVSS Scores

Posted on February 26, 2015 in Data Science, Remediation, Vulnerability Database, Vulnerability Intelligence, Vulnerability Management

Effective prioritization of vulnerabilities is essential to staying ahead of your attackers. While your threat intelligence might expose a wealth of information about attackers and attack paths, integrating it into decision-making is no easy task. Too often, we make the mistake of trusting the data handed to us without question – and this has disastrous consequences. In this blog post, I’ll explain what we miss by trusting CVSS scores, and what we should take into consideration instead to focus on the vulnerabilities posing the greatest risk to our organizations.

Part of what Risk I/O does as a vulnerability threat management provider is leverage threat intelligence sources to help our customers understand the likelihood that a given vulnerability will lead to a breach. While still not a complete picture of the threat landscape, we use data from public vulnerability databases, zero-day vulnerability intelligence and aggregated metadata from Risk I/O’s 10,000 enterprise users, 1,100,000 live assets and over 100 million live vulnerabilities to assess the effectiveness of CVSS as a remediation policy.

Figure 1: Risk I/O correlates attack, threat and exploit data against user vulnerability data 24/7 to assess the effectiveness of CVSS as a remediation policy.

And what we’ve found is that some of the most damaging CVEs have been incorrectly assigned “low” CVSS scores. What are some of the low-scoring vulnerabilities currently being attacked?

  • Terminal Denial of Service: CVE-2012-0152, CVE-2012-3170
  • Unauthorized Modification: CVE-2012-2566, CVE-2012-0867, CVE-2012-1715
  • Information Disclosure: CVE-2012-6596, CVE-2014-0160

Dell CTU researchers have found significant scanning activity related to these vulnerabilities in nearly all sectors and across the world, and at Risk I/O we’ve observed over 200 million breaches. Yet we’re still stuck basing our remediation policies on CVSS and vendor-assigned scores. Why?

Where Is CVSS Failing?

CVSS scoring is failing to take into account quite a few factors:

1. Targets of opportunity for attackers:
The number of sites affected by CVE-2014-0160 is unfathomable, with broad estimates between 30% and 70% of the Internet. Meanwhile, randomly selecting vulnerabilities from a stack gives one only about a 2% chance of remediating a truly critical vulnerability. Widespread vulnerabilities like these give attackers a known edge: randomly targeting with a weaponized exploit has a high probability of yielding results, and this is why attackers use them.

2. Active and successful in-the-wild exploitation:
We are logging about 2M breaches (or successful exploits) every day across live vulnerabilities. The rate of incoming breaches is also increasing.

3. They’re easy to exploit:
Metasploit and ExploitDB offer a window into attacker behavior: a vulnerability with a Metasploit module or an ExploitDB entry is one attackers can exploit off the shelf, and this informs our decision-making about vulnerability assessment and remediation. The best simple policy is fixing vulnerabilities with entries in both Metasploit and ExploitDB, which yields about a 30% success rate, roughly nine times better than any CVSS-based policy.
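To make that concrete, below is a minimal sketch of what an exploit-aware remediation policy might look like. The field names, weights and the second CVE are hypothetical illustrations, not Risk I/O’s actual model:

```python
# A minimal sketch of an exploit-aware remediation policy.
# The data model is hypothetical; real scan exports and threat
# feeds will have their own field names.

def remediation_priority(vuln):
    """Rank a vulnerability by attacker-relevant signals rather
    than by its CVSS base score alone."""
    score = 0
    if vuln.get("in_metasploit") and vuln.get("in_exploitdb"):
        score += 100  # entries in both: the ~30% policy described above
    elif vuln.get("in_metasploit") or vuln.get("in_exploitdb"):
        score += 50   # a weaponized module or public PoC exists
    if vuln.get("observed_breaches", 0) > 0:
        score += 25   # active in-the-wild exploitation
    return score

vulns = [
    {"cve": "CVE-2014-0160", "cvss": 5.0, "in_metasploit": True,
     "in_exploitdb": True, "observed_breaches": 12000},
    {"cve": "CVE-2013-9999", "cvss": 9.3, "in_metasploit": False,  # hypothetical CVE
     "in_exploitdb": False, "observed_breaches": 0},
]

# Note how the "low"-scoring Heartbleed entry outranks the CVSS 9.3 one.
for v in sorted(vulns, key=remediation_priority, reverse=True):
    print(v["cve"], v["cvss"], remediation_priority(v))
```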

What About the Security Risk Assessment Methods Used Today?

Security risk assessment has been lagging behind our capabilities for years. With a countless volume of vulnerabilities constantly being released, we have hard evidence on why and where our vulnerability scoring systems are failing us. Today, we have access to free exploit databases, open threat exchanges, and a number of proprietary tools. With cheap cloud resources available, we no longer need to rely purely on analysts’ opinions of what sort of risk a vulnerability poses. Instead, we can add context through structured, real-time analysis of the data.

But current risk assessment methodologies do not fit real “in-the-wild” attack data. Mauricio Velazco, the head of vulnerability management at the Blackstone Group, drives the point home in an article in which he explained, “We have to mitigate risk before the exploit happens. If you try to mitigate after, that is more costly, has more impact, and is more dangerous for your company.” Current prioritization strategies based on the Common Vulnerability Scoring System (CVSS), and subsequent adaptations of such scores, have two fatal flaws:

1. Current risk assessments (CVSS included) lack information about what kind of attacks are possible. As security practitioners, we care about which vulnerabilities matter. We solve this problem by using live, currently open vulnerabilities to do our assessments.

2. Attackers do not go after the same vulnerability month after month, week after week, hour after hour. If certain types of attacks are failing, they change strategies. Below is a timeline of breach counts across 30,000 organizations worldwide. Each color represents a different CVE, and these are only the ones which have more than 10,000 breaches recorded to their name.

Figure 2: This timeline shows the CVEs with more than 10,000 breaches recorded to their name. Note that the type of CVE exploited changes over time, as does the frequency with which each is exploited.

Current risk assessment strategies are based on CVSS scores, which may have been assigned years before an organization decides whether to patch or ignore a given vulnerability. Risks change in real time, and as a result, risk assessment methodologies should be real-time as well.

What Do We Do About It?

Too often, infosec professionals find themselves working hard at the wrong thing. Working on the right thing is probably more important than working hard. So how should you prioritize the remediation of some vulnerabilities over others?

A better strategy is to use threat intelligence to explore the source and spread of vulnerabilities. Since breach data reflects how attackers actually behave, having a handle on threat intelligence allows you to identify which vulnerabilities have a high probability of causing a breach. Checking threat intelligence feeds for what’s being actively exploited, thinking like an attacker, and working from the same information an attacker has: that is an action plan infosec professionals can follow to prioritize remediation. It shifts your strategy away from trying to fix everything and toward identifying and remediating the few vulnerabilities most likely to cause a breach.
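As an illustration, here is a minimal sketch of that action plan, assuming you can export your open vulnerabilities and a feed of actively exploited CVEs as CSV files. The file and column names are placeholders:

```python
# A minimal sketch: intersect your open vulnerabilities with a
# threat feed of actively exploited CVEs. File and column names
# are placeholders for whatever your scanner and feed export.

import csv

def load_cves(path, column):
    with open(path, newline="") as f:
        return {row[column] for row in csv.DictReader(f)}

open_vulns     = load_cves("open_vulnerabilities.csv", "cve_id")
being_attacked = load_cves("threat_feed_active_exploits.csv", "cve_id")

# The short list worth fixing first: open in your environment AND
# currently being exploited in the wild.
fix_first = open_vulns & being_attacked
print(f"{len(fix_first)} of {len(open_vulns)} open vulnerabilities "
      "are under active exploitation; remediate these first.")
```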

You can hear Michael Roytman discuss vulnerability prioritization at a NY Information Security Meetup.

What a Difference a Year Makes: Reflecting on our Dell SecureWorks Partnership

Posted on February 18, 2015 in Industry, Partnership, Prioritization, Threat Intelligence

What a difference a year makes. Nearly a year ago, Risk I/O was in the beginning phases of what would become one of our greatest successes to date: a partnership with Dell SecureWorks. As we celebrate the one-year anniversary of the partnership, we wanted to highlight its significance and the firm validation it provides in the marketplace.

Partnership highlights include:

  • Threat intelligence supplied by Dell SecureWorks’ Counter Threat Unit has given our users unprecedented visibility into the vulnerabilities that matter most to their environments.  Dell CTU threat intelligence has quickly evolved into an invaluable data source within the Risk I/O platform.
  • We have realized exponential growth in our expanding customer base and can now benchmark data across every vertical market.
  • Risk I/O has fulfilled a need for SecureWorks by offering an integrated vulnerability threat processing service to their customers. Customers can access their assets and vulnerabilities via single sign-on to Risk I/O in their Client Portal.
  • SecureWorks customers have access to breach and attack data seen across over 50k websites and over 10k corporate networks through our threat processing engine.
  • SecureWorks customers can aggregate their multiple scanning sources into one list of assets and vulnerabilities. This consolidation has enriched SecureWorks managed services with instant access to that data during incident response.

Risk I/O is growing faster than ever due to our partnership with Dell SecureWorks and their leadership in managed security services. As we enter the second year of the partnership, we predict even more success. The best is yet to come.

The Problem With Your Threat Intelligence

Posted on February 11, 2015 in Agile Risk Intelligence, Threats and Attacks, Vulnerability Intelligence

It’s amazing how many organizations I see that have a threat feed or two and assume that they’re safe, sound, and on the leading edge of vulnerability management as a result. And to be clear, some of them are, because they’re using world-class practices and processes to make use of the data. But others? They’re not making use of their threat intelligence in a way that will ultimately enable them to stay ahead.

Here are the threat intelligence mistakes that I commonly see:

The “One and Done” Problem

A lot of companies use exploit availability information from a single source, and therefore assume that they can stop worrying about additional threat information. But there are more bad guys, using more tactics, than any single threat feed can represent. This can lead to a related problem, which I call “The Threat of the Day”: an organization spends too much time and energy on a single, high-profile threat, without having the data or the processes to figure out which threat actually merits attention.

A world-class security organization will have threat intelligence coming from multiple sources, enough that they complement each other and provide a fuller picture of potential attacks.

The “More is Better” Problem

This is the polar opposite of the “One and Done” problem: having so many sources of threat intelligence that the organization becomes overwhelmed. Imagine sitting in a conference where ten speakers are at the microphone, all at once, and your job is to turn what they’re saying into actionable information. Not so easy, right?

And this leads me directly into the next problem…

The “No Team in Charge” Problem

Having all that threat information won’t help you if you don’t have the team in place to consume the threat intelligence and handle alerting, remediation and blocking. This problem particularly pertains to organizations that are just getting started with threat intelligence and don’t have their processes in place yet. As a result, they may see a lot of false positives from the feeds, or they just may get overwhelmed with the data.

Before an organization sets up threat feeds, it’s important to have people in charge, taking action on the data.

The “No Context” Problem

Most organizations know that they have to aggregate threat data, but they often fail to truly analyze it. Data won’t help you unless it’s properly analyzed and understood in the context of the specific vulnerabilities and weaknesses your organization has. Even if a high-profile vulnerability such as “POODLE” is exposing a large portion of the Internet, it may not matter at all to your specific company, based on your own unique environment and assets.

It may be far more important for you to take action on some other exploit that’s rarely discussed or seen.
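As a quick sketch of what adding that context can look like, here is a hypothetical check for POODLE (CVE-2014-3566), which only threatens services still offering SSLv3. The asset data shape is invented for illustration:

```python
# A minimal sketch of contextualizing a headline vulnerability:
# POODLE (CVE-2014-3566) only matters where SSLv3 is still enabled.
# The asset inventory shape below is hypothetical.

assets = [
    {"host": "web01",   "service": "https", "protocols": ["TLSv1.2"]},
    {"host": "legacy9", "service": "https", "protocols": ["SSLv3", "TLSv1.0"]},
]

exposed = [a["host"] for a in assets if "SSLv3" in a["protocols"]]
if exposed:
    print("POODLE matters in this environment; remediate:", exposed)
else:
    print("POODLE is headline news, but not this environment's priority.")
```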


The “No Communication” Problem

It’s essential to have an easy way to understand and share the output of the operation with the entire company. Non-technical business executives should be able to see at a glance which groups of assets have which weaknesses, and the team itself should get recognition for the work it does in protecting the company. No one likes to build dashboards and reports all day, but communicating your company’s security posture at all times, and how your team has improved it, is a paramount responsibility of the security professional.

You knew it was coming…

Risk I/O Can Help

Let me take a moment to discuss what Risk I/O does with regard to threat intelligence. Risk I/O improves and contextualizes your vulnerability scanning by providing prioritization, visualizations, and, of course, integrated threat feeds. We provide seven threat feeds, and the data comes through in a way that enables you to prioritize your latest fixes, clearly communicate your risk posture, and understand your weaknesses across your asset groups.

Whether or not you use Risk I/O, of course, the point is to have not just threat feeds but an actionable plan for making use of them, ensuring that your company sufficiently integrates and contextualizes what’s happening in the “real world” with what’s happening inside your own organization.

Read more about threat intelligence and how to do it right in our latest white paper.

Secret #5 of Vulnerability Scanning: You Can Actually Prioritize, Rather Than Just Analyze

Posted on January 20, 2015 in Industry, Network Scanners, Security Management, Vulnerability Assessment, Vulnerability Management

This is the third post by Ed Bellis in a three-part series on Vulnerability Scanning. To view all five secrets and two common “gotchas” of vulnerability scanning, please click here.

Typically, security teams spend tons of time putting together Excel spreadsheets and swimming through countless rows of data. Doing so will get the job done eventually…kind of. The problem is, as soon as you manage to rise to the top of your current data ocean, another wave hits you. That is to say, by automating detection you end up creating an ever-growing mountain of findings that requires more than manual effort to plow through. You can’t prioritize what to fix if you can’t even keep up with the inbound volume of data about potential threats, breaches and attacks.

What you need is a way to immediately prioritize the data in front of you. This is a case where tools, rather than elbow grease, may be of help. Platforms exist that can sit on top of your scan data and help you identify weaknesses in your infrastructure in the context of real-time threat data (i.e., what’s actually occurring in the world right now that may affect you).

This kind of platform solution (a GPS for your scan data) can deliver immense time savings and guide your efforts far more efficiently than simply sorting by CVSS scores each and every day.

Secret #4 of Vulnerability Scanning: Don’t Dump-and-Run, Make It Consumable

Posted on January 15, 2015 in Industry, Network Scanners, Security Management, Vulnerability Assessment, Vulnerability Management

This is the second post by Ed Bellis in a three-part series on Vulnerability Scanning. To view all five secrets and two common “gotchas” of vulnerability scanning, please click here.

You know what I’m talking about when I talk about the infamous dump-and-run. “Here’s your 300-page PDF with a laundry list of every vulnerability known to man!”

From what I’ve seen, the recipients of a dump-and-run (systems administrators, developers, network engineers and other remediators) all handle it exactly the same way: by filing it in the trash. The oversized PDF dump is the least effective way of getting critical issues fixed in your environment.

You need to make scan results consumable and actionable for those responsible for remediation. SysAdmins don’t want a laundry list of vulnerabilities sorted by CVE identifier; they need an actionable list of what needs to get done, such as deploying a specific patch or update to a specific group of assets, with the relevant asset identifiers.

As Gene Kim so eloquently stated, “The rate at which information security and compliance introduce work into IT organizations totally outstrips IT organizations’ ability to complete, whether it’s patching vulnerabilities or implementing controls to fulfill compliance objectives. The status quo almost seems to assume that IT operations exist only to deploy patches and implement controls, instead of completing the projects that the business actually needs.”

Or to put it another way…don’t be that guy.

Secret #1 of Vulnerability Scanning: CVSS Is Only Part of the Picture

Posted on January 8, 2015 in Industry, Network Scanners, Security Management, Vulnerability Assessment, Vulnerability Management

This is the first post by Ed Bellis in a three-part series on Vulnerability Scanning. To view all five secrets and two common “gotchas” of vulnerability scanning, please click here.

Information security can be a thankless job. I know, I’ve lived it first-hand. When I ran Security at Orbitz, it was absolutely critical that my team and I stayed on top of threats, attacks and potential exploits. And we had to ensure that our execution was flawless, every day, despite the fact that the influx of new data and threats was never ending. Any slip up could put the company at risk.

While in the trenches, we developed a series of best practices for working with vulnerability scanners such as Qualys, Nessus, Rapid7, WhiteHat and the rest. I found that following these practices dramatically improved our company’s security posture and helped all of us sleep a lot better at night. Well, minus those of us dealing with small children in the middle of the night.

Here’s what we learned:

1. CVSS is great. But it’s only part of the picture.

CVSS is table stakes these days when examining vulnerability scan results, but be careful not to place too much reliance on it when prioritizing your remediation tasks. CVSS includes temporal metrics in an effort to account for changing threats; however, temporal scores can only lower the base score, never raise it. I’ll say that again… temporal scores can only lower, not raise, the actual score. So if you look at CVSS and focus only on the 8s, 9s and 10s, you may be missing the real priorities.
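To see why, here is a small sketch using the CVSS v2 temporal equation. Every temporal multiplier in the v2 specification is at most 1.0, so the temporal score can never exceed the base score:

```python
# CVSS v2 temporal scoring: TemporalScore = BaseScore * E * RL * RC.
# Per the CVSS v2 spec, every multiplier is <= 1.0, which is why
# temporal metrics can only lower a score, never raise it.

EXPLOITABILITY = {"U": 0.85, "POC": 0.90, "F": 0.95, "H": 1.0, "ND": 1.0}
REMEDIATION    = {"OF": 0.87, "TF": 0.90, "W": 0.95, "U": 1.0, "ND": 1.0}
REPORT_CONF    = {"UC": 0.90, "UR": 0.95, "C": 1.0, "ND": 1.0}

def temporal_score(base, e="ND", rl="ND", rc="ND"):
    return round(base * EXPLOITABILITY[e] * REMEDIATION[rl] * REPORT_CONF[rc], 1)

# A base 9.3 with a functional exploit (F), an official fix (OF) and a
# confirmed report (C) drops to 7.7; no combination can push it above 9.3.
print(temporal_score(9.3, e="F", rl="OF", rc="C"))
```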

Let me give you a hot-button, commonly referenced example: the Heartbleed vulnerability exposed the majority of web servers running SSL on the Internet and allowed data to leak (including the very encryption keys that protected it). But how did CVSS rate Heartbleed? It scored only a five.

Why did CVSS misread Heartbleed so badly? The scoring system doesn’t allow a high score for a vulnerability whose impact is “information leakage,” even though in this case the information being leaked could be, and was, highly sensitive. You have to take into account an ever-shifting threat landscape, asset priorities, and mitigating controls in order to take a holistic approach to prioritized remediation.

A Holiday Poem About Your Scan Data

Posted on December 16, 2014 in Industry, Risk I/O, Security Management

     

    It’s almost year end, and you must understand
    security pros everywhere are tired of their scans.
    The data’s too much! And it just isn’t clear
    where the next threat might truly appear.

    Security folks need help, a surefire way
    to parse through Qualys, Nessus & more each day.
    To know what to prioritize, without having to bet
    and find vulnerabilities, breaches & 0day threats.

    In a matter of minutes, Risk I/O can solve this pain
    think of all the time in your day you will gain.
    Oh, to know what to fix! As quick as a flash!
    And be the hero at your holiday bash!

    If this sounds like a great way to greet the new year
    let us know and we can help, with good cheer.
    We’ll make your life easy, we’ll give you a gift
    (and we don’t cost much, if you’re focused on thrift).

    So reach out to us (if not now, maybe in January)?
    We’ll make your scans fun, simple & merry.

    Happy Holidays from the Team at Risk I/O

    P.S. Take the Free Trial

    P.P.S. Our Explainer Video May Make You Smile

 

Vulnerability Management Decision Support: Identifying & Prioritizing Zero-Day Vulnerabilities

Posted on November 10, 2014 in Guest Blogger, Launch, Threats and Attacks, Vulnerability Intelligence

This is a guest blog post by Josh Ray, Senior Intelligence Director for Verisign iDefense Security Intelligence Services.

One of the biggest challenges facing security teams today is staying up-to-date on the ever-changing security threat landscape. The inclusion of Verisign iDefense Security Intelligence Services’ zero-day vulnerability intelligence into Risk I/O’s threat processing engine provides security practitioners with actionable intelligence on the most important cyber threats to help protect their enterprise.


Verisign iDefense vulnerability intelligence includes vulnerability, attack and exploit data, such as unpublished zero-day vulnerabilities, collected across more than 30,000 products and 400 technology vendors around the world. This data complements the threat processing of Risk I/O’s SaaS-based vulnerability threat management platform, which continuously aggregates attack, threat and exploit data from across the Internet and matches it against customers’ vulnerability scan data to generate a prioritized list of the vulnerabilities most likely to be exploited.
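To illustrate the matching idea, here is a minimal sketch. Because zero-days are unpublished, matching is keyed on product and version rather than on CVE ID; the feed structure shown is invented for illustration, not the actual iDefense format:

```python
# A minimal sketch of matching an asset inventory against a
# zero-day intelligence feed keyed by product and version.
# The data structures are illustrative, not the iDefense format.

zero_day_feed = [
    {"vendor": "ExampleVendor", "product": "ExampleServer",
     "affected_versions": ["2.1", "2.2"], "fix_available": False},
]

assets = [
    {"hostname": "web01", "vendor": "ExampleVendor",
     "product": "ExampleServer", "version": "2.2"},
]

for asset in assets:
    for zd in zero_day_feed:
        if ((asset["vendor"], asset["product"]) == (zd["vendor"], zd["product"])
                and asset["version"] in zd["affected_versions"]):
            # No patch exists yet, so flag the asset for mitigation
            # (configuration changes, network controls) rather than patching.
            print(f"{asset['hostname']}: exposed to an unpatched zero-day "
                  f"in {zd['product']} {asset['version']}")
```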

Having advance knowledge of zero-day vulnerabilities and leveraging a risk-based prioritization methodology provides network defenders with the information they need to develop and implement mitigation plans to help protect against exploits and reduce their organization’s cyber threat exposure until a patch, or official fix from the vendor, has been issued.

As we have seen numerous times over the last year, the cost of a compromise to an organization’s revenue and brand far outweighs any of the upfront costs of moving toward a proactive security model. Advance knowledge, coupled with risk-based prioritization, can help enterprises shrink their attack surface and make better resource allocation decisions, effectively saving valuable time and money. That’s what the partnership between Risk I/O and Verisign iDefense Security Intelligence Services is all about.

To learn more about the benefits of getting your data processed with Verisign iDefense’s zero-day vulnerability data, click here.

About the Author:
Josh is a recognized cyber intelligence expert on matters related to cyber exploitation and adversarial tactics, techniques, procedures and technologies, known for his work on computer network exploitation and cyber adversarial actions. He has presented at a variety of DoD and commercial cyber intelligence conferences and symposiums.

Josh has more than 12 years of combined commercial, government and military experience in cyber intelligence, threat operations and information security. His background includes managing Verisign iDefense, managing the Cyber Threat Intelligence Program at Raytheon, and technical leadership roles with the Office of Naval Intelligence (ONI) and with the Northrop Grumman Corporation at the Joint Task Force – Global Network Operations (JTF-GNO), where he provided support to focused operations.

Risk I/O Threat Processing – Now With Zero-Day Vulnerability Data

Posted on November 4, 2014 in Feature Release, Launch, Threats and Attacks, Vulnerability Management

Today we are announcing the addition of zero-day vulnerability data from Verisign iDefense to our platform. With this addition, our vulnerability threat management platform now offers smarter prioritization based on unpublished vulnerability data, providing an early warning of exploits and vulnerabilities in your environment for which a fix is not currently available.

Using our threat processing engine, Risk I/O continuously correlates vulnerability scan results with live attack data, exploit data, and now zero-day vulnerability data. The result is a complete list of suggested vulnerabilities to mitigate. We think the addition of zero-day vulnerability data will save your organization time by allowing you to take action instead of waiting for a fix to become available.

To start prioritizing zero-day mitigation, navigate to the Home tab in your instance of Risk I/O and simply select the zero-day vulnerabilities facet. Right away, you’ll notice that the vulnerability table will update to filter by those assets containing zero-day vulnerabilities.

Selecting the zero-day vulnerabilities facet will allow you to filter your asset list down to those containing zero-days.

Once the list is generated, you can use the sliders on the right to filter your list down even more for remediation based on score, severity, threat and priority of the zero-day vulnerabilities tied to your assets. The Asset Filters allow you to filter your asset list by tags that you created, giving you additional information on which vulnerabilities are most important to address first and how this affects your organization.

You can filter your list down even more to understand which vulnerabilities are most important to address first.

Short on time but want to apply the same edit to multiple assets? Use our enhanced bulk editing to apply tagging and other actions to multiple assets at once.


Give this new feature a spin by heading into your Risk I/O instance to find out if any zero-day vulnerabilities are affecting assets on your network. We think you’ll appreciate the enhanced security that comes with this automatic alerting. If you don’t already have a Risk I/O account, you can create one for free.

Laying the Foundation for Change

Posted on October 14, 2014 in Industry, Risk I/O, Security Management

This blog post was written by the new CEO of Risk I/O, Karim Toubba. You can read more about our new CEO announcement here.

I have always been drawn to solving substantive problems that lay the foundation for change, particularly in the security industry. To date, much has been written about the sophistication of the hacker, and even the most casual news reader is bombarded with the latest highly publicized attack. Ironically, organizations continue to spend more money than ever on security technology (the entire industry spent over $46B last year, according to ABI Research).

While new technologies are needed to drive efficacy, especially in light of ongoing threats, they alone are not going to address this challenge. Talk to any security practitioner, from security operations analyst to CISO, and they quickly point out that they are inundated with the newest tech to protect them against the latest attack. This so-called “layered” security model has left organizations with a myriad of security technologies, from network to application to client, each of which provides inherent value and holds critical information about attack patterns. Yet these technologies are still largely siloed and require increasingly skilled security staff to maximize the information they produce. As a friend of mine often reminds me, “there is no Moore’s law for the human brain.”

While SIEM platforms attempt to aggregate the data, their boil-the-ocean approach, their over-reliance on the forensic and compliance use cases, and their often expensive and complex integrations mean the mass market cannot leverage the full capability of these solutions. And while big data holds promise, most of the platforms have become general-purpose systems that process any and all data, missing the opportunity to focus on solving this vexing problem in security.

The long-lived idea of “layered security” needs to give way to a better approach: one that connects the layers, understands what the data means and why it matters, and, most importantly, makes it actionable in a meaningful way for security operations teams. Of course, low time-to-value is a key tenet if we expect broad adoption.

Laying the foundation for change is never easy. It requires insight, a leap of faith, and maniacal execution. I joined the Risk I/O team to help lead the charge in solving this substantive problem: one that, when solved, will have a lasting impact on the security industry and our customers.