Security As Code at Cloud Security World

Posted on May 28, 2015 by in API, Cloud, DevOps, Event, Open Source

Last week Jason Rohwedder and I had the privilege of presenting a cloud automation use case at Cloud Security World. Our talk not only covered how we automate much of our security at Risk I/O, but how we use DevOps principles to ensure our security controls are consistent even at a high velocity.

While we have spoken about some of this content before, one piece was brand new, and in my opinion it has massive potential to reduce everyone’s mean time to remediate vulnerabilities.

Jason has been working on an open source project called Tattle that, boiled down, is a ridiculously simple way to store and regurgitate data. In our case, software version data. Using Tattle, you can identify versions of software and packages running in any environment in Common Platform Enumeration (CPE) format, with hooks into many common configuration management tools like Chef, Puppet or Ansible, among others. By using Tattle in combination with the Risk I/O API, you could have a single simple script that queries for software versions running on any given asset and then updates those assets in Risk I/O. From there, Risk I/O will automatically create or close any known vulnerabilities for that particular asset and can alert you on new CVEs that affect your assets as soon as they are published.
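
As a rough sketch of what that single simple script might do (the payload shape and field names here are hypothetical; Tattle and the Risk I/O API define their own formats):

```python
import json

def to_cpe(vendor, product, version):
    """Format a software package as a CPE 2.2 URI (application part)."""
    return "cpe:/a:{}:{}:{}".format(vendor, product, version)

def build_asset_update(hostname, packages):
    """Build an update payload mapping an asset to its installed software, as CPEs."""
    return {
        "asset": hostname,
        "software": [to_cpe(*pkg) for pkg in packages],
    }

if __name__ == "__main__":
    # (vendor, product, version) tuples, as a config management tool might report them
    packages = [("openssl", "openssl", "1.0.1e"),
                ("apache", "http_server", "2.2.22")]
    print(json.dumps(build_asset_update("web01.example.com", packages), indent=2))
```

A real script would POST a payload like this to the Risk I/O API, which then opens or closes vulnerabilities for the asset based on the reported versions.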

This dramatically lowers your mean time to remediation: you no longer wait on vulnerability signature updates to your scanner, or on scanning windows, to identify new vulnerabilities before determining a course of remediation.

We’re really excited about the potential for Tattle and we’ll be updating this post as we make the source available on Github.

Below is our presentation from Cloud Security World. There are a number of other open source projects we have listed in the Resources to help you with security automation in your environment and hope you can take advantage of these.

Looking Before & Beyond a Breach: Lessons from a DBIR Featured Contributor

Posted on April 16, 2015 by in Data Science, Industry, Patch Management, Prioritization, Remediation

As you may know, the 2015 Verizon Data Breach Investigations Report was recently released. This is the “gold standard” research document for information security, and we’re proud to say that Risk I/O was a featured vulnerabilities contributor, providing a rich correlated threat data set that spans 200M+ successful exploitations across 500+ common vulnerabilities and exposures from over 20,000 enterprises in more than 150 countries.

With our data set in hand, Verizon focused on identifying patterns within successful exploits to help prioritize remediation and patching efforts for known vulnerabilities. A sample of their findings using Risk I/O data:

  • A patch strategy focused on coverage & consistency is far more effective at preventing data breaches than “fire drills.”

  • Just because a CVE gets old, doesn’t mean it goes out of style with the exploit crowd (they have a reputation for partying like it’s 1999).

  • It’s important to prioritize remediation of exploited vulnerabilities, beyond the top ten or so CVEs.

  • The data can indicate whether a vulnerability should be patched quickly, or if it can simply be pushed with the rest.

Probably the most interesting statistic that came from our research is that attackers aren’t just going after the flashy, media-worthy vulnerabilities. An astonishing 99.9% of vulnerabilities that become exploited are at least a year old. It’s not the newest ones that attackers are using, it’s some of the oldest ones on record.

Of all of the risk factors in information security, vulnerabilities are probably the most controversial. Which vulnerabilities should be patched? And more generally, what can we all do before a breach to improve vulnerability management programs? Many more data-driven recommendations for improving your remediation strategy can be gleaned from this year’s report.

The Verizon Data Breach Investigations Report is a must-read for InfoSec professionals, and Risk I/O is proud to have participated. A special thanks to Bob Rudis and Jay Jacobs for their help and patience.

Vulnerability Management for the Midsize

Posted on March 19, 2015 by in DevOps, Industry, Remediation, Threat Intelligence, Vulnerability Management

It’s not fair. The big companies have the teams, the tools, and the processes required in order to run a best-in-class vulnerability management program.

But guess what? The bad guys don’t care about how big you are. In fact, non-targeted exploits accounted for 75% of the breaches in Verizon’s 2013 Data Breach Investigations Report—meaning mid-sized companies are just as subject to the attacks that hit JP Morgan, if not more so.

In a large company, there’s a security team with members who each wear different hats. But in a mid-sized company, you only have one or two—and yet, it’s equally (if not more) critical that your vulnerability management process is spot-on.

So how do you do it? Here’s how:

1. First of all, don’t cut corners on scanning.
The worst thing you can do is decide you’ll only scan quarterly (or twice a year) because anything more takes too much effort. Much like DevOps deployments, your goal should be small, frequent changes rather than trying to do it all a few times per year. In 2015, the year of the “one billion exploit,” continuous vulnerability scanning has become table stakes.

Easy for us to say? It can be done. Read on.

2. Everyone’s going to wear multiple hats, so figure out who they are.
You don’t have the option of hiring an entire team, so figure out if you can at least have two people doing the work. You may have one person who does the vuln scanning as well as the analysis, and then someone in development or operations doing the remediation work. That’s a lot of work for two people, but if you have clearly defined responsibilities, it can get done.

And what’s even more important in a small team is to have a great line of communication between those two people. Take them out for a beer—be friends. You need each other.

3. Don’t just rely on vuln scanning—bring in threat intelligence and context.
What particularly matters for mid-sized companies is to be sure that their teams are prioritizing the right things. Looking solely at the output of vuln scanning won’t help, because it’ll be simply another list of vulnerabilities without real-world context. And as the saying goes, “Context is king.”

What you want to do is ensure that you have real-world context for the weaknesses you find, which will help you give exactly the right things to your remediation partner. You’ll want to understand which attacks are successful in the wild, the volume and velocity around those attacks, which industries and geographies are more likely to suffer from particular exploits and vulnerabilities, and the importance of vulnerable technology assets and any mitigating controls that may be in place. (And when you’re a two or three person team, this is critical. Don’t be the person who dumps a 300-page PDF and runs for the hills).

Having real-world context for the output of vuln scanning is a key strategy for ensuring you’re fixing the right things and not just spinning your wheels.

4. Keep management’s attention on what you’re doing.
It’s so easy for security to be an “afterthought” at the mid-sized level, but as previously mentioned, mid-sized companies are as subject to serious attacks and exploits as the big guys. You want to ensure that management keeps its focus on security as a priority issue. So communicate what you’re doing relentlessly—show your risk posture via dashboards and reports but do it in the language of management. In other words, your reports need to be business-friendly; don’t speak in technical or security jargon when you’re trying to truly communicate to the business.

Rather, focus their attention on risk, not on counts. In doing so, you’ll be equipped to get an allocation of budget and resources, so your two-person team blossoms into a full vulnerability management program as soon as possible.

5. Fit into existing processes and tools.
Just like dumping a 300-page PDF report on the system admin’s desk doesn’t help you reduce your risk any faster, neither does changing the process in regards to how your remediators work. Is your development team using a bug tracker to log and track issues? Reuse that same tool to manage remediation efforts. The same can be said of trouble ticketing and change management; if you can fit your efforts into the existing processes of the business, you’re likely to get more accomplished.
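
For example, feeding a finding into the development team’s existing bug tracker might be as simple as translating it into that tracker’s issue format. The field names below are hypothetical; adapt them to whatever your tracker’s API expects:

```python
import json

def vuln_to_ticket(vuln):
    """Translate a scanner finding into a generic bug-tracker issue payload."""
    return {
        "title": "Patch {} on {}".format(vuln["cve"], vuln["asset"]),
        "labels": ["security", "remediation"],
        "description": "Remediation: {}".format(vuln["fix"]),
    }

ticket = vuln_to_ticket({"cve": "CVE-2014-0160", "asset": "web01",
                         "fix": "Upgrade OpenSSL to 1.0.1g"})
print(json.dumps(ticket, indent=2))
```

The point isn’t the code, it’s that the remediator sees the work in the same queue as everything else they do.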

6. Don’t go it alone.
Here’s the critical piece: being a small team (or even a one-person show), you don’t have the luxury of relying on manual processes to get all of this done.

Fortunately, it’s 2015, not five years ago, and you don’t have to. There are platforms designed for medium-sized businesses that can help you consume and analyze your scan data, integrate with threat feeds, and push everything into dashboards. The work of the five-person team can be done with one or two. If a mid-sized company is NOT using these new platforms, they may find themselves trying to do the impossible.

Just as marketing uses email automation systems to launch campaigns, and Sales uses Salesforce to log deals, and customer service uses Zendesk to work with customers—well, security pros need to use the cloud-based solutions available to them to ensure that they’re able to get tons of work done with minimal effort.

Summary
You don’t have to be IBM to have a world-class vulnerability management program. With the right planning, processes, and tools, as well as a small group of people who work well together and know their responsibilities inside and out, your mid-sized company can develop a fantastic approach to vulnerability management.

Are you putting together your first vulnerability management program for your mid-sized company? Interested in tips on making your current vulnerability management program world class? Read more about running an effective vulnerability management program even if you aren’t IBM in our latest white paper.

Vulnerability Cage Match

Posted on March 10, 2015 by in API, Feature Release, Metrics, Prioritization, Vulnerability Management

Sometimes you want to see the status of your open vulnerabilities across the various assets in your environment, and operating system continues to be an important data point. That’s why we’ve improved the TagView dashboard. With a new name, Compare, and an expanded set of filters (we’ve added the ability to filter by assets running a specific operating system), you can now compare your assets from even more angles.

Asset Filters

Simply choose the contenders from the drop-downs (asset tag vs. operating system), select to view as grouped, and voilà: your open vulnerability count, month-over-month, will appear for easy comparison.

Compare Dashboard Grouped

You can also compare multiple asset tags vs. multiple operating systems in the ultimate Battle Royale. Simply choose your asset values for tag1 vs tag2 vs tag3 vs os1 vs os2 etc., select to view the set as stacked, and your open vulnerabilities for each asset will appear, by month.

Compare Dashboard Stacked

This expanded Compare dashboard is just another way that Risk I/O is giving you visibility into your open vulnerabilities to aid in prioritization.

We’ve also updated our API with new asset group endpoints as well as new attributes. Users can now make API calls to pull all risk meter asset scores (as well as scores for individual assets). And users who use different methods to report on risk meter scores for assets and asset groups may find our new risk_meter_score attribute handy.

API Group Endpoints

Information on these new API features is located in our API documentation.
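
As a rough illustration, pulling a risk meter score might look something like this. The base URL, path and header name below are placeholders, not the documented API; check the API documentation for the real endpoints:

```python
from urllib.request import Request

BASE_URL = "https://api.example-risk.io"  # placeholder base URL, not the real one

def risk_meter_request(asset_group_id, token):
    """Build an authenticated GET request for an asset group's risk_meter_score."""
    url = "{}/asset_groups/{}".format(BASE_URL, asset_group_id)
    # Header name is illustrative; use whatever auth header the API documents
    return Request(url, headers={"X-Risktoken": token})

req = risk_meter_request(42, "YOUR_API_TOKEN")
print(req.full_url)  # send with urllib.request.urlopen(req) and parse the JSON
```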

So whether you are setting up assets to duke it out in the open vulnerabilities match-of-the-century, or you’re one of our API users who check their asset risk meter scores daily, we think you’ll find our new features will enhance how you’re already using Risk I/O.

New! Features that Will Improve Your Vulnerability Prioritization

Posted on March 5, 2015 by in Feature Release, Remediation, Vulnerability Assessment

Today, we’re announcing new statuses, filters and displays that will impact how you sift through scan data, prioritize vulnerabilities and communicate with your team.

New! Vulnerability Statuses

We’ve added two new vulnerability statuses that will make it even easier for your team to track the lifecycle of a vulnerability: risk accepted & false positive. These statuses are flagged by the end user and can be assigned to an individual vulnerability, or to many at once.

New Vulnerability Status Filters

To assign a vulnerability as either risk accepted or as a false positive, navigate to the Home tab, select a vulnerability from the list, and then select the status from the dropdown. You can also flag the status of vulnerabilities in bulk right in the table.

Edit Vulnerability Status in Bulk

Note that risk accepted vulnerabilities and false positives will not affect the risk meter score (as only open vulnerabilities are counted). Assigning vulnerabilities with one of these new statuses ensures that your score is only affected by active, open vulnerabilities.

New! “Found” Date Display:

Let’s say that you wanted to know when your risk-accepted vulnerabilities were originally discovered. Simply filter your view by risk-accepted, and then select to display the “Found” date by using the Display dropdown.

Including the Found On Date in Vulnerability Details

Now let’s say that you wanted to track and manage the vulnerabilities that have been Risk Accepted. Select the Export this View dropdown, and a CSV export of your risk-accepted vulns will appear, including the Found date (also New!).

CSV Export with Found On Date

Displaying and reporting on the date found will inform your team of the length of time since discovery, and will provide another decisioning factor for prioritization based on age.
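
As a simple illustration of that age-based decisioning (the dates below are made up, standing in for a CSV export), you could sort findings so the longest-open vulnerabilities surface first:

```python
from datetime import date

def days_open(found, today=None):
    """How many days a vulnerability has been open since it was found."""
    return ((today or date.today()) - found).days

def oldest_first(vulns):
    """Sort (cve, found_date) pairs so the longest-open vulnerabilities come first."""
    return sorted(vulns, key=lambda v: v[1])

# Hypothetical found dates, as they might appear in an exported CSV
vulns = [("CVE-2014-0160", date(2014, 4, 7)),
         ("CVE-2012-0152", date(2012, 3, 13))]
for cve, found in oldest_first(vulns):
    print(cve, days_open(found, today=date(2015, 3, 5)), "days open")
```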

Filter by Port:

You can also now filter your vulnerabilities by the port(s) on which they were discovered. Select the port(s) of interest in the Vulnerability Filters sidebar, and right away the table will filter out the vulnerabilities unassociated with those port(s).

Port Filter

Give these new vulnerability features a spin by heading over to your Risk I/O instance. We think you’ll appreciate the time saved parsing through your vulnerability data and the peace of mind that comes with improving your full picture of risk. And if you don’t already have a Risk I/O account, you can create one for free.

What You Miss When You Rely on CVSS Scores

Posted on February 26, 2015 by in Data Science, Remediation, Vulnerability Database, Vulnerability Intelligence, Vulnerability Management

Effective prioritization of vulnerabilities is essential to staying ahead of your attackers. While your threat intelligence might expose a wealth of information about attackers and attack paths, integrating it into decision-making is no easy task. Too often, we make the mistake of taking the data given to us for granted – and this has disastrous consequences. In this blog post, I’ll explain what we miss by trusting CVSS scores, and what should absolutely be taken into consideration to focus on the vulnerabilities posing the greatest risks to our organizations.

Part of what Risk I/O does as a vulnerability threat management provider is leverage threat intelligence sources to help our customers understand the likelihood of a vulnerability breach. While still not a complete picture of the threat landscape, we use data from public vulnerability databases, zero-day vulnerability intelligence and aggregated metadata from Risk I/O’s 10,000 enterprise users, 1,100,000 live assets and over 100 million live vulnerabilities to assess the effectiveness of CVSS as a remediation policy.

A wide array of threat feeds correlated with a company’s internal assets
Figure 1: Risk I/O correlates attack, threat and exploit data against user vulnerabilities data 24/7 to assess the effectiveness of CVSS as a remediation policy.

And what we’ve found is that some of the most damaging CVEs have been incorrectly assigned “low” CVSS scores. What are some of these low-scored vulnerability types currently being attacked?

Terminal Denial of Service – CVE-2012-0152, CVE-2012-3170
Unauthorized Modification – CVE-2012-2566, CVE-2012-0867, CVE-2012-1715
Information Disclosure – CVE-2012-6596, CVE-2014-0160

Dell CTU researchers have found significant scanning activity related to these vulnerabilities in nearly all sectors across the world, and at Risk I/O we’ve observed over 200 million breaches, yet we’re still stuck basing our remediation policies on CVSS and vendor-assigned scores. Why?

Where Is CVSS Failing?

CVSS scoring is failing to take into account quite a few factors:

1. Targets of opportunity for attackers:
The number of sites affected by CVE-2014-0160 is unfathomable, with broad estimates between 30% and 70% of the Internet. Meanwhile, randomly selecting vulnerabilities from a stack gives one about a 2% chance of remediating a truly critical vulnerability. All of these vulnerabilities give attackers a known edge on the probability that random targeting with a weaponized exploit will yield results, and this is why they use them.

2. Active and successful in-the-wild exploitation:
We are logging about 2M breaches (or successful exploits) every day across live vulnerabilities. The rate of incoming breaches is also increasing.

3. They’re easy to exploit:
Metasploit and ExploitDB are databases that offer information about attackers’ behavior, and they inform our decision making about vulnerability assessment and remediation. The best policy is fixing vulnerabilities with entries in both Metasploit and ExploitDB, which yields about a 30% success rate, roughly nine times better than any CVSS-based policy.
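
A minimal sketch of that policy, assuming you already know which of your open CVEs have entries in each database (the sets below are illustrative, not real coverage data):

```python
# Illustrative per-database coverage; real data would come from Metasploit and ExploitDB
METASPLOIT = {"CVE-2014-0160", "CVE-2012-0152", "CVE-2012-2566"}
EXPLOITDB = {"CVE-2014-0160", "CVE-2012-0152", "CVE-2012-6596"}

def remediate_first(open_vulns):
    """Return the open vulns with entries in BOTH Metasploit and ExploitDB."""
    weaponized = METASPLOIT & EXPLOITDB
    return sorted(v for v in open_vulns if v in weaponized)

print(remediate_first(["CVE-2014-0160", "CVE-2012-6596", "CVE-2012-1715"]))
```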

What About the Security Risk Assessment Methods Used Today?

Security risk assessment has been lagging behind our capabilities for years. With vulnerabilities being released in ever-growing volume, we have hard evidence on why and where our vulnerability scoring systems are failing us. Today, we have access to free exploit databases, open threat exchanges, and a number of proprietary tools. With cheap cloud resources available, we no longer need to rely purely on analysts’ opinions of what sort of risk a vulnerability poses. Instead, we can add context through structured, real-time analysis of the data.

But current risk assessment methodologies do not fit real “in-the-wild” attack data. Mauricio Velazco, the head of vulnerability management at the Blackstone Group, drives the point home in an article in which he explained, “We have to mitigate risk before the exploit happens. If you try to mitigate after, that is more costly, has more impact, and is more dangerous for your company.” Current prioritization strategies based on the Common Vulnerability Scoring System (CVSS), and subsequent adaptations of such scores, have two fatal flaws:

1. Current risk assessments (CVSS included) lack information about what kind of attacks are possible. As security practitioners, we care about which vulnerabilities matter. We solve this problem by using live, currently open vulnerabilities to do our assessments.

2. Attackers do not go after the same vulnerability month after month, week after week, hour after hour. If certain types of attacks are failing, they change strategies. Below is a timeline of breach counts across 30,000 organizations worldwide. Each color represents a different CVE, and these are only the ones which have more than 10,000 breaches recorded to their name.

CVEs with more than 10,000 breaches recorded to their name
Figure 2: This timeline represents the CVEs with more than 10,000 breaches recorded to their name. Note that the type of CVE exploited changes, as well as the frequency with which they are exploited.

Current risk assessment strategies are based on CVSS scores, which are sometimes assigned years before an organization decides whether to patch or forego a vulnerability. Risks change in real-time, and as a result, risk assessment methodologies should be real-time as well.

What Do We Do About It?

Too often, infosec professionals find themselves working hard at the wrong thing. Working on the right thing is probably more important than working hard. So how should you prioritize the remediation of some vulnerabilities over others?

A better strategy is to use threat intelligence to explore the source and spread of vulnerabilities. Since breach data stems from how attackers are behaving, having a handle on threat intelligence allows you to identify which vulnerabilities have a high probability of causing a breach. Checking threat intelligence feeds for what’s being actively exploited, so that you think like an attacker and have the same information an attacker has, is an action plan that infosec professionals can take to prioritize remediation. This shifts your strategy away from trying to fix everything and toward identifying and remediating the few vulnerabilities that are most likely to cause a breach.
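
In code, that shift in strategy is as simple as changing the sort key from a static score to observed attack volume. The feed numbers below are made up for illustration:

```python
def prioritize(open_vulns, breach_counts, top_n=5):
    """Rank open vulns by observed in-the-wild breach volume, highest first."""
    return sorted(open_vulns,
                  key=lambda cve: breach_counts.get(cve, 0),
                  reverse=True)[:top_n]

# Hypothetical threat feed: CVE -> breaches observed in the wild
feed = {"CVE-2014-0160": 120000, "CVE-2012-0152": 45000, "CVE-2012-6596": 300}
print(prioritize(["CVE-2012-6596", "CVE-2014-0160", "CVE-2012-0152"], feed))
```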

You can hear Michael Roytman discuss vulnerability prioritization at a NY Information Security Meetup.

What a Difference a Year Makes: Reflecting on our Dell SecureWorks Partnership

Posted on February 18, 2015 by in Industry, Partnership, Prioritization, Threat Intelligence

What a difference a year makes. Nearly a year ago, Risk I/O was in the beginning phases of what would become one of our greatest successes to date: a partnership with Dell SecureWorks. As we celebrate the one-year anniversary of the partnership, we wanted to highlight its significance and the firm validation it represents in the marketplace.

Partnership highlights include:

  • Threat intelligence supplied by Dell SecureWorks’ Counter Threat Unit has given our users unprecedented visibility into the vulnerabilities that matter most to their environments.  Dell CTU threat intelligence has quickly evolved into an invaluable data source within the Risk I/O platform.
  • We have realized exponential growth in our expanding customer base and can now benchmark data across every vertical market.
  • Risk I/O has fulfilled a need for SecureWorks by offering an integrated vulnerability threat processing service to their customers.  Customers can access their assets and vulnerabilities via single-sign on to Risk I/O in their Client Portal.
  • SecureWorks customers have access to breach and attack data seen on over 50k websites and over 10k corporate networks through our threat processing engine.
  • SecureWorks customers can aggregate their multiple scanning sources into one list of assets and vulnerabilities.  It has enriched SecureWorks managed services with instant access during incident response.

Risk I/O is growing faster than ever due to our partnership with Dell SecureWorks and their leadership in managed security services. As we enter the second year of the partnership, we predict even more success. The best is yet to come.

The Problem With Your Threat Intelligence

Posted on February 11, 2015 by in Agile Risk Intelligence, Threats and Attacks, Vulnerability Intelligence

It’s amazing how many organizations I see that have a threat feed or two and assume that they’re safe, sound, and on the leading edge of vulnerability management as a result. And to be clear, some of them are, because they’re using world-class practices and processes to make use of the data. But others? They’re not making use of their threat intelligence in a way that will ultimately enable them to stay ahead.

Here are the threat intelligence mistakes that I commonly see:

The “One and Done” Problem

A lot of companies use exploit availability information from a single source, and therefore assume that they can stop worrying about having additional threat information. There are more bad guys, using different tactics, than a single threat feed alone can represent. This can lead into a similar problem, which I call “The Threat of the Day”: an organization spends too much time and energy on a single, high-profile threat, without having the data or the processes to figure out which threat actually merits attention.

A world-class security organization will have threat intelligence coming from multiple sources, enough that they complement each other and provide a fuller picture of potential attacks.

The “More is Better” Problem

This is the polar opposite of the “One and Done” problem: having so many sources of threat intelligence that the organization becomes overwhelmed. Imagine sitting in a conference where there are ten speakers at the microphone, all at once, and your job is to turn what they’re saying into actionable information. Not so easy, right?

And this leads me directly into the next problem…

The “No Team in Charge” Problem

Having all that threat information won’t help you if you don’t have the team in place to consume the threat intelligence and handle alerting, remediation and blocking. This problem particularly pertains to organizations that are just getting started with threat intelligence and don’t have their processes in place yet. As a result, they may see a lot of false positives from the feeds, or they just may get overwhelmed with the data.

Before an organization sets up threat feeds, it’s important to have people in charge, taking action on the data.

The “No Context” Problem

Most organizations know that they have to aggregate threat data, but they often fail to truly analyze the data. Data won’t help you unless it’s properly analyzed and understood in the context of the organization’s specific vulnerabilities and weaknesses. If a high-profile vulnerability such as “POODLE” is exposing a large portion of the Internet, it may not matter at all to your specific company, based on your own unique environment and assets.

It may be far more important for you to take action on some other exploit that’s rarely discussed or seen.

A wide array of threat feeds correlated with a company’s internal assets

The “No Communication” Problem

It’s essential to have an easy way to understand and share the output of the operation with the entire company. Non-technical business executives should be able to see at a glance which groups of assets have which weaknesses, and the team itself should get recognition for the work it does in protecting the company. No one likes to build dashboards and reports all day. Communicating your company’s security posture at all times—and how your team has improved it—is a paramount responsibility of the security professional.

You knew it was coming…

Risk I/O Can Help

Let me take a moment to discuss what Risk I/O does in regards to threat intelligence. Risk I/O is a way to improve and contextualize your vulnerability scanning by providing prioritization, visualizations, and—of course—integrated threat feeds. We actually provide seven threat feeds, and the data comes through in such a way that you’re enabled to prioritize your latest fixes, clearly communicate your risk posture, and understand your weaknesses across your asset groups.

Whether or not you use Risk I/O, of course, the point is to have not just threat feeds—but an actionable plan for making use of them, and ensuring that your company is sufficiently integrating and contextualizing what’s happening in the “real world” with what’s happening inside your own organization.

Read more about threat intelligence and how to do it right in our latest white paper.

Secret #5 of Vulnerability Scanning: You Can Actually Prioritize, Rather Than Just Analyze

Posted on January 20, 2015 by in Industry, Network Scanners, Security Management, Vulnerability Assessment, Vulnerability Management

This is the third post by Ed Bellis in a three-part series on Vulnerability Scanning. To view all five secrets and two common “gotchas” of vulnerability scanning, please click here.

Typically, security teams spend tons of time putting together Excel spreadsheets and swimming through countless rows of data. Doing so will get the job done, eventually…kind of. But the problem is, as soon as you manage to rise to the top of your current data ocean, another wave will hit you. That is to say… by automating the detection you end up creating an ever-growing mountain of findings that requires more than manual effort to plow through. You can’t prioritize what to fix if you can’t even keep up with the inbound volume of data regarding potential threats, breaches and attacks.

What you need is a way to immediately prioritize the data in front of you. This is a case where tools—rather than elbow grease—may be of help. Platforms exist that can sit on top of your scan data and help you identify weaknesses in your infrastructure in the context of real-time threat data (i.e. what’s actually occurring in the world right now, and which may affect you).

This kind of platform solution, a GPS for your scan data, can be an immense time saver and can guide your efforts far more efficiently than simply sorting by CVSS scores each and every day.

Secret #4 of Vulnerability Scanning: Don’t Dump-and-Run, Make It Consumable

Posted on January 15, 2015 by in Industry, Network Scanners, Security Management, Vulnerability Assessment, Vulnerability Management

This is the second post by Ed Bellis in a three-part series on Vulnerability Scanning. To view all five secrets and two common “gotchas” of vulnerability scanning, please click here.

You know what I’m talking about when I talk about the infamous dump-and-run. “Here’s your 300-page PDF with a laundry list of every vulnerability known to man!”

From what I’ve seen, being the recipient of a dump-and-run is handled by systems administrators, developers, network engineers and other remediators exactly the same way: by filing it in the trash. The least effective way of getting critical issues fixed in your environment is the oversized PDF dump.

You need to make scan results consumable and actionable for those responsible for remediation. SysAdmins don’t want a laundry list of vulnerabilities listed by CVE identifier; they need an actionable list of what needs to get done, such as deploying a specific patch or update to a specific group of assets, with their relevant identifiers.

As Gene Kim so eloquently stated, “The rate at which information security and compliance introduce work into IT organizations totally outstrips IT organizations’ ability to complete, whether it’s patching vulnerabilities or implementing controls to fulfill compliance objectives. The status quo almost seems to assume that IT operations exist only to deploy patches and implement controls, instead of completing the projects that the business actually needs.”

Or to put it another way…don’t be that guy.