Announcing Our Latest Integration: Beyond Security

Posted on 05. Jun, 2014 in Network Scanners, Risk I/O, Static Analysis, Vulnerability Assessment

Beyond Security Web Application Security

At Risk I/O, we’ve always made it our mission to integrate with the scanner tools used most. That’s why we’ve added integration with the Beyond Security AVDS web scanner to our vulnerability threat management platform.

With the new Beyond Security AVDS connector, you can discover and eliminate your network’s most serious security weaknesses. Simply sync your scan data via our new connector and Risk I/O will continuously process it against active threats from our threat processing engine. Risk meters can be used to pinpoint your exposure to active Internet attacks and breaches and to prioritize the vulnerabilities putting you at greatest risk.

Setting up your Beyond Security AVDS connector in Risk I/O, like our other connectors, is easy: simply add it to your instance through the Connectors tab. Not a Risk I/O customer but would like to try out the integration? Sign up for a free account and sync your scan data now.

Introducing Nessus Auto-Close with Risk I/O

Posted on 13. Nov, 2013 in Network Scanners, Remediation, Risk I/O, Vulnerability Assessment

Our Latest Nessus Connector

Our latest Nessus connector auto-closes remediated vulnerabilities and tracks state.

One of the common issues with running multiple siloed scanners is tracking the state of vulnerabilities over time. Which vulnerabilities should be closed based on my subsequent findings (or lack thereof)? This problem can be exacerbated when aggregating these point scanners into a central repository such as Risk I/O. Our Nessus connector now tracks the state of all reported vulnerabilities and auto-closes any that have been remediated.

With the latest updates to our Nessus connectors, we address this problem, making state management much simpler. Now when you run your Nessus connector, we analyze all of the plug-ins and scan policies used, as well as which assets were scanned, in order to determine which vulnerabilities are no longer present compared to previous scans. This works with both our Nessus API connector and our Nessus XML connector. When using the Nessus XML connector, just load the files in chronological order to ensure Risk I/O auto-closes correctly; with the Nessus API connector we’ll handle all of those details for you.
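As a rough illustration, the auto-close decision can be pictured like this (a hypothetical Python sketch; the function and field names are ours, not Risk I/O’s actual implementation). The key point is that a vulnerability is closed only when the new scan actually covered its asset and plugin:

```python
# Hypothetical sketch of auto-close logic: a vulnerability is closed only
# if the latest scan re-tested its asset and plugin, so findings that were
# simply not re-scanned are never closed by omission.

def auto_close(open_vulns, new_findings, scanned_assets, scanned_plugins):
    """Return (still_open, closed) sets of (asset, plugin_id) tuples."""
    closed = {
        (asset, plugin)
        for (asset, plugin) in open_vulns
        if asset in scanned_assets
        and plugin in scanned_plugins
        and (asset, plugin) not in new_findings
    }
    still_open = (open_vulns - closed) | new_findings
    return still_open, closed

# Example: host-a was re-scanned and plugin 1001 no longer fires there;
# host-b was not scanned at all, so its finding stays open.
open_vulns = {("host-a", 1001), ("host-a", 1002), ("host-b", 1001)}
new_findings = {("host-a", 1002)}
still_open, closed = auto_close(open_vulns, new_findings, {"host-a"}, {1001, 1002})
print(closed)  # {('host-a', 1001)}
```

Loading XML files out of order would break the “previous vs. current” comparison above, which is why chronological order matters for the XML connector.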

To fully automate the management of these Nessus findings, you can use the Risk I/O Virtual Tunnel to connect to your on-premises scanner and schedule and import findings automatically. From there, Risk I/O will analyze your findings via our processing engine, matching them against any threats, including exploits and breaches, that we observe across the Internet.

We’re big believers in automation in order to scale security programs, allowing your team to focus on fixing what matters. If you already have a Risk I/O account, give our new Nessus connector functionality a try. You’ll find it in the Connectors tab. If you don’t yet have an account, you can sign up and give it a whirl.

Introducing the Risk Meter

Posted on 08. Oct, 2013 in Data Analysis, Data Science, Feature Release, Launch, Metrics, Threats and Attacks, Vulnerability Intelligence

Risk Meter

You may have noticed we’ve been publishing a lot of information lately on what factors go into the likelihood of a successful exploit. Our presentation at BSidesLV and subsequent events touched on some of the work we’ve been doing, based on our processing of over a million successful breaches observed across the Internet. While this data continues to grow, we’ve already been able to glean some great insights into which factors matter most when making remediation decisions.

Our data-driven approach has us pretty excited about our latest feature to hit Risk I/O: the Risk Meter. The Risk Meter takes a number of factors into account, including Exploit Analytics, Asset Prioritization, and Adjusted CVSS. Take a look at the Risk Meter FAQ for more information on rankings.
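Purely as an illustration of how factor sub-scores might combine into a single meter reading (the weights below are invented for the example; the real weighting is Risk I/O’s own and is covered in the Risk Meter FAQ):

```python
# Purely illustrative: combine hypothetical 0-100 factor sub-scores into a
# single 0-1000 meter reading. The weights are invented for this example;
# the actual Risk Meter weighting is not described in the post.

WEIGHTS = {
    "exploit_analytics": 0.5,  # observed exploitation and breach signals
    "asset_priority": 0.3,     # how critical the asset group is to you
    "adjusted_cvss": 0.2,      # CVSS adjusted by environmental context
}

def risk_meter(factors):
    score = sum(WEIGHTS[name] * factors.get(name, 0) for name in WEIGHTS)
    return round(min(score, 100) * 10)  # scale to a 0-1000 meter

print(risk_meter({"exploit_analytics": 90, "asset_priority": 70, "adjusted_cvss": 60}))  # 780
```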

Because the Risk Meter is an asset-driven model, you’ll naturally find it in the Assets tab. As you apply filter and search criteria to your assets, the Risk Meter score will change to reflect the current group of assets you are viewing.

Want to see a patch report that reflects those assets and their Risk Meter score? Just click the patch report button directly beneath your meter.

Want to save the Risk Meter for that asset group to your dashboard? Click the save button and it will automatically save that group into your Risk Meter dashboard.

Risk Meter Dashboard

The Risk Meter dashboard gives your team and management a quick, at-a-glance view of your vulnerability and exploit risk across your business, categorized by what’s meaningful to you. Each Risk Meter in your dashboard displays your current real-time score and summarizes the group: the number of vulnerabilities and assets, how many vulnerabilities are easily exploitable, how many are being observed as breaches in the wild, how many are popular targets, and how many have been prioritized.

If you already have a Risk I/O account, feel free to check it out now in your Assets tab. Don’t have an account? Sign up for a free trial.

Nmap + Risk I/O = Peanut Butter + Chocolate

Posted on 03. Sep, 2013 in Feature Release, Network Scanners

No, I’m not speaking of a fancy new risk formula, but rather about one of our most popular integrations: Nmap.

Nmap can be a pretty powerful tool for asset discovery and for figuring out what services and ports are open across your network. It can also be a great way to find configuration issues that could result in security weaknesses in your environment. By combining Nmap with NSE scripts, you can even pull Common Vulnerabilities and Exposures (CVE) identifiers in some cases.

In Risk I/O, we add context to your vulnerabilities in order to prioritize the most critical.

You can now filter your assets by service name, port, protocol, and product.

Adding data from vulnerability scanners can make for a more complete picture and help factor into remediation decisions. This is where Risk I/O plays a starring role. Combine this with some new ways to slice-and-view the data within our Assets tab to get that holistic view of your network. You can now filter your assets by Service Ports, Service Names, Protocols, and Products, among other things. Want to see where telnet might be exposed in your DMZ, or understand where you might be running a prohibited service? It’s as simple as a single checkbox in Risk I/O.
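If you work with raw Nmap data yourself, the same kind of slicing can be done against Nmap’s XML output (`-oX`). Here is a hedged sketch in Python using only the standard library; the host address and embedded XML snippet are made up for the example:

```python
# Sketch: filter Nmap XML output by service name, the same slicing the
# asset facets provide in the UI. The XML here is a minimal made-up sample
# in the shape Nmap's -oX output uses.
import xml.etree.ElementTree as ET

NMAP_XML = """<nmaprun>
  <host><address addr="10.0.1.5"/>
    <ports>
      <port protocol="tcp" portid="23"><service name="telnet"/></port>
      <port protocol="tcp" portid="443"><service name="https" product="nginx"/></port>
    </ports>
  </host>
</nmaprun>"""

def hosts_with_service(xml_text, service_name):
    """Return (address, port) pairs running the named service."""
    root = ET.fromstring(xml_text)
    hits = []
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            svc = port.find("service")
            if svc is not None and svc.get("name") == service_name:
                hits.append((addr, int(port.get("portid"))))
    return hits

print(hosts_with_service(NMAP_XML, "telnet"))  # [('10.0.1.5', 23)]
```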

While filtering can make issues easy to find, there are also side benefits. For example, we learned many of our customers in the Energy sector are using this as part of their compliance efforts with the NERC CIP ports and services requirement (PDF). By identifying those services through these easy-to-use filters and saving the results as a saved search, they have a single click to provide the necessary documentation to their auditors or to identify any prohibited services. I’ve included a very brief video below on doing just that.

If you’re already a Risk I/O customer, give the new facets in the asset tab a try. I’d love to hear about any use cases you may have. If you’re not currently a customer, you can sign up for free and give it a spin.

Introducing Quick Lists

Posted on 24. Jul, 2013 in Feature Release, Launch, Pricing, Threats and Attacks, Vulnerability Database, Vulnerability Management

As you may have read, the Risk I/O platform now correlates live Internet attack data with your vulnerabilities. As your vulnerabilities are processed, we append any vulnerability with additional data around attacks, threats, or exploits. Together, they help to identify where attacks are most likely to occur within your environment.

With the addition of this data, Risk I/O is now able to help you better prioritize your most critical vulnerabilities. We’ve used this data to create Quick Lists, a new feature available on our Vulnerabilities page.

Vulnerability Page with Facets

Quick Lists are a set of facets that inform you of the vulnerabilities in your data that are putting you most at risk for a breach. Quick Lists are made up of the following facets:

  • Top Priority – Vulnerabilities ordered so that fixing them reduces your risk most efficiently.
  • Active Internet Attacks – Vulnerabilities with known attacks occurring right now.
  • Easily Exploitable – Vulnerabilities with known exploits.
  • Popular Targets – Vulnerabilities present in many environments including yours.

You can filter the vulnerabilities listed in your Vulnerability Table by facet. Simply select the facet and the results in the table will update.

The Top Priority filter identifies your highest priority vulnerabilities (those which will most improve your security posture if addressed). Although the vulnerabilities listed in each facet are determined by your vulnerability data (as well as the Internet attack data we append to each vulnerability), you can add vulnerabilities to the Top Priority list by flagging them.
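To make the facet definitions concrete, here is a hypothetical sketch of how a vulnerability might map onto Quick Lists. The field names (`active_attack`, `has_exploit`, `prevalence`, `flagged`) and the prevalence threshold are invented for illustration, not Risk I/O’s actual logic:

```python
# Hypothetical sketch of mapping a vulnerability record to Quick List
# facets. Field names and thresholds are invented for this illustration.

def facets_for(vuln):
    facets = set()
    if vuln.get("active_attack"):
        facets.add("Active Internet Attacks")
    if vuln.get("has_exploit"):
        facets.add("Easily Exploitable")
    if vuln.get("prevalence", 0) > 0.5:  # present in many environments
        facets.add("Popular Targets")
    # Top Priority combines the threat signals with manual flagging.
    if vuln.get("flagged") or (
        "Active Internet Attacks" in facets and "Easily Exploitable" in facets
    ):
        facets.add("Top Priority")
    return facets

vuln = {"active_attack": True, "has_exploit": True, "prevalence": 0.8}
print(sorted(facets_for(vuln)))
```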

In addition to the new Quick List feature, we are also introducing new pricing for our vulnerability intelligence platform. Our platform is now available for $1 a month per asset*, with volume discounts available. You can talk to the Risk I/O Sales Team regarding a quote or for more information. We will continue to offer a 30-day free trial of a complete version of our platform (with all features available). If you haven’t taken Risk I/O for a spin before, you can sign up right on our website.

 

*An asset is a server, IP address, router, app, or any other entity that a security scanner examines for vulnerabilities.

A Conference By Any Other Name

Posted on 14. May, 2013 in Event, Metrics, Risk I/O, Vulnerability Management

Last week I had the opportunity to present at the Best Practices for Technology Symposium. I have to be honest: I’d never heard of this event, and given the name it’s easily missed. In fact, given my recent post on “best practices” and vanity metrics, I would likely have avoided an event with such a name. But that would have been a mistake.

Gene Kim introduced me to Fred Palmer, who runs this event, which is why I seriously considered it. It turns out it’s nothing like what I thought, but rather two-plus days of emerging technology companies presenting some of their latest tech. I only wish I had known about this event earlier, and that I’d known the format was so open. It’s refreshing to have an audience that actually wants to talk about whether or not the solution you’re pitching works for them, rather than a thinly veiled sales pitch cloaked as thought leadership. Having been a practitioner for the majority of my career, I think this format is sorely needed. Fred has done a great job bringing together interesting security technologies and providing open and honest feedback. Much like IANS, it has a great workshop format in an informal setting.

One question during the workshop really resonated with me:

Do you think organizations know what’s important to them?

Of course, the sad truth is “it depends.” We were talking about creating a platform like Risk I/O, which was built with flexibility in mind. This allows users to slice the data into views that are important to them in order to get better and faster insight. But it’s a valid question: what if the organization isn’t sure where to start or what should be important to it? We like to think our prioritization and trending, along with the Heads-Up Display, are a great start, but we will continue to help our customers by flagging and alerting on issues we see as important while maintaining transparency and flexibility. These go well beyond the standard CVSS calculators and take into account real in-the-wild information.

A common question we get is “What are the metrics others are using to measure themselves against?” We will continue to share important metrics to help teams jump-start their programs. It’s great to see practitioners getting together and sharing information that will benefit and raise the bar, and we’ll continue on our mission in helping you gain visibility into what’s important.

Best Practices = Vanity Metrics

Posted on 21. Mar, 2013 in Industry, Metrics, Security Management, startup

After recently reading a post from Gary McGraw at Cigital arguing for software security training, I became a bit frustrated with the cited “evidence,” posted as much on Twitter, and received a short follow-up from Lindsey Smith over at Tripwire…

Now let me say upfront: I have a lot of respect for Gary and his work, AND I actually agree with him on the subject of software security training. I’ll get into why I agree with him in a bit. That said, here’s where my frustration comes in: Gary references the BSIMM as evidence that software security training works. Evidence? I find the BSIMM interesting, but it leaves the taste of vanity metrics in my mouth. For those of you not familiar with the term, Eric Ries talks about vanity metrics a lot as part of The Lean Startup:

“Actionable metrics can lead to informed business decisions and subsequent action. These are in contrast to “vanity metrics” – measurements that give “the rosiest picture possible” but do not accurately reflect the key drivers of a business. Vanity metrics for one company may be actionable metrics for another. For example, a company specializing in creating web-based dashboards for financial markets might view the number of web page views per person as a vanity metric as their revenue is not based on number of page views. However, an online magazine with advertising would view web page views as a key metric as page views are directly correlated to revenue.”

I think the BSIMM and best practices within information security often fall under the definition of vanity metrics. There are things I like about the BSIMM, and it’s a great start, but it only focuses on one half of the data. Telling me what many companies are doing for their security controls becomes a lot more interesting when you also tell me how those controls fared over time. I would love to see the BSIMM and other models like it evolve into an evidence-based set of controls. Today, they certainly should not be cited as evidence that any control within them works, as we’re completely missing that side of the picture. This is also not a post to pick on the BSIMM, but rather an attempt to call out our industry for citing best practices without evidence.

I mentioned earlier in this post that I actually agree with Gary on software security training. The reason I can say this is based on evidence, not best practices. At my former employer, we implemented a number of measurements around application defects, and specifically security defects. We also ran various software security training exercises, both internally and with outside help. As part of this, we measured things like defect rates and density within specific groups, both before and after. We continued these measurements over time and saw material drops in most categories. Was it completely due to training? No, but each time we saw a measurable impact that correlated with a specific set of training. It’s evidence like this that I’d like to see combined with any set of “best practices.” At best, best practices are a set of things that others *may* be doing; at worst, they are meaningless vanity metrics.

Remediate…Like a Boss

Posted on 12. Mar, 2013 in Feature Release, Remediation, Risk I/O, Vulnerability Management

The Risk I/O dev team has been developing features at a ridiculous pace, with no signs of slowing down. We will be releasing a host of new functionality to our vulnerability intelligence platform over the weeks to come, so stay tuned. Our latest additions will help you identify the patches that will reduce the most risk across your environment and quickly push them out to your ticketing system, or manage remediation directly within Risk I/O.

We know that identifying remediations can be tedious, so we set out to solve this in a simple way. Patch data is pulled in directly from multiple sources and is now made available via patch reports on your Assets tab. Viewing a patch report gives you a quick view of unpatched systems, grouped by patch and sorted in order of total risk score. This view gives you the biggest bang for your buck (in other words, it reduces the most vulnerabilities with the least work).
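The roll-up behind a patch report can be pictured roughly like this (a hypothetical sketch; the data shape, patch IDs, and risk values are illustrative, not Risk I/O’s actual model):

```python
# Sketch of the grouping behind a patch report: roll open findings up by
# patch, sum their risk scores, and sort so the biggest win comes first.
from collections import defaultdict

findings = [
    {"asset": "web-1", "patch": "MS13-023", "risk": 9.3},
    {"asset": "web-2", "patch": "MS13-023", "risk": 9.3},
    {"asset": "db-1",  "patch": "APSB13-09", "risk": 7.5},
]

def patch_report(findings):
    totals, assets = defaultdict(float), defaultdict(set)
    for f in findings:
        totals[f["patch"]] += f["risk"]
        assets[f["patch"]].add(f["asset"])
    return sorted(
        ({"patch": p, "total_risk": totals[p], "assets": sorted(assets[p])}
         for p in totals),
        key=lambda row: row["total_risk"],
        reverse=True,
    )

report = patch_report(findings)
# One patch (MS13-023) clears two assets at once, so it tops the report.
print(report[0]["patch"])  # MS13-023
```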

View the patches that reduce the most risk via our patch reports.

Creating trouble tickets in Risk I/O has always been fast and simple through our integration with ticketing systems. But we’ve now made this even faster by adding bulk creation of tickets in Risk I/O for both vulnerabilities and assets. With a quick search and select, you can create tickets for hundreds of vulnerabilities and assets in a matter of seconds. Within the vulnerabilities tab you can send multiple vulnerabilities to a single ticket via the bulk ticketing feature, or within the patch report you can create a single ticket to patch thousands of assets at once.

Send multiple vulnerabilities or patch multiple assets with a single ticket.

Log into your account now and try these features for yourself. Of course, if you don’t have a Risk I/O account yet, you can sign up for free.

RSA Week Recap

Posted on 05. Mar, 2013 in Event, Industry, Risk I/O, Security Management, Vulnerability Intelligence

Well, the dust has finally begun to settle after another whirlwind week of activity around the RSA Conference. As in years past, my favorite track turned out to be the hallway track, although admittedly I didn’t get to see many of the talks and avoided the show floor most of the time.

One program I was able to not only join but also participate in was e10+, put on by the Securosis team. This is the second time I’ve been, and I really like the format. Rather than having someone talk at you followed by a short Q&A, it tends to be a more participatory format where all attendees are engaged and contributing throughout. If you’ve been in security for a while (at least 10 years), I’d definitely recommend it. I enjoyed our panel discussion about the grass being greener (and browner) running infosec at both small companies and large enterprises.

On Monday afternoon I gave an updated talk on the Security Mendoza Line at BSidesSF. Not realizing all the drama that was about to follow my talk, I obliviously enjoyed the conference and hanging out with everyone. I also caught talks by Andrew Hay on cloud forensics and Brett Hardin on penetration testing (and why it sucks). I was a bit worried about timing, given the handcuff competition ran a bit over, but was pleasantly surprised by the engaged Q&A following the talk. Clearly there were a lot of smart people in the room thinking about this problem. I believe BSidesSF will be posting the talks online, and some follow-up interviews will also be made available via BrightTalk. I’ll update this post once they’re available, but I’m also embedding my slides below.

Outside of the many meetings, events, and parties, the week was wrapped up by Metricon. Having attended several in the past, I was bummed I wasn’t able to make this one, although admittedly I was spent by Friday. Fortunately our own Michael Roytman attended and took great notes! Metricon had a different format this year, including workshop-like groups and lightning talks. Michael recently wrote a blog post on using game theory to solve infosec problems. Within the post he references a paper that does a good job of showing why network topology isn’t nearly as important as you might think when prioritizing vulnerability remediation. If you’re relying on firewalls and ACLs as your mitigating controls, you might want to take a close look at the referenced research.

Overall we had a very good conference, if for nothing else a red-hot hallway track. That said, I’m looking forward to a little conference respite before Thotcon and BSides Chicago.

Heads Up! (Display)

Posted on 22. Jan, 2013 in Data Analysis, Feature Release, Metrics, Risk I/O, Vulnerability Management

Heads Up Display

Visualizing your vulnerability data with our new Heads Up Display.

I’m happy to share our latest enhancement to visualizing your vulnerability data. Today we are launching a new Heads-Up Display (HUD): a “mini dashboard,” if you will, that allows you to visualize the current state of your vulnerabilities and defects.

Our new Heads-Up Display shows a live presentation of your vulnerabilities. It provides up-to-the-minute information on aspects of your vulnerability management program such as scoring, asset priorities, exploitability and impact calculations, with more metrics on the way. Each metric is interactive: rolling over a graph will show you the actual value of the attribute represented in that graph, while clicking a graph performs a live filter based on that attribute.

A simple use case for quickly finding vulnerabilities in your environment that have a very high likelihood of being exploited may be as follows:

  1. Navigate to the Vulnerabilities tab within your instance of Risk I/O where you will find the Heads Up Display.
  2. Let’s start by filtering on vulnerabilities that don’t require local access by clicking the Network portion of the Access Vector chart. This filter brings our open vulnerabilities down from 63,060 to 50,970.
  3. Now let’s drill down further by continuing to look at the simplest vulnerabilities to exploit. I’ll click the Low value for Access Complexity and None Required for the level of Authentication. This brings our list down even further, to 16,462.
  4. From here I can narrow the number of vulnerabilities to tackle by filtering on their impact subscores. This will give us issues that are not only likely to be exploited but also carry higher impacts. Let’s filter by choosing Complete impacts for Confidentiality, Integrity, and Availability. This brings our open list down to 5,709 open vulnerabilities.
  5. Next up, we’ll take our current list and narrow it further by only looking at vulnerabilities that have a Known Exploit. Choosing this value to filter on results in 1,111 open vulnerabilities.
  6. So far, we’ve been focusing on vulnerabilities that are easy to exploit, could have a higher impact on our environment, and have a publicly available exploit from sources like the Metasploit Framework, ExploitDB, etc. Let’s take this one step further and filter only on vulnerabilities within our DMZ that may be publicly facing. Since I have tagged these assets with DMZ within my instance of Risk I/O, I can simply select the tag ‘DMZ’ to filter on. This gives me a very short list of 16 open vulnerabilities to work with: the 16 most egregious vulnerabilities, right in the HUD.
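The filter chain in the steps above can be sketched as a sequence of predicates over CVSS-style records (a hypothetical illustration; the field names follow CVSS v2 vector terminology, and the sample data is made up):

```python
# Hypothetical sketch of the HUD filter chain: each click adds a predicate,
# and the remaining list shrinks. Sample records are invented.

vulns = [
    {"id": 1, "access_vector": "Network", "access_complexity": "Low",
     "authentication": "None", "impact": ("Complete",) * 3,
     "known_exploit": True, "tags": {"DMZ"}},
    {"id": 2, "access_vector": "Local", "access_complexity": "Low",
     "authentication": "None", "impact": ("Partial",) * 3,
     "known_exploit": False, "tags": set()},
]

steps = [
    lambda v: v["access_vector"] == "Network",            # step 2
    lambda v: v["access_complexity"] == "Low"
              and v["authentication"] == "None",          # step 3
    lambda v: all(i == "Complete" for i in v["impact"]),  # step 4
    lambda v: v["known_exploit"],                         # step 5
    lambda v: "DMZ" in v["tags"],                         # step 6
]

remaining = vulns
for step in steps:
    remaining = [v for v in remaining if step(v)]
print([v["id"] for v in remaining])  # [1]
```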

As mentioned, this is a simple use case for finding the most egregious vulnerabilities within your environment, and I’m certain you will find many others. We think the HUD will be one of the easiest ways to stay on top of your vulnerability management. We’ll be sprinkling more mini visualizations throughout Risk I/O in the future as we identify specific metrics that would be helpful to see in a more visual fashion. Give it a try and let us know what you think.