Five Common Vulnerability Management Mistakes to Avoid

Posted on July 21, 2015 by in Industry, Threat Intelligence, Vulnerability Management

Vulnerability management is often under-resourced and under-tooled, yet it stands at the epicenter of protecting the organization from a breach. Bringing best practices to bear can mean the difference between success and failure, but what does "best practices" mean, and what evidence exists to support them? Drawing on my time in the trenches as former CISO of Orbitz, as well as my work with dozens of enterprise customers here at Risk I/O, here are the mistakes that I've seen the most successful InfoSec teams avoid.

1. Remediate All the Things 

This may be the hardest thing for security teams to understand: vulnerability management is not a numbers game. You get no prizes for fixing as many vulnerabilities as possible; in fact, you expend precious energy and resources fixing the wrong things.

Prioritization is key. Which vulnerabilities truly pose a clear and present danger to your infrastructure, based on your assets? Hint: relying on a static CVSS score, which has no relevant context for the actual threats to your specific organization and environment, won't give you the full picture.

Your development team will thank you for having a clear strategy for knowing what to remediate and when—and then strategically allowing them to ignore the several thousand vulns that won’t actually make a material difference.


2. Rely Too Much on a Single Tool (*cough* Excel *cough*)

If prioritization is the name of the game, Excel can’t be the core of your strategy. Why? Because attacks are increasingly automated, which means you won’t be able to keep up with the sheer tsunami of attacks and exploits using manual methods. It’s impossible.

Find the right tools and platforms that can help your prioritization efforts be as automated and scalable as the techniques employed by your adversaries.

3. Don’t Mix the Right Potion (vulns + threats + 0 days)

What are the ingredients for automated prioritization? This is another area where Excel fails, because Excel will help you crunch the numbers but it won’t get to the heart of the issue, which is the need for context. How do you get context? Using vulnerability data as a base, you’ll need to add threat intelligence. But be selective about your data sources. When adding threat intelligence to vulnerability and asset data, you want to be heavy on the “how” and light on the “who.”  What vulnerabilities are being exploited and how? And if you have access to zero-day vulnerabilities, you’ll want a way to correlate that with your assets.

This gets to understanding your assets–where they are located, how they are accessed, and how important they are. This is all critical context you need to have when prioritizing issues. Remember—you don’t want to remediate all the things. Just the ones that matter.
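To make this concrete, here is a minimal sketch of context-driven prioritization in Python. The data shapes and weights are invented for illustration (this is not the Risk I/O scoring model): active exploitation and asset criticality outrank a static CVSS number.

```python
# A minimal sketch of context-driven prioritization (hypothetical data
# shapes, not an actual scoring model): rank vulns by live exploitation
# evidence and asset importance rather than by raw CVSS alone.

vulns = [
    {"cve": "CVE-2014-0160", "cvss": 5.0, "actively_exploited": True,  "asset_criticality": 3},
    {"cve": "CVE-2015-1234", "cvss": 9.0, "actively_exploited": False, "asset_criticality": 1},  # made-up CVE
]

def priority(v):
    # Weight in-the-wild exploitation and asset importance heavily;
    # CVSS acts only as a tie-breaker.
    return (v["actively_exploited"], v["asset_criticality"], v["cvss"])

for v in sorted(vulns, key=priority, reverse=True):
    print(v["cve"], priority(v))
```

Note that the lower-CVSS vulnerability wins here because it is being exploited in the wild on a critical asset, which is exactly the point.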

4. Ignore Your Risk Landscape

Other teams track their progress on a regular cadence, carefully evaluating where they were last quarter versus this quarter. I’ve seen InfoSec teams do this as well, but often only in terms of tracking the sheer quantity of vulnerabilities they’re reducing. This is playing the numbers game (and doing it badly).

What matters is your team's work set against a larger risk landscape. What is your organization's risk? Where was it two quarters ago, and what has been the reduction or increase over time? Which assets are most affected, and how can you minimize that risk with the least amount of effort (meaning, which vulnerabilities can you remediate that will make the most impact)?
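As a toy illustration (all numbers invented), a risk-landscape view boils down to two questions you can answer with very little code: how is overall risk trending quarter over quarter, and which fix buys the most risk reduction per unit of effort?

```python
# A toy risk-landscape view with made-up numbers: track an overall risk
# score per quarter, then estimate which remediation buys the most risk
# reduction per day of effort.

quarterly_risk = {"2014-Q3": 820, "2014-Q4": 760, "2015-Q1": 540}
candidates = [
    {"fix": "patch OpenSSL fleet-wide", "risk_reduction": 120, "effort_days": 4},
    {"fix": "upgrade legacy SQL boxes", "risk_reduction": 90,  "effort_days": 10},
]

quarters = sorted(quarterly_risk)
for prev, cur in zip(quarters, quarters[1:]):
    delta = quarterly_risk[cur] - quarterly_risk[prev]
    print(f"{prev} -> {cur}: {delta:+d}")  # negative delta = risk reduced

best = max(candidates, key=lambda c: c["risk_reduction"] / c["effort_days"])
print("highest impact per effort:", best["fix"])
```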

Shifting from a vulnerability mindset to a risk-assessment mindset is absolutely critical.

5. Scanning Too Much/Not Enough

Once-a-month scanning "checks a box," but it won't help you deal with active Internet breaches and same-day threats. You may be worried about the mountain of data that more frequent scanning will produce, but this risk can be minimized by having the right team designations and prioritization process in place (in part automated), as discussed above.


Summary

Having a set of best practices can pay great dividends for InfoSec teams who have to do a lot with a little. And don't forget the most important tip: be sure to celebrate your successes. Don't hold back on the beer.

The Three CVEs that You’re Not Paying Attention to (But Probably Should)

Posted on June 17, 2015 by in Industry, Remediation, Threats and Attacks, Vulnerability Management

The Risk I/O philosophy is all about fixing what matters – that is, using data to make decisions that make the most of the limited actions you can take in a day, a week, a month. It's not about the sheer volume of vulnerabilities your team closes; it's about closing the ones that reduce your overall risk the most.

Sometimes, the vulnerabilities that get the most attention aren’t the ones that represent the greatest threat. In my research, I’ve discovered a series of vulnerabilities that aren’t sexy, and don’t hog the spotlight–but in many environments actually represent major weaknesses. In fact, these three vulnerabilities have each been exploited over 100,000 times in 2014 alone!

The vulns I want to highlight are CVE-2010-3055, CVE-2002-0649, and CVE-2000-1209. They don’t have cutesy publicized names, so it might be a bit boring to talk about them. But you know what? If other people get to put ridiculous code names on their vulns, then I get to do the same thing. So let’s take a look.

1. Poster. CVE-2010-3055 has been exploited 121,000 times in 2014. Let's call it the Poster vulnerability. It allows attackers to run arbitrary code in phpMyAdmin via a POST request, and phpMyAdmin runs millions of sites worldwide. It's a CVSS 7.5, which means it's bound to fly under the radar more often than not. But it shouldn't. Security teams need to start worrying about Poster! https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2010-3055

2. Slammer. I'm calling CVE-2002-0649 the Slammer vulnerability. It's the vulnerability exploited by an ancient worm that targets SQL Server 2000 and Microsoft Desktop Engine 2000. Reading the Wikipedia article on the worm (http://en.wikipedia.org/wiki/SQL_Slammer) makes it seem like a long-forgotten problem, but we've seen 156,000 successful exploitations in 2014. It's not new, it's not hip, it's not current, so no one talks about it, but it's a significant threat. https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2002-0649

3. Enterprise. Last up is Enterprise, a.k.a. CVE-2000-1209, which affects (1) Microsoft SQL Server 2000, (2) SQL Server 7.0, and (3) Data Engine (MSDE) 1.0, including third-party packages that use these products, such as (4) Tumbleweed Secure Mail (MMS), (5) Compaq Insight Manager, and (6) Visio 2000; all of these are targeted by the Voyager Alpha worm. CVE-2000-1209 is not to be forgotten, with 272,000 successful exploitations. Resistance is futile? https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2000-1209

To name something is to have power over it – but it’s the quiet ones that you need to be worried about. Pay less attention to the flashy, glitzy vulnerabilities and pay more to the ones that are truly a lurking threat.

Catching Bees with Honey – One HoneyPot Farm’s Quest to Protect the Net

Posted on June 11, 2015 by in Cyber Attacks, Ethical Hacking, Industry

They say you can catch more bees with honey than vinegar.

On the web, that bee is someone hacking through the layers of the web itself. The honey is the vulnerability of poorly secured websites and servers. When lucky, hackers find a way to get to the data and can harvest it for their own benefit. But sometimes, they fall prey to a facade that looks and acts like unsecured data but is actually a trap.

This is (essentially) a honeypot.

At its core, a honeypot is a server that uses exposed vulnerabilities to attract malicious hackers. The data kept on the server is either unimportant or non-existent, but the trap is real. When a hacker enters the server, the malware they use is captured, analyzed, and recorded. Honeypots capture data by utilizing intrusion detection systems, such as Snort, in combination with strategically open vulnerabilities. Oftentimes, the honeypot will mimic a server that was recently publicized for being breached. The data is then analyzed to determine the attacker's intent. Those watching the pot use this information to create signatures based on the attack, matching them with currently known exploits or zero-day attempts.
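To make the mechanics concrete, here is a minimal low-interaction listener sketched in Python. It is illustrative only, not the implementation behind any real farm: it accepts connections on a commonly probed port and records the source address and the first payload bytes for later analysis.

```python
# A minimal low-interaction honeypot sketch (illustrative only): listen on
# a commonly probed port and log every connection attempt plus its first
# payload bytes for later analysis.

import socket
import datetime

def run_listener(port=1433):  # 1433 = SQL Server, a perennial worm target
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, (ip, _) = srv.accept()
        conn.settimeout(5)
        try:
            payload = conn.recv(1024)
        except socket.timeout:
            payload = b""
        finally:
            conn.close()
        # In a real deployment this record would feed an IDS like Snort for
        # signature matching; here we just log the attempt.
        print(datetime.datetime.utcnow().isoformat(), ip, payload[:64].hex())

if __name__ == "__main__":
    run_listener()
```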

For the past year, I’ve been tracking the latest attacks through a growing number of honeypots on my honeypot farm, h8ck3d.com. The farm started as a single honeypot, collecting a few attacks each week. Now it’s collecting as many as a thousand unique attacks daily. Each attack is contained, analyzed, geo-located, and categorized by CVE or product. As a researcher, you can log in and see real-time attacks from around the globe attempting to exploit known CVEs. This information is freely available, in real-time, for white hats through an interactive map and REST API.

I've been intensely interested in the harvesting of my honeypot farm, as it gives me a unique perspective when handling CVEs for clients at Risk I/O. With the seemingly endless number of breaches happening at the enterprise level, I'm hoping that the prevalence of measures like my farm can help protect the net. The more attackers who unknowingly share their malware with a honeypot before they hit a 'real' server, the quicker we can analyze and protect those at risk. Catching hackers with a little bit of sweetness. Like bees to honey.

David Hunt is a senior software engineer for Risk I/O. He has focused most of his work in the areas of agriculture and defense, with cyber security as an overarching theme in his work. His honeypot farm can be found at h8ck3d.com.

Security As Code at Cloud Security World

Posted on May 28, 2015 by in API, Cloud, DevOps, Event, Open Source

Last week Jason Rohwedder and I had the privilege of presenting a cloud automation use case at Cloud Security World. Our talk not only covered how we automate much of our security at Risk I/O, but how we use DevOps principles to ensure our security controls are consistent even at a high velocity.

While we have spoken about some of this content before, one thing was brand new and, in my opinion, has massive potential to reduce everyone's Mean Time to Remediate vulnerabilities.

Jason has been working on an open source project called Tattle that, when boiled down, is a ridiculously simple way to store and regurgitate data; in our case, software version data. Using Tattle allows someone to identify versions of software and packages running in any environment in a Common Platform Enumeration (CPE) format, with hooks into many common configuration management tools like Chef, Puppet, or Ansible, among others. By using Tattle in combination with the Risk I/O API, you could have a single simple script that queries for software versions running on any given asset and then updates those assets in Risk I/O. From there, Risk I/O will automatically create or close any known vulnerabilities for that particular asset and can alert you on new CVEs that affect your assets as soon as they are published.
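To give a flavor of the glue involved, here is a hedged sketch of such a script in Python. The Tattle endpoint, the Risk I/O asset route, the auth header, and the response shapes below are all placeholder assumptions for illustration; consult each project's documentation for the real interfaces.

```python
# A hedged sketch of the kind of glue script described above. The Tattle
# endpoint, the Risk I/O asset-update route, and the auth header are all
# assumptions for illustration -- check the real docs for actual APIs.

import requests

TATTLE_URL = "http://tattle.internal:8080/assets"   # hypothetical endpoint
RISKIO_URL = "https://api.risk.io/assets"           # hypothetical route
HEADERS = {"X-Risk-Token": "YOUR_API_TOKEN"}        # hypothetical auth header

def sync_asset_versions():
    # 1. Ask Tattle which CPE-formatted package versions each asset runs.
    for asset in requests.get(TATTLE_URL).json():
        cpes = asset["cpes"]  # e.g. ["cpe:/a:openssl:openssl:1.0.1f"]
        # 2. Push those versions to Risk I/O, which can then open or close
        #    known vulnerabilities for the asset automatically.
        requests.put(f"{RISKIO_URL}/{asset['id']}",
                     headers=HEADERS,
                     json={"asset": {"cpes": cpes}})

if __name__ == "__main__":
    sync_asset_versions()
```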

This dramatically lowers your mean time to remediate by eliminating the wait for vulnerability signature updates to your scanner, and the wait for scanning windows, before you can identify new vulnerabilities and determine a course of remediation.

We’re really excited about the potential for Tattle and we’ll be updating this post as we make the source available on Github.

Below is our presentation from Cloud Security World. There are a number of other open source projects listed in the Resources section to help you with security automation in your environment; we hope you can take advantage of them.

Looking Before & Beyond a Breach: Lessons from a DBIR Featured Contributor

Posted on April 16, 2015 by in Data Science, Industry, Patch Management, Prioritization, Remediation

As you may know, the 2015 Verizon Data Breach Investigations Report was recently released. This is the "gold standard" research document for information security, and we're proud to say that Risk I/O was a featured vulnerabilities contributor, providing a rich correlated threat data set that spans 200M+ successful exploitations across 500+ common vulnerabilities and exposures from over 20,000 enterprises in more than 150 countries.

With our data set in hand, Verizon focused on identifying patterns within successful exploits to help prioritize remediation and patching efforts for known vulnerabilities. A sample of their findings using Risk I/O data:

  • A patch strategy focused on coverage & consistency is far more effective at preventing data breaches than “fire drills.”

  • Just because a CVE gets old doesn't mean it goes out of style with the exploit crowd (they have a reputation for partying like it's 1999).

  • It’s important to prioritize remediation of exploited vulnerabilities, beyond the top ten or so CVEs.

  • Data can indicate whether a vulnerability should be patched quickly, or whether it can just be pushed with the rest.

Probably the most interesting statistic to come out of our research is that attackers aren't just going after the flashy, media-worthy vulnerabilities. An astonishing 99.9% of vulnerabilities that become exploited are at least a year old. It's not the newest ones that attackers are using; it's some of the oldest ones on record.

Figure: Cumulative percentage of exploited vulns by weeks from CVE publish dates.

Of all of the risk factors in information security, vulnerabilities are probably the most controversial. Which vulnerabilities should be patched? And more generally, what can we all do before a breach to improve vulnerability management programs? Many more data-driven recommendations for improving your remediation strategy can be gleaned from this year's report.

The Verizon Data Breach Investigations Report is a must-read for InfoSec professionals, and Risk I/O is proud to have participated. A special thanks to Bob Rudis and Jay Jacobs for their help and patience.

Vulnerability Management for the Midsize

Posted on March 19, 2015 by in DevOps, Industry, Remediation, Threat Intelligence, Vulnerability Management

It’s not fair. The big companies have the teams, the tools, and the processes required in order to run a best-in-class vulnerability management program.

But guess what? The bad guys don't care about how big you are. In fact, non-targeted exploits accounted for 75% of the breaches in Verizon's 2013 Data Breach Investigations Report, meaning mid-sized companies are just as subject (if not more so) to the same attacks that hit JP Morgan.

In a large company, there’s a security team with members who each wear different hats. But in a mid-sized company, you only have one or two—and yet, it’s equally (if not more) critical that your vulnerability management process is spot-on.

So how do you do it? Here’s how:

1. First of all, don’t cut corners on scanning.
The worst thing you can do is decide you’ll only scan quarterly (or twice a year) since otherwise it takes too much effort. Much like devops and deployments, your goal should be small and frequent changes versus trying to do it all a few times per year. In 2015, the year of the “one billion exploit,” continuous vulnerability scanning has become table stakes.

Easy for us to say? It can be done. Read on.

2. Everyone’s going to wear multiple hats, so figure out who they are.
You don’t have the option of hiring an entire team, so figure out if you can at least have a two-person team doing the work. You may have the person who does the vuln scanning as well as analyzing, but then someone in development or operations doing the remediation work. That’s a lot of work for two people, but if you have clearly defined responsibilities, it can get done.

And what’s even more important in a small team is to have a great line of communication between those two people. Take them out for a beer—be friends. You need each other.

3. Don’t just rely on vuln scanning—bring in threat intelligence and context.
What particularly matters for mid-sized companies is to be sure that their teams are prioritizing the right things. Looking solely at the output of vuln scanning won’t help, because it’ll be simply another list of vulnerabilities without real-world context. And as the saying goes, “Context is king.”

What you want to do is ensure that you have real-world context for the weaknesses you find, which will help you give exactly the right things to your remediation partner. You’ll want to understand which attacks are successful in the wild, the volume and velocity around those attacks, which industries and geographies are more likely to suffer from particular exploits and vulnerabilities, and the importance of vulnerable technology assets and any mitigating controls that may be in place. (And when you’re a two or three person team, this is critical. Don’t be the person who dumps a 300-page PDF and runs for the hills).

Having real-world context for the output of vuln scanning is a key strategy for ensuring you’re fixing the right things and not just spinning your wheels.

4. Keep management’s attention on what you’re doing.
It’s so easy for security to be an “afterthought” at the mid-sized level, but as previously mentioned, mid-sized companies are as subject to serious attacks and exploits as the big guys. You want to ensure that management keeps its focus on security as a priority issue. So communicate what you’re doing relentlessly—show your risk posture via dashboards and reports but do it in the language of management. In other words, your reports need to be business-friendly; don’t speak in technical or security jargon when you’re trying to truly communicate to the business.

Rather, focus their attention on risk, not on counts. In doing so, you’ll be equipped to get an allocation of budget and resources, so your two-person team blossoms into a full vulnerability management program as soon as possible.

5. Fit into existing processes and tools.
Just like dumping a 300-page PDF report on the system admin's desk doesn't help you reduce your risk any faster, neither does changing the process around how your remediators work. Is your development team using a bug tracker to log and track issues? Reuse that same tool to manage remediation efforts. The same can be said of trouble ticketing and change management; if you can fit your efforts into the existing processes of the business, you're likely to get more accomplished.
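As an example, if your developers live in JIRA, a few lines of Python against its REST issue-creation endpoint can drop a prioritized vulnerability straight into their existing queue. The project key, credentials, and field choices below are placeholders:

```python
# One way to "fit in": file remediation work in the tracker the team already
# uses. A minimal sketch against JIRA's REST API; the project key, auth
# credentials, and issue fields are placeholder assumptions.

import requests

JIRA = "https://jira.example.com/rest/api/2/issue"

def file_remediation_ticket(cve, asset, auth=("svc_user", "app_password")):
    issue = {
        "fields": {
            "project": {"key": "OPS"},  # placeholder project key
            "summary": f"Remediate {cve} on {asset}",
            "description": f"{cve} observed on {asset}; prioritized by threat context.",
            "issuetype": {"name": "Task"},
        }
    }
    resp = requests.post(JIRA, json=issue, auth=auth)
    resp.raise_for_status()
    return resp.json()["key"]

# Example: file_remediation_ticket("CVE-2014-0160", "web-prod-03")
```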

6. Don’t go it alone.
Here’s the critical piece: being a small team (or even a one-person show), you don’t have the luxury of relying on manual processes to get all of this done.

Fortunately, it's 2015, not five years ago, and you don't have to. There are platforms designed for medium-sized businesses that can help you consume and analyze your scan data, integrate with threat feeds, and push everything into dashboards. The work of a five-person team can be done with one or two. A mid-sized company that is NOT using these new platforms may find itself trying to do the impossible.

Just as marketing uses email automation systems to launch campaigns, and Sales uses Salesforce to log deals, and customer service uses Zendesk to work with customers—well, security pros need to use the cloud-based solutions available to them to ensure that they’re able to get tons of work done with minimal effort.

Summary
You don't have to be IBM to have a world-class vulnerability management program. With the right planning, processes, and tools–as well as a small group of people who work well together and know their responsibilities inside and out–your mid-sized company can develop a fantastic approach to vulnerability management.

Are you putting together your first vulnerability management program for your mid-sized company? Interested in tips on making your current vulnerability management program world class? Read more about running an effective vulnerability management program even if you aren’t IBM in our latest white paper.

Vulnerability Cage Match

Posted on March 10, 2015 by in API, Feature Release, Metrics, Prioritization, Vulnerability Management

Sometimes you want to see the status of your open vulnerabilities across the various assets in your environment. And operating system continues to be an important datapoint. That's why we've improved the TagView dashboard. With a new name, Compare, and an expanded set of filters (we've added the ability to filter by assets running a specific operating system), you can now compare your assets from even more angles.


Simply choose the contenders from the drop-downs (asset tag vs. operating system), select to view as grouped, and voilà: your open vulnerability count, month-over-month, will appear for easy comparison.


You can also compare multiple asset tags vs. multiple operating systems in the ultimate Battle Royale. Simply choose your asset values for tag1 vs tag2 vs tag3 vs os1 vs os2 etc., select to view the set as stacked, and your open vulnerabilities for each asset will appear, by month.


This expanded Compare dashboard is just another way that Risk I/O is giving you visibility into your open vulnerabilities to aid in prioritization.

We've also updated our API with new asset group endpoints as well as new attributes. Users can now make calls to the API and pull all risk meter asset scores (as well as scores for individual assets). And users who use different methods to report on risk meter scores for assets and asset groups may find our new attribute risk_meter_score handy.


Information on these new API features is located in our API documentation.
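As a quick illustration, pulling those scores might look something like the sketch below. The base URL, auth header, and response shape here are assumptions made for the example; the API documentation is authoritative.

```python
# A short sketch of pulling risk meter scores via the API. The base URL,
# auth header, and JSON shape are assumptions for illustration;
# risk_meter_score is the new attribute mentioned above.

import requests

BASE = "https://api.risk.io"                  # hypothetical base URL
HEADERS = {"X-Risk-Token": "YOUR_API_TOKEN"}  # hypothetical auth header

def asset_group_scores():
    groups = requests.get(f"{BASE}/asset_groups", headers=HEADERS).json()
    # Assumed response shape: {"asset_groups": [{"name": ..., "risk_meter_score": ...}]}
    return {g["name"]: g["risk_meter_score"] for g in groups["asset_groups"]}

if __name__ == "__main__":
    for name, score in sorted(asset_group_scores().items()):
        print(f"{name}: {score}")
```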

So whether you are setting up assets to duke it out in the open vulnerabilities match-of-the-century, or you’re one of our API users who check their asset risk meter scores daily, we think you’ll find our new features will enhance how you’re already using Risk I/O.

New! Features that Will Improve Your Vulnerability Prioritization

Posted on March 5, 2015 by in Feature Release, Remediation, Vulnerability Assessment

Today, we’re announcing new statuses, filters and displays that will impact how you sift through scan data, prioritize vulnerabilities and communicate with your team.

New! Vulnerability Statuses

We’ve added two new vulnerability statuses that will make it even easier for your team to track the lifecycle of a vulnerability: risk accepted & false positive. These statuses are flagged by the end user and can be assigned to an individual vulnerability, or to many at once.


To assign a vulnerability as either risk accepted or as a false positive, navigate to the Home tab, select a vulnerability from the list, and then select the status from the status dropdown. You can also flag the status of vulnerabilities in bulk right in the table.


Note that risk accepted vulnerabilities and false positives will not affect the risk meter score (as only open vulnerabilities are counted). Assigning vulnerabilities with one of these new statuses ensures that your score is only affected by active, open vulnerabilities.
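Put another way, the meter only ever sees open vulnerabilities. Here is a toy illustration; the scoring math is invented, not the actual risk meter formula:

```python
# A toy illustration of the status rule described above: only "open"
# vulnerabilities contribute to the meter; risk-accepted and false-positive
# findings are excluded. (The averaging here is made up for the example.)

vulns = [
    {"id": 1, "status": "open",           "score": 90},
    {"id": 2, "status": "risk accepted",  "score": 80},
    {"id": 3, "status": "false positive", "score": 70},
    {"id": 4, "status": "open",           "score": 40},
]

open_scores = [v["score"] for v in vulns if v["status"] == "open"]
risk_meter = sum(open_scores) / len(open_scores)  # averages only open vulns
print(risk_meter)  # 65.0 -- statuses 2 and 3 never move the needle
```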

New! “Found” Date Display:

Let’s say that you wanted to know when your risk-accepted vulnerabilities were originally discovered. Simply filter your view by risk-accepted, and then select to display the “Found” date by using the Display dropdown.


Now let’s say that you wanted to track and manage the vulnerabilities that have been Risk Accepted. Select the Export this View dropdown, and a CSV export of your risk-accepted vulns will appear, including the Found date (also New!).


Displaying and reporting on the date found will inform your team of the length of time since discovery, and will provide another decision factor for prioritizing based on age.

Filter by Port:

You can also now filter your vulnerabilities by the port(s) on which they were discovered. Select the port(s) of interest in the Vulnerability Filters sidebar, and right away the table will filter out the vulnerabilities unassociated with those port(s).


Give these new vulnerability features a spin by heading over to your Risk I/O instance. We think you’ll appreciate the time saved parsing through your vulnerability data and the peace of mind that comes with improving your full picture of risk. And if you don’t already have a Risk I/O account, you can create one for free.

What You Miss When You Rely on CVSS Scores

Posted on February 26, 2015 by in Data Science, Remediation, Vulnerability Database, Vulnerability Intelligence, Vulnerability Management

Effective prioritization of vulnerabilities is essential to staying ahead of your attackers. While your threat intelligence might expose a wealth of information about attackers and attack paths, integrating it into decision-making is no easy task. Too often, we make the mistake of taking the data given to us for granted – and this has disastrous consequences. In this blog post, I’ll explain what we miss by trusting CVSS scores, and what should absolutely be taken into consideration to focus on the vulnerabilities posing the greatest risks to our organizations.

Part of what Risk I/O does as a vulnerability threat management provider is leverage threat intelligence sources to help our customers understand the likelihood of a vulnerability breach. While still not a complete picture of the threat landscape, we use data from public vulnerability databases, zero-day vulnerability intelligence and aggregated metadata from Risk I/O’s 10,000 enterprise users, 1,100,000 live assets and over 100 million live vulnerabilities to assess the effectiveness of CVSS as a remediation policy.

Figure 1: Risk I/O correlates attack, threat and exploit data against user vulnerability data 24/7 to assess the effectiveness of CVSS as a remediation policy.

And what we've found is that some of the most damaging CVEs have been incorrectly assigned "low" CVSS scores. What are some of the low-scoring vulnerability types that are currently being attacked?

Terminal Denial of Service – CVE-2012-0152, CVE-2012-3170
Unauthorized Modification – CVE-2012-2566, CVE-2012-0867, CVE-2012-1715
Information Disclosure – CVE-2012-6596, CVE-2014-0160

Dell CTU researchers have found significant scanning activity related to these vulnerabilities in almost all sectors and across the world, and at Risk I/O we've observed over 200 million breaches, yet we're still stuck basing our remediation policies on CVSS and vendor-assigned scores. Why?

Where Is CVSS Failing?

CVSS scoring is failing to take into account quite a few factors:

1. Targets of opportunity for attackers:
The number of sites affected by CVE-2014-0160 is unfathomable, with broad estimates between 30% and 70% of the Internet. Meanwhile, randomly selecting vulnerabilities from a stack gives one about a 2% chance of remediating a truly critical vulnerability. Vulnerabilities like these give attackers a known edge on the probability that random targeting with a weaponized exploit will yield results, and this is why they use them.

2. Active and successful in-the-wild exploitation:
We are logging about 2M breaches (or successful exploits) every day across live vulnerabilities. The rate of incoming breaches is also increasing.

3. They’re easy to exploit:
Metasploit and ExploitDB are databases that offer information about attacker behavior, and they inform our decision-making about vulnerability assessment and remediation. The best policy is fixing vulnerabilities with entries in both Metasploit and ExploitDB, yielding about a 30% success rate (or 9x better than anything CVSS gets to).
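Expressed as code, that policy is a simple set intersection. The CVE sets below are illustrative placeholders standing in for real feed data:

```python
# A sketch of the remediation policy described above: fix first the
# vulnerabilities that have both a Metasploit module and an ExploitDB entry.
# These CVE sets are illustrative placeholders, not real feed contents.

metasploit_cves = {"CVE-2014-0160", "CVE-2002-0649", "CVE-2012-0152"}
exploitdb_cves  = {"CVE-2014-0160", "CVE-2002-0649", "CVE-2010-3055"}

open_vulns = ["CVE-2014-0160", "CVE-2012-0152", "CVE-2010-3055", "CVE-2015-9999"]

weaponized = metasploit_cves & exploitdb_cves   # in both databases
fix_first = [cve for cve in open_vulns if cve in weaponized]
print(fix_first)  # ['CVE-2014-0160'] -- the weaponized vuln we actually have open
```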

What About the Security Risk Assessment Methods Used Today?

Security risk assessment has been lagging behind our capabilities for years. With a countless volume of vulnerabilities being released constantly, we have hard evidence on why and where our vulnerability scoring systems are failing us. Today, we have access to free exploit databases, open threat exchanges, and a number of proprietary tools. With cheap cloud resources available, we no longer need to rely purely on analysts' opinions of what sort of risk a vulnerability poses. Instead, we can add context through structured, real-time analysis of the data.

But current risk assessment methodologies do not fit real "in-the-wild" attack data. Mauricio Velazco, the head of vulnerability management at the Blackstone Group, drives the point home in an article in which he explained, "We have to mitigate risk before the exploit happens. If you try to mitigate after, that is more costly, has more impact, and is more dangerous for your company." Current prioritization strategies based on the Common Vulnerability Scoring System (CVSS), and subsequent adaptations of such scores, have two fatal flaws:

1. Current risk assessments (CVSS included) lack information about what kind of attacks are possible. As security practitioners, we care about which vulnerabilities matter. We solve this problem by using live, currently open vulnerabilities to do our assessments.

2. Attackers do not go after the same vulnerability month after month, week after week, hour after hour. If certain types of attacks are failing, they change strategies. Below is a timeline of breach counts across 30,000 organizations worldwide. Each color represents a different CVE, and these are only the ones which have more than 10,000 breaches recorded to their name.

CVEs with more than 10,000 breaches recorded to their name
Figure 2: This timeline represents the CVEs with more than 10,000 breaches recorded to their name. Note that the type of CVE exploited changes, as well as the frequency with which they are exploited.

Current risk assessment strategies are based on CVSS scores, which are assigned sometimes years before an organization decides whether to patch or forgo patching a given vulnerability. Risks change in real time, and as a result, risk assessment methodologies should be real-time as well.

What Do We Do About It?

Too often, infosec professionals find themselves working hard at the wrong thing. Working on the right thing is probably more important than working hard. So how should you prioritize the remediation of some vulnerabilities over others?

A better strategy is to use threat intelligence to explore the source and spread of vulnerabilities. Since breach data stems from how attackers are behaving, a handle on threat intelligence lets you identify which vulnerabilities have a high probability of causing a breach. Checking threat intelligence feeds for what's being actively exploited (thinking like an attacker, with the same information an attacker has) is an action plan infosec professionals can follow to prioritize remediation. This shifts your strategy away from trying to fix everything and toward identifying and remediating the few vulnerabilities most likely to cause a breach.

You can hear Michael Roytman discuss vulnerability prioritization at a NY Information Security Meetup.

What a Difference a Year Makes: Reflecting on our Dell SecureWorks Partnership

Posted on February 18, 2015 by in Industry, Partnership, Prioritization, Threat Intelligence

What a difference a year makes. Nearly a year ago, Risk I/O was in the beginning phases of what would become one of our greatest successes to date: a partnership with Dell SecureWorks. As we celebrate the one-year anniversary of the partnership, we wanted to highlight its significance and the firm validation it represents in the marketplace.

Partnership highlights include:

  • Threat intelligence supplied by Dell SecureWorks’ Counter Threat Unit has given our users unprecedented visibility into the vulnerabilities that matter most to their environments.  Dell CTU threat intelligence has quickly evolved into an invaluable data source within the Risk I/O platform.
  • We have realized exponential growth in our expanding customer base and can now benchmark data across every vertical market.
  • Risk I/O has fulfilled a need for SecureWorks by offering an integrated vulnerability threat processing service to their customers. Customers can access their assets and vulnerabilities via single sign-on to Risk I/O in their Client Portal.
  • SecureWorks customers have access to breach and attack data seen on over 50k websites and over 10k corporate networks through our threat processing engine.
  • SecureWorks customers can aggregate their multiple scanning sources into one list of assets and vulnerabilities, which has enriched SecureWorks managed services with instant access during incident response.

Risk I/O is growing faster than ever due to our partnership with Dell SecureWorks and their leadership in managed security services. As we enter the second year of the partnership, we predict even more success. The best is yet to come.