Laying the Foundation for Change

Posted on October 14, 2014 by in Industry, Risk I/O, Security Management

This blog post was written by Karim Toubba, the new CEO of Risk I/O. You can read more about our new CEO announcement here.

I have always been drawn to solving substantive problems that lay the foundation for change, particularly in the security industry. To date, much has been written about the sophistication of the hacker, and even the most casual news reader is bombarded with the latest highly publicized attack. Ironically, organizations continue to spend more money than ever on security technology (the entire industry spent over $46B last year – ABI Research).

While new technologies are needed to drive efficacy, especially in light of ongoing threats, they alone will not address this challenge. Talk to any security practitioner, from security operations analyst to CISO, and they will quickly point out that they are inundated with the newest technology to protect them against the latest attack. This so-called “layered” security model has left organizations with a myriad of security technologies, from network to application to client, each of which provides inherent value and holds critical information about attack patterns. Yet these technologies remain largely siloed and require increasingly skilled security staff to maximize the information these systems produce. As a friend of mine often reminds me, “there is no Moore’s law for the human brain.” While SIEM platforms attempt to aggregate the data, their boil-the-ocean approach, over-reliance on the forensic and compliance use cases, and often expensive and complex integration tasks mean the mass market is not able to leverage the full capability of these solutions. And while big data holds promise, most of the platforms have gone the way of general-purpose engines that can process any and all data, missing the opportunity to focus on solving this vexing problem in security.

The long lived idea of “layered security” needs to give rise to a better way to connect the layers, understand what the data means, why it matters, and most importantly make it actionable in a meaningful way to security operations teams. Of course low time to value is a key tenet if we expect broad adoption.

Laying the foundation for change is never easy. It requires insight, a leap of faith, and maniacal execution. I joined the Risk I/O team to help lead the charge in solving this substantive problem. One, that when solved, will have a lasting impact on the security industry and our customers.

Risk I/O Now Integrates With OpenVAS

Posted on October 6, 2014 by in Feature Release, Network Scanners, Open Source, Risk I/O, Vulnerability Assessment

Last week we quietly launched our 26th and latest connector. With our latest integration, our customers can load their OpenVAS results directly into Risk I/O for threat processing and prioritization.

To take advantage of the OpenVAS integration, navigate to the Connectors tab and click New Connector. From there select the OpenVAS connector, name it and save it. You can then click the Run button on your OpenVAS connector. This will prompt you to upload your XML output that you generated from your OpenVAS scanner. Select the location of the XML file or simply drag and drop it into your browser.
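The connector consumes the XML report you export from OpenVAS. As a rough sketch of what that consumption looks like, here is how one might pull the high-severity findings out of an OpenVAS-style report in Python. The element names and the sample report below are illustrative only; real report schemas vary by OpenVAS version.

```python
import xml.etree.ElementTree as ET

# A tiny, hypothetical OpenVAS-style report. Real reports are much
# larger and their exact structure varies, so treat this as a sketch.
REPORT = """
<report>
  <results>
    <result>
      <host>10.0.0.5</host>
      <threat>High</threat>
      <nvt><cve>CVE-2014-0160</cve></nvt>
    </result>
    <result>
      <host>10.0.0.9</host>
      <threat>Low</threat>
      <nvt><cve>NOCVE</cve></nvt>
    </result>
  </results>
</report>
"""

def high_findings(xml_text):
    """Return (host, cve) pairs for results rated High."""
    root = ET.fromstring(xml_text)
    out = []
    for result in root.iter("result"):
        if result.findtext("threat") == "High":
            out.append((result.findtext("host"),
                        result.findtext("nvt/cve")))
    return out

print(high_findings(REPORT))  # [('10.0.0.5', 'CVE-2014-0160')]
```

The same pattern extends to whichever fields your downstream prioritization needs (ports, NVT names, severity scores).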

Remember that this connector allows Risk I/O to consume scanner output directly from OpenVAS along with a number of tools that use OpenVAS under the hood.

If you’re not currently a Risk I/O customer you can sign up for a free 30-day trial and give our OpenVAS integration a whirl.

Mo’ Vulnerabilities, Mo’ Problems

Posted on September 19, 2014 by in Remediation, Risk I/O, Vulnerability Management

*This originally appeared as a guest post in the Tripwire – The State of Security blog as Mo’ Vulnerabilities, Mo’ Problems…One Solution.

Security practitioners juggle many tasks, with vulnerability management requiring the most time and effort to manage effectively. Prioritizing vulnerabilities, grouping those vulnerabilities and assets, and assigning them to the appropriate teams takes considerable time using current scanning technology.

The end goal of any successful vulnerability management program is to keep organizational data and assets safe from breaches. Security practitioners must ask themselves: Do I have visibility that my current plan is working? When I am given a small window of time to remediate vulnerabilities, am I targeting the right ones?

Risk I/O’s risk meters use vulnerability data from scanning technologies, such as Tripwire IP360, to monitor any group of assets and vulnerabilities. Instead of trying to fix everything, risk meters shift your strategy towards identifying and remediating the few vulnerabilities that are most likely to cause a breach. Risk I/O takes millions of daily breaches and exploits via threat feeds and makes a comparison to your vulnerability data every 30 minutes. Your monthly scans can be turned into dynamic risk meters to ensure that any vulnerability that has been breached in the wild does not find its way into your environment.
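Conceptually, that cross-referencing step boils down to intersecting your open vulnerabilities with the CVEs seen in recent attack and breach feeds, and surfacing those first. A minimal sketch with invented data shapes (Risk I/O's actual pipeline is of course far more involved):

```python
# Open vulnerabilities per asset, and a set of CVEs observed being
# actively breached in the wild. Both structures are invented here
# purely for illustration.
open_vulns = {
    "web-01": ["CVE-2014-0160", "CVE-2013-2423"],
    "db-02":  ["CVE-2012-0152"],
}
actively_breached = {"CVE-2014-0160", "CVE-2006-0003"}

def prioritize(vulns_by_asset, breached_cves):
    """Return (asset, cve) pairs seen in the wild -- the fix-first list."""
    return sorted(
        (asset, cve)
        for asset, cves in vulns_by_asset.items()
        for cve in cves
        if cve in breached_cves
    )

print(prioritize(open_vulns, actively_breached))
# [('web-01', 'CVE-2014-0160')]
```

Rerunning the intersection as the feeds update (every 30 minutes, in Risk I/O's case) is what turns a static scan into a dynamic risk meter.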

Let’s say that you are a security practitioner that needs to separate your assets and vulnerabilities by five office locations to ensure that the team in each location is keeping up with their required remediation windows. You could create risk meters for each of those locations and monitor the overall health of each environment as a whole.

Now let’s say that you upgraded a large section of your desktops and laptops to Windows 8, and each office location received a portion of these OS upgrades. You can monitor those specific devices separately with their own risk meter. Using the entire list of organizational assets, select just those Windows 8 machines and create a risk meter to ensure that the OS upgrade goes smoothly and to act on any potential threats that arise quickly. Take a look at the video below to learn how risk meters allow you to monitor your assets at a glance in any way you choose.

Companies large and small can use risk meters to validate their remediation efforts and focus on the assets and vulnerabilities that matter most. Attackers target not only the CVSS 9’s and 10’s of the world, but they also target the old and forgotten vulnerabilities that were never remediated. Adding risk meters to your vulnerability management program will provide you with visibility to ensure that you are protecting your organization from the risk of a breach.

11 Tips and Tricks for the RIO Power User

Posted on August 18, 2014 by in Risk I/O, Vulnerability Intelligence

1. Keyboard Shortcuts
Keyboard shortcuts are available from the home screen. Want to know what they are? Click the Keyboard Shortcuts link in the bottom right sidebar, or just press <shift>+?

2. Threat Trends Click-Through
Clicking on any of the attack or breach bubbles within the threat trends view will filter your assets by only displaying those that are vulnerable to that attack or exploit. Didn’t know threat trends existed? Go to the dashboard and open the threat trends “drawer” by clicking on it at the bottom of your screen.

3. Threat Trends History
Speaking of threat trends and keyboard shortcuts, there’s a hidden shortcut within threat trends. By clicking the left and right arrows, you can page back through threat trends history one week at a time.


4. Bulk Editing
You can edit multiple assets and vulnerabilities at a time using the bulk editing menu. To edit multiple assets or vulnerabilities at once, just select the ones you want to edit with the checkbox on the left side of the asset and vulnerability table. At the top right of the table you’ll see our bulk editor. For assets, you can set their priority score, add and remove tags, and mark them inactive or active. For vulnerabilities, you can create a Jira ticket (requires a Jira connector) or edit any custom fields. Didn’t know we had custom fields??

5. Custom Fields
In addition to tagging assets, you can create custom fields for vulnerabilities. To define a new custom field, click the gear icon in the upper right and choose Custom Fields. Click New Custom Field. Complete the form by naming the field, providing an optional description, and selecting the field data type (string, numeric, or date); if you’d like to filter your vulnerabilities on this field, check the faceted search box, then save it.

Once you have defined your custom fields you can add them to vulnerabilities either in bulk via the method above or on an individual vulnerability. To set them for an individual vulnerability, just click on the vulnerability details arrow from the home screen and then click edit on the right-hand side of your screen. Assign your own creative values to your heart’s content.

6. Heads Up Display (HUD)
Our Heads Up Display is accessible from the home screen by clicking on the bar chart in the upper right corner. Opening up the HUD displays a breakdown of the CVSS metrics and subscores of the vulnerabilities currently under review. You can click on any of the values within the charts to filter your vulnerabilities by those values.


7. Compare Teams/Applications/Networks/BUs via Tagview
You can compare any set of assets side-by-side with metadata using the tag view within the dashboard. Want to compare multiple teams, different applications or maybe even business units against each other? Within the dashboard, select the tag view tab and enter the tagged assets you’d like to compare in the tag filter box. For easier comparison, you can select either stacked or grouped charts.


8. RBAC
You can restrict access in RIO using Role Based Access Control (RBAC). First you’ll need to create a role by clicking the gear in the upper right of your screen and selecting User Roles. Select New Role and complete the form: name the role, select whether the role will have read-only or read+write access, and enter the tag(s) for the assets this role will have access to. Then save the role.

To assign a user to a role, click the gear in the upper right and select Users. You can edit an existing user or create a new user. In the user form, select the role from the role drop-down and save it. Done.

9. Search by IP Range
If you want to find assets by an IP range, you can use the search box in the home screen. An example of a search query by IP range would be:

ip_address_locator:[10.0.0.0 TO 10.255.255.255]

This will produce a list of assets in your 10-dot network.

If you want to find all of your internal RFC 1918 assets you could perform a search like:

ip_address_locator:[10.0.0.0 TO 10.255.255.255] ip_address_locator:[192.168.0.0 TO 192.168.255.255] ip_address_locator:[172.16.0.0 TO 172.31.255.255]

You can also perform a negative search. For example, you could take the same search above and find any asset that doesn’t have an RFC 1918 internal address by adding a ‘-‘ in front of the key to look like this:

-ip_address_locator:[10.0.0.0 TO 10.255.255.255] -ip_address_locator:[192.168.0.0 TO 192.168.255.255] -ip_address_locator:[172.16.0.0 TO 172.31.255.255]
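If you export asset lists and want to sanity-check the same RFC 1918 logic outside the search box, Python's ipaddress module expresses the three private ranges directly. The asset IPs below are invented for illustration:

```python
import ipaddress

# The three RFC 1918 private address blocks. Note that the 172 block
# is a /12, i.e. 172.16.0.0 through 172.31.255.255.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_internal(ip):
    """True if the address falls in any RFC 1918 block."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RFC1918)

# Split a (made-up) asset list into external-facing addresses only,
# mirroring the negative search above.
assets = ["10.1.2.3", "172.20.0.7", "203.0.113.9"]
print([ip for ip in assets if not is_internal(ip)])  # ['203.0.113.9']
```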

10. Jira Ticketing
If you use Jira for trouble ticketing or bug tracking, you can send vulnerabilities for remediation to Jira directly from Risk I/O. You can send multiple vulnerabilities to a single ticket in Jira using the bulk editor as described above. You can also send an individual vulnerability to Jira by opening its details page and clicking the Create Jira Issue button on the right side panel. After you submit the issue, we’ll persist the issue ID, assignee, due dates and its status within the vulnerability details in Risk I/O.

11. RESTful API
Did you know we have a robust RESTful API? You can find the full doc here: https://api.risk.io

Black Hat 2014 Recap: Actionable Takeaways from a Security Data Scientist

Posted on August 13, 2014 by in Data Science, Event, Industry, Vulnerability Intelligence

This is my second Black Hat conference, and the best one yet. Last year was full of gloom about all sorts of devices exploited, revelations about the NSA and uncertainty about what threat intelligence meant or how good it was. This year, from the keynote down to an obscure track at BSides which I participated in, the tone was much more optimistic.

Dan Geer’s keynote at Black Hat this year sounded more like a State of the Union address than a speech about information security, and this is largely because the quote/unquote cyber domain has now reached breadth and depth of such proportions that it might as well be its own political system.

His claim is that cybersecurity has reached critical mass—that our practice areas are being taken seriously outside of our domain—in Congress, across business units and governmental agencies. Sadly, though, he claims that the rate of technological change has made it impossible to keep up with every aspect of info sec; he says this time passed “about six years ago.”

I quote:

“When younger people ask my advice on what they should do or study to make a career in cyber security, I can only advise specialization. Those of us who were in the game early enough and who have managed to retain an over-arching generalist knowledge can’t be replaced very easily because while absorbing most new information most of the time may have been possible when we began practice, no person starting from scratch can do that now.”

I am one of those who has never had a grasp of the full field; I have known vulnerability management, and only vulnerability management, since I started applying techniques from operations research to the practice two years ago. And so, I want to sum up Black Hat in the only way I know how: from a math background, with takeaways about vulnerability management.

Why do I see a very bright future for vulnerability management from this year’s Black Hat? A few talks and trends:

1. The Keynote’s (Cyber)CDC suggestion (and the push to share data in general)

As a fundamental requirement for future information security best practices, Geer called for mandatory reporting of all types of vulnerabilities: not only those with Internet-wide implications (like Heartbleed), but for all organizations, both large and small. Geer wants mandatory reporting to follow the model of the US Centers for Disease Control, where details of outbreaks of diseases beyond a specific threshold must be released to the general public.

“When you really get down to it, three capabilities describe the CDC and why they are as effective as they are: (1) mandatory reporting of communicable diseases, (2) stored data and the data analytic skill to distinguish a statistical anomaly from an outbreak, and (3) away teams to take charge of, say, the appearance of Ebola in Miami. Everything else is details. The most fundamental of these is the mandatory reporting of communicable diseases.“

The CDC is effective at stopping pandemics because it mandates disease reporting, has expert away teams, and analyzes historical data. Infosec experts should do the same. In fact, much of Risk I/O’s approach to vulnerability management is already exactly this – we collaborate with industry partners to gather information about attacks, breaches and exploits to create a central repository of data that we can then use to guide vulnerability strategy. If there were mandatory disclosures, we’d have much richer data, and on a larger scale. The methods by which we prioritize vulnerabilities would become much more powerful.

2. Alex Stamos on Lessons from his first 6 months as CISO at Yahoo

Alex’s talk did a really good job of characterizing what a security practice at scale means, which has been hard to pin down before. He suggests that scale for security really means a large amount of data, systems, and users, as well as a diversity of users and threat models. There is wisdom in this taxonomy, because of that very last part. A diversity of threat models, to me, means two things: a diversity of threat intelligence, coming from many different sources in order to capture as much of the reality of what’s happening out there as we can, and a diversity of ways to segment that data in order to defend against script kiddies or more advanced attackers.

Alex’s talk was about overcoming “security nihilism,” which is exactly what I referred to in my Black Hat preview when I suggested we should ignore the new “sexy” vulnerabilities coming out. Just because we see hundreds of new devices exploited at Black Hat every year doesn’t mean there isn’t hope! Attackers change their tactics daily, but for the most part, they rely on exploits that have been around for years and are easily weaponized. If we can focus on stopping this massive share of attacks, we’ll achieve much better security.

3. The Ground Truth Track at BsidesLV (and the attendance numbers!)

The Ground Truth track was all about math and machine learning in info sec, and I invite you to check out the videos on YouTube. The material is technical and applied to various segments of the practice. This might sound like a bit of shameless self-promotion since I spoke in this track myself, but even more than the content, I was impressed with the attendance numbers. The room was packed the entire day, which means folks are paying attention to mathematical models, machine learning, and data-driven approaches to security. The golden age is upon us! Of course, we have a lot more work coming up – with more and better data comes the task of incorporating it into our models, and with more models comes the even more difficult task of determining which are the correct ways to use them.

An important moment for me in the keynote was when Dan Geer said, “For every complex problem there is a solution that is clear, simple, and wrong.” Let’s make sure our solutions stay away from there. Stay tuned for new models and data analysis in the coming weeks!

There’s No Such Thing As a Cool Vulnerability

Posted on July 31, 2014 by in Data Science, Event, Industry, Security Management, Threats and Attacks

If you work in vulnerability management, all the vulnerabilities you’ll hear about at Black Hat are irrelevant. Every year at Black Hat and DEF CON, new vulnerabilities get released, explained and demoed. This year, you’ll see everything from remote car hacks, to hotel room takeovers, to virtual desktop attacks, to Google Glass hacks. But once you get back home, don’t let the hype get you. It might be months before the code is weaponized, attacks will still go after the old, reliable vulnerabilities, and chances are, you will have enough security debt to keep your head down anyhow. This is not to say that you shouldn’t go see a talk about hardware-level vulnerabilities in the NEST thermostat. It’s interesting. I own a NEST. Go see it.

But when you get back, get back to what matters. In reality, attackers seem to care about efficiency just as much as you ought to. The data shows that attackers shift tactics over time…a lot. Below is a GIF of a small sample (the past 3 months, in week-by-week snapshots) of attacks and breaches we’ve recorded at Risk I/O, grouped by CVE type (attacks are WASC). The x-axis is the number of breaches during the week, the y-axis is the week-over-week change.


You can take a closer look at the technical details by signing up for a trial of Risk I/O (this feature is currently in beta, but will be released shortly). More important than these details is the fact that breaches shift wildly week over week, both in variety and in volume. In fact, the vast majority of breaches occur on CVEs published 10 years ago. What this means for us is that the newness of a vulnerability—or the hype assigned to it—is irrelevant. Getting a handle on attackers’ behavior is the only way to know which vulnerabilities matter.
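For the curious, the chart's two axes are easy to reproduce over your own data: weekly breach counts per CVE, and their week-over-week change. A small sketch, with invented weekly counts:

```python
# Weekly breach counts per CVE over three consecutive weeks.
# These numbers are made up purely to illustrate the computation.
weekly_counts = {
    "CVE-2014-0160": [120, 80, 210],
    "CVE-2006-0003": [40, 45, 30],
}

def week_over_week(counts):
    """Map each CVE to (latest week's volume, change vs. prior week)."""
    return {
        cve: (series[-1], series[-1] - series[-2])
        for cve, series in counts.items()
    }

print(week_over_week(weekly_counts))
# {'CVE-2014-0160': (210, 130), 'CVE-2006-0003': (30, -15)}
```

Plot volume against change each week and you get the kind of wild swings the GIF shows.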

So, given this mindset, which talks am I excited for?

1. Building Safe Systems at Scale: Lessons from Six Months at Yahoo! by Alex Stamos

Alex will detail his first six months as the CISO of Yahoo. He’ll review the impact of the government surveillance revelations on how Yahoo designs and builds hundreds of products across dozens of markets. The talk includes discussion of the challenges Yahoo faced in deploying several major security initiatives and useful lessons for both Internet companies and the security industry from his experience.

2. Epidemiology of Software Vulnerabilities by Kymberlee Price and Jake Kouns

This talk will discuss the proliferation of vulnerabilities through third-party libraries. It’ll use vulnerability data to explore the source and spread of these vulnerabilities through products, as well as actions the security research community and enterprise customers can take to address this problem.

3. Secure Because Math: A Deep-Dive on Machine Learning-Based Monitoring by Alex Pinto

The presentation will describe the techniques and feature sets that Alex developed in the past year as part of his ongoing research project on the subject. In particular, he’ll present some interesting results obtained since his last presentation at Black Hat USA 2013, and some ideas that could improve the application of machine learning in information security, especially in its use as a helper for security analysts in incident detection and response. The techniques should be applicable to many types of infosec analytics.

Stay tuned for my recap in case you can’t attend or were busy doing other things in Vegas!

Risk I/O Needs YOU

Posted on July 30, 2014 by in Risk I/O

At Risk I/O, our number one goal is making the web and our customers safer by using real-world data to drive security decisions. We work hard to collect information across the Internet that can act as a “neighborhood watch” for our customers. Because we believe our work is critically important, we look for people who are equally passionate about what they do and how they do it.

Everyone these days talks about how they’re going to “change the world,” but truthfully this just seems like something companies say. We, like many, offer a number of benefits while working at Risk I/O. Make no mistake, these are important and ultimately help define the culture… but you have to believe what you’re building makes a difference, as everything else can easily be replaced by the next recruiter to hit you up on LinkedIn or Twitter.

We are a data-driven company in everything that we do. Whether it’s solving complex security issues for our customers, prioritizing a product roadmap or just figuring out when’s the best time to hold a stand-up, we use data to help make informed decisions. We won’t hesitate to kill a feature that isn’t proven to be valuable by our customers.

Because we believe that what everyone is working on is valuable, you won’t find yourself toiling away on an internal project that never sees the light of day. If you’re an engineer, we guarantee your code will see its way into production, often in your very first week. We value feedback and openness. Contributing back via charity and open source is important to us and our team, as is evident with projects like slackr, bouncer and others.

As a culture, we love working with smart people but humility is equally important. The “no rockstars” rule is an important one. We work very closely together but can also be geographically distributed. We have offices in Chicago and San Francisco but we also have a guy who lives in an Airstream. As important as our work is to us, we try to make it fun. Whether it’s hardware hacking on our newest kegbot, bringing your dog along to work or just going out with co-workers after work, we believe these are all important parts of “the job”.

Other perks that help include unlimited paid time off, medical, dental and vision coverage, 401K (coming soon), free unlimited bike sharing in Chicago and SF, oh, and did I mention that kegbot is making its way to the office?

I wrote this post with an interest in finding like-minded people to help our cause. If this is you… we’re hiring. Our current needs include engineers, designers and sales, but we’re always on the lookout for great talent. Thanks for taking the time to read this and I hope you join us.

QualysGuard Connector: Now With WAS Inside

Posted on July 28, 2014 by in DAST, Feature Release, Network Scanners, Risk I/O, Vulnerability Management

At Risk I/O, we’re always striving to ensure our integrations are seamless and complete. Risk I/O is happy to announce that as of today, our QualysGuard connector has expanded to pull in results from your Qualys VM and Qualys WAS scans.

What does this mean for you? If you are a Risk I/O user with a Qualys connector, you’ll see both VM and WAS scanner results in your QualysGuard connector results if the user you configured your connector with has access in Qualys to those results. If you’d like to begin pulling in your Qualys Web Application Scan results, ensure the user in your connector configuration has access to those results within your QualysGuard portal.

If you have any questions or need assistance, you can reach out to the Risk I/O team at support@risk.io.

And if you use the Qualys scanner but haven’t tried out Risk I/O, you can sign up for a 30-day free trial. All trials include the ability to sync data with our vulnerability threat management platform from a list of over 20 security tools, including Qualys.

Announcing Our Latest Integration: Beyond Security

Posted on June 5, 2014 by in Network Scanners, Risk I/O, Static Analysis, Vulnerability Assessment


At Risk I/O, we’ve always made it our mission to integrate with the scanner tools used most. That’s why we’ve added integration with the BeyondSecurity AVDS web scanner to our vulnerability threat management platform.

With the new BeyondSecurity AVDS connector, you can discover and eliminate your network’s most serious security weaknesses. Simply sync your scan data via our new connector and Risk I/O will continuously process it against active threats from our threat processing engine. Risk meters can be used to pinpoint your exposure to active Internet attacks and breaches and to prioritize the vulnerabilities putting you at greatest risk.

Setting up your BeyondSecurity AVDS connector in Risk I/O, like our other connectors, is easy: simply add it to your instance through the Connectors tab. Not a Risk I/O customer but would like to try out the integration? Sign up for a free account and sync your scan data now.

Heartbleed Is Not A Big Deal?

Posted on April 17, 2014 by in Cyber Attacks, Data Analysis, Threats and Attacks, Vulnerability Management

As of this morning we have observed 224 breaches related to CVE-2014-0160, the Heartbleed vulnerability. More than enough has been said about the technical details of the vulnerability, and our own Ryan Huber covered the details a few days ago. I want to talk about the vulnerability management implications of Heartbleed, because they are both terrifying and telling.

The Common Vulnerability Scoring System ranks CVE-2014-0160 as a 5.0/10.0. A good observer will note that the National Vulnerability Database is not all that comfortable ranking the vulnerability that broke the Internet a 5/10. In fact, unlike any other vulnerability we’ve seen in the system, there is an “addendum” in red text:

 “CVSS V2 scoring evaluates the impact of the vulnerability on the host where the vulnerability is located. When evaluating the impact of this vulnerability to your organization, take into account the nature of the data that is being protected and act according to your organization’s risk acceptance. While CVE-2014-0160 does not allow unrestricted access to memory on the targeted host, a successful exploit does leak information from memory locations which have the potential to contain particularly sensitive information, e.g., cryptographic keys and passwords. Theft of this information could enable other attacks on the information system, the impact of which would depend on the sensitivity of the data and functions of that system.”

So what does this mean for your organization? How should you prioritize the remediation of Heartbleed vs other vulnerabilities? NVD’s answer is “think about what can be stolen.” The problem here is that the CVSS environmental metric, which is used to account for an organization’s particular environment, can only reduce the score. So we’re still stuck at a 5. Why?

CVSS is failing to take into account quite a few factors:

1. It’s a target of opportunity for attackers:

The number of sites affected by the vulnerability is unfathomable – with broad estimates between 30% and 70% of the Internet.

2. It’s being actively and successfully exploited on the Internet:

We are logging about 20 breaches every few hours. The rate of incoming breaches is also increasing; on April 10th, we were seeing 1-2 breaches an hour. Keep in mind this is just from the 30,000 businesses that we monitor – not 70% of the Internet.

3. It’s easy to exploit:

There is a Metasploit module and exploit code on ExploitDB.

We already knew Heartbleed was a big deal – this data isn’t changing anyone’s mind. The interesting bit is that Heartbleed is not the only vulnerability to follow such a pattern. Of all the breached vulnerabilities in our database, Heartbleed is the fifth most breached (that is, with the most instances recorded) among those with a CVSS score of 5 or less.

The others that CVSS is missing the boat on, in order of descending breach volume, are:

1. CVE-2001-0540 – Score: 5.0

2. CVE-2012-0152 – Score: 4.3

3. CVE-2006-0003 – Score: 5.1

4. CVE-2013-2423 – Score: 4.3

Two of these are terminal denial-of-service vulnerabilities, and two are remote code executions. The common thread is that all of them have a network access vector and require no authentication; all have exploits available, affect a large number of systems, and are currently being breached.
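That common thread suggests a simple re-ranking rule: sort by observed breach activity and weaponizability instead of raw CVSS. A sketch with invented records (CVE-2999-0001 is a made-up placeholder, and the attribute names are illustrative):

```python
# Each record carries the attributes from the "common thread" above:
# network access vector, no authentication required, public exploit
# available, and observed breach volume.
vulns = [
    {"cve": "CVE-2014-0160", "cvss": 5.0, "network": True,
     "no_auth": True, "exploit_public": True, "breaches": 224},
    {"cve": "CVE-2999-0001", "cvss": 9.3, "network": True,
     "no_auth": False, "exploit_public": False, "breaches": 0},
]

def fix_first(records):
    """Easily weaponized, actively breached vulns, most-breached first."""
    hot = [v for v in records
           if v["network"] and v["no_auth"] and v["exploit_public"]
           and v["breaches"] > 0]
    return sorted(hot, key=lambda v: v["breaches"], reverse=True)

print([v["cve"] for v in fix_first(vulns)])  # ['CVE-2014-0160']
```

Note how the CVSS 9.3 drops out of the fix-first list entirely: with no public exploit, no observed breaches, and authentication required, it is the less urgent of the two, whatever its score says.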

Heartbleed IS a big deal. But it’s not the only one – there are plenty of vulnerabilities which have received less press and are buried deep within the PCI requirements or CVSS-based prioritization strategies which are causing breaches, today. It’s important to check threat intelligence feeds for what’s being actively exploited, to think like an attacker and to have the same information an attacker has.

It’s also important to learn a lesson from this past week: while the press took care of this one, it won’t take care of a remote code execution on a specific version of Windows that your organization happens to be running. Just don’t say it’s not a big deal when a breach occurs on a CVSS 4.3. You’ve been warned.