Whiteboard Workflow Series: Infrastructure Vulnerability Management

Key Takeaways

  • Monitoring the external libraries in your production codebase for vulnerabilities is a daunting task, and at enterprise scale it becomes close to unmanageable without the right tools.
  • Here we introduce an open source prototype system that gathers information about running code, uses Recorded Future to determine which components are risky, and alerts on which vulnerabilities potentially affect which machines and components.

Tracking vulnerabilities across a large codebase is hard. It involves monitoring external code imports as well as ensuring that the files your servers download from external sources are not compromised. In an enterprise environment, external dependencies can change daily, and vulnerability management teams are often left in the dark.

Beyond the difficulty of staying on top of which formal vulnerabilities affect your software stack, an over-reliance on formal vulnerability advisories — on CVEs being properly published, for example — can be dangerous due to missing context in the advisory.

Below we demonstrate a prototype vulnerability alerting framework, code-named Heart API, that collects the external dependencies in use across your infrastructure and maps those modules to vulnerabilities (CVEs, etc.) via Recorded Future queries for risk scores, generating alerts that enable patch prioritization.

Heart API

Here’s our video explaining how Recorded Future can help with infrastructure vulnerability management:

https://www.youtube.com/watch?v=fMGl-bM9ISw

For those of you who’d prefer to read, here’s the transcript:

Staying on top of which code is being run at any given time on servers across your network is a challenging issue to address, and keeping track of which vulnerabilities affect the tools you’re using is even harder.

In this Whiteboard session we’re introducing our prototype API, Heart, which uses Recorded Future real-time threat intelligence to identify vulnerabilities in assets deployed to your production environment.

Beyond formal vulnerabilities, we also check the risk score of any file hashes generated or downloaded during the deploy, as well as the risk score of any IP address contacted during the build of the server.

Our example here uses the open source configuration management system Chef, but the core component of this is agnostic — you can choose how to enumerate your running components.

So how does it work?

Firstly the Recorded Future Heart API gets deployed to a machine and starts running.

Then a Chef cookbook (with some configuration) is included on each server running Chef.

Whenever Chef runs on one of those machines, all the dependencies and contacted hosts are enumerated and sent to the Heart API via the included cookbook; the Heart API then adds the dependencies to its database.
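To make that data flow concrete, here is a minimal sketch of the kind of report a Chef run might send. The field names and structure are illustrative assumptions, not the actual Heart API schema:

```python
import json

# Hypothetical payload builder -- the field names here are assumptions
# for illustration, not the real Heart API schema.
def build_heart_payload(node_name, dependencies, contacted_hosts):
    """Assemble the report a Chef run could POST to the Heart API."""
    return {
        "node": node_name,
        "dependencies": [
            {"name": name, "version": version}
            for name, version in dependencies
        ],
        "contacted_hosts": contacted_hosts,
    }

payload = build_heart_payload(
    "web-01",
    [("openssl", "1.0.2h"), ("nokogiri", "1.6.8")],
    ["203.0.113.10"],
)
print(json.dumps(payload, indent=2))
```

A cookbook would serialize a structure like this and send it to the Heart API at the end of each Chef run.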

At the end of this process you can ask the Heart API to summarize the current vulnerabilities affecting all the dependencies in the system; it will try to map the names and versions to known vulnerabilities and extract risk scores for them.

It’s then just a question of alerting the right person and filtering out vulnerabilities that you know have already been handled.

Example

Before we dive into the technical details of how it actually works, consider the example alert below and subsequent workflow that we observed while using Heart API internally.

When a Heart API check is performed, a summary of each possible vulnerability affecting the monitored infrastructure is generated and an alert is sent to the appropriate teams. In this case, we saw an alert based on OpenSSL and CVE-2016-2178.

Heart API Alert

Heart API alert for CVE-2016-2178 displayed in Uchiwa.

Examining the MITRE page gives no further information: it simply states that the CVE was reserved on January 29.

CVE-2016-2178

MITRE’s page for CVE-2016-2178 at the time of alert.

Looking at the overall mentions of CVE-2016-2178 captured in Recorded Future we can see that the reporting around it started on June 6, the same day we got our alert!

CVE-2016-2178 Timeline

Recorded Future timeline view of CVE-2016-2178.

Investigating the earliest mentions of the CVE we find a link to the OpenSSL commit logs, which links CVE-2016-2178 to an issue in OpenSSL’s DSA signing algorithm allowing a cache-timing attack.

CVE-2016-2178 Reference

By going to the commit logs we can see that a patch has been committed, but as far as we can tell it hasn’t made it out to any pre-compiled packages yet.

CVE-2016-2178 OpenSSL Commit

OpenSSL commit providing context for CVE-2016-2178.

After becoming aware of the OpenSSL issue we set up a Recorded Future alert to notify us on any new information regarding CVE-2016-2178, including mention of any patches being made available for our systems.

Further reading on the vulnerability is available in the academic paper highlighting the issue.

After running Heart API on our own servers, we managed to find a few open vulnerabilities on internal servers that we have now mitigated!

Technical Details

To enumerate which external code is being used, as well as which external hosts are being contacted during a deploy, we’ve created a special cookbook for Chef which gathers up the information on whatever machine it’s currently being run on before sending it over to the main component of the prototype — the Heart API.

We’ve used Chef here as an example, but the Heart API is agnostic of how you choose to collect the dependency information.

Heart API is a REST server that continuously collects the information sent to it from the deployment servers and provides an endpoint for requesting a summary of the risks affecting the external dependencies it currently has stored.
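The server's two roles, recording reports and summarizing risk, can be sketched with an in-memory store. This is a simplified illustration: the dictionary stands in for the real database, and the `risk_lookup` callable stands in for the Recorded Future queries:

```python
class HeartStore:
    """In-memory sketch of the Heart API's dependency store (illustrative)."""

    def __init__(self):
        # (name, version) -> set of nodes that reported it
        self.deps = {}

    def record(self, node, dependencies):
        """Store the dependencies reported by one deployment server."""
        for dep in dependencies:
            key = (dep["name"], dep["version"])
            self.deps.setdefault(key, set()).add(node)

    def summarize(self, risk_lookup):
        """Return risky dependencies; risk_lookup maps (name, version) -> CVE ids."""
        summary = []
        for (name, version), nodes in sorted(self.deps.items()):
            cves = risk_lookup(name, version)
            if cves:
                summary.append({
                    "name": name,
                    "version": version,
                    "nodes": sorted(nodes),
                    "cves": cves,
                })
        return summary

store = HeartStore()
store.record("web-01", [{"name": "openssl", "version": "1.0.2h"}])
store.record("web-02", [{"name": "openssl", "version": "1.0.2h"}])
risky = store.summarize(
    lambda name, version: ["CVE-2016-2178"] if name == "openssl" else []
)
print(risky)
```

The real summary endpoint would perform the lookup against Recorded Future rather than a hard-coded callable, but the aggregation shape is the same: dependency, affected machines, linked CVEs.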

For hashes and IP addresses this is straightforward, but for code dependencies (JAR files, Ruby gems, etc.) it becomes a bit trickier, since we have to try to map file metadata to formal vulnerabilities. In the example code we use two different routes to discover active vulnerabilities: first, we try to map the library name and version against Common Platform Enumerations (CPEs), and then we use Recorded Future to discover CVEs linked to those CPEs.
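Building the CPE lookup key can be as simple as formatting the name and version into a CPE 2.3 string. The vendor/product naming below is an assumption; real CPE dictionaries often disagree with package names, which is part of what makes this mapping hard:

```python
def to_cpe(vendor, product, version):
    """Build a CPE 2.3 formatted string for an application component."""
    # part "a" = application; the remaining fields are wildcarded
    return f"cpe:2.3:a:{vendor}:{product}:{version}:*:*:*:*:*:*:*"

print(to_cpe("openssl", "openssl", "1.0.2h"))
```

A CPE built this way can then be used as the query key when asking Recorded Future for linked CVEs.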

The second route is to find vulnerabilities mentioned together with the library name over the past year on the web (including social media and .onion sites). Since this method does not take versions into account, it may produce some false positives that require filtering, but it also provides unique connections between dependencies and poorly defined formal vulnerabilities. Such a connection can predate, for example, a formal CVE being updated to indicate which products are affected.
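The core of the mention-based route can be illustrated by scanning documents for CVE identifiers that co-occur with the library name. This is a rough, version-agnostic stand-in for the actual Recorded Future co-mention query, which is why its results need manual filtering:

```python
import re

# Standard CVE identifier pattern: CVE-YYYY-NNNN (four or more digits)
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")

def cves_mentioned_with(library, documents):
    """Collect CVE ids from documents that also mention the library name."""
    hits = set()
    for doc in documents:
        if library.lower() in doc.lower():
            hits.update(CVE_RE.findall(doc))
    return sorted(hits)

docs = [
    "OpenSSL's DSA implementation leaks timing information (CVE-2016-2178).",
    "Heartbleed (CVE-2014-0160) retrospective.",
]
print(cves_mentioned_with("openssl", docs))
```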

To act on the vulnerability overview that Heart API generates, we've included a sample Heart API client that requests the results and formats them for consumption by Sensu, which sends out alerts to the appropriate responders. These alerts include information on which machines are running the vulnerable code, as well as a link to the Recorded Future summary of the vulnerability, suspicious IP address, or hash the alert concerns.
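A finding can be packaged for Sensu as a small JSON check result. The name/output/status fields follow Sensu's client-socket result format, but the check-naming scheme, the fixed warning status, and the placeholder link are assumptions of this sketch:

```python
def build_sensu_event(cve_id, affected_hosts, rf_link):
    """Format a Heart API finding as a Sensu check result (sketch)."""
    return {
        "name": f"heart-api-{cve_id.lower()}",  # assumed naming scheme
        "output": f"{cve_id} affects {', '.join(affected_hosts)} | {rf_link}",
        "status": 1,  # 1 = warning in Sensu's Nagios-style status codes
    }

event = build_sensu_event(
    "CVE-2016-2178",
    ["web-01", "web-02"],
    "https://example.com/rf-summary",  # placeholder for the Recorded Future link
)
print(event)
```

In a real deployment, an event like this would be written as JSON to the local Sensu client socket, and Sensu's handlers would take care of routing the alert to the right responders.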

All of the code is available on GitHub and it requires access to the Recorded Future API.
