Are SSL Implementation Problems Policy Problems?

Another interesting article from ThreatPost, which highlights a problem faced by many of us. TLS/SSL implementation is something that is often taken for granted, and it is one of the most misunderstood aspects of security. Those who don’t fully understand security perceive SSL, like firewalls, as a magic pill that will solve all their security problems. Of course, there is no magic pill.

Looking at this article, it is clear to see that developers and implementors seem to think that simply turning on SSL equals security. However, a weak SSL implementation can mean that attackers are able to manipulate the SSL handshake and force the use of weak cipher suites.
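
As a rough illustration (a Python sketch only, not something from the article; the certificate and key file names are placeholders), a server-side TLS context can be pinned to modern protocol versions and strong cipher suites so that a downgrade to a weak suite simply isn’t on offer:

```python
import ssl

# A minimal sketch of a hardened server-side TLS context.
# File names and the exact cipher string are illustrative, not prescriptive.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")

# Refuse anything older than TLS 1.2, so an old protocol can't be negotiated.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Only offer forward-secret, authenticated ciphers; explicitly drop the weak ones.
context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!eNULL:!MD5:!RC4:!3DES")
```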

There are many mechanisms to combat this:

1) Education -> Project implementors, network professionals and developers need to be shown how to use SSL appropriately.

2) Governance -> Updating project implementation policies to include governance that assures SSL implementations comply with predefined standards and guidelines.

3) Testing -> Include compliance testing as part of QA processes to verify that SSL implementations are strong and comply with the prescribed governance (a rough sketch of such a check follows below).
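
As a sketch of what such a QA check might look like (Python, with the host name and the thresholds invented for illustration), a test can simply connect to the service and assert on the protocol and cipher suite that were actually negotiated:

```python
import socket
import ssl

def check_tls(host: str, port: int = 443) -> None:
    """Connect to host:port and check the negotiated protocol and cipher suite."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            protocol = tls.version()        # e.g. "TLSv1.3"
            cipher, _, bits = tls.cipher()  # e.g. ("TLS_AES_256_GCM_SHA384", "TLSv1.3", 256)
            assert protocol in ("TLSv1.2", "TLSv1.3"), f"weak protocol negotiated: {protocol}"
            assert bits >= 128, f"weak cipher negotiated: {cipher} ({bits} bits)"
            print(f"{host}: {protocol} with {cipher}")

# check_tls("www.example.com")  # hypothetical target
```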

Check out the SSL Labs at Qualys.


Mapping Cyber Attacks

This stood out on Slashdot:

HoneyMap displays information in real time about ongoing bot attacks against honeypot machines running on secure virtual machines.

Looks amazing and shows the extent of the problem.

Don’t call me Shirley!

The other day someone came into my office and asked: “Why should I have to validate the web form input at the web server? Surely it’s better to do it in the browser!”

*Humph!*

“I can think of many reasons why you are wrong”, I said, “and don’t call me Shirley!”

Firstly, and I can hear the clockwork ticking over in many web developers’ heads, you may get a slight performance improvement if you don’t validate the input at the server. However, any gains need to be offset against the losses. Are you willing to risk the chance [did I say chance? I mean damage!] that an attack will do, for a mere millisecond or two of performance gain? Let’s think about it logically: are users really going to gain anything by you doing this? Simple answer: NO!!! The user of your web application is never going to notice, and doesn’t even care, whether you do the validation of the input in the browser, the server or Timbuktu! Save your users and save your web application by always validating at the web server!

Secondly, there are many intercepting web proxies that can be used to extract and manipulate the parameters of a web form after validation. Even if the validation is done in the browser, this does not protect your web server. Also, there is nothing to stop me, as a developer, cloning your web form, removing the validation and then submitting it to your application. Again, this renders the client-side input validation useless.
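
To make the point concrete, here is a deliberately framework-agnostic sketch in Python (the field names and rules are invented for illustration): the server validates every submitted parameter against a whitelist, regardless of what any browser-side script claims to have checked.

```python
import re

# Whitelist rules for each expected form field (illustrative only).
FIELD_RULES = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,32}$"),
    "email":    re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "age":      re.compile(r"^\d{1,3}$"),
}

def validate_form(form: dict) -> dict:
    """Validate submitted parameters server-side; report anything unexpected."""
    errors = {}
    for field, rule in FIELD_RULES.items():
        value = form.get(field, "")
        if not rule.fullmatch(value):
            errors[field] = "invalid or missing value"
    # Unexpected extra parameters are suspicious too.
    for field in form:
        if field not in FIELD_RULES:
            errors[field] = "unexpected parameter"
    return errors

# errors = validate_form(request_params)  # reject the request if errors is non-empty
```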

Finally, many of the main web server or web application server vendor solutions (think IIS / ASP.NET, Oracle Application Server or JBoss) include in the framework the ability to validate input data against a specified subset of characters or data types. In ASP.NET you have to explicitly change the configuration in order to turn it off. If it’s already there, use it! Don’t go re-inventing the wheel just because it sounds like fun.

With these words of advice ringing in his ears, this individual went off with his tail between his legs – and he’s never called me Shirley since!

The Language of Risk

I recently had my eyes opened. It happens every now and then. During a conversation with my friend Seth Bromberger, he introduced me to his Quantitative Threat Methodology. I’m hooked.

Why?

Well:

  1. It’s simple and it works.
  2. It’s easy to customise to an organisation’s requirements.
  3. It’s quick to implement without requiring complex and expensive tools.
  4. It gives comparable and repeatable results.

But this got me thinking about how we as professionals go about the business of risk assessment. Not the process, but the number crunching. I just carried out a small risk assessment, and the actual work of collecting and crunching the data was painful to say the least. Wouldn’t it be nice if we could just create a set of rules, point them at the data, and out of the other end comes a crunched risk assessment? A programming language for risk assessment.

I tried not to lose sleep over this, and thankfully I didn’t.

My first thought on this subject was: how do these methodologies work?

Fundamentally, all risk methodologies break down the risk into core component elements. In information risk theory, for example, the three core elements are:

  • Threats;
  • Vulnerabilities;
  • Impacts;

The risk is calculated from a combination of the three. However, some of these core elements are themselves composed of other things. If we think about an impact, we need to know what the impact affects, so an Impact may be composed of an asset and the impact upon that asset. In fact, one impact may affect multiple assets; indeed, one RISK may affect multiple assets. In addition to this we have to consider the concept of asset types. Assets come in differing types, e.g.:

  • Information Assets (an HR Record);
  • System Assets (SAP HR, comprised of many different servers);
  • Software Assets (a single SAP Netweaver instance);
  • Hardware Assets (Blade Server ID:XXYYZZ);
  • Physical Assets (The Datacenter); and
  • People Assets (SAP Engineer).

These differing asset types can be nested within one another (or dependent upon one another, depending upon your perspective): so the HR Record is stored on the SAP HR system, which uses the SAP NetWeaver instance installed on Server XXYYZZ, which is housed in the Datacenter and supported by the SAP Engineer.

A risk which affects one asset (e.g. the Datacenter) affects all assets which are nested within it (or are dependent upon it). This introduces the concept of cascading risk.

The identification of cascading risks (in fact, the identification of dependencies between assets) is complex. Any language which defines how we calculate risk must include the ability to identify and map interdependencies between core risk elements.
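
As a sketch of what that mapping could look like (Python, with the asset names taken from the example above but otherwise invented), representing the nesting as a dependency graph makes the cascade a simple traversal: a risk raised against one asset is automatically inherited by everything that depends on it.

```python
from collections import deque

# Each asset maps to the assets that are nested within it / depend on it.
# The chain mirrors the HR Record example above (illustrative data only).
DEPENDENTS = {
    "Datacenter":    ["Server XXYYZZ"],
    "Server XXYYZZ": ["SAP NetWeaver"],
    "SAP NetWeaver": ["SAP HR"],
    "SAP HR":        ["HR Record"],
    "SAP Engineer":  ["SAP HR"],
}

def affected_assets(asset: str) -> set:
    """Return the asset plus everything the risk cascades to."""
    seen = {asset}
    queue = deque([asset])
    while queue:
        for dependent in DEPENDENTS.get(queue.popleft(), []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(affected_assets("Datacenter"))
# {'Datacenter', 'Server XXYYZZ', 'SAP NetWeaver', 'SAP HR', 'HR Record'}
```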

When we think about these core elements we also have to consider that, for most of them, we will already have a collection of data. For assets we will have an asset list; for vulnerabilities, a list of identified vulnerabilities which affect us. We don’t want to re-invent the wheel, so it would be nice to connect to feeds of data rather than sucking the entire data source into the risk system and effectively duplicating the data. Actually, we may have more information in our asset management database than we need for our risk assessment, so it seems daft to include all the useless stuff.

However, we may need to augment this data feed with new information which is specific to the risk assessment. If we think about an asset, we may want to augment it with the value of the asset or the criticality assessment of that asset. So the system should be able to store the data specific to the risk assessment and link it back to the original data record in the feed, which means that if we ever want to show information about our asset we can go and grab the data feed and add the risk-assessment-specific stuff.
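
One way to picture that (a Python sketch, with the feed name, record ID and field names all invented): the risk system stores only a reference back to the record in the source feed, plus the handful of attributes that exist purely for the assessment.

```python
from dataclasses import dataclass

@dataclass
class AssetRiskRecord:
    """Risk-assessment-specific data, linked back to the source feed by ID."""
    feed_name: str    # which data feed owns the master record
    source_id: str    # primary key of the record in that feed
    asset_value: int  # valuation assigned for this assessment
    criticality: str  # e.g. "high", "medium", "low"

def enrich(asset_risk: AssetRiskRecord, fetch_from_feed) -> dict:
    """Pull the master record from its feed and merge in the risk-specific fields."""
    master = fetch_from_feed(asset_risk.feed_name, asset_risk.source_id)
    return {**master,
            "asset_value": asset_risk.asset_value,
            "criticality": asset_risk.criticality}

# record = AssetRiskRecord("CMDB", "XXYYZZ", asset_value=3, criticality="high")
# full_view = enrich(record, fetch_from_feed=my_cmdb_lookup)  # hypothetical lookup function
```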

Another aspect is that all these methodologies include specific characteristics which apply to the core elements. These can be broadly categorized by a set of ranges or by a specific value. These sets are groups of values which equate to another value: lookups, essentially. Every risk methodology has this type of setup. A collection, which we could call a “range”, and within that a variable number of elements, which we could call a “range element”. So if we think about a common aspect of risk assessment, like asset valuation, we could create a range called “asset value range” and this could contain a number of range elements, e.g.:

  • Range Element 1: $0-100 = 1;
  • Range Element 2: $100-1000 = 2;
  • Range Element 3: $1000-5000 = 3;
  • etc….

This in turn will fulfil a characteristic of one of the core elements, which we could call “asset value”.
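
The same idea expressed as a sketch in Python (the boundaries are the ones above; the class and variable names are just what I would call them): a range is an ordered collection of range elements, and resolving a characteristic such as “asset value” is a lookup against it.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RangeElement:
    lower: float  # inclusive lower bound
    upper: float  # exclusive upper bound
    score: int    # the value this band equates to

@dataclass
class Range:
    name: str
    elements: List[RangeElement]

    def lookup(self, value: float) -> int:
        """Return the score of the first band containing the value."""
        for element in self.elements:
            if element.lower <= value < element.upper:
                return element.score
        raise ValueError(f"{value} falls outside the '{self.name}' range")

asset_value_range = Range("asset value range", [
    RangeElement(0, 100, 1),
    RangeElement(100, 1000, 2),
    RangeElement(1000, 5000, 3),
])

print(asset_value_range.lookup(250))  # -> 2
```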

Characteristics are important, but they need to be collected into an organized structure. We could group the characteristics into reusable chunks, which we could call “characteristic policies”. In fact, we could define multiple policy types for a core element: the data feeds can be defined in a “data definition” policy, and the calculation of the risk can be defined in a “risk calculation” policy.
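
To round the sketch off (again in Python, with every name invented for illustration): a characteristic policy is just a named collection of characteristics and the ranges that score them, and a risk calculation policy is a rule that combines the resulting scores into a single figure.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class CharacteristicPolicy:
    """Names each characteristic of a core element and the range that scores it."""
    name: str
    characteristics: Dict[str, object] = field(default_factory=dict)  # values would be Range objects from the previous sketch

@dataclass
class RiskCalculationPolicy:
    """Defines how the scored characteristics combine into a single risk figure."""
    name: str
    calculate: Callable[[Dict[str, int]], int]

# A deliberately simple rule: risk = threat x vulnerability x impact.
simple_policy = RiskCalculationPolicy(
    "simple multiplication",
    calculate=lambda scores: scores["threat"] * scores["vulnerability"] * scores["impact"],
)

print(simple_policy.calculate({"threat": 3, "vulnerability": 2, "impact": 4}))  # -> 24
```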