When is a vulnerability not a vulnerability?

New Net Technologies : 29 September, 2014  (Technical Article)
Mark Kedgley, CTO of New Net Technologies, provides information on a continuous approach to vulnerability scoring and management.

Information Security is an industry full of buzzwords, acronyms and clichés. The GRC sector in particular is rife with them (which succinctly proves my point about acronyms – GRC: governance, risk and compliance).

For example, the expression ‘Checkbox approach to Compliance’ is disparagingly aimed at anyone who treats compliance as a project. For these Checkbox Compliance Cowboys, compliance receives focus for a few weeks once a year, with the sole intention of producing enough paperwork to satisfy an auditor but little substance beyond that.

Of course, those who treat compliance as cynically as this are missing the point. Threats to security are constant, and therefore security measures and the associated checks and balances of compliance also need to be operated continuously.

But the fact is that most of us would actually prefer to take a checkbox approach, inasmuch as we are all looking for ways to make compliance a more predictable, less time-consuming and simpler function.

And who can blame us? Security and compliance are hugely complex tasks, and the implementation of a hardened build standard is a highly technical project in its own right. Finding a configuration standard that protects systems without preventing them from working needs careful consideration.

Overlaid with configuration hardening is the related task of patch management. Both disciplines will address vulnerabilities and both can have nasty side-effects.

On this basis, within the overall context of Vulnerability Management, it is valid to group all vulnerabilities together, and indeed many vulnerability scanners aim to detect both configuration-based and software-based vulnerabilities with one scan. However, because the nature of these vulnerabilities, and the actions required to mitigate or remediate them, are so different, it actually makes sense to segregate their management.
Maintaining a hardened build standard for Windows or Linux hosts is a very different discipline from managing weekly patching exercises, and should be measured and handled accordingly.

Groundhog Day - The Traditional Scanner Approach to Vulnerability Management

One of the main obstacles to making vulnerability management a streamlined process is the tendency to always be starting at square one.

The vulnerability landscape changes daily as new exploits are discovered and reported, so new scan signatures will always be available. There is also the issue of needing to know which devices you have and where they are located in order to scan them – a secure network is going to be firewalled to prevent scanning activity. Finally, it is always better to operate a scan in a focused manner, which means knowing what is installed on the hosts under test in order to specify which vulnerabilities to test for. The alternative is a blunt ‘test every exploit of every package’ scan, but in a large estate this is just too wasteful of resources and time.
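
As a rough illustration of that focused approach, the sketch below filters a vulnerability feed against a per-host software inventory so that each host is only tested for the packages it actually runs. The feed format, host names and inventory are hypothetical, not taken from any particular scanner.

# Illustrative sketch: narrow scan scope to what is actually installed on each
# host, rather than testing every exploit of every package in the feed.
# The feed structure and inventory below are made up for illustration.
vuln_feed = {
    "openssl": ["CVE-2014-0160"],   # Heartbleed
    "bash":    ["CVE-2014-6271"],   # Shellshock
    "struts":  ["CVE-2013-2251"],
}

host_inventory = {
    "web01": ["openssl", "bash", "struts"],
    "db01":  ["bash"],
}

for host, packages in host_inventory.items():
    # Only the checks relevant to this host's installed software.
    checks = [cve for pkg in packages for cve in vuln_feed.get(pkg, [])]
    print(f"{host}: {len(checks)} targeted checks -> {checks}")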

But once the scan results are reported, the real work begins. Each failure needs to be reviewed in turn for its relevance and associated risk. In a large estate, where remediation work could take days or even weeks, which vulnerabilities, on which devices, should you address first?

- For configuration-based vulnerabilities, is it practical to mitigate the vulnerability, given that reducing the opportunity to exploit vulnerabilities invariably reduces functionality? (For example, restricting RDP access makes a Windows server more secure, but would compromise support capabilities.)

- Likewise, is it safe to go ahead and patch a system? An update that addresses a vulnerability may well introduce other issues, such as feature or functional changes, or even a new bag of bugs.

Faced with these potentially undesirable side-effects, the first question to ask is ‘How serious is this vulnerability?’ or, in other words, does the risk posed by the vulnerability outweigh the risk of causing other operational problems?

Risk Assessment and Vulnerability Scoring Systems

Various systems exist which attempt to categorize and score each vulnerability. Qualys has its own scoring system, as do Tripwire and nCircle, but there are also the consensus-based systems, presided over by NIST, one for each of three classes of vulnerability:

* Common Configuration Scoring System (CCSS), used to score the severity of security configuration-based vulnerabilities
* Common Vulnerability Scoring System (CVSS), used to score the severity of software flaw-based vulnerabilities
* Common Misuse Scoring System (CMSS), used to score the severity of software misuse-based vulnerabilities

At a high level, the intention is clear – define how potentially dangerous each vulnerability is. But that isn’t such an easy assessment to make and scoring vulnerabilities starts to get very complicated, very quickly.

Each of the Common Scoring Systems factors in the context of the threat: ‘Just how likely is it that this exploit can be used?’, ‘How real is the exploit?’, ‘How available are the fixes, and how risky are they?’, ‘How much damage could be done using the exploit?’

In the CCSS system the vulnerability is given a ‘Base Score’ based on the

* Access Vector (Local, Adjacent Network or Network)
* Access Complexity (High, Medium or Low)
* Authentication requirements (Multiple, Single or None)
* Confidentiality Impact (Complete, Partial or None)
* Integrity Impact (Complete, Partial or None)
* Availability Impact (Complete, Partial or None)

Next, there is a ‘Temporal Score’ applied, based on

* Exploitability (Not Defined, Unproven that exploit exists, Proof of concept code, Functional exploit exists or High)
* Remediation Level (Not Defined, Official fix, Temporary fix, Workaround or Unavailable)
* Report Confidence (Not Defined, Unconfirmed, Uncorroborated or Confirmed)

Then there is the Environmental Score… do I need to go any further?!
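
To make the arithmetic concrete, here is a minimal sketch of the published CVSS v2 base and temporal equations, which the NIST CCSS specification adapts for configuration vulnerabilities. The metric weightings are the standard published values; the example vulnerability at the end is purely illustrative.

# Sketch of the CVSS v2 base and temporal equations (CCSS reuses this scheme).
AV  = {"local": 0.395, "adjacent": 0.646, "network": 1.0}      # Access Vector
AC  = {"high": 0.35, "medium": 0.61, "low": 0.71}              # Access Complexity
AU  = {"multiple": 0.45, "single": 0.56, "none": 0.704}        # Authentication
CIA = {"none": 0.0, "partial": 0.275, "complete": 0.660}       # C/I/A impact

def base_score(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# Temporal multipliers (1.0 means "not defined", i.e. no adjustment).
EXPLOIT     = {"unproven": 0.85, "proof-of-concept": 0.90, "functional": 0.95,
               "high": 1.0, "not-defined": 1.0}
REMEDIATION = {"official-fix": 0.87, "temporary-fix": 0.90, "workaround": 0.95,
               "unavailable": 1.0, "not-defined": 1.0}
CONFIDENCE  = {"unconfirmed": 0.90, "uncorroborated": 0.95, "confirmed": 1.0,
               "not-defined": 1.0}

def temporal_score(base, e, rl, rc):
    return round(base * EXPLOIT[e] * REMEDIATION[rl] * CONFIDENCE[rc], 1)

# Example: network-exploitable, low complexity, no authentication,
# partial confidentiality/integrity/availability impact.
b = base_score("network", "low", "none", "partial", "partial", "partial")
t = temporal_score(b, "functional", "official-fix", "confirmed")
print(b, t)   # 7.5 and 6.2 – the temporal score drops once an official fix exists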

From an academic standpoint, all the factors outlined should be taken into account as they allow a quantitative score for any vulnerability to be derived based on its qualitative attributes.

But as the consumer of the scan report you just want a High, Medium or Low severity rating - you don’t need to worry too much about how the Vulnerability Score was calculated.

Or do you? Without the context of your estate and network architecture, the risk-level of a vulnerability can only be calculated on a theoretical, not empirical, basis.

Now, the point is that no vulnerability should be ignored, but there are any number that, within the context of your estate, might be tolerated temporarily or permanently because compensating controls are in place. SCADA infrastructure components subject to NERC CIP compliance will require the highest level of security, while user workstations segregated from confidential data systems can be treated as lower-priority, lower-risk items.

With scan results highlighting hundreds of vulnerabilities across the estate, the last thing you need is to be re-reminded every time you scan of the same known-and-acknowledged vulnerabilities. The concept of improvement-based vulnerability management starts with the need to address this issue as a key objective.

For example, with a large compliance initiative, there could be any number of reasons why servers or network devices will remain in a non-compliant state for months – resource constraints, application compatibility, network architecture – so the ability to either suspend or exclude compliance requirements for certain hosts or device groups is essential. If we think it will take three months to remediate all vulnerabilities across all systems, then we can set time-based milestones for minimum levels of compliance, and in doing so give ourselves a realistic set of targets to hit progressively over time without being repeatedly beaten up over every outstanding vulnerability.
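
As a minimal sketch of this improvement-based idea, the following compares a fresh scan result against a record of acknowledged findings (accepted risk, compensating controls or scheduled remediation) and a milestone target. The file names, field names and threshold are hypothetical.

import json
from datetime import date

# Hypothetical inputs: each scan emits a JSON list of findings with "host" and
# "vuln_id" fields; acknowledged findings live in a separate exceptions file
# with an optional expiry date so exceptions cannot linger forever.
with open("scan_results.json") as f:
    findings = json.load(f)
with open("acknowledged.json") as f:
    acknowledged = json.load(f)

active_exceptions = {
    (a["host"], a["vuln_id"])
    for a in acknowledged
    if a.get("expires", "9999-12-31") >= date.today().isoformat()
}

# Report only what is new or no longer covered by an exception.
new_findings = [
    fnd for fnd in findings
    if (fnd["host"], fnd["vuln_id"]) not in active_exceptions
]

# Illustrative time-based milestone, e.g. "no more than 50 open findings by end of quarter".
MILESTONE_LIMIT = 50
print(f"{len(new_findings)} findings need attention (milestone target: <= {MILESTONE_LIMIT} open)")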

Similarly there may be a need to make exceptions or adjust compliance requirements, for example, allowing permissions to additional Groups over and above the standard settings advocated by the CIS Benchmark.

Finally, the ability to extend the compliance standard to include additional file integrity monitoring checks, over and above the STIG or other secure build standard, is valuable. For example, security best practices may recommend removing or disabling unnecessary daemons and services, but you can also use your compliance audit to ensure that other essential services are enabled and running, such as encryption, syslog forwarding agents, DLP or AV products. Likewise, ensuring that a functional build standard for a host is implemented and maintained – in terms of installed software, filesystem structure and network settings – gives a dimension of quality control that will reduce downtime and troubleshooting.
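
A minimal sketch of such an extended check, assuming a Linux host running systemd: it confirms that services the functional build standard requires are active, and that services the hardening benchmark removes are not. The unit names are only examples.

import subprocess

# Services the functional build standard requires to be running
# (e.g. syslog forwarding, audit daemon) - illustrative names only.
REQUIRED_SERVICES = ["rsyslog", "auditd"]
# Services the hardening benchmark says should be absent or disabled.
FORBIDDEN_SERVICES = ["telnet.socket", "rsh.socket"]

def is_active(unit):
    """Return True if systemd reports the unit as active."""
    result = subprocess.run(["systemctl", "is-active", "--quiet", unit])
    return result.returncode == 0

failures = [u for u in REQUIRED_SERVICES if not is_active(u)]
failures += [u for u in FORBIDDEN_SERVICES if is_active(u)]

for unit in failures:
    print(f"FAIL: {unit} does not match the functional build standard")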

Conclusion

Vulnerabilities arise from configuration settings, software bugs and misuse of software features, and it is essential to minimize the ‘attack surface’ of IT systems using a vulnerability management process. Where patches can be used to remediate vulnerabilities, these need to be carefully assessed for potential negative side-effects before deployment. Similarly, security configuration settings can be used to close off potential exploits, albeit at the loss of functional freedom, which also needs to be weighed up.

Modern approaches to vulnerability management make use of vulnerability scores to help decide whether the cost of remediation outweighs the potential risk. Scoring vulnerabilities also helps prioritize remediation work, especially in large-scale estates where the workloads involved are significant.

However, the real answer is to operate a process of Improvement-Based Vulnerability Management. This ensures that intelligence regarding your estate is accumulated incrementally, continuously improving in accuracy. This elevates vulnerability management above the ‘Groundhog Day’ scenario that traditional vulnerability scanners engender, always starting from the same ‘square one’ each time. Improvement-Based Vulnerability Management also brings other best practices into one strategic process, such as Configuration Management, Planned Change Management and Functional Build-Standard enforcement.

It may not be the automated checkbox approach to compliance that we want, but by streamlining compliance management processes in this way, you will not only reduce the wasted resources of scanner-based approaches, but improve security too.

For more information and a video on the topic, visit the NNT page on File Integrity Monitoring.
