“He uses statistics as a drunken man uses lampposts – for support rather than for illumination” ~ Andrew Lang
Metrics and statistics, whilst subtly different, are often seen as the accountant's yardstick and the pragmatist's whipping stick. The use of metrics in IT has travelled a long and perhaps uneasy route. Technicians want to implement, design and fix. Managers and budget owners need to show value, deliver service and ultimately keep the customer, production line or CFO happy. An efficient and sustainable business position is a meeting place between the two, where tangible (and intangible) metrics (not statistics) are important to both parties.
Why Use Metrics?
IT security has often been seen as a cost within IT as a whole, which until very recently was itself seen as a cost to the business. IT was a necessary component, granted, but organisations have historically not seen it as a strategic part of the overall business delivery cycle. It was never capable of driving efficiencies, saving money or being proactive in gaining and keeping customers. That view has changed considerably, and information security is now becoming the necessary component within IT.
But what is the driver for security? Well, the main ones are probably compliance necessity, brand damage (especially if customer records are lost) and the clean-up costs of breaches. So the CEO wants their company to be secure. The infosec guys want the company to be secure. So what's the problem?
There are two main ones. Firstly, the non-infosec community within IT will often not have security as their default modus operandi. That's not to say they are security-averse, just not pro-security by default. This can hamper design, policy and implementation. Secondly, how do the ideas and strategies from the CxO level filter down to the infosec implementers? One side is talking budgets and ROI, the other is talking standards, compliance, APTs, firewalls and DLP.
The use of some sort of metric-driven analysis can not only aid implementation, but also help non-technical members of the business understand the reason, rationale and benefit that a secure infrastructure can provide. As a metric is a snapshot in time, it can also provide a useful benchmark for gauging the performance and success of a particular project, policy or component. This can not only help individuals, but also aid with budget realignment and project funding.
What to Measure?
The key to defining what to measure is being able to define a framework that can show progress and performance across all components of the infosec life cycle, whilst being of benefit to the board, IT and infosec teams. To break this down further, it's important to understand what infosec posture the organisation is taking. Which security policies have been created, and how are they being implemented? What systems, devices and data are being monitored, controlled or impacted by those policies? In addition, it's important to understand the type and structure of the metrics being used.
Metrics don't always have to be numeric and tangible in structure. They can also be more subjective and intangible, covering things like brand awareness, confidence levels and so on. For example, what is the damage to a large online retailer if it loses 100k customer credit card details? The impact on brand and future custom could be quite difficult to measure tangibly, but that's not to say it can't be measured in some way.
The most obvious low-level areas to cover would be things like anti-virus coverage: a basic percentage showing the number of devices, those protected by AV software, and the percentage with virus definitions older than three days, for example. Others could include the average patch latency. This could be measured for particular servers, desktops or devices, showing the lag between a vendor releasing a security update and the time taken to roll that update out. Other more subtle measures could cover things like the number of password resets a help desk receives, which could indicate whether a password policy is too complex for users to remember their own passwords. A password strength checking metric could also be used to see how successful a password education policy has been.
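As a rough illustration only, the short Python sketch below shows how a couple of these low-level figures might be derived from an asset inventory snapshot. The inventory itself, its field names and the sample data are all assumptions made for the example, not a prescribed data model.

```python
# Hypothetical asset inventory snapshot; field names and values are illustrative only.
devices = [
    {"name": "mail01", "has_av": True,  "definitions_age_days": 1,    "patch_lag_days": 4},
    {"name": "web01",  "has_av": True,  "definitions_age_days": 5,    "patch_lag_days": 12},
    {"name": "test01", "has_av": False, "definitions_age_days": None, "patch_lag_days": 30},
]

total = len(devices)

# AV coverage: % of devices with AV installed, and % with definitions older than 3 days.
av_covered = sum(1 for d in devices if d["has_av"])
stale_defs = sum(1 for d in devices if d["has_av"] and d["definitions_age_days"] > 3)
av_coverage_pct = 100.0 * av_covered / total
stale_defs_pct = 100.0 * stale_defs / total

# Patch latency: average lag (in days) between a vendor security update and its roll-out.
avg_patch_lag = sum(d["patch_lag_days"] for d in devices) / total

print(f"AV coverage: {av_coverage_pct:.0f}%")
print(f"Definitions older than 3 days: {stale_defs_pct:.0f}%")
print(f"Average patch latency: {avg_patch_lag:.1f} days")
```

The same pattern applies to the softer measures: a monthly count of password resets, or the distribution of password strength scores, can be produced from help desk and directory data in much the same way.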
The catalogue of metrics should include both technical and non-technical aspects. The underlying aim would be to show the general performance of the security infrastructure of the organisation. Security isn’t just about firewalls and access control lists. It is about education, personnel and physical attributes too.
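One way to keep such a catalogue coherent is to record each metric alongside its type, unit and target, so that technical and non-technical coverage can be seen side by side. The sketch below is a minimal example of that idea; the metric names, categories and targets are assumptions for illustration, not a recommended set.

```python
# Illustrative metric catalogue; entries and targets are assumptions, not a standard.
metric_catalogue = [
    {"name": "AV coverage",            "type": "technical",     "unit": "%",     "target": ">= 98"},
    {"name": "Patch latency",          "type": "technical",     "unit": "days",  "target": "<= 14"},
    {"name": "Password resets/month",  "type": "technical",     "unit": "count", "target": "trending down"},
    {"name": "Staff awareness survey", "type": "non-technical", "unit": "score", "target": ">= 4/5"},
    {"name": "Physical access audits", "type": "non-technical", "unit": "count", "target": "quarterly"},
]

# Group the catalogue by type so both technical and non-technical coverage is visible.
for metric_type in ("technical", "non-technical"):
    entries = [m["name"] for m in metric_catalogue if m["type"] == metric_type]
    print(f"{metric_type}: {', '.join(entries)}")
```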
How to Measure?
Reporting the Results
Clarity and simplicity need not be the same. As the audience will undoubtedly be more business than technically focused, the data clearly needs to be presented in business language. This is not to say that the omnipresent traffic light system should be used all the time; that seems appropriate only for the most basic of data types. Whilst the basic percentages in the previous example are useful for technicians, CxOs and budget owners will want to know what they mean from a down-to-earth, real-life impact perspective. So what if 4 servers are 14 days behind in the antivirus roll-out plan and two test mail relay appliances are not covered at all? What does that mean for the customer, or for the service the business delivers?
Here, real impact data should be used. Monetary data is often useful, but isn't always the easiest to obtain. For example, if a mail device is not protected, or is only partially protected using out-of-date definitions, the likelihood of an outbreak increases. The cost to recover from an outbreak could be $100k, split between consultancy, out-of-hours overtime and placating the unhappy customers who received spam from the malware that was 'released'. It's the impact that budget or service owners are interested in, and that must always be the underlying theme of how the results are reported: the impact on budget and/or customer happiness, and the delivery of the key components that affect those two factors.
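To make that concrete, a back-of-the-envelope sketch such as the one below can translate a gap like an unprotected mail relay into an expected cost a budget owner can weigh. Every figure and probability here is an assumption chosen purely to illustrate the arithmetic.

```python
# Back-of-the-envelope impact estimate for an unprotected mail relay.
# All figures are assumptions for illustration, not real data.
outbreak_probability = 0.25        # assumed annual likelihood of an outbreak via this gap
consultancy_cost = 60_000          # assumed external clean-up consultancy
overtime_cost = 25_000             # assumed out-of-hours staff overtime
customer_goodwill_cost = 15_000    # assumed credits/churn from spammed customers

total_incident_cost = consultancy_cost + overtime_cost + customer_goodwill_cost
expected_annual_loss = outbreak_probability * total_incident_cost

print(f"Estimated cost of one outbreak: ${total_incident_cost:,}")
print(f"Expected annual loss from this gap: ${expected_annual_loss:,.0f}")
```

Framed this way, the unprotected relay stops being a line in a technical report and becomes a cost the service owner can compare directly against the price of fixing it.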
Ideally the business should have enough information from reading the report that they themselves can make an informed decision as to whether a particular security posture is being upheld or not.
The reporting process should be periodic, as opposed to an annual audit-style approach. This will give a more regular, ingrained approach to security. Ultimately, a metric-driven approach is only a means to an end. The end is to help ingrain security as part of the overall business and technical aspects of the organisation, where this is appropriate. This proactive stance will ultimately be more cost and effort efficient if a secure posture is required.
A metric-driven approach will help to refine budgets and identify weaknesses, of course, but it should also help show that information security is a proactive and contributory discipline, with benefits across the entire business life cycle, as opposed to being a component of reactionary IT, called upon only when something bad has happened.