An IDS is a device or software application that monitors network and/or system activities for malicious activity or policy violations and produces reports to a management station. Management teams need quality metrics. Consistently, security departments take direction from management teams that may have very little knowledge of the attacks the environment endures on a daily basis. In some cases, IDS metrics will need to be combined with qualitative or quantitative data, such as the number of investigations assigned to a security team, to create a better picture of the organization's security stance. By creating metrics designed for this audience, the security team can equip leaders to make informed decisions about the security of the organization.
The four goals of effective metrics are as follows:
1. Depth of System's Detection Capability
A detection capability metric can be defined as the number of attack signature patterns and/or behavior models known to a sensor technology. This metric can indicate whether the IDS infrastructure is identifying everything it is expected to identify. Understanding this metric can reveal whether the team is missing attacks because the IDS capability is lacking; as a result, a decision could be made to investigate newer technologies to increase visibility. Additionally, it provides more insight into how the sensor technology is currently providing security, which can be contrasted with competing products.
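As a rough sketch, one way to express detection capability is as the fraction of a reference attack catalog the sensor covers. The function name and the counts below are hypothetical illustrations, not figures from the source.

```python
# Hypothetical sketch: detection capability as coverage of a reference
# attack catalog. All numbers here are illustrative only.
def detection_capability(known_to_sensor: int, catalog_size: int) -> float:
    """Fraction of catalogued attack patterns/behavior models the sensor knows."""
    if catalog_size <= 0:
        raise ValueError("reference catalog must be non-empty")
    return known_to_sensor / catalog_size

# Example: the sensor knows 4200 signatures and behavior models out of
# a 5000-entry reference catalog of expected detections.
coverage = detection_capability(4200, 5000)
print(f"Detection capability: {coverage:.0%}")  # Detection capability: 84%
```

A coverage figure like this can be tracked per sensor and compared across competing products during an evaluation.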
2. False Negative Ratio
Another goal of effective metrics is the false negative ratio, the ratio of successful attacks not detected by the IDS. With a high false negative rate, the security team may be reacting to incidents not captured by the sensor infrastructure. The organization may have other security technologies in place, such as anti-malware, firewalls, data leak prevention, application whitelisting, or a honeypot, that have revealed an intrusion. As part of incident response tasks, security teams can research whether the sensors identified or produced an alert for the intrusion as it crossed the sensor. The resulting data can be collected and used to generate the false negative ratio. This metric can help determine whether the current IDS is the correct solution for the environment, whether the team is utilizing the technology correctly, or whether more security staff should be monitoring the sensor data.
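The calculation itself is straightforward once incident response has established how many successful attacks the sensors missed. The counts below are hypothetical examples, not data from the source.

```python
# Hypothetical sketch: false negative ratio from incident response data.
def false_negative_ratio(missed: int, total_successful: int) -> float:
    """Ratio of successful attacks the IDS failed to detect."""
    if total_successful == 0:
        return 0.0  # no successful attacks recorded this period
    return missed / total_successful

# Example: 25 successful intrusions were confirmed this quarter
# (via IDS, honeypot, anti-malware, etc.); the IDS alerted on 21,
# so 4 crossed the sensors undetected.
fnr = false_negative_ratio(4, 25)
print(f"False negative ratio: {fnr:.0%}")  # False negative ratio: 16%
```

Trending this ratio over reporting periods shows whether tuning or staffing changes are improving coverage.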
3. Reliability of Attack Detection
The reliability metric can be defined as the ratio of false positives to total alarms raised. An analyst may research an incident only to determine later that the event was a false positive. This data can be collected from the team's ticketing system and used to produce a metric. It can then help identify whether the sensor infrastructure should undergo a reconfiguration exercise, determine whether the IDS solution is the correct one for the environment, or decide whether staff should be allocated to tune the sensors to an acceptable level.
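A minimal sketch of this calculation from ticketing data follows; the function name and alarm counts are illustrative assumptions.

```python
# Hypothetical sketch: false positives as a share of all alarms raised,
# as would be tallied from a ticketing system. Numbers are illustrative.
def false_positive_ratio(false_positives: int, total_alarms: int) -> float:
    """Ratio of false positives to total alarms raised by the IDS."""
    if total_alarms == 0:
        return 0.0  # no alarms this period
    return false_positives / total_alarms

# Example: analysts closed 300 of 1200 alarms as false positives.
fp_ratio = false_positive_ratio(300, 1200)
print(f"False positive ratio: {fp_ratio:.0%}")  # False positive ratio: 25%
```

A ratio that stays high after tuning may argue for reconfiguring or replacing the sensor infrastructure.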
4. Compromise Cost Analysis
The compromise metric is the ability to report the extent of damage and compromise due to intrusions identified by the security program. Time is spent remediating successful intrusions, and a monetary figure could be calculated to aid management in decision making. This monetary figure could then be shifted elsewhere to better protect the organization, possibly toward improved intrusion detection.
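One simple way to arrive at such a monetary figure is to sum analyst remediation time and direct losses per intrusion. The cost model, field names, and dollar amounts below are hypothetical assumptions, not a method prescribed by the source.

```python
# Hypothetical sketch of a compromise cost calculation. Each incident is
# (remediation_hours, hourly_rate, direct_losses); all values illustrative.
def compromise_cost(incidents: list[tuple[float, float, float]]) -> float:
    """Total monetary cost of remediated intrusions."""
    return sum(hours * rate + losses for hours, rate, losses in incidents)

# Example: two intrusions this quarter at an assumed $85/hour analyst rate.
incidents = [
    (16, 85.0, 2500.0),   # 16 analyst-hours plus $2,500 direct losses
    (40, 85.0, 12000.0),  # 40 analyst-hours plus $12,000 direct losses
]
total = compromise_cost(incidents)
print(f"Quarterly compromise cost: ${total:,.2f}")  # $19,260.00
```

Presented alongside the other three metrics, a figure like this lets management weigh the cost of intrusions against the cost of improved detection.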