Today, the common approach to IT security is based on the assumption that an attacker will infiltrate a network by taking advantage of software vulnerabilities, either at the application or operating system level.
Traditional host- and network-based intrusion detection and prevention systems are deployed to detect whether someone is exploiting these vulnerabilities, primarily through signature matching.
The vulnerability-focused approach looks almost exclusively at traffic entering the network (ingress) and pays almost no attention to outgoing (egress) and lateral (internal) network traffic.
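As a minimal illustration of what signature matching means in practice, the sketch below scans a payload for known byte patterns. The signatures here are invented examples for demonstration; real IDS rule sets (Snort, Suricata, and the like) use far richer rule languages:

```python
import re

# Hypothetical example signatures; real IDS rules are far more expressive.
SIGNATURES = {
    "directory traversal": re.compile(rb"\.\./\.\./"),
    "sql injection": re.compile(rb"(?i)union\s+select"),
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the names of all signatures whose pattern appears in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]
```

The weakness the article goes on to describe is visible even in this toy: a payload that does not contain a known pattern, such as a zero-day exploit, matches nothing and passes silently.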
The vulnerability-centered view treats threats as universal, as if everyone were attacked through the same flaws. On top of this, attacks are analyzed in isolation, with little reliance on threat assessment or human analysis. Most automated after-the-fact detection is oriented toward log management, with little or no correlation of data across sources (event logs, application logs, syslog, and so on).
Although this approach has been successful in the past and underpins current regulatory standards (e.g., PCI), it does little to reduce the risk that one of today's threat actors will exploit a vulnerability. Prevention alone will sooner or later fail, because criminals constantly change the tools, tactics, and procedures of their attacks.
Even if an organization has the best detection and prevention tools, in the long run a motivated attacker will find a way into its network, whether through social engineering, a zero-day vulnerability for which no detection method exists, or both.
However, once an organization accepts that at some point it will be compromised, it can redirect its purely preventive resources toward a more threat-centric approach, one that balances collection, detection, and analysis.
This threat-centric approach involves performing cost and risk analyses to determine whether to leave security to an internal team or hire external experts.
Collection includes defining where the greatest risks are in a particular organization, identifying threats to the organization’s objectives, identifying relevant sources of information, and refining data collection techniques.
In the traditional vulnerability-centered approach, collection is usually done in an unfocused way, disconnected from detection objectives. The tendency has been to overcompensate and collect too much information, making it difficult to review indicators of attack (IoA) or indicators of compromise (IoC).
Detection techniques should also focus on threats. At the host level, this means looking for changes in operating-system behavior, including process creation, network activity, log access, the creation, deletion, or renaming of important files, and memory analysis.
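One of these host-level checks, watching for the creation, deletion, and modification of important files, can be approximated with a simple integrity snapshot. The sketch below is illustrative only (the function names are our own, and production deployments use dedicated file-integrity-monitoring agents): it hashes every file under a directory and reports what changed between two snapshots.

```python
import hashlib
from pathlib import Path

def snapshot(directory: str) -> dict[str, str]:
    """Map each file under `directory` to the SHA-256 hash of its contents."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(directory).rglob("*")
        if p.is_file()
    }

def diff_snapshots(baseline: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Report files created, deleted, and modified since the baseline snapshot."""
    created = sorted(set(current) - set(baseline))
    deleted = sorted(set(baseline) - set(current))
    modified = sorted(
        p for p in set(baseline) & set(current) if baseline[p] != current[p]
    )
    return {"created": created, "deleted": deleted, "modified": modified}
```

In a threat-centric program, a non-empty diff over a sensitive directory (system binaries, web roots, configuration files) would feed the analysis phase rather than trigger an automatic verdict.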
At the network level, this means monitoring both inbound traffic and traffic between hosts. Changes or deviations from normal traffic patterns should trigger an investigation. Simply collecting data from every source and storing it in a central repository is no longer enough.
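A crude way to flag "deviations from normal traffic", assuming per-interval byte or flow counts have already been collected, is a standard-deviation threshold against a baseline. Real network behavior analysis is far more sophisticated; this sketch only shows the basic idea:

```python
from statistics import mean, stdev

def flag_anomalies(
    baseline_counts: list[float],
    observed_counts: list[float],
    threshold: float = 3.0,
) -> list[tuple[int, float]]:
    """Flag observation intervals whose traffic volume deviates from the
    baseline mean by more than `threshold` standard deviations."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    return [
        (i, count)
        for i, count in enumerate(observed_counts)
        if sigma and abs(count - mu) / sigma > threshold
    ]
```

Each flagged interval is a prompt for investigation, not proof of compromise, which is exactly the collection-to-analysis handoff the threat-centric model calls for.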
Data analysis must correlate information from different sources to derive indicators of attack or compromise. Security information and event management (SIEM) applications use this correlated information to detect threats.
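The kind of cross-source correlation a SIEM performs can be sketched with a hypothetical rule of our own: raise an alert when a single host produces events from two or more distinct log sources within a short time window. Real SIEM correlation rules are much richer, but the shape is the same:

```python
from datetime import datetime, timedelta

def correlate(events: list[dict], window: timedelta = timedelta(minutes=5)) -> list[tuple[str, list[str]]]:
    """Alert on hosts whose events from two or more distinct sources
    (e.g., auth log and firewall log) fall within `window` of each other.
    Each event is a dict with "host", "source", and "time" keys."""
    by_host: dict[str, list[dict]] = {}
    for e in events:
        by_host.setdefault(e["host"], []).append(e)
    alerts = []
    for host, evs in by_host.items():
        evs.sort(key=lambda e: e["time"])
        for i, first in enumerate(evs):
            cluster = [e for e in evs[i:] if e["time"] - first["time"] <= window]
            if len({e["source"] for e in cluster}) >= 2:
                alerts.append((host, [e["source"] for e in cluster]))
                break
    return alerts
```

The point of the exercise: neither the auth event nor the firewall event is alarming alone; only the correlation across sources makes the pattern visible.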
Analysis occurs when a human interprets and investigates alert information. This involves gathering data from other sources, researching open-source intelligence related to the types of alerts the detection mechanisms generate, and examining potentially compromised hosts.
This requires extensive experience in packet analysis, host and network forensics, and malware analysis. This phase is usually the most time-consuming, but it is essential for determining whether the event under analysis should be reclassified as an incident.
Lessons learned in the detection and analysis phase will help refine and further improve the organization’s collection strategies.
Challenges of implementing a threat-centric approach
This threat-centric approach, although much more effective at reducing the risk of information loss, is not free of challenges, the most notable being capabilities and cost.
It requires a dedicated security operations center that runs 24x7x365 and is staffed by analysts with advanced security expertise.
Experts are also needed to formalize procedures for creating detection signatures based on observed network events and threat research. These experts must also be able to manually review multiple data sources for IoA and IoC, rather than relying solely on automated detection tools.
Finding the right experienced security staff is not only a challenge; it’s costly. Add the tools needed to support a 24x7x365 security operations center, and costs go even higher.
Threat-oriented approaches are essential to combat today’s and tomorrow’s security risks. Organizations can no longer rely exclusively on vulnerability-focused techniques and will need to do a cost/risk analysis to determine how much or how little they spend on each aspect.
In many cases, especially in cloud deployments, it may make more sense to hire an external security service provider, which almost always has deeper expertise, as well as the ability to invest in the technology needed to support the people and processes that proactively detect threats.
Rackspace Managed Security offers a combination of the best security solutions within a security operations center operated by industry-leading experts who provide an active 24x7x365 defense for customer environments.