Presented by Charles Leaver and written by Dr. Al Hartmann of Ziften Inc.
The Breadth of the Indicator – Broad Versus Narrow
A detailed report of a cyber attack will normally include indicators of compromise (IoCs). Typically these are narrow in scope, referencing a particular attack group as observed in a particular attack on an enterprise over a limited time period. These narrow indicators are specific artifacts of an observed attack that can constitute evidence of compromise on their own. For that attack they have high specificity, but often at the cost of low sensitivity to similar attacks that use different artifacts.
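The specificity/sensitivity trade-off can be illustrated with the narrowest indicator of all, a file hash match. This is a minimal sketch, not any vendor's implementation; the indicator set is illustrative (it contains the well-known SHA-256 of the empty file, not real threat intelligence):

```python
import hashlib

# Hypothetical narrow IoC set: SHA-256 hashes of previously observed artifacts.
# This example entry is the SHA-256 of the empty file, used purely for illustration.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def matches_narrow_ioc(file_bytes: bytes) -> bool:
    """High specificity: flags only the exact artifact previously observed."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# The exact artifact matches...
print(matches_narrow_ioc(b""))       # True
# ...but a trivially modified variant slips past: low sensitivity.
print(matches_narrow_ioc(b"\x00"))   # False
```

A one-byte change to the payload defeats the indicator entirely, which is exactly the weakness the rest of this post discusses.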
Essentially, narrow indicators have very restricted scope, which is why they exist by the billions in constantly expanding databases of malware signatures, suspicious network addresses, malicious registry keys, file and packet content snippets, file paths, intrusion detection rules, and so on. Ziften's continuous endpoint monitoring solution aggregates some of these third-party databases and threat feeds into the Ziften Knowledge Cloud, to benefit from known-artifact detection. These detection elements can be applied in real time as well as retrospectively. Retrospective application is vital because these artifacts are short-lived: attackers continuously refresh the observable details of their attacks to frustrate this narrow IoC detection method. This is why a continuous monitoring service must archive monitoring results for a long time (relative to industry-reported typical attacker dwell times), to provide a sufficient lookback horizon.
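Retrospective application of newly published IoCs can be sketched as a re-scan of archived telemetry back to the lookback horizon. This is a simplified illustration under assumed data shapes (timestamped hash sightings), not the Ziften Knowledge Cloud's actual design:

```python
from datetime import datetime, timedelta

# Hypothetical archived endpoint telemetry: (timestamp, artifact_hash) sightings.
ARCHIVE = [
    (datetime(2016, 1, 10), "aaa111"),
    (datetime(2016, 4, 2),  "bbb222"),
    (datetime(2016, 6, 15), "aaa111"),
]

def retrospective_matches(new_iocs, now, lookback_days=365):
    """When a threat feed publishes new IoCs, re-scan the archive back to the
    lookback horizon, catching artifacts seen before the IoC became known."""
    horizon = now - timedelta(days=lookback_days)
    return [(ts, h) for ts, h in ARCHIVE if ts >= horizon and h in new_iocs]

# An IoC published in July still surfaces sightings from January and June.
hits = retrospective_matches({"aaa111"}, now=datetime(2016, 7, 1))
print(len(hits))  # 2
```

The longer the archive relative to attacker dwell time, the more of these late-arriving matches a defender can recover.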
Narrow IoCs have considerable detection value, but they are largely ineffective at detecting new cyber attacks by skilled adversaries. New attack code can be pre-tested against common enterprise security products in lab environments to verify that no detectable artifacts are reused. Security products that function simply as black/white classifiers – delivering an explicit verdict of malicious or benign – suffer from this weakness, and the approach is easily evaded. The defended enterprise may then be thoroughly compromised for months or years before any detectable artifacts are identified (after intensive investigation) for that particular attack instance.
In contrast to the ease with which attack artifacts can be obscured by common hacker toolkits, the underlying tactics and techniques – the modus operandi – used by attackers have persisted over many years. Common techniques such as weaponized websites and documents, new service installation, vulnerability exploitation, module injection, tampering with sensitive folders and registry locations, newly scheduled tasks, memory and drive corruption, credential compromise, malicious scripting, and many others are broadly typical. Proper use of system logging and monitoring can detect much of this characteristic attack activity, when coupled with security analytics that focus attention on the highest-risk observations. This removes the attacker's opportunity to pre-test the evasiveness of malicious code, because the quantification of risk is not black and white, but nuanced shades of gray. In particular, endpoint risk is variable and relative across any network/user environment and time period, and that environment (and its temporal dynamics) cannot be replicated in any lab. The standard attacker concealment methodology is foiled.
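The "shades of gray" idea can be sketched as a graded risk score over observed behavioral techniques rather than a binary verdict. The technique names and weights below are hypothetical placeholders, not Ziften's actual analytics; the point is only that independent suspicious observations compound into a continuous score:

```python
# Hypothetical per-technique weights (illustrative values, not real analytics).
TECHNIQUE_WEIGHTS = {
    "new_service_install":     0.3,
    "module_injection":        0.5,
    "registry_run_key_change": 0.4,
    "scheduled_task_created":  0.3,
    "credential_access":       0.6,
}

def risk_score(observed_techniques):
    """Combine per-technique weights into a graded score in [0, 1):
    each independent observation shrinks the remaining 'benign' probability."""
    benign = 1.0
    for t in observed_techniques:
        benign *= 1.0 - TECHNIQUE_WEIGHTS.get(t, 0.0)
    return 1.0 - benign

print(round(risk_score(["new_service_install"]), 2))                    # 0.3
print(round(risk_score(["module_injection", "credential_access"]), 2))  # 0.8
```

Because the score is continuous and depends on which behaviors co-occur in a given environment, an attacker cannot simply pre-test code in a lab for a pass/fail verdict.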
In future posts we will examine Ziften endpoint risk analysis in greater detail, along with the important relationship between endpoint security and endpoint management. "You can't secure what you don't manage, you can't manage what you don't measure, you can't measure what you don't track." Organizations get breached because they have less oversight and control of their endpoint environment than the attackers have. Watch for future posts…