
Narrow Indicators Of Compromise Are Not Good Enough For Reliable Endpoint Monitoring – Charles Leaver

Presented By Charles Leaver And Written By Dr Al Hartmann Of Ziften Inc.

 

The Breadth Of The Indicator – Broad Versus Narrow

A detailed report of a cyber attack will normally include indicators of compromise. Typically these are narrow in scope, referencing a particular attack group as observed in a particular attack on an enterprise during a limited time period. Generally these narrow indicators are specific artifacts of an observed attack that constitute evidence of compromise on their own. For that attack they offer high specificity, but often at the cost of low sensitivity to similar attacks that use other artifacts.

Essentially, narrow indicators offer very restricted scope, which is why they exist by the billions in constantly expanding databases of malware signatures, suspicious network addresses, malicious registry keys, file and packet content snippets, file paths, intrusion detection rules, and so on. Ziften's continuous endpoint monitoring solution aggregates a number of these third-party databases and threat feeds into the Ziften Knowledge Cloud, to benefit from known-artifact detection. These detection elements can be applied in real time as well as retrospectively. Retrospective application is vital because of the short-lived nature of these artifacts: attackers continually alter the observable details of their attacks to frustrate this narrow IoC detection method. This is why a continuous monitoring service must archive monitoring results for a long period (relative to industry-reported typical attacker dwell times), to provide a sufficient lookback horizon.
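
To make the lookback idea concrete, here is a minimal sketch in Python of sweeping archived process records against newly published IoC hashes. The record layout, feed format, and retention constant are illustrative assumptions, not Ziften's actual schema.

```python
# Hypothetical sketch: retrospectively matching newly published IoC
# hashes against archived endpoint telemetry. The record layout and
# feed format are illustrative assumptions, not Ziften's actual schema.
import csv
from datetime import datetime, timedelta

DWELL_HORIZON = timedelta(days=365)   # keep at least a year of lookback

def load_ioc_hashes(path):
    """One SHA-256 per line, as in a typical threat feed export."""
    with open(path) as fh:
        return {line.strip().lower() for line in fh if line.strip()}

def retrospective_matches(archive_path, ioc_hashes, now=None):
    """Scan archived process-launch records for hashes that only later
    became known bad. Columns: timestamp, endpoint, image, sha256."""
    now = now or datetime.utcnow()
    hits = []
    with open(archive_path) as fh:
        for row in csv.DictReader(fh):
            seen = datetime.fromisoformat(row["timestamp"])
            if now - seen > DWELL_HORIZON:
                continue               # outside the retained lookback window
            if row["sha256"].lower() in ioc_hashes:
                hits.append(row)       # surface for SOC triage
    return hits
```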

Narrow IoCs have considerable detection value, but they are largely ineffective at detecting new attacks by skilled adversaries. New attack code can be pre-tested against common enterprise security products in lab environments to verify that no detectable artifacts are reused. Security products that function simply as black/white classifiers, rendering an explicit verdict of malicious or benign, suffer from this weakness; the approach is easily evaded. The defended organization is likely to remain thoroughly compromised for months or years before any detectable artifacts can be identified (after intensive investigation) for that particular attack instance.

In contrast to the ease with which attack artifacts can be obscured by common hacker toolkits, the underlying tactics and techniques – the modus operandi – used by attackers have persisted over many years. Common techniques such as weaponized websites and documents, new service installation, vulnerability exploitation, module injection, sensitive folder and registry modification, new scheduled tasks, memory and drive corruption, credential compromise, malicious scripting, and many others are broadly typical. Proper use of system logging and monitoring can detect much of this characteristic attack activity, when coupled with security analytics that focus attention on the highest-risk observations. This removes the attacker's opportunity to pre-test the evasiveness of malicious code, because the quantification of risk is not black and white, but nuanced shades of gray. In particular, all endpoint risk is variable and relative, across any network/user environment and time period, and that environment (and its temporal dynamics) cannot be duplicated in any laboratory environment. The attacker's basic concealment methodology is foiled.
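
As a rough illustration of scoring in shades of gray rather than black and white, the sketch below accumulates generic behavioral indicators into a graded endpoint risk score. The indicator names and weights are illustrative assumptions, not Ziften's actual analytics.

```python
# Minimal sketch of "shades of gray" scoring: each generic behavior
# contributes a weight, and the aggregate drives SIEM priority rather
# than a binary malicious/benign verdict. Weights are illustrative.
GENERIC_INDICATOR_WEIGHTS = {
    "new_service_installed":      0.20,
    "module_injection":           0.30,
    "sensitive_registry_write":   0.25,
    "new_scheduled_task":         0.15,
    "credential_access_attempt":  0.40,
    "weaponized_document_opened": 0.25,
}

def endpoint_risk_score(observed_indicators):
    """Combine independent indicator weights into a 0..1 score.
    Uses noisy-OR so several weak signals compound into a strong one."""
    score = 0.0
    for name in observed_indicators:
        w = GENERIC_INDICATOR_WEIGHTS.get(name, 0.05)
        score = score + w - score * w  # noisy-OR accumulation
    return score

# A single indicator stays low; the combination rises toward alert level.
print(endpoint_risk_score(["new_service_installed"]))              # 0.20
print(endpoint_risk_score(["new_service_installed",
                           "module_injection",
                           "credential_access_attempt"]))          # ~0.66
```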

In future posts we will examine Ziften endpoint risk analysis in greater detail, along with the important relationship between endpoint security and endpoint management. “You can’t secure what you don’t manage, you can’t manage what you don’t measure, you can’t measure what you don’t track.” Organizations get breached because they have less oversight and control of their endpoint environment than the attackers do. Look out for future posts…

 

Charles Leaver – Carbanak Case Study Part 3: How Ziften Continuous Endpoint Monitoring Would Have Identified The Indicators Of Compromise

Presented By Charles Leaver And Written By Dr Al Hartmann

Part 3 in a 3 part series

 

Below are excerpts of indicators of compromise (IoCs) from the technical reports on the Anunak/Carbanak APT attacks, with comments on their detection by the Ziften continuous endpoint monitoring solution. The Ziften system focuses on generic indicators of compromise that have remained consistent across years of attack activity and security experience. These generic IoCs can be identified on any operating system, including Linux, OS X, and Windows. Specific indicators of compromise also exist that point to C2 infrastructure or particular attack code instances, but these are short-lived and rarely reused in fresh attacks. There are billions of these artifacts in the security world, with thousands added every day. The generic IoCs are embedded in the Ziften security analytics for the supported operating systems, and the specific IoCs are supplied through the Ziften Knowledge Cloud from subscriptions to a variety of industry threat feeds and watchlists that aggregate them. Both have value and help triangulate attack activity.

1. Exposed Vulnerabilities

Excerpt: All observed cases used spear phishing emails with Microsoft Word 97–2003 (.doc) files attached or CPL files. The .doc files exploit both Microsoft Office (CVE-2012-0158 and CVE-2013-3906) and Microsoft Word (CVE-2014-1761).

Comment: Not really an IoC, but critical exposed vulnerabilities are a major avenue of hacker exploitation and a big red flag that raises the risk score (and the SIEM priority) for the endpoint, particularly if other indicators are also present. These vulnerabilities point to lax patch management and vulnerability lifecycle management, which weakens the overall cyber defense posture.
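
A minimal sketch of that risk contribution, with hypothetical patch identifiers and an illustrative weight per exposed CVE:

```python
# Hedged sketch: raise an endpoint's risk score for unremediated CVEs
# known to be exploited in the wild. Patch IDs below are hypothetical
# placeholders, and the 0.15 weight per exposed CVE is illustrative.
EXPLOITED_CVES = {"CVE-2012-0158", "CVE-2013-3906", "CVE-2014-1761"}

def vulnerability_risk(installed_patches, cve_to_patch):
    """cve_to_patch maps a CVE to the patch ID that remediates it."""
    missing = [cve for cve, patch in cve_to_patch.items()
               if cve in EXPLOITED_CVES and patch not in installed_patches]
    return len(missing) * 0.15, missing

risk, exposed = vulnerability_risk(
    installed_patches={"PATCH-A"},
    cve_to_patch={"CVE-2012-0158": "PATCH-A",    # hypothetical mapping
                  "CVE-2014-1761": "PATCH-B"})
print(risk, exposed)  # 0.15 ['CVE-2014-1761']
```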

2. Suspect Locations

Excerpt: Command and Control (C2) servers located in China have been identified in this campaign.

Comment: Geolocation of endpoint network touches and scoring by location both contribute to the risk score that drives up the SIEM priority. There are valid reasons for contact with Chinese servers, and some organizations may have facilities located in China, but this should be validated by spatial and temporal anomaly checking. IP address and domain details should be included with the resulting SIEM alarm so that SOC triage can be performed rapidly.
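
A simplified sketch of that geolocation scoring follows; the country lookup and weights are assumptions, and a real deployment would use a GeoIP database plus per-site baselines.

```python
# Illustrative sketch: weighting endpoint network touches by destination
# geolocation. The country lookup and weights are assumptions; a real
# deployment would use a GeoIP database and per-site baselines.
SUSPECT_COUNTRY_WEIGHT = {"CN": 0.30, "UA": 0.25, "RU": 0.25}

def geo_risk_alerts(connections, ip_to_country, approved=frozenset()):
    """connections: iterable of (endpoint, dest_ip). Returns alert
    records carrying IP and country context for rapid SOC triage."""
    alerts = []
    for endpoint, dest_ip in connections:
        country = ip_to_country.get(dest_ip, "??")
        if country in approved:      # e.g. the organization operates there
            continue
        weight = SUSPECT_COUNTRY_WEIGHT.get(country, 0.0)
        if weight:
            alerts.append({"endpoint": endpoint, "ip": dest_ip,
                           "country": country, "risk": weight})
    return alerts
```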

3. New Binaries

Excerpt: Once the remote code execution vulnerability is successfully exploited, it installs Carbanak on the victim’s system.

Comment: Any new binary is suspicious, but not all of them should be alerted. Image metadata should be analyzed for a consistent pattern, for example a new app, or a new version of an existing app from an existing vendor on a plausible file path for that vendor. Hackers will attempt to spoof whitelisted apps, so signing data can be compared along with file size, filepath, and similar attributes to filter out obvious impostors.
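
One way to express this vetting, as a sketch with an assumed vendor catalog shape:

```python
# Sketch of vetting a newly seen binary against what is already known
# about that vendor's software; catalog fields are illustrative.
def vet_new_binary(image, vendor_catalog):
    """image: dict with 'signer', 'path', 'size_kb'. vendor_catalog maps
    a signer to its usual install path prefix and plausible size range."""
    known = vendor_catalog.get(image["signer"])
    if known is None:
        return "unknown signer - elevate risk score"
    reasons = []
    if not image["path"].lower().startswith(known["path_prefix"].lower()):
        reasons.append("unexpected install path for this vendor")
    low, high = known["size_range_kb"]
    if not low <= image["size_kb"] <= high:
        reasons.append("size outside the vendor's normal range")
    return "; ".join(reasons) or "consistent with vendor history"
```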

4. Unusual Or Sensitive Filepaths

Excerpt: Carbanak copies itself into “%system32%\com” with the name “svchost.exe” with the file attributes: system, hidden and read-only.

Comment: Any write into the System32 filepath is suspicious, as it is a sensitive system directory, so it is immediately subjected to anomaly checking. A classic anomaly would be svchost.exe, a critical system process image, appearing in the unusual location of the com subdirectory.
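
The check itself is simple; below is a sketch with a small illustrative table of canonical image locations.

```python
# Minimal sketch of the check described above. ntpath parses Windows
# paths regardless of the analysis host's OS; the expected-location
# table is a small illustrative subset.
import ntpath

EXPECTED_LOCATION = {  # canonical homes of critical system images
    "svchost.exe":  r"c:\windows\system32",
    "lsass.exe":    r"c:\windows\system32",
    "explorer.exe": r"c:\windows",
}

def check_system_image_location(filepath):
    """Flag a critical system image name appearing outside its
    canonical directory, e.g. svchost.exe under ...\system32\com."""
    folder, name = ntpath.split(filepath.lower())
    expected = EXPECTED_LOCATION.get(name)
    if expected and folder != expected:
        return f"ALERT: {name} found in {folder}, expected {expected}"
    return None

print(check_system_image_location(r"C:\Windows\System32\com\svchost.exe"))
```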

5. New Autostarts Or Services

Excerpt: To guarantee that Carbanak has autorun privileges the malware creates a new service.

Comment: Any new autostart or service is common with malware and is always examined by the analytics. Anything of low prevalence is suspicious. If checking the image hash against industry watchlists shows it is unknown to the majority of antivirus engines, suspicion rises further.
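
A prevalence check of this kind can be sketched as follows; the data shapes and one-percent threshold are illustrative.

```python
# Sketch: prevalence scoring for newly registered services across the
# endpoint population; data shapes and threshold are illustrative.
def rare_new_services(service_events, population_size, threshold=0.01):
    """service_events: iterable of (endpoint, service_name, image_sha256).
    Returns hashes installed as services on under `threshold` of all
    endpoints - prime candidates for watchlist and AV-coverage checks."""
    endpoints_per_hash = {}
    for endpoint, _name, sha in service_events:
        endpoints_per_hash.setdefault(sha, set()).add(endpoint)
    return {sha: len(eps) / population_size
            for sha, eps in endpoints_per_hash.items()
            if len(eps) / population_size < threshold}
```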

6. Low Prevalence File In High Prevalence Folder

Excerpt: Carbanak creates a file with a random name and a .bin extension in %COMMON_APPDATA%\Mozilla where it saves commands to be executed.

Comment: This is a classic example of “one of these things is not like the other” that is simple for the security analytics to check in a continuous monitoring environment. This IoC is fully generic; it has nothing to do with which filename or which folder is created. Although the technical security report lists it as a specific IoC, it trivially generalizes beyond Carbanak to future attacks.
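
The genericized check might look like this sketch, flagging a low-prevalence file inside a high-prevalence folder; thresholds and record shapes are illustrative.

```python
# Generic "one of these things is not like the other" check: a folder
# present on nearly every endpoint containing a file seen on almost
# none. Record shapes and thresholds are illustrative.
def odd_files_in_common_folders(file_sightings, population_size,
                                folder_min=0.90, file_max=0.02):
    """file_sightings: iterable of (endpoint, folder, filename)."""
    folder_eps, file_eps = {}, {}
    for ep, folder, name in file_sightings:
        folder_eps.setdefault(folder, set()).add(ep)
        file_eps.setdefault((folder, name), set()).add(ep)
    flags = []
    for (folder, name), eps in file_eps.items():
        folder_prev = len(folder_eps[folder]) / population_size
        file_prev = len(eps) / population_size
        if folder_prev >= folder_min and file_prev <= file_max:
            flags.append((folder, name, file_prev))
    return flags
```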

7. Suspect Signer

Excerpt: In order to render the malware less suspicious, the most recent Carbanak samples are digitally signed.

Comment: Any suspect signer is treated as suspicious. In one case the signer provided only an anonymous gmail email address, which does not inspire confidence, so the risk score for that image is elevated. In other cases no email address is provided at all. Signers can be easily listed and a Pareto analysis performed to separate the more trusted from the less trusted signers. A less trusted signer discovered in a more sensitive directory is especially suspicious.
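
A sketch of that Pareto-style analysis, with illustrative cut-offs and directory list:

```python
# Sketch of the signer Pareto analysis: rank signers by how many images
# they sign across the fleet, then flag long-tail signers whose images
# sit in sensitive directories. Cut-offs and paths are illustrative.
from collections import Counter

SENSITIVE_PREFIXES = (r"c:\windows\system32", r"c:\program files")

def untrusted_signer_alerts(images, tail_fraction=0.05):
    """images: list of dicts with 'signer' and 'path' fields."""
    counts = Counter(img["signer"] for img in images)
    cutoff = max(1, int(len(images) * tail_fraction))
    rare_signers = {s for s, c in counts.items() if c <= cutoff}
    return [img for img in images
            if img["signer"] in rare_signers
            and img["path"].lower().startswith(SENSITIVE_PREFIXES)]
```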

8. Remote Administration Tools

Excerpt: There appears to be a preference for the Ammyy Admin remote administration tool for remote control. It is thought that the attackers used this remote administration tool because it is frequently whitelisted in the victims’ environments as a result of being used regularly by administrators.

Comment: Remote administration tools (RATs) always raise suspicion, even when whitelisted by the organization. Anomaly checking should determine whether each new remote admin tool is temporally and spatially consistent. RATs are subject to abuse: hackers prefer to use an organization's own RATs precisely to evade detection, so they should not be given a pass simply because they are whitelisted.

9. Remote Login Patterns

Excerpt: Logs for these tools show that they were accessed from two different IPs, most likely used by the attackers, and located in Ukraine and France.

Comment: Remote logins are always suspect, because all hackers are presumed to be remote. They also figure heavily in insider attacks, since the insider does not want to be identified at the system itself. Remote addresses and time-pattern anomalies should be checked; this should expose low-prevalence usage (relative to peer systems) plus any suspect locations.
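
A baseline comparison of this sort can be sketched as follows; per-endpoint baselines are assumed, and a real system would also baseline against peer groups.

```python
# Sketch: flag remote logins whose source address or hour of day falls
# outside the endpoint's established baseline. Shapes are illustrative;
# a real baseline would also compare against peer systems.
from datetime import datetime

def anomalous_logins(logins, baseline):
    """logins: iterable of (endpoint, source_ip, iso_timestamp).
    baseline: endpoint -> {"ips": set of usual sources,
                           "hours": set of usual login hours}."""
    alerts = []
    for endpoint, src, ts in logins:
        seen = baseline.get(endpoint, {"ips": set(), "hours": set()})
        hour = datetime.fromisoformat(ts).hour
        reasons = []
        if src not in seen["ips"]:
            reasons.append(f"new source {src}")
        if hour not in seen["hours"]:
            reasons.append(f"unusual hour {hour:02d}:00")
        if reasons:
            alerts.append((endpoint, ts, "; ".join(reasons)))
    return alerts
```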

10. Atypical IT Tools

Excerpt: We have also found traces of many different tools used by the attackers inside the victim’s network to gain control of additional systems, such as Metasploit, PsExec or Mimikatz.

Comment: As sensitive applications, IT tools should always be checked for anomalies, because many attackers subvert them for malicious purposes. Metasploit might legitimately be used by a penetration tester or vulnerability researcher, but such instances should be rare. This is a prime example where an unusual-observation report vetted by security staff would lead to corrective action. It also highlights how blanket whitelisting defeats the recognition of suspicious activity.
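
As a sketch, a vetting report for dual-use tool executions might be generated like this; the tool list and event shape are illustrative.

```python
# Sketch: every execution of a sensitive dual-use tool is reported for
# vetting, regardless of whitelisting. Tool list and event shape are
# illustrative.
DUAL_USE_TOOLS = {"psexec.exe", "mimikatz.exe", "msfconsole", "wce.exe"}

def vetting_report(process_events):
    """process_events: iterable of (endpoint, user, image_name).
    Returns one line per sensitive-tool execution for staff review."""
    return [f"{endpoint}: {user} ran {image} - verify authorized "
            f"(pentest / incident response) or escalate"
            for endpoint, user, image in process_events
            if image.lower() in DUAL_USE_TOOLS]
```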

 

Find Out Why Continuous Endpoint Monitoring Is So Effective In The Second Part Of The Carbanak Case Study – Charles Leaver

Presented By Charles Leaver And Written By Dr Al Hartmann

Part 2 in a 3 part series

 

Continuous Endpoint Monitoring Is Very Effective

 

Convicting and blocking malicious software before it can compromise an endpoint is fine. But this technique is largely ineffective against cyber attacks that have been pre-tested to evade that style of security. The real issue is that these stealthy attacks are conducted by skilled human adversaries, while conventional endpoint defense is an automated process carried out by endpoint security systems that rely largely on standard antivirus technology. Human intelligence is more creative and adaptable than machine intelligence and will outmaneuver automated defenses. In effect this is a Turing test, with automated defenses pitted against the intellect of a skilled human attacker. At the current time, artificial intelligence and machine learning are not advanced enough to fully automate cyber defense, so the human attacker will win, while those attacked are left counting their losses. We are not living in a science fiction world where machines can outthink humans, so do not assume that a security software suite will automatically take care of all your problems and prevent every attack and data loss.

The only real way to stop a determined human attacker is with a determined human defender. To engage your IT Security Operations Center (SOC) staff this way, they must have complete visibility of network and endpoint operations. That visibility will not come from conventional endpoint antivirus suites; they are designed to stay quiet except when convicting and quarantining malware. This conventional approach leaves the endpoints opaque to security personnel, and attackers exploit this endpoint opacity to conceal their attacks. The opacity extends backwards and forwards in time: your security staff do not know what was running across the endpoint population in the past, what is running right now, or what to expect in the future. If diligent security staff find clues that warrant a forensic look back to uncover attacker activity, your antivirus suite cannot help; it took no action at the time, so no events were recorded.

In contrast, continuous endpoint monitoring is always working: providing real-time visibility into endpoint operations, supporting forensic lookbacks to act on newly emerging attack evidence and discover indicators earlier, and establishing a baseline of normal operating patterns so it knows what to expect and can flag irregularities in the future. Beyond mere visibility, continuous endpoint monitoring provides informed visibility, applying behavioral analytics to identify operations that appear abnormal. Anomalies are continuously evaluated and aggregated by the analytics and reported to SOC staff through the organization's security information and event management (SIEM) system, flagging the most concerning suspicious issues for security staff attention and action. Continuous endpoint monitoring amplifies and scales human intelligence; it does not replace it. It is a bit like the old Sesame Street game: “One of these things is not like the other.”

A child can play this game. It is simple because the majority of items (high prevalence) resemble each other, while one or a small number (low prevalence) differ and stand out. The distinctive actions taken by cyber criminals have been quite consistent across decades of hacking. The Carbanak technical reports that listed the indicators of compromise are good examples of this, and are discussed in part three of this series. When continuous endpoint monitoring security analytics surface these patterns, it is simple to recognize something suspicious or unusual. Security staff can perform rapid triage on these unusual patterns and quickly reach a yes/no/maybe verdict that distinguishes unusual-but-known-good activity from malicious activity, or from activity that requires further tracking and deeper forensic investigation to confirm.

There is no way for a hacker to pre-test an attack when this defense is in place. Continuous endpoint monitoring has a non-deterministic risk analytics component (which flags suspect activity) along with a non-deterministic human element (which performs alert triage). Depending on current activities, the endpoint population mix, and the experience of the security staff, developing attack activity may or may not be revealed. That is the nature of cyber warfare, and there are no guarantees. But cyber defenders equipped with continuous endpoint monitoring analytics and visibility have an unfair advantage.

 

Why Continuous Endpoint Monitoring Is Preferred – The Carbanak Case Study Part One – Charles Leaver

Presented By Charles Leaver And Written By Dr Al Hartmann

 

Part 1 in a 3 part series

 

Carbanak APT Background Details

A billion-dollar bank raid, targeting more than a hundred banks across the world by a group of unknown cyber criminals, has been in the news. The attacks on the banks began in early 2014 and have been expanding across the globe. Most of the victims suffered infiltrations lasting several months across multiple endpoints before experiencing financial loss. Most victims had deployed security measures, including network and endpoint security systems, but these provided little warning or defense against the attacks.

A number of security firms have produced technical reports about the incidents, codenamed either Carbanak or Anunak, and these reports listed the indicators of compromise that were observed. The companies include:

Fox-IT from Holland
Group-IB from Russia
Kaspersky Lab from Russia

This post serves as a case study of the cyber attacks and addresses:

1. Why standard endpoint and network security was unable to detect and defend against the attacks.
2. Why continuous endpoint monitoring (as provided by the Ziften solution) would have given early warning of the endpoint attacks and triggered a response to prevent data loss.

Standard Endpoint Security And Network Security Are Inadequate

Built on a legacy security model that leans too heavily on blocking and prevention, standard endpoint and network security does not provide a balanced strategy of blocking, prevention, detection, and response. It would not be hard for any cyber criminal to pre-test attacks against a handful of conventional endpoint and network security products to be sure the attack would not be detected. Several of the attackers researched the security products in place at the victim organizations and became adept at breaking through undetected. The criminals understood that most of these security products react only at the moment of compromise and otherwise do nothing. What this means is that normal endpoint operation remains largely opaque to IT security staff, so malicious activity stays masked (having already been vetted by the attackers to avoid detection). After an initial breach, the attack can extend to higher-privilege users and more sensitive endpoints. This is easily achieved through credential theft, where no malware is needed, and conventional IT tools (whitelisted by the victim organization) can be driven by attacker-built scripts. Detectable malware is therefore never present on the endpoints, and no red flags are raised. Traditional endpoint security software is too reliant on looking for malware.

Standard network security can be manipulated in a similar way. Hackers test their network activities first to avoid detection by widely distributed IDS/IPS rules, and they carefully observe normal endpoint operation (on endpoints already compromised) to hide their network activity within typical transaction periods and normal traffic patterns. A new command-and-control infrastructure is created that does not appear on network address blacklists, at either the IP or domain level. There is little here to give the attackers away. However, more astute network behavioral analysis, particularly when correlated with endpoint context (discussed later in this series), can be far more effective.

It is not time to abandon hope. Would continuous endpoint monitoring (as offered by Ziften) have provided an early warning of the endpoint attacks, starting the process of stopping them and preventing data loss? Find out in part two.