Monthly Archives: April 2017

Second Part Of Why Edit Distance Is A Critical Detection Tool – Charles Leaver

Written By Jesse Sampson And Presented By Charles Leaver, CEO of Ziften

 

In the first post on edit distance, we looked at hunting for malicious executables with edit distance (i.e., how many character edits it takes to make two text strings match). Now let’s look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain features to pinpoint suspicious activity.
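As a quick illustration of the metric itself, here is a minimal sketch using Postgres’s levenshtein function from the fuzzystrmatch extension (the Vertica and Oracle equivalents are noted in the code section at the end of this post):

```sql
-- levenshtein() counts the single-character insertions, deletions, and
-- substitutions needed to turn one string into the other.
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;

SELECT levenshtein('google.com', 'gooogle.com') AS typo_dist,   -- 1: a single inserted 'o'
       levenshtein('google.com', 'qzxvbn.com')  AS random_dist; -- 6: six substitutions
```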

Background

What are bad actors doing with malicious domains? It may be as simple as using a near-miss spelling of a popular domain to trick careless users into viewing ads or picking up adware. Legitimate sites are gradually catching on to this technique, sometimes called typosquatting.

Other malicious domains are the product of domain generation algorithms (DGAs), which can be used for all kinds of nefarious purposes, such as evading countermeasures that block known compromised sites, or overwhelming domain name servers in a distributed denial-of-service attack. Older variants use randomly generated strings, while more advanced ones add techniques like injecting common words, further confusing defenders.

Edit distance can help with both use cases; let’s see how. First, we’ll exclude common domain names, since these are typically safe, and a list of popular domain names also provides a baseline for finding anomalies. One good source is Quantcast. For this discussion, we will stick to domains and skip subdomains (e.g., ziften.com, not www.ziften.com).
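Here is a rough sketch of that cleaning step. The table and column names are hypothetical: observed_fqdns stands in for the raw observations, and quantcast_top for the baseline list.

```sql
-- Reduce each observed FQDN to its registered domain and drop well-known
-- names. The two-label regex is a simplification: multi-label TLDs such
-- as .co.uk would need a public-suffix list.
CREATE TABLE candidate_domains AS
SELECT DISTINCT
       REGEXP_SUBSTR(fqdn, '[^.]+\.[^.]+$') AS domain,  -- e.g. ziften.com
       REGEXP_SUBSTR(fqdn, '[^.]+$')        AS tld      -- e.g. com
FROM observed_fqdns
WHERE REGEXP_SUBSTR(fqdn, '[^.]+\.[^.]+$') NOT IN
      (SELECT domain FROM quantcast_top);
```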

After data cleaning, we compare each candidate domain (input data observed in the wild by Ziften) to its possible neighbors in the same top-level domain (the last part of a domain name: classically .com, .org, and so on, but now practically anything). The basic task is to find the nearest neighbor in terms of edit distance. By finding domains that are one step away from their nearest neighbor, we can easily spot typo-ed domains. By finding domains far from their nearest neighbor (the normalized edit distance we introduced in the first post is useful here), we can also spot anomalous domains in edit distance space.

What Were the Results?

Let’s look at how these results appear in practice. Be careful when navigating to these domains, since they may contain malicious content!

Here are a few potential typos. Typosquatters target popular domains, since there are more chances someone will visit them. Several of these are flagged as suspect by our threat feed partners, but there are some false positives too, with cute names like “wikipedal”.

[Image: table of potential typo domains and their nearest neighbors]

Here are some strange-looking domains that sit far from their nearest neighbors.

[Image: table of anomalous domains far from their nearest neighbors]

So now we have created two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine learning model: rank of the nearest neighbor, distance from the nearest neighbor, and edit distance 1 from the nearest neighbor, indicating a risk of typo tricks. Other features that might combine well with these include other lexical features such as word and n-gram distributions, entropy, and string length, as well as network features such as the total count of failed DNS requests.
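As a sketch of how those three features could be derived, assuming the nearest-neighbor results from the query in the next section have been saved to a hypothetical nearest_neighbors table:

```sql
-- Hypothetical input: nearest_neighbors(domain, neighbor, neighbor_rank, dist)
SELECT domain,
       neighbor_rank,                            -- feature 1: rank of the nearest neighbor
       1.0 * dist / GREATEST(LENGTH(domain),
                             LENGTH(neighbor))   -- feature 2: normalized distance (one common
           AS norm_dist,                         --   normalization; the first post's exact formula may differ)
       CASE WHEN dist = 1 THEN 1 ELSE 0 END      -- feature 3: one edit away, the typosquatting signal
           AS is_possible_typo
FROM nearest_neighbors;
```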

Simplified Code You Can Experiment With

Here is a simplified version of the code to play with! It was developed on HP Vertica, but this SQL should work on most modern databases. Note that the Vertica editDistance function may differ in other implementations (e.g., levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).

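The original query was shared as an image, so what follows is a reconstructed sketch of the idea rather than Ziften’s exact code. It assumes the candidate_domains table from the cleaning step above, plus a hypothetical baseline_domains(domain, tld, rank) table built from the Quantcast list; swap editDistance for levenshtein or UTL_MATCH.EDIT_DISTANCE as needed.

```sql
-- For each candidate, keep only its nearest baseline neighbor within
-- the same top-level domain.
CREATE TABLE nearest_neighbors AS
SELECT domain, neighbor, neighbor_rank, dist
FROM (
    SELECT c.domain,
           b.domain AS neighbor,
           b.rank   AS neighbor_rank,
           editDistance(c.domain, b.domain) AS dist,
           ROW_NUMBER() OVER (PARTITION BY c.domain
                              ORDER BY editDistance(c.domain, b.domain)) AS rn
    FROM candidate_domains c
    JOIN baseline_domains b
      ON c.tld = b.tld
     AND c.domain <> b.domain
) ranked
WHERE rn = 1;

-- dist = 1 surfaces likely typosquats; sorting by normalized distance
-- descending (see the feature query above) surfaces the anomalous,
-- DGA-like names.
SELECT * FROM nearest_neighbors WHERE dist = 1 ORDER BY neighbor_rank;
```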

An Unmanaged Infrastructure Cannot Be Fully Secure, And The Reverse Is True – Charles Leaver

Written by Charles Leaver, Ziften CEO

 

If your enterprise computing environment is not properly managed, there is no way it can be fully secure. And you can’t effectively manage those complex enterprise systems unless you are confident that they are secure.

Some might call this a chicken-and-egg situation, where you do not know where to start. Should you begin with security? Or should you begin with systems management? That’s the wrong approach. Think of it instead like Reese’s Peanut Butter Cups: it’s not chocolate first, and it’s not peanut butter first. Instead, both are blended together and treated as a single delicious treat.

Many companies, I would argue too many companies, are structured with an IT management team reporting to a CIO and a security management team reporting to a CISO. The CIO team and the CISO team barely know each other, talk to each other only when absolutely necessary, have separate budgets, certainly have separate priorities, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, a problem, or an alert for one team flies completely under the other team’s radar.

That’s not good, since both the IT and security teams are forced to make assumptions. The IT team assumes that all assets are secure unless somebody tells them otherwise. For example, they assume that devices and applications have not been compromised, that users have not escalated their privileges, and so on. Similarly, the security team assumes that the servers, desktops, and mobile devices are working properly, that operating systems and applications are fully up to date, that patches have been applied, and so on.

Because the CIO and CISO teams aren’t talking to each other, don’t understand each other’s roles and concerns, and aren’t using the same tools, those assumptions may not be correct.

And again, you cannot have a secure environment unless that environment is properly managed, and you cannot manage that environment unless it’s secure. Put another way: an environment that is not secure makes anything the IT team does suspect and irrelevant, and means you cannot know whether the information you are seeing is accurate or manipulated. It may all be fake news.

Bridging the IT / Security Gap

How do you bridge that gap? It sounds easy, but it can be hard: ensure that there is an umbrella covering both the IT and security teams. Both IT and security should report to the same person or structure somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument, let’s say it’s the CFO.

If the company doesn’t have a secure environment and there’s a breach, the value of the brand and the company may be reduced to nothing. Likewise, if the users, devices, infrastructure, applications, and data aren’t managed well, the business can’t work effectively, and the value drops. As we’ve discussed, if it’s not properly managed, it can’t be secured, and if it’s not secure, it can’t be well managed.

The fiduciary duty of senior executives (like the CFO) is to protect the value of company assets, which means making sure IT and security talk to each other, understand each other’s goals, and, where possible, see the same reports and data, filtered and presented to be meaningful to their specific areas of responsibility.

That’s the thinking we adopted in developing our Zenith platform. It’s not a security management tool with IT capabilities, and it’s not an IT management tool with security capabilities. No, it’s a Peanut Butter Cup, designed equally around chocolate and peanut butter. To be less confectionery about it: Zenith is an umbrella that gives IT teams what they need to do their jobs, and gives security teams what they need as well, without coverage gaps that could undermine assumptions about the state of enterprise security and IT management.

We need to ensure that our company’s IT infrastructure is built on a secure foundation, and that our security is implemented on a well-managed base of hardware, infrastructure, software, and users. Otherwise we cannot operate at peak efficiency or with full fiduciary responsibility.

Offline Activity Needs To Be A Part Of Your Endpoint Visibility Strategy – Charles Leaver

Written By Roark Pollock And Presented By Charles Leaver, Ziften CEO

 

A survey recently completed by Gallup found that 43% of employed Americans worked remotely for at least some of their time in 2016. Gallup, which has been surveying telecommuting trends in the United States for nearly a decade, continues to see more employees working outside traditional offices, and an increasing number of them doing so for more days of the week. And, naturally, the number of connected devices that the average employee uses has grown as well, which reinforces the convenience of, and the desire for, working away from the office.

This freedom undoubtedly makes for happier, and it is hoped more productive, employees, but the complications these trends present for both systems and security operations teams should not be dismissed. IT systems management, IT asset discovery, and threat detection and response functions all benefit from real-time and historical visibility into user, device, application, and network connection activity. And to be truly effective, endpoint visibility and monitoring must work regardless of where the user and device are operating, be it on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working trends are increasingly leaving security and operations teams blind to potential issues and threats.

The mainstreaming of these trends makes it much harder for IT and security teams to restrict what used to be considered higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today systems management and security teams need to be able to adequately track user, device, application, and network activity, detect anomalies and inappropriate actions, and enforce appropriate actions or fixes regardless of whether an endpoint is locally connected, remotely connected, or disconnected.

In addition, the fact that many employees now routinely access cloud-based assets and applications, and keep backup network-attached storage (NAS) or USB-connected drives at home, further amplifies the need for endpoint visibility. Endpoint controls often provide the only record of remote activity that no longer necessarily terminates on the organization’s network. Offline activity provides the most extreme example of the need for continuous endpoint monitoring. Clearly, network controls and network monitoring are of negligible use when a device is operating offline. Installing a suitable endpoint agent is essential to ensure the capture of critical system and security data.

As an example of the kind of offline activity that can be spotted, a customer was recently able to track, flag, and report unusual behavior on a corporate laptop. A high-level executive transferred substantial amounts of endpoint data to an unauthorized USB drive while the device was offline. Because the endpoint agent was able to collect this behavioral data during the offline period, the customer was able to see this unusual action and follow up appropriately. Continuous monitoring of the device, applications, and user behavior, even while the endpoint was disconnected, gave the customer visibility they had never had before.

Does your company have continuous monitoring and visibility when employee endpoints are not connected? If so, how do you achieve it?

Machine Learning Is Welcome But These Consequences Are Not – Charles Leaver

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver

 

If you study history, you will find many examples of serious unintended consequences arising when new technology is introduced. It often surprises people that new technologies can be put to nefarious purposes in addition to the positive purposes for which they are brought to market, but it happens on a regular basis.

For instance, train robbers using dynamite (“Think you used enough dynamite there, Butch?”) or spammers using email. More recently, the use of SSL to conceal malware from security controls has become more common simply because the legitimate use of SSL has made the technique more effective.

Because new technology is so often appropriated by bad actors, we have no reason to think this will not be true of the new generation of machine learning tools now reaching the market.

To what extent will these tools be misused? There are probably a few ways in which attackers might use machine learning to their advantage. At a minimum, malware authors will test their new malware against the new class of advanced threat protection products in a bid to tune their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life due to adversarial learning. An understanding of machine learning defenses will help attackers become more proactive in reducing the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with bogus traffic with the intention of “poisoning” the machine learning model being built from that traffic. The attacker’s goal would be to fool the defender’s machine learning tool into misclassifying traffic, or to generate such a high rate of false positives that the defenders would dial back the sensitivity of their alerts.

Machine learning will also likely be used as an attack tool. For example, some researchers predict that attackers will use machine learning techniques to hone their social engineering attacks (e.g., spear phishing). Automating the effort required to tailor a social engineering attack is especially troubling given the effectiveness of spear phishing. The ability to automate mass customization of these attacks is a potent economic incentive for attackers to adopt the techniques.

Expect breaches of this type that deliver ransomware payloads to increase sharply in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard element of defense-in-depth strategies, it is not a magic bullet. It should be understood that attackers are actively working on evasion techniques around machine learning based detection solutions, while also using machine learning for their own offensive purposes. This arms race will require defenders to increasingly carry out incident response at machine speed, further sharpening the need for automated incident response capabilities.