Written by Roark Pollock and Presented by Ziften CEO Charles Leaver
According to Gartner, the public cloud services market surpassed $208 billion last year (2016), a rise of about 17% year over year. Not bad when you consider the ongoing concerns most cloud consumers still have about data security. Another particularly intriguing Gartner finding is that cloud consumers typically contract services from several public cloud providers.
According to Gartner, “most companies are currently using a mix of cloud services from various cloud providers”. While the business reasoning for using multiple providers is sound (e.g., avoiding vendor lock-in), the practice does create extra complexity in monitoring activity across a company’s increasingly dispersed IT landscape.
While some providers offer better visibility than others (for example, AWS CloudTrail can monitor API calls across the AWS infrastructure), companies have to understand and address the visibility gaps that come with moving to the cloud, regardless of which cloud service providers they work with.
Unfortunately, the ability to monitor application activity, user activity, and network interactions from each VM or endpoint in the cloud is limited.
Regardless of where computing resources live, organizations must answer the question “Which users, machines, and applications are communicating with each other?” Organizations require visibility across the infrastructure in order to:
- Rapidly identify and prioritize issues
- Speed root cause analysis and identification
- Lower the mean time to resolution for end users
- Rapidly identify and remove security threats, decreasing overall dwell times
Conversely, poor visibility, or poor access to visibility data, can decrease the effectiveness of existing security and management tools.
Organizations accustomed to the ease, maturity, and relatively low cost of monitoring physical data centers are going to be disappointed with their public cloud options.
What has been missing is a simple, common, and elegant solution like NetFlow for public cloud infrastructure.
NetFlow, of course, has had twenty years or so to become a de facto standard for network visibility. A typical deployment involves monitoring traffic and aggregating flows at network choke points, retrieving and storing flow data from multiple collection points, and analyzing that flow data.
Flows consist of a standard set of source and destination IP addresses plus port and protocol information, typically collected from a switch or router. NetFlow data is relatively cheap and easy to collect, provides nearly ubiquitous network visibility, and enables actionable analysis for both network monitoring and performance management applications.
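To make the flow concept concrete, here is a minimal sketch (in Python, with illustrative field names, not any vendor's actual schema) of the classic 5-tuple flow key and how per-packet observations get aggregated into per-flow packet and byte counters:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical sketch of a classic NetFlow-style flow key: the 5-tuple of
# source/destination IP, source/destination port, and protocol.
@dataclass(frozen=True)
class FlowKey:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str  # e.g. "tcp" or "udp"

def aggregate_flows(packets):
    """Aggregate per-packet observations into per-flow packet/byte totals."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = FlowKey(pkt["src_ip"], pkt["dst_ip"],
                      pkt["src_port"], pkt["dst_port"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["size"]
    return dict(flows)

# Two packets on the same connection collapse into a single flow record.
packets = [
    {"src_ip": "10.0.0.5", "dst_ip": "52.1.2.3", "src_port": 44321,
     "dst_port": 443, "proto": "tcp", "size": 1500},
    {"src_ip": "10.0.0.5", "dst_ip": "52.1.2.3", "src_port": 44321,
     "dst_port": 443, "proto": "tcp", "size": 600},
]
flows = aggregate_flows(packets)
```

This compactness is exactly why flow data is cheap to collect and store: many packets reduce to one small record per conversation.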
Most IT personnel, particularly networking and some security teams, are extremely comfortable with the technology.
But NetFlow was created to solve what has become a rather limited problem: it only gathers network data, and only at a limited number of possible locations.
To make better use of NetFlow, two key changes are necessary.
NetFlow at the Edge: First, we need to expand the useful deployment scenarios for NetFlow. Instead of collecting NetFlow only at network choke points, let’s extend flow collection to the network edge (clients, cloud, and servers). This would considerably broaden the big picture that any NetFlow analytics provide.
This would enable companies to augment and leverage existing NetFlow analytics tools to eliminate the growing visibility blind spot around public cloud activity.
Rich, Contextual NetFlow: Second, we need to use NetFlow for more than simple network visibility.
Instead, let’s use an extended version of NetFlow that includes information on the device, application, user, and binary responsible for each tracked network connection. That would allow us to quickly trace every network connection back to its source.
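A context-enriched record of this kind might look like the following sketch: the classic 5-tuple plus endpoint attribution fields. The field names here are illustrative assumptions for the sake of example, not an actual ZFlow or IPFIX schema:

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch of a context-enriched flow record: the standard
# 5-tuple extended with the endpoint attributes that let an analyst
# trace a connection back to its source.
@dataclass(frozen=True)
class EnrichedFlow:
    # Classic NetFlow 5-tuple
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    # Endpoint context (illustrative field names)
    device: str        # hostname or VM/container identifier
    user: str          # account that owns the connecting process
    application: str   # process/application name
    binary_hash: str   # e.g. a SHA-256 of the responsible executable

flow = EnrichedFlow("10.0.0.5", "52.1.2.3", 44321, 443, "tcp",
                    device="web-vm-01", user="svc-web",
                    application="nginx",
                    binary_hash="ab12cd34...")
record = asdict(flow)  # ready to export alongside standard flow fields
```

With the device, user, application, and binary attached to each flow, a suspicious connection is no longer just a pair of IP addresses; it points directly at the machine, account, and executable responsible.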
In fact, these two modifications to NetFlow are exactly what Ziften has achieved with ZFlow. ZFlow provides an expanded version of NetFlow that can be deployed at the network edge, including as part of a container or VM image, and the resulting data can be consumed and analyzed with existing NetFlow tools. Beyond traditional NetFlow / IPFIX (Internet Protocol Flow Information eXport) network visibility, ZFlow provides extended visibility by adding details on the user, device, application, and binary for every network connection.
Ultimately, this enables Ziften ZFlow to provide end-to-end visibility between any two endpoints, physical or virtual, eliminating conventional blind spots such as East-West traffic in data centers and enterprise cloud deployments.