Machine Learning Is Welcome But These Consequences Are Not – Charles Leaver

April 5, 2017

Written by Roark Pollock and presented by Ziften CEO Charles Leaver

 

If you study history you will find numerous examples of extreme unintended consequences when new technology is introduced. It often surprises people that new technologies can serve nefarious purposes in addition to the beneficial purposes for which they are brought to market, but it happens regularly.

Consider train robbers using dynamite ("You think you used enough dynamite there, Butch?") or spammers using email. More recently, the use of SSL to hide malware from security controls has become more common precisely because the widespread legitimate use of SSL makes the technique more effective.

Because new technology is so often appropriated by bad actors, there is no reason to think the same will not be true of the new generation of machine learning tools now reaching the market.

How will these tools be misused? There are a couple of ways attackers could turn machine learning to their advantage. At a minimum, malware authors will test their new malware against the new class of advanced threat protection products in a bid to modify their code so that it is less likely to be flagged as malicious. The effectiveness of defensive security controls always has a half-life because of adversarial learning, and an understanding of machine learning defenses will help attackers be more proactive in degrading the effectiveness of machine learning based defenses. For example, an attacker could flood a network with fake traffic with the intention of "poisoning" the machine learning model being built from that traffic. The attacker's goal would be to trick the defender's machine learning tool into misclassifying traffic, or to generate so many false positives that the defenders dial back the fidelity of the alerts.
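To make the poisoning idea concrete, here is a minimal sketch using a toy two-feature traffic classifier built with scikit-learn. The features, labels, and model choice are illustrative assumptions for demonstration only, not a description of any particular vendor's detection product.

```python
# Minimal sketch (illustrative only): how "poisoning" training traffic can degrade
# a machine-learning detector. Features, labels, and model are assumed for the demo.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "network traffic" with 2 features (e.g., bytes/sec, new-connection rate).
# Benign traffic clusters low; malicious traffic clusters high.
benign = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(500, 2))
malicious = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(500, 2))
X_clean = np.vstack([benign, malicious])
y_clean = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

# The attacker floods the network with crafted traffic that looks malicious but
# does nothing harmful, so the defender's labeling process records it as benign,
# dragging the learned decision boundary toward the attacker's real traffic.
poison = rng.normal(loc=[3.5, 3.5], scale=0.5, size=(400, 2))
X_poisoned = np.vstack([X_clean, poison])
y_poisoned = np.append(y_clean, [0] * 400)

# Train one model on clean data and one on poisoned data.
clean_model = LogisticRegression().fit(X_clean, y_clean)
poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)

# Evaluate both against genuinely malicious traffic sent later.
attack_traffic = rng.normal(loc=[3.8, 3.8], scale=0.5, size=(200, 2))
print("Detection rate, clean model:   ",
      clean_model.predict(attack_traffic).mean())
print("Detection rate, poisoned model:",
      poisoned_model.predict(attack_traffic).mean())
```

With enough poison samples, the second model learns a boundary that lets most of the attacker's later traffic through, which is exactly the kind of degradation described above.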

Machine learning will likely also be used as an offensive tool by attackers. For example, some researchers predict that attackers will use machine learning techniques to hone their social engineering attacks (e.g., spear phishing). Automating the effort required to tailor a social engineering attack is especially troubling given how effective spear phishing already is, and the ability to mass-customize these attacks at low cost is a potent financial incentive for attackers to adopt the technique.

Expect breaches of this type that deliver ransomware payloads to increase sharply in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard element of defense-in-depth strategies, it is not a silver bullet. It should be understood that attackers are actively working on evasion techniques around machine learning based detection solutions while also using machine learning for their own offensive purposes. This arms race will increasingly require defenders to carry out incident response at machine speed, further sharpening the need for automated incident response capabilities.
