Security Vulnerabilities Associated with Machine Learning


Machine learning has brought numerous benefits to our lives. From voice assistants to self-driving cars, it is transforming the way we interact with each other and with the world, and it is fast becoming one of the most important emerging technologies on the market, making mobile apps and other services smarter than before.

According to IBM, nearly 90% of the data in the world today was produced in the last two years, and on average about 2.5 quintillion bytes of data are created every day. This volume of data cannot be managed and processed by humans manually, and this is where machine learning comes into action.

The use of artificial intelligence and machine learning in the cybersecurity industry is on the rise. These technologies can deliver advanced insights that security teams use to detect cyber-crime promptly. However, rogue actors can also manipulate machine learning-based systems into producing inaccurate results, undermining their ability to protect information assets.

This is partly because machine learning is still at an early stage of development: the attack vectors are not yet well understood, and cyber defense strategies are equally immature. Since we cannot prevent every type of attack, understanding why and how these attacks work helps us shape a response strategy. Read the rest of the article for more insight into the topic.

Machine Learning Attacks

Attacks on ML models fall into different categories depending on the attacker's goal and the stage of the machine learning pipeline they target, i.e., training or production; these are sometimes described as attacks on algorithms and attacks on models, respectively.

The most common machine learning attacks are described as follows:

Evasion Attacks

These are the most common attacks encountered in adversarial settings during system operation. For instance, hackers and spammers attempt to evade detection by obfuscating the content of spam emails and malware code. In the evasion setting, malicious samples are modified at test time to escape detection.

An evasion attack can also be described as designing an input that looks perfectly reasonable to a human but is misclassified by the ML model. An example is altering a few pixels in a picture before uploading it so that the image recognition system mislabels it, even though a human still recognizes the image without difficulty.

The first examples of adversarial attacks were built on a publicly accessible database of handwritten digits. Researchers showed that small changes to the original picture of a digit were enough for it to be recognized as a different digit, and not just in a single case where the system confuses a 1 and a 7; misclassifications could be produced for essentially every source and target pair among the ten digits.

Further research demonstrated that small perturbations to an image can lead to misclassification; in one well-known example, an imperceptibly altered picture of a panda was confidently classified as a gibbon.
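To give a rough sense of how such a perturbation can be constructed, here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. The model, image tensor, label, and the epsilon budget are placeholders chosen for illustration, not taken from any specific system discussed above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (FGSM sketch).

    `model` is any classifier returning logits, `image` is a single input
    tensor in the shape the model expects, and `label` is the true class
    as a scalar long tensor.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                 # add a batch dimension
    loss = F.cross_entropy(logits, label.unsqueeze(0))
    loss.backward()

    # Step in the direction that increases the loss, then clamp to a valid
    # pixel range so the change stays visually negligible.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

An attacker would then submit the perturbed image to the target classifier; the evasion succeeds if the predicted class changes while the image still looks unchanged to a person.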

Poisoning Attacks

Poisoning attacks are conducted during the training stage and aim to threaten availability and integrity. Poisoning alters the training data set by inserting, editing, or removing data points in order to shift the target model's decision boundaries.

Machine learning algorithms are often re-trained on data collected during operation so they can adapt to changes in the underlying data distribution. For instance, intrusion detection systems are re-trained on samples collected while the network is running. In this situation, an attacker may poison the training data by carefully injecting crafted samples that compromise the entire learning process. Poisoning can therefore be regarded as adversarial contamination of the training data.

Examples of poisoning attacks against machine learning algorithms include learning in the presence of worst-case adversarial label flips in the training data; adversarially manipulated stop signs are another frequently cited example.
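A minimal sketch of the label-flip variant is shown below, assuming a plain NumPy label array. The flip fraction, target class, and seed are illustrative parameters, not values from any particular study.

```python
import numpy as np

def flip_labels(y_train, flip_fraction=0.05, target_class=0, seed=42):
    """Simulate a label-flipping poisoning attack on a copy of y_train.

    A small fraction of training labels is silently rewritten to
    `target_class`, which can shift the learned decision boundary once the
    model is (re)trained on the poisoned set.
    """
    rng = np.random.default_rng(seed)
    y_poisoned = y_train.copy()
    n_flips = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flips, replace=False)
    y_poisoned[idx] = target_class
    return y_poisoned, idx
```

Defenders can use the same kind of simulation to measure how sensitive their own training pipeline is to a small amount of contaminated data.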

Privacy Attacks

These attacks typically target a trained model or its training data rather than the training process itself. Their purpose is not to corrupt the model but to retrieve sensitive data. The attacker explores the system, such as a dataset or a trained model, to obtain information that can be useful later. Information about the training data can be recovered through membership inference, information about data attributes through attribute inference, and a model inversion attack can help extract particular data from the model itself.
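As a crude illustration of the membership-inference idea, the sketch below queries a model and treats an unusually confident prediction as evidence that a record was in the training set. The `predict_proba` callable and the confidence threshold are assumptions for illustration; practical attacks calibrate the decision rule, for example with shadow models.

```python
import numpy as np

def likely_training_member(predict_proba, record, threshold=0.95):
    """Heuristic membership-inference test against a queryable model.

    `predict_proba` is any callable returning class probabilities for a
    single record (e.g. a deployed model's prediction endpoint). Overfitted
    models tend to be more confident on data they were trained on, so a very
    high top-class probability hints that the record was a member.
    """
    probs = np.asarray(predict_proba(record))
    return float(probs.max()) >= threshold
```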

How Will They Attack?

Machine learning systems open new gateways for attacks that do not exist in conventional procedural programs. One such attack is the evasion attack, in which an attacker feeds the ML model inputs crafted to trigger mistakes. The data may look perfectly normal to a human, but the small variations can throw the machine learning algorithm completely off track.

These attacks occur at inference time and exploit the model in one of two ways. In the black-box case, the attacker knows nothing about the system's internal workings and instead finds vulnerabilities by probing it and spotting patterns in the outputs that betray the learning model. In the white-box case, the attacker has information about the model itself, obtained either directly or from untrusted actors in the data-processing pipeline.
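To illustrate the black-box case, the sketch below probes a prediction interface with small random perturbations and records which ones change the predicted label. The `query_model` function is a stand-in for whatever API the attacker can reach; no internal access is assumed, and the noise scale and probe count are illustrative.

```python
import numpy as np

def probe_black_box(query_model, sample, n_probes=100, noise_scale=0.02, seed=0):
    """Probe a black-box classifier for inputs that flip its prediction.

    `query_model(x)` is assumed to return only a predicted label, which is
    all a black-box attacker can observe. Perturbations that change the
    label reveal directions in which the decision boundary is close.
    """
    rng = np.random.default_rng(seed)
    base_label = query_model(sample)
    flips = []
    for _ in range(n_probes):
        noise = rng.normal(scale=noise_scale, size=sample.shape)
        candidate = sample + noise
        if query_model(candidate) != base_label:
            flips.append(candidate)
    return flips
```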

How to Defend Against Attacks on Machine Learning Systems

There are several approaches to defending against each type of attack discussed above. The following best practices can help security teams fight back against attacks on machine learning systems.

  • Ensure that you can fully trust any third party or vendor involved in training your model or supplying samples for it.
  • If training is done internally, develop a process for inspecting the training data for contamination.
  • Keep a ground-truth test set and evaluate your model against it after every training session; a significant shift in classifications relative to the original baseline indicates possible poisoning (see the sketch after this list).
  • Avoid real-time training and train offline instead. This not only gives you time to vet the data but also discourages attackers, since it cuts off the immediate feedback they would otherwise use to refine their attacks.

Defending against evasion attacks is harder, because trained models are never perfect and an attacker can always look for new ways to tilt the classifier in the desired direction. To mitigate these attacks:

  • Compress the model so that it presents a smoother decision surface, leaving less room for an attacker to manipulate it.
  • Train your model with as many of the adversarial examples an attacker might use as possible.
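As a concrete illustration of the ground-truth check mentioned above, the sketch below compares a freshly retrained model against a frozen baseline on a trusted, held-out test set. The tolerance value and the scikit-learn style `predict()` interface are assumptions; in practice the alert threshold should be set from the normal run-to-run variation of your own pipeline.

```python
import numpy as np

def poisoning_regression_check(baseline_preds, new_model, X_truth, y_truth,
                               tolerance=0.02):
    """Flag suspicious drift after retraining.

    `baseline_preds` are the previous model's predictions on a trusted
    ground-truth set (X_truth, y_truth) that is never used for training.
    `new_model` is the freshly retrained model exposing a predict() method.
    """
    new_preds = new_model.predict(X_truth)

    baseline_acc = np.mean(baseline_preds == y_truth)
    new_acc = np.mean(new_preds == y_truth)
    disagreement = np.mean(new_preds != baseline_preds)

    # Either a notable accuracy drop or a large disagreement with the
    # baseline warrants a manual review of the latest training batch.
    suspicious = (baseline_acc - new_acc) > tolerance or disagreement > tolerance
    return {
        "baseline_accuracy": float(baseline_acc),
        "new_accuracy": float(new_acc),
        "disagreement_rate": float(disagreement),
        "suspicious": bool(suspicious),
    }
```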

End Note

Since most security practitioners and researchers believe attackers can eventually get past machine learning-driven security, organizations must make their machine learning systems as robust as possible by adopting the best practices discussed above.

Organizations have managed to keep their conventional security systems safe from even the most determined attackers through appropriate security hygiene. The same care and focus are required for machine learning systems. By applying that focus, you will be able to enjoy the benefits of machine learning and put aside any lingering mistrust of these systems.

Author Bio:

Farwa Sajjad is a passionate journalist and tech blogger who writes about AI, big data, and cybersecurity. She currently covers virtual private network topics @privacycrypts. Follow her on Twitter.
