Certified Machine Learning-Based Malware Detection

Nicola Bena, Marco Anisetti, Gabriele Gianini, Claudio A. Ardagna.

Overview

This repository contains the code, dataset, and output files of the experimental evaluation presented in the paper above.

In a nutshell, we re-executed the training process as indicated in the original publication presenting the malware detector (link) and exported the trained model as a .h5 file. Here, we import such a model, choose the first 100 data points in the test set, and craft an evasion attack starting from these points, varying epsilon. Our results show that the model is highly vulnerable to such an attack. However, we note that the evasion attack is limited to perturbing extracted features, and it might be more difficult to carry it out in the real world.
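
For reference, importing the exported .h5 model is a single Keras call; the file name below is an assumption (see Code/notebook.ipynb for the actual path).

import tensorflow as tf

# Import the exported LSTM detector; the file name is an assumption.
model = tf.keras.models.load_model("model.h5")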

Environment and Installation

Experiments were executed on an Apple MacBook Pro featuring a 10-core Apple M1 Pro CPU, 32 GB of RAM, and macOS Sonoma 14.1.2. The instructions to prepare the environment are therefore applicable to this setting only.

First, create a conda environment using, e.g., miniforge.

conda create -n my-env python=3.11
conda activate my-env

We then install the necessary libraries using pip because, as of writing, there are some incompatibilities between the OS version and the packages we need to install.

pip install \
    adversarial-robustness-toolbox \
    numpy \
    pandas \
    scikit-learn \
    tensorflow \
    tensorflow-metal

The final step is to verify that the GPU is recognized.

import tensorflow as tf
print(tf.__version__)
print(tf.config.list_physical_devices())

The output should look something like:

2.15.0
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

If the GPU does not appear in the device list, there is an issue with the installation. Note that the code should work even without GPU support.

The library versions are pinned in requirements.txt, which can be used to install the libraries, although it is specific to macOS.
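
Assuming the repository has been cloned, the pinned versions can be installed in one step:

pip install -r requirements.txt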

Usage

The entire code is implemented as a Python notebook, Code/notebook.ipynb. The notebook includes detailed explanations of the process we followed. In summary, we proceeded as follows.

  1. We loaded the entire test set used to evaluate the LSTM model (provided in Dataset/test_set.npz as a numpy compressed file).
  2. We chose the first 100 data points of the test set whose label is 1 (i.e., malware).
  3. We loaded the LSTM model and evaluated its performance on the chosen malware data points, to make sure loading worked properly.
  4. We carried out an evasion attack using the Fast Gradient Method, varying epsilon. For each value of epsilon, we generated 100 adversarial data points starting from those chosen at step 2 and retrieved the predicted labels (a sketch of this step is shown after the list).
  5. We finally exported the generated data points and an additional file summarizing the results.
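
Steps 3 and 4 can be sketched with the Adversarial Robustness Toolbox (ART). The Fast Gradient Method perturbs each input in the direction of the sign of the loss gradient, scaled by epsilon. The following is a minimal sketch, not the notebook's exact code: the array names inside the .npz file, the model path, the two-class softmax output, and the epsilon values are assumptions.

import numpy as np
import tensorflow as tf
from art.estimators.classification import TensorFlowV2Classifier
from art.attacks.evasion import FastGradientMethod

# Load the test set shipped with this repository.
data = np.load("Dataset/test_set.npz")
x_test, y_test = data["x"], data["y"]  # array names inside the .npz are assumptions

# Keep the first 100 malware data points (label 1).
x_malware = x_test[y_test == 1][:100]

# Load the re-trained LSTM detector (the path is an assumption).
model = tf.keras.models.load_model("model.h5")

# Wrap the Keras model so that ART can compute gradients on it
# (assumes a two-class softmax output).
classifier = TensorFlowV2Classifier(
    model=model,
    nb_classes=2,
    input_shape=x_malware.shape[1:],
    loss_object=tf.keras.losses.SparseCategoricalCrossentropy(),
)

# Craft adversarial examples for increasing values of epsilon.
for eps in (0.01, 0.05, 0.1, 0.2):  # example values only
    attack = FastGradientMethod(estimator=classifier, eps=eps)
    x_adv = attack.generate(x=x_malware)
    preds = np.argmax(classifier.predict(x_adv), axis=1)
    print(f"eps={eps}: {np.sum(preds == 1)}/100 still classified as malware")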

Details on Output Files

The files we generated during the experiment are saved in the directory Output.

Each sub-directory refers to data points generated with a specific value of epsilon; each file within a sub-directory is a data point created during the evasion attack.

Finally, the file Output/adversarial_results.csv summarizes the retrieved results. In particular, for each value of epsilon, it reports

  • the model accuracy
  • the count of data points classified as malware
  • the count of data points misclassified as benign
  • the ratio of data points classified as malware to the total number of crafted data points (i.e., the count of data points classified as malware divided by 100).

Note: these measures are slightly redundant, but we decided to keep them.
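
The summary can be quickly inspected with pandas (installed above); this snippet makes no assumption on the column names, it simply prints the file content.

import pandas as pd

# Load and print the per-epsilon summary produced by the notebook.
results = pd.read_csv("Output/adversarial_results.csv")
print(results.to_string(index=False))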

About

N. Bena, M. Anisetti, G. Gianini, C. A. Ardagna, "Certified Machine Learning-Based Malware Detection"
