An algorithm, based on deep learning, for correcting user sessions in large-scale networked systems.
Three neural network topologies are proposed, named MLP, LSTM, and CNN (Conv.) after their fundamental structures. Each neural network is composed of an input structure, intermediate (also known as hidden) layers, and an output structure. Below, we provide more details on each proposed topology.
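The input/hidden/output structure shared by the three topologies can be sketched as a plain feed-forward pass. This is a minimal NumPy illustration, not the authors' code; layer sizes and activations below are assumptions:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through an MLP: input -> hidden layers -> output.

    `weights`/`biases` hold one matrix/vector per layer; ReLU is used on
    hidden layers and a sigmoid on the output (assumed activations).
    """
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(0.0, a @ W + b)          # hidden layer with ReLU
    W, b = weights[-1], biases[-1]
    return 1.0 / (1.0 + np.exp(-(a @ W + b)))   # sigmoid output in (0, 1)

# Example: 11 inputs -> 32 hidden units -> 1 output (illustrative sizes)
rng = np.random.default_rng(0)
ws = [rng.standard_normal((11, 32)), rng.standard_normal((32, 1))]
bs = [np.zeros(32), np.zeros(1)]
y = mlp_forward(rng.standard_normal(11), ws, bs)
```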
Figure: diagrams of the MLP, LSTM, and CNN topologies.
Figure: impact of the number of epochs on average error for the Dense (MLP) topology (arrangements A = 3, window width W = 11), the LSTM topology (arrangements A = 3, window width W = 11), and the Conv. topology (arrangements A = 8, squared window width W = H = 256).
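The window width W above is the length of the sliding window of snapshots fed to the network. Building such windows from a trace can be sketched as follows (a minimal illustration under the assumption of one value per snapshot; the function name is hypothetical):

```python
def sliding_windows(trace, w=11):
    """Split a trace into overlapping windows of width w (stride 1)."""
    return [trace[i:i + w] for i in range(len(trace) - w + 1)]

# 20 snapshots with W = 11 yield 10 overlapping windows
windows = sliding_windows(list(range(20)), w=11)
```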
Figure: parameter sensitivity of the Conv. topology with uniform probabilistic injected failure Fprob = 10%.
Figure: comparison of the MLP, LSTM (LS), and CNN topologies for probabilistic injected failure and monitoring injected failure.
Figure: comparison between the best neural network model (Convolutional) and the state-of-the-art probabilistic technique. Values obtained for probabilistic error injection and monitoring error injection.
Figure: impact, in terms of number (left) and duration (right) of sessions, for a trace (S1) failed (Fmon = 20) and regenerated using the proposed approach (topology = Conv., threshold α = 0.50, arrangements A = 8, squared window width W = H = 256) and the prior probabilistic-based approach (threshold α = 0.75).
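Regeneration with a threshold α can be sketched as binarizing the network's per-entry output probabilities: entries predicted with probability at least α are restored. This is a minimal NumPy illustration; the function name and array layout are assumptions, not the authors' code:

```python
import numpy as np

def regenerate(pred, alpha=0.50):
    """Binarize predicted presence probabilities with threshold alpha.

    Entries with probability >= alpha are kept as present (1),
    the rest are discarded (0).
    """
    return (np.asarray(pred) >= alpha).astype(np.uint8)

out = regenerate([0.1, 0.6, 0.5, 0.49], alpha=0.50)
# -> array([0, 1, 1, 0], dtype=uint8)
```

Raising α (e.g. the 0.75 used by the probabilistic baseline) makes regeneration more conservative, restoring fewer entries.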
Upgrade and update:

- sudo apt-get update
- sudo apt-get upgrade

Install the application and internal dependencies:

- git clone https://github.com/kayua/Regenerating-Datasets-With-Convolutional-Network
- pip install -r requirements.txt

Test the installation:

- python3 main.py -h
Run examples:

- python3 run_jnsm_mif.py -c lstm
- python3 main.py
- python3 run_mif.py -c lstm
- python3 main_mif.py
Arguments (run_TNSM.py):

-h, --help           Show this help message and exit
--append, -a         Append analysis results to the output logging file
--demo, -d           Demo mode (default=False)
--trials, -r         Number of trials (default=1)
--start_trials, -s   Start trials (default=0)
--skip_train, -t     Skip training of the machine learning model
--campaign, -c       Campaign [demo, mif, pif] (default=demo)
--verbosity, -v      Verbosity logging level (INFO=20, DEBUG=10)
--------------------------------------------------------------
Arguments (main.py):

-h, --help           Show this help message and exit
--snapshot_column    Snapshot column position (Default 1)
--peer_column        Peer column position (Default 2)
--window_length      Window length (Default 256)
--window_width       Window width (Default 256)
--number_blocks      Number of blocks (Default 32)
--topology           Neural topology (Default model_v1)
--verbosity          Verbosity (Default 20)
--epochs             Number of epochs (Default 120)
--metrics            Metrics (Default mse)
--loss               Loss function (Default mse)
--optimizer          Optimizer (Default adam)
--steps_per_epoch    Batch size (Default 32)
--threshold          Threshold (Default 0.75)
--seed               Seed (Default 0)
--learning_rate      Learning rate (Default 0.001)
--pif                Failure injection: PIF if 0 < x < 1, MIF if x > 1 (Default 0)
--duration           Duration
--input_file_swarm   Input swarm file (Default )
--save_file_samples  File to save samples (Default )
--load_samples_in    File to load input samples (Default )
--load_samples_out   File to load output samples (Default )
--save_model         File to save the model (Default models_saved/model)
--load_model         File to load the model (Default None)
--input_predict      Input file for prediction (Default )
--output_predict     Output file for prediction (Default )
--file_corrected     Corrected file for evaluation (Default )
--file_failed        Failed file for evaluation (Default )
--file_original      Original file for evaluation (Default )
--file_analyse_mode  Evaluation file open mode (Default +a)
--file_analyse       Evaluation file (Default results.txt)
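The --pif flag selects between probabilistic failure injection (PIF: each monitored entry dropped independently with probability 0 < x < 1) and monitoring failure injection (MIF: x > 1). The probabilistic case can be sketched as follows (a minimal NumPy illustration; the function name and binary snapshots-by-peers trace layout are assumptions):

```python
import numpy as np

def inject_probabilistic_failure(trace, p=0.10, seed=0):
    """Drop each monitored entry independently with probability p.

    `trace` is a binary matrix (snapshots x peers) where 1 marks a peer
    observed in a snapshot; the failed trace loses roughly p of the 1s.
    """
    rng = np.random.default_rng(seed)
    keep = rng.random(trace.shape) >= p     # keep each entry with prob 1 - p
    return trace * keep

original = np.ones((100, 50), dtype=np.uint8)   # toy fully-observed trace
failed = inject_probabilistic_failure(original, p=0.10)
```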
--------------------------------------------------------------
Full traces available at: https://github.com/ComputerNetworks-UFRGS/TraceCollection/tree/master/01_traces
Dependencies:

- matplotlib 3.4.1
- tensorflow 2.4.1
- tqdm 4.60.0
- numpy 1.18.5
- keras 2.4.3
- setuptools 45.2.0
- h5py 2.10.0
Figure: comparison between the MLP neural network and the state-of-the-art probabilistic technique. Values obtained for probabilistic injected failure and monitoring injected failure.
Figure: comparison between the LSTM neural network and the state-of-the-art probabilistic technique. Values obtained for probabilistic injected failure and monitoring injected failure.
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. We also received funding from the Rio Grande do Sul Research Foundation (FAPERGS) - Grant ARD 10/2020 and an NVIDIA Academic Hardware Grant.
@article{Paim2023,
author = {Paim, Kayuã Oleques and Quincozes, Vagner Ereno and Kreutz, Diego and Mansilha, Rodrigo Brandão and Cordeiro, Weverton},
title = {Regenerating Networked Systems’ Monitoring Traces Using Neural Networks},
journal = {Journal of Network and Systems Management},
year = {2023},
volume = {32},
number = {1},
pages = {16},
doi = {10.1007/s10922-023-09790-9},
url = {https://doi.org/10.1007/s10922-023-09790-9},
issn = {1573-7705},
}