
Neural program analyzers use (deep) neural networks to analyze programs in software engineering tasks: they take a program as input and predict some characteristics of it. Evaluating the robustness of such models is particularly important because it directly impacts the correctness of any analysis built on top of them. In this study, we propose a transformation-based testing framework for evaluating the robustness of state-of-the-art neural program analyzers.

Fig. 1. A neural program analyzer. Fig. 2. An overview of testing neural program analyzers.
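
In essence, the framework performs metamorphic testing: it applies a semantics-preserving transformation to a program and checks whether the model's prediction changes. A minimal sketch of this test oracle, assuming hypothetical `Model` and `Transformation` interfaces (not part of the actual tool):

```java
import java.util.List;

class RobustnessTester {
    interface Model { String predict(String program); }          // e.g., code2vec predicting a method name
    interface Transformation { String apply(String program); }   // a semantics-preserving rewrite

    // Counts programs whose prediction changes under the transformation;
    // since the transformation preserves semantics, every change is a robustness failure.
    static long countMispredictions(Model model, Transformation t, List<String> programs) {
        return programs.stream()
                .filter(p -> !model.predict(p).equals(model.predict(t.apply(p))))
                .count();
    }
}
```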

Motivating Example and Result:

Fig. 3. A misprediction in code2vec revealed by the loop transformation. Fig. 4. Results of evaluating code2vec on the java-small/validation/libgdx project.

Types of Transformations:

1) Variable Renaming:

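Renames a variable consistently at its declaration and at every use; identifier names do not affect program semantics, so behavior is preserved. A minimal hand-written sketch (identifiers are illustrative, not from the paper's figures):

```java
class VariableRenamingExample {
    // Before: the original method.
    int sum(int[] arr) {
        int total = 0;
        for (int v : arr) total += v;
        return total;
    }

    // After: the local variable `total` is renamed to `acc` everywhere.
    int sumAfter(int[] arr) {
        int acc = 0;
        for (int v : arr) acc += v;
        return acc;
    }
}
```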

2) Boolean Exchange:

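Flips the value of a boolean variable and negates all of its uses, leaving observable behavior unchanged. An illustrative sketch:

```java
class BooleanExchangeExample {
    // Before: `found` becomes true when the key is present.
    boolean contains(int[] arr, int key) {
        boolean found = false;
        for (int v : arr) if (v == key) found = true;
        return found;
    }

    // After: the boolean is inverted at every definition and use.
    boolean containsAfter(int[] arr, int key) {
        boolean missing = true;
        for (int v : arr) if (v == key) missing = false;
        return !missing;
    }
}
```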

3) Loop Exchange:

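Replaces a for loop with an equivalent while loop (or vice versa). An illustrative sketch:

```java
class LoopExchangeExample {
    // Before: a for loop.
    int factorial(int n) {
        int result = 1;
        for (int i = 2; i <= n; i++) result *= i;
        return result;
    }

    // After: the same iteration as a while loop.
    int factorialAfter(int n) {
        int result = 1;
        int i = 2;
        while (i <= n) {
            result *= i;
            i++;
        }
        return result;
    }
}
```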

4) Switch to If:

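Converts a switch statement into an equivalent if-else chain. An illustrative sketch:

```java
class SwitchToIfExample {
    // Before: a switch statement.
    String describe(int code) {
        switch (code) {
            case 0:  return "zero";
            case 1:  return "one";
            default: return "many";
        }
    }

    // After: the equivalent if-else chain.
    String describeAfter(int code) {
        if (code == 0) return "zero";
        else if (code == 1) return "one";
        else return "many";
    }
}
```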

5) Permute Statement:

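Swaps two adjacent statements that have no data or control dependency on each other, so the reordering cannot change the result. An illustrative sketch:

```java
class PermuteStatementExample {
    // Before: two independent declarations.
    int area(int w, int h) {
        int width = w + 1;
        int height = h + 1;
        return width * height;
    }

    // After: the independent statements are swapped.
    int areaAfter(int w, int h) {
        int height = h + 1;
        int width = w + 1;
        return width * height;
    }
}
```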

6) Reorder Condition:

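Swaps the operands of a comparison and mirrors the relational operator (e.g., `a < b` becomes `b > a`). An illustrative sketch:

```java
class ReorderConditionExample {
    // Before: the original comparison.
    int max(int a, int b) {
        if (a < b) return b;
        return a;
    }

    // After: operands swapped, operator mirrored.
    int maxAfter(int a, int b) {
        if (b > a) return b;
        return a;
    }
}
```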

7) Dead Code Insertion:

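Inserts statements that can never affect the program's output, such as an unused variable or an unreachable branch. An illustrative sketch:

```java
class DeadCodeInsertionExample {
    // Before: the original method.
    int square(int x) {
        return x * x;
    }

    // After: a dead store and an unreachable branch are inserted.
    int squareAfter(int x) {
        int unused = 0;          // never read
        if (false) unused++;     // never executed
        return x * x;
    }
}
```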

8) Log Statement Insertion:

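Inserts a logging or print statement, which leaves the computed result untouched. An illustrative sketch:

```java
class LogStatementInsertionExample {
    // Before: the original method.
    int twice(int x) {
        return 2 * x;
    }

    // After: a log statement is inserted; the return value is unchanged.
    int twiceAfter(int x) {
        System.out.println("twice called with x = " + x);
        return 2 * x;
    }
}
```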

9) Try Catch Insertion:

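Wraps existing statements in a try-catch block whose handler simply rethrows, preserving behavior. An illustrative sketch:

```java
class TryCatchInsertionExample {
    // Before: the original method.
    int half(int x) {
        return x / 2;
    }

    // After: the body is wrapped in a try-catch that rethrows.
    int halfAfter(int x) {
        try {
            return x / 2;
        } catch (RuntimeException e) {
            throw e;
        }
    }
}
```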


Citation:

Testing Neural Program Analyzers

@inproceedings{rabin2019tnpa,
  title={Testing Neural Program Analyzers},
  author={Rabin, Md Rafiqul Islam and Wang, Ke and Alipour, Mohammad Amin},
  booktitle={34th IEEE/ACM International Conference on Automated Software Engineering (Late Breaking Results-Track)},
  url={https://arxiv.org/abs/1908.10711},
  year={2019}
}

LBR References:

https://2019.ase-conferences.org/track/ase-2019-Late-Breaking-Results?track=ASE%20Late%20Breaking%20Results
https://2019.ase-conferences.org/details/ase-2019-Late-Breaking-Results/17/Testing-Neural-Programs