Neural program analyzers use (deep) neural networks to analyze programs in software engineering tasks: they take a program as input and predict some of its properties. Evaluating the robustness of neural models that process source code is particularly important, because their robustness directly affects the correctness of the analyses built on top of them. In this study, we propose a transformation-based testing framework to evaluate the robustness of state-of-the-art neural program analyzers.
Fig. 1. Neural Program Analyzer.
Fig. 2. An overview of testing neural program analyzers.
Fig. 3. A misprediction in code2vec revealed by the loop transformation.
Fig. 4. Results of evaluating code2vec on the java-small/validation/libgdx project.
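The core idea can be sketched as a metamorphic test: apply a semantics-preserving transformation (here, rewriting a simple `for`-over-`range` loop into an equivalent `while` loop) and check that the model's prediction is unchanged. This is a minimal illustration in Python, whereas the paper applies such transformations to Java programs; the `predict` call shown in the comments is a hypothetical stand-in for a model such as code2vec.

```python
import ast

class ForToWhile(ast.NodeTransformer):
    """Rewrite `for i in range(n): ...` into an equivalent while loop."""
    def visit_For(self, node):
        self.generic_visit(node)
        # Only handle the simple `for <name> in range(<expr>)` pattern.
        if (isinstance(node.iter, ast.Call)
                and isinstance(node.iter.func, ast.Name)
                and node.iter.func.id == "range"
                and len(node.iter.args) == 1
                and isinstance(node.target, ast.Name)):
            i = node.target.id
            init = ast.parse(f"{i} = 0").body[0]
            loop = ast.While(
                test=ast.Compare(
                    left=ast.Name(id=i, ctx=ast.Load()),
                    ops=[ast.Lt()],
                    comparators=[node.iter.args[0]],
                ),
                body=node.body + ast.parse(f"{i} += 1").body,
                orelse=[],
            )
            return [init, loop]  # replace the for-loop with init + while
        return node

def transform(src: str) -> str:
    """Return a semantically equivalent variant of `src` (Python 3.9+)."""
    tree = ast.fix_missing_locations(ForToWhile().visit(ast.parse(src)))
    return ast.unparse(tree)

original = (
    "def total(n):\n"
    "    s = 0\n"
    "    for i in range(n):\n"
    "        s += i\n"
    "    return s\n"
)
variant = transform(original)

# Sanity check: both versions must behave identically...
env_a, env_b = {}, {}
exec(original, env_a)
exec(variant, env_b)
assert env_a["total"](10) == env_b["total"](10)

# ...so a robust neural analyzer should predict the same label for both:
# predict = load_model(...)              # hypothetical model under test
# assert predict(original) == predict(variant)
```

A disagreement between the two predictions flags a robustness failure in the model, as in the code2vec misprediction shown in Fig. 3.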
Testing Neural Program Analyzers
@inproceedings{rabin2019tnpa,
  title={Testing Neural Program Analyzers},
  author={Rabin, Md Rafiqul Islam and Wang, Ke and Alipour, Mohammad Amin},
  booktitle={34th IEEE/ACM International Conference on Automated Software Engineering (Late Breaking Results Track)},
  url={https://arxiv.org/abs/1908.10711},
  year={2019}
}
• https://2019.ase-conferences.org/track/ase-2019-Late-Breaking-Results?track=ASE%20Late%20Breaking%20Results
• https://2019.ase-conferences.org/details/ase-2019-Late-Breaking-Results/17/Testing-Neural-Programs