Mokav

Generating difference-exposing tests in Python.

This project introduces an approach for iterative test generation using large language models (LLMs). Instead of relying on traditional test generation techniques, it prompts an LLM to generate test cases that target the behavioral differences between a buggy program version and an accepted/patched version. Through an interactive conversation with the LLM, the approach feeds execution results back into the prompt and iterates until a fault-inducing test is found, i.e., an input on which the two versions behave differently.
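The core loop can be sketched as follows. This is a minimal illustration of the idea, not Mokav's actual implementation: the `query_llm` helper, the feedback format, and the timeout value are all hypothetical placeholders.

```python
import subprocess


def run(program_path: str, test_input: str) -> str:
    """Run one program version on a candidate input and capture its stdout."""
    result = subprocess.run(
        ["python", program_path],
        input=test_input,
        capture_output=True,
        text=True,
        timeout=5,  # guard against non-terminating candidates (illustrative value)
    )
    return result.stdout


def query_llm(buggy_path: str, patched_path: str, feedback: str) -> str:
    """Hypothetical placeholder: in practice this would send both program
    versions plus the feedback from the previous iteration to an LLM chat
    endpoint and parse a candidate test input from the reply."""
    raise NotImplementedError


def find_difference_exposing_test(buggy: str, patched: str, max_iterations: int = 10):
    """Iterate with the LLM until an input makes the two versions diverge."""
    feedback = ""
    for _ in range(max_iterations):
        test_input = query_llm(buggy, patched, feedback)
        out_buggy = run(buggy, test_input)
        out_patched = run(patched, test_input)
        if out_buggy != out_patched:
            # The two versions disagree: this input is a difference-exposing test.
            return test_input
        # Otherwise, report the identical behavior back to the LLM so the next
        # candidate can be steered toward the behavioral difference.
        feedback = (
            f"Input {test_input!r} produced the same output {out_buggy!r} "
            "on both versions; propose a different input."
        )
    return None
```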

Link to the modified Code4Bench dataset: https://github.com/ASSERT-KTH/C4B_APR
