CIS Benchmark Evaluation

This repository is part of the paper Automated Implementation of Windows-related Security-Configuration Guides, presented at the 35th IEEE/ACM International Conference on Automated Software Engineering. It contains our evaluation data for 12 benchmarks published by the Center for Internet Security (CIS). For each benchmark:

  1. We set up a clean VM with the OS under test. If the tested software was not an OS, we installed it on a Windows 10 instance.
  2. We installed the CIS-CAT tool on the VM.
  3. We executed the OVAL checks of the tested CIS benchmark with the CIS-CAT tool on the VM. The results are stored as before.html, e.g., for Windows 10.
  4. Next, we executed the automatic remediation of the benchmark using our generated scripts (a hypothetical sketch of such a step follows this list).
  5. Finally, we reran the OVAL checks. This time, the results are stored as after.html, e.g., for Windows 10.
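The generated remediation scripts themselves are not reproduced in this section, but a single step typically amounts to enforcing one registry-backed policy setting. The following PowerShell sketch illustrates what such a step might look like; the chosen rule, registry path, and value are illustrative assumptions for this example, not taken verbatim from the repository's scripts.

```powershell
# Hypothetical sketch of one remediation step, in the style of a generated
# Windows-hardening script. The rule (disable convenience PIN sign-in) and
# its registry location are illustrative assumptions for this example.

$Path  = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\System'
$Name  = 'AllowDomainPINLogon'
$Value = 0   # 0 = convenience PIN sign-in disabled

# Create the policy key if it does not exist yet, then set the DWORD value.
if (-not (Test-Path $Path)) {
    New-Item -Path $Path -Force | Out-Null
}
Set-ItemProperty -Path $Path -Name $Name -Value $Value -Type DWord

Write-Output "Set $Path\$Name to $Value"
```

Rerunning the benchmark's OVAL checks after such steps is what produces the after.html reports that can be compared against before.html.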

You can download the benchmarks as PDFs from the CIS website.

If you have any questions, please create an issue or contact Patrick Stöckle.

