
v0.2.0

@cavokz released this 12 Apr 16:48

Documentation

User interface

  • Added scripts/generate-alerts.sh.
    Generate events that will trigger the rules you want. Use it as a template.
  • Added scripts/generate-network-events.sh.
    Forget rules and alerts, let there be data! Use it as a template.
  • Improved robustness of the .ipynb files.
    You can play with the Jupyter notebooks with more freedom.

API server

  • Configure the rules' execution schedule.
    You'll get alerts in response to generated events much sooner (~30 secs) than the average rule interval would allow (~2.5 mins, at best).
  • Unified request body decoding.
    Less code to maintain.
  • Allow fetching rules from Kibana.
    You can use rules directly from your Kibana; see the sketch after this list.
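
A minimal Python sketch of pulling detection rules straight from Kibana's detection engine API with requests; the URL and credentials are placeholders for your own deployment, and this is an illustration rather than Geneve's own code:

```python
import requests

KIBANA_URL = "http://localhost:5601"  # placeholder

resp = requests.get(
    f"{KIBANA_URL}/api/detection_engine/rules/_find",
    params={"per_page": 100},
    headers={"kbn-xsrf": "true"},     # header Kibana expects on API calls
    auth=("elastic", "changeme"),     # placeholder credentials
)
resp.raise_for_status()

# Each rule carries, among other things, its rule_id, name and interval.
for rule in resp.json()["data"]:
    print(rule["rule_id"], rule["name"], rule.get("interval", "5m"))
```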

Core

  • Prevent double solver registration.
    In the future, when you will be able to create your own solvers, this will prevent annoying errors that are non-trivial to parse; see the registration sketch after this list.
  • Fix use of a variable without an associated value (IP generator).
  • User-prioritized document generation.
    The order of generated fields is dictated by their order in the query.
  • Incremental document generation.
    Generated fields are progressively added to the document, so the content of later fields may depend on the content of earlier ones (see the generation sketch after this list).
  • Add Autonomous System group solver.
    The AS organizations are totally fake, though.
  • Use Faker for geo info generation (see the geo sketch after this list).
  • Switch to per-group data generation.
    Fields in the same group are generated together, this will help later with the development of entities generation.
  • Make *.bytes fields non-negative 32-bit numbers.
  • Make utils.resource() able to cache downloaded files (see the caching sketch after this list).
  • Improved the PyPI index entry of Geneve.
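
As a rough illustration of the registration guard (not Geneve's actual registry code), a solver registry can simply refuse to accept the same name twice:

```python
# Toy registry: a duplicate registration fails immediately with a clear
# message instead of silently overriding the earlier solver.
_solvers = {}

def solver(name):
    def register(func):
        if name in _solvers:
            raise ValueError(f"duplicate solver registration: {name!r}")
        _solvers[name] = func
        return func
    return register

@solver("as")
def solve_as_group(fields):
    """Fill the Autonomous System group of fields (body omitted)."""

# A second `@solver("as")` definition would now raise at import time.
```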
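
The generation changes (user-prioritized order, incremental building, and the 32-bit *.bytes constraint) can be pictured with this toy sketch; the field names are ECS-style examples and the logic is only an illustration, not Geneve's real generator:

```python
import random

def generate_document(fields_in_query_order):
    doc = {}
    # Fields are generated in the order they appear in the query and are added
    # to the document right away, so later fields can read earlier values.
    for field in fields_in_query_order:
        if field.endswith(".bytes"):
            # *.bytes fields are non-negative 32-bit numbers
            doc[field] = random.randrange(0, 2**32)
        elif field == "source.address":
            # a later field reusing an earlier one (here: address mirrors ip)
            doc[field] = doc.get("source.ip", "0.0.0.0")
        elif field == "source.ip":
            doc[field] = ".".join(str(random.randrange(256)) for _ in range(4))
        else:
            doc[field] = random.random()
    return doc

print(generate_document(["source.ip", "source.address", "source.bytes"]))
```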
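
A minimal sketch of what geo info generation with Faker can look like; the ECS-style field names are chosen for illustration and this is not copied from Geneve:

```python
from faker import Faker

fake = Faker()

geo = {
    "source.geo.location": {
        "lat": float(fake.latitude()),
        "lon": float(fake.longitude()),
    },
    "source.geo.country_iso_code": fake.country_code(),
    "source.geo.city_name": fake.city(),
}
print(geo)
```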
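
The caching idea behind utils.resource() can be sketched as below; the cache location and function signature are assumptions made for illustration, not the actual helper:

```python
import hashlib
import urllib.request
from pathlib import Path

CACHE_DIR = Path.home() / ".cache" / "geneve"  # assumed location

def resource(url):
    """Download `url` once and reuse the local copy on later calls."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()
    if not cached.exists():
        with urllib.request.urlopen(url) as response:
            cached.write_bytes(response.read())
    return cached
```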

Testing

  • Added stacks 8.6 and 8.7 to the test drill.
  • Harmonize Geneve and Faker randomness.
    One source of randomness to rule them all, a must for reproducible tests; see the seeding sketch after this list.
  • Added helper ExpectJson for Geneve server testing.
    It makes test cases easier to maintain; a rough sketch of the idea follows this list.
  • Improved response body output when tests fail.
    It's easier to understand what's wrong in the received output when it differs from the expected one.
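
A minimal sketch of the single-seed approach; Geneve's actual wiring may differ:

```python
import random
from faker import Faker

SEED = 1234            # one fixed value, e.g. taken from the test configuration

random.seed(SEED)
Faker.seed(SEED)       # class-level seeding affects all Faker instances
fake = Faker()

# The same seed now reproduces both the stdlib and the Faker streams.
print(random.random(), fake.ipv4())
```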
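
Purely to illustrate the idea, here is a rough sketch of such a helper; the real ExpectJson interface in Geneve's test suite may differ:

```python
import json
import unittest

class ExpectJson:
    """Compare a response body to an expected JSON document, with a readable diff."""

    def __init__(self, expected):
        self.expected = expected

    def check(self, test_case: unittest.TestCase, response_body: str):
        actual = json.loads(response_body)
        test_case.assertEqual(self.expected, actual, msg=(
            "\nexpected:\n%s\nreceived:\n%s" % (
                json.dumps(self.expected, indent=2, sort_keys=True),
                json.dumps(actual, indent=2, sort_keys=True),
            )
        ))
```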