
Commit

Merge branch 'release-0.27.2'
k0retux committed Apr 2, 2019
2 parents e89f255 + 4f5fed8 commit fd2c4d0
Showing 18 changed files with 532 additions and 276 deletions.
14 changes: 5 additions & 9 deletions data_models/tutorial/tuto_strategy.py
@@ -8,7 +8,6 @@
 tactics = Tactics()

 def cbk_transition1(env, current_step, next_step, feedback):
-    assert env is not None
     if not feedback:
         print("\n\nNo feedback retrieved. Let's wait for another turn")
         current_step.make_blocked()
@@ -32,21 +31,18 @@ def cbk_transition1(env, current_step, next_step, feedback):
     return True

 def cbk_transition2(env, current_step, next_step):
-    assert env is not None
-    if hasattr(env, 'switch'):
+    if env.user_context.switch:
         return False
     else:
-        env.switch = False
+        env.user_context.switch = True
     return True

 def before_sending_cbk(env, step):
-    assert env is not None
     print('\n--> Action executed before sending any data on step {:d} [desc: {!s}]'.format(id(step), step))
     step.content.show()
     return True

 def before_data_processing_cbk(env, step):
-    assert env is not None
     print('\n--> Action executed before data processing on step {:d} [desc: {!s}]'.format(id(step), step))
     if step.content is not None:
         step.content.show()
@@ -61,7 +57,7 @@ def before_data_processing_cbk(env, step):
               do_before_sending=before_sending_cbk, vtg_ids=0)
 step2 = Step('separator', fbk_timeout=2, clear_periodic=[periodic1], vtg_ids=1)
 empty = NoDataStep(clear_periodic=[periodic2])
-step4 = Step('off_gen', fbk_timeout=0, step_desc='overriding the auto-description!')
+step4 = Step('off_gen', fbk_timeout=0)

 step1_copy = copy.copy(step1) # for scenario 2
 step2_copy = copy.copy(step2) # for scenario 2
@@ -71,7 +67,7 @@ def before_data_processing_cbk(env, step):
 empty.connect_to(step4)
 step4.connect_to(step1, cbk_after_sending=cbk_transition2)

-sc_tuto_ex1 = Scenario('ex1', anchor=step1)
+sc_tuto_ex1 = Scenario('ex1', anchor=step1, user_context=UI(switch=False))

### SCENARIO 2 ###
step4 = Step(DataProcess(process=['tTYPE#2'], seed='shape'))
@@ -97,7 +93,7 @@ def before_data_processing_cbk(env, step):
 option1.connect_to(anchor)
 option2.connect_to(anchor)

-sc_tuto_ex3 = Scenario('ex3', anchor=anchor)
+sc_tuto_ex3 = Scenario('ex3', anchor=anchor, user_context=UI(switch=False))

### SCENARIO 4 & 5 ###
dp = DataProcess(['tTYPE#NOREG'], seed='exist_cond', auto_regen=False)
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -57,7 +57,7 @@
 # The short X.Y version.
 version = '0.27'
 # The full version, including alpha/beta/rc tags.
-release = '0.27.1'
+release = '0.27.2'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
153 changes: 153 additions & 0 deletions docs/source/data_analysis.rst
@@ -0,0 +1,153 @@
.. _data-analysis:

Data Analysis
*************

Every data you send, together with all the related information (the way the data has been built,
the feedback from the target, and so on), is stored within the ``fuddly`` database
(an SQLite database located at ``~/fuddly_data/fmkdb.db``). Each one gets a unique ID,
starting from 1 and incremented by 1 each time a data is sent.
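The auto-incremented IDs come from SQLite itself. As a minimal illustration of the mechanism (the table below is hypothetical, not ``fuddly``'s actual schema), an ``INTEGER PRIMARY KEY`` column is assigned 1, 2, 3, ... as rows are inserted:

```python
import sqlite3

# Illustrative stand-in for ~/fuddly_data/fmkdb.db; the table name and
# columns are invented for the example, not fuddly's real schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, content BLOB)")

# Each inserted "data" gets the next ID, starting from 1
for payload in (b"\x00\x01", b"\x02\x03", b"\x04\x05"):
    conn.execute("INSERT INTO data (content) VALUES (?)", (payload,))

ids = [row[0] for row in conn.execute("SELECT id FROM data ORDER BY id")]
print(ids)  # [1, 2, 3]
```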

To interact with the database, a convenient toolkit is provided (``<root of fuddly>/tools/fmkdb.py``).

Usage Examples
==============

Let's say you want to look at all the information
that has been recorded for one of the data you sent, the one with ID 4. The following
command will display a synthesis of it::

./tools/fmkdb.py -i 4

And if you want to get all information, issue the following::

./tools/fmkdb.py -i 4 --with-data --with-fbk

You can also request information on all data sent between two dates. For instance, the
following command will display information on all data recorded between
25 January 2016 (11:30) and 26 January 2016::

./tools/fmkdb.py --info-by-date 2016/01/25-11:30 2016/01/26

For further information refer to the help by issuing::

./tools/fmkdb.py -h


Fmkdb Toolkit Manual
====================

Hereunder is shown the output of ``<root of fuddly>/tools/fmkdb.py -h``.

.. code-block:: none

   usage: fmkdb.py [-h] [--fmkdb PATH] [--no-color] [-v] [--page-width WIDTH]
                   [--fbk-src FEEDBACK_SOURCES] [--project PROJECT_NAME] [-s]
                   [-i DATA_ID] [--info-by-date START END]
                   [--info-by-ids FIRST_DATA_ID LAST_DATA_ID] [-wf] [-wd]
                   [--without-fmkinfo] [--without-analysis] [--limit LIMIT]
                   [--raw] [-dd] [-df] [--data-atom ATOM_NAME]
                   [--fbk-atom ATOM_NAME] [--force-fbk-decoder DATA_MODEL_NAME]
                   [--export-data FIRST_DATA_ID LAST_DATA_ID] [-e DATA_ID]
                   [--remove-data FIRST_DATA_ID LAST_DATA_ID] [-r DATA_ID]
                   [--data-with-impact] [--data-with-impact-raw]
                   [--data-without-fbk]
                   [--data-with-specific-fbk FEEDBACK_REGEXP] [-a IMPACT COMMENT]
                   [--disprove-impact FIRST_ID LAST_ID]

   Argument for FmkDB toolkit script

   optional arguments:
     -h, --help            show this help message and exit

   Miscellaneous Options:
     --fmkdb PATH          Path to an alternative fmkDB.db
     --no-color            Do not use colors
     -v, --verbose         Verbose mode
     --page-width WIDTH    Width hint for displaying information

   Configuration Handles:
     --fbk-src FEEDBACK_SOURCES
                           Restrict the feedback sources to consider (through a
                           regexp). Supported by: --data-with-impact,
                           --data-without-fbk, --data-with-specific-fbk
     --project PROJECT_NAME
                           Restrict the data to be displayed to a specific
                           project. Supported by: --info-by-date, --info-by-ids,
                           --data-with-impact, --data-without-fbk,
                           --data-with-specific-fbk

   Fuddly Database Visualization:
     -s, --all-stats       Show all statistics

   Fuddly Database Information:
     -i DATA_ID, --data-id DATA_ID
                           Provide the data ID on which actions will be
                           performed. Without any other parameters the default
                           action is to display information on the specified
                           data ID.
     --info-by-date START END
                           Display information on data sent between START and
                           END (date format 'Year/Month/Day' or
                           'Year/Month/Day-Hour' or 'Year/Month/Day-Hour:Minute')
     --info-by-ids FIRST_DATA_ID LAST_DATA_ID
                           Display information on all the data included within
                           the specified data ID range
     -wf, --with-fbk       Display full feedback (expects --data-id)
     -wd, --with-data      Display data content (expects --data-id)
     --without-fmkinfo     Do not display fmkinfo (expects --data-id)
     --without-analysis    Do not display user analysis (expects --data-id)
     --limit LIMIT         Limit the size of what is displayed from the sent
                           data and the retrieved feedback (expects --with-data
                           or --with-fbk).
     --raw                 Display data and feedback in raw format

   Fuddly Decoding:
     -dd, --decode-data    Decode sent data based on the data model used for
                           the selected data ID or the atom name provided by
                           --data-atom
     -df, --decode-fbk     Decode feedback based on the data model used for the
                           selected data ID or the atom name provided by
                           --fbk-atom
     --data-atom ATOM_NAME
                           Atom of the data model to be used for decoding the
                           sent data. If not provided, the name of the sent
                           data will be used.
     --fbk-atom ATOM_NAME  Atom of the data model to be used for decoding
                           feedback. If not provided, the default data model
                           decoder will be used (if one exists), or the name of
                           the first registered atom in the data model
     --force-fbk-decoder DATA_MODEL_NAME
                           Decode feedback with the decoder of the specified
                           data model

   Fuddly Database Operations:
     --export-data FIRST_DATA_ID LAST_DATA_ID
                           Extract data from provided data ID range
     -e DATA_ID, --export-one-data DATA_ID
                           Extract data from the provided data ID
     --remove-data FIRST_DATA_ID LAST_DATA_ID
                           Remove data from provided data ID range and all
                           related information from fmkDB
     -r DATA_ID, --remove-one-data DATA_ID
                           Remove data ID and all related information from
                           fmkDB

   Fuddly Database Analysis:
     --data-with-impact    Retrieve data that negatively impacted a target.
                           Analysis is performed based on feedback status and
                           user analysis if present
     --data-with-impact-raw
                           Retrieve data that negatively impacted a target.
                           Analysis is performed based on feedback status
     --data-without-fbk    Retrieve data without feedback
     --data-with-specific-fbk FEEDBACK_REGEXP
                           Retrieve data with specific feedback provided as a
                           regexp
     -a IMPACT COMMENT, --add-analysis IMPACT COMMENT
                           Add an impact analysis to a specific data ID
                           (expects --data-id). IMPACT should be either 0 (no
                           impact) or 1 (impact), and COMMENT provides
                           information
     --disprove-impact FIRST_ID LAST_ID
                           Disprove the impact of a group of data present in
                           the outcomes of '--data-with-impact-raw'. The group
                           is determined by providing the smaller data ID
                           (FIRST_ID) and the bigger data ID (LAST_ID).
54 changes: 36 additions & 18 deletions docs/source/evolutionary_fuzzing.rst
@@ -13,8 +13,8 @@ classic methods but also on the feedback retrieved from the targets. In this con
 is the only one instantiated the traditional way. The ones that follow are spawned using five steps:

 #. **Fitness score computation**: each test case, member of the current population, is given a
-   score which is function of some metrics (impact on the target, diversity, and so on). These
-   features are calculated by the element in charge of the monitoring aspects.
+   score which is a function of some metrics (impact on the target, diversity, and so on), which are
+   calculated by the entity in charge of the monitoring aspects.

#. **Probabilities of survival association**: depending on the score computed in the previous step, a probability
of survival is associated to each individual.
@@ -24,7 +24,7 @@ is the only one instantiated the traditional way. The ones that follow are spawn
 #. **Mutation**: aims to modify each individual a little bit (flipping some bits, for instance) to find local optima.

 #. **Cross-over**: on the contrary, involves huge changes in order to find other optima. It combines the test cases
-   that are still alive in order to generate even better solutions. This process can be used to compensate the kills
+   that are still alive in order to generate even better solutions. This process is also used to compensate for the kills
    done in step 3.

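The five steps above can be sketched as a generic genetic loop. This is only an illustration of the principle, not ``fuddly``'s implementation: individuals are plain bit lists and the number of set bits stands in for the impact/diversity score.

```python
import random

random.seed(0)

SIZE, GENOME_LEN = 10, 16

def fitness(individual):
    # step 1: fitness score computation (here: number of set bits)
    return sum(individual)

def evolve(population):
    best = max(fitness(ind) for ind in population)
    # steps 2-3: survival probability proportional to the score, then kill
    survivors = [ind for ind in population
                 if random.random() < (fitness(ind) + 1) / (best + 1)]
    if not survivors:
        survivors = [max(population, key=fitness)]
    # step 4: mutation -- flip one bit of each survivor
    for ind in survivors:
        ind[random.randrange(GENOME_LEN)] ^= 1
    # step 5: cross-over to compensate the kills of step 3
    while len(survivors) < SIZE:
        a = random.choice(survivors)
        b = random.choice(survivors)
        cut = random.randrange(1, GENOME_LEN)
        survivors.append(a[:cut] + b[cut:])
    return survivors

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(SIZE)]
for _ in range(20):
    population = evolve(population)
```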
.. _evolutionary-process-image:
@@ -35,7 +35,7 @@ is the only one instantiated the traditional way. The ones that follow are spawn
   Evolutionary process


-The implementation within Fuddly can be divided into three main components:
+The implementation within ``Fuddly`` is divided into three main components:

 * A :class:`framework.evolutionary_helpers.Population` class that is composed of
   :class:`framework.evolutionary_helpers.Individual` instances. Each individual represents a data to be sent.
@@ -51,15 +51,15 @@ The implementation within Fuddly can be divided into three main components:
 User interface
 ==============

-An evolutionary process can be configurable by extending the
+An evolutionary process can be configured by extending the
 :class:`framework.evolutionary_helpers.Population` and :class:`framework.evolutionary_helpers.Individual`
 abstract classes. These elements describe the contract that needs to be satisfied in order for the evolutionary process
-to get running. In general, the methods :meth:`_initialize()` and :meth:`reset()` can be
+to get running. Basically, the methods :meth:`_initialize()` and :meth:`reset()` can be
 used to initialize the first population, :meth:`evolve()` to get the population to the next generation
 and :meth:`is_final()` to specify a stop criterion.

 As these are very generic, they bring a lot of flexibility but require some work.
-To address this issue, ``fuddly`` also proposes a default implementation that describes the classic approach
+To address this issue, ``Fuddly`` also proposes a default implementation that describes the classic approach
 introduced in the previous section. Each step is expressed using one of the
 :class:`framework.evolutionary_helpers.DefaultPopulation` methods. The evolution stops when the population goes extinct
 or when the maximum number of generations is exceeded.
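A minimal sketch of this contract is shown below. The abstract base class here is a locally defined stand-in mirroring the described interface (``framework.evolutionary_helpers`` is not imported), and the subclass only tracks generations rather than doing real fuzzing.

```python
from abc import ABC, abstractmethod
import random

# Hypothetical mirror of the contract described above; the real classes
# live in framework.evolutionary_helpers.
class Population(ABC):
    @abstractmethod
    def _initialize(self): ...
    @abstractmethod
    def reset(self): ...
    @abstractmethod
    def evolve(self): ...
    @abstractmethod
    def is_final(self): ...

class CountingPopulation(Population):
    """Toy population that simply stops after a maximum number of generations."""

    def __init__(self, max_generation_nb=5):
        self.max_generation_nb = max_generation_nb
        self._initialize()

    def _initialize(self):
        # build the first population (here: placeholder individuals)
        self.generation = 0
        self.individuals = [random.random() for _ in range(10)]

    def reset(self):
        self._initialize()

    def evolve(self):
        # a real implementation would mutate / cross over individuals here
        self.generation += 1

    def is_final(self):
        return self.generation >= self.max_generation_nb

pop = CountingPopulation()
while not pop.is_final():
    pop.evolve()
print(pop.generation)  # 5
```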
@@ -76,27 +76,45 @@ or if a maximum number of generation exceeds.
 other disruptor could have been chosen (those introduced by the evolutionary fuzzing are described in
 the next section).

-Finally, to make an evolutionary scenario available, it needs to be registered inside a ``*_strategy.py`` file.
-To do so, an ``evolutionary_scenarios`` variable has to be created. This variable is an array that
-contains 3-tuples. Each one has to provide:
+Finally, to make an evolutionary process available to the framework, it has to be registered at project
+level (meaning inside a ``*_proj.py`` file), through :meth:`framework.Project.register_evolutionary_processes`.
+This method expects processes in the form of 3-tuples containing:

-* a name for the evolutionary scenario that will be created;
+* a name for the scenario that will implement the evolutionary process;
 * a class that inherits from :class:`framework.evolutionary_helpers.Population`;
 * and parameters that will be passed to the
   :class:`framework.evolutionary_helpers.EvolutionaryScenariosFactory` in order to instantiate the appropriate
   population object.

-Here under is provided an example to setup an evolutionary scenario:
+Here under is provided an example to register an evolutionary process (defined in ``tuto_proj.py``):

.. code-block:: python

-    from framework.tactics_helpers import *
-    from framework.evolutionary_helpers import DefaultPopulation
-
-    tactics = Tactics()
-
-    evolutionary_scenarios = [("EVOL",
-                               DefaultPopulation,
-                               {'model': 'SEPARATOR', 'size': 10, 'max_generation_nb': 10})]
+    from framework.evolutionary_helpers import *
+
+    project.register_evolutionary_processes(
+        ('evol',
+         DefaultPopulation,
+         {'init_process': [('SEPARATOR', UI(random=True)), 'tTYPE'],
+          'size': 10,
+          'max_generation_nb': 10})
+    )
Once loaded by ``Fuddly``, a ``Scenario`` is created for each registered evolutionary process, and it is callable
(like any other scenario) through its associated ``Generator``. In our example, only one process is
registered, which leads to the creation of the generator ``SC_EVOL``.
Each call to it makes the evolutionary process progress and produces a new test case.

Note that the :class:`framework.evolutionary_helpers.DefaultPopulation` is used with this scenario.
It expects three parameters:

- The first one describes the process to follow to generate the data of the initial population
  (refer to the API documentation for more information). In the example,
  we use the generator ``SEPARATOR`` to produce data compliant with the model in a random way, then we
  apply the disruptor ``tTYPE``.
- The second one specifies the size of the population.
- The third one is a criterion to stop the evolutionary process: the maximum number of generations to reach.

.. _ef:crossover-disruptors:
Expand Down
Binary file modified docs/source/images/sc_ex1_step1.png
Binary file modified docs/source/images/sc_ex1_step2.png
2 changes: 2 additions & 0 deletions docs/source/index.rst
@@ -20,6 +20,8 @@ Contents:

    data_manip

+   data_analysis
+
    scenario

    knowledge
