Causal And-Or Graph Inference Project

The associated paper and data can be found at http://amyfire.com/projects/inferringcausality/.

Required Data

The required data can be obtained from the project page.

Source Data for Learning + Inference

  • CVPR2012_fluent_result (fluent source data by Bruce)
  • CVPR2012_* (action source data (.txt) by Ping, parsed results (.mat) by Mingtian, parsed results (.py) by ...?)

Human Comparison Data

  • CVPR2012_humanTestAnnotation.txt -- this file defines how the clips were broken up for human annotation
  • amy_cvpr2012.db -- an SQLite file containing the raw human annotations as annotators progressed through the clips

Intermediate Files

  • cvpr_db_results.csv (humans, source data, causalgrammar all merged) -- this is generated by dealWithDBResults.py

Workflow

dealWithDBResults.py

Requires CVPR2012_humanTestAnnotation.txt, CVPR2012_fluent_result (fluent detections), and CVPR2012_*/* (one directory of action detections).

Typical usage:

python dealWithDBResults.py (upload|download|upanddown)

You can specify a non-default action directory with the -a parameter. (The default will not run out of the box: it was essentially a symlink to one of two different directories.) Those directories are in the "minimal" dataset -- see the minimal dataset readme for details.
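
For example, an upload run against the action directory named elsewhere in this readme might look like this (the directory must exist in your copy of the dataset):

python dealWithDBResults.py upload -a CVPR2012_reverse_slidingwindow_action_detection_logspace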

upload:

Runs the following methods (a rough sketch of the call sequence appears after the list):

  • causal_grammar.import_summerdata -- imports action detections (Python files), from CVPR2012_reverse_slidingwindow_action_detection_logspace/* by default; in the "minimal" dataset this was changed to two different choices -- see the minimal dataset readme for details
  • munge_parses_to_xml(fluent_parses, temporal_parses) --> orig_xml results
  • causal_grammar.process_events_and_fluents --> fluent_and_action_xml results
  • orig_xml results are inserted into the db as 'origdata' and 'origsmrt'
  • fluent_and_action_xml results are inserted into the db as 'causalgrammar' and 'causalsmrt' via uploadComputerResponseToDB; if source.endswith('smrt'), buildDictForFluentBetweenFramesIntoResults is called, which does some very basic fixing of local inconsistencies (versus buildDictForDumbFluentBetweenFramesIntoResults, which does no such fixing)
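
A rough sketch of this upload call sequence, for orientation only. The function names come from the list above, but the signatures, return values, and import locations are assumptions, not the script's actual code:

    # Assumed sketch of the upload path; names from this readme, details guessed.
    import causal_grammar
    from dealWithDBResults import munge_parses_to_xml, uploadComputerResponseToDB  # locations assumed

    example = "some_example"  # hypothetical example/clip name

    # load fluent and action detections for one example (the action dir is the -a directory)
    fluent_parses, temporal_parses = causal_grammar.import_summerdata(
        example, "CVPR2012_reverse_slidingwindow_action_detection_logspace")  # return shape assumed

    # raw detections -> XML, stored as 'origdata' / 'origsmrt'
    orig_xml = munge_parses_to_xml(fluent_parses, temporal_parses)

    # causal-grammar processed results -> XML, stored as 'causalgrammar' / 'causalsmrt'
    fluent_and_action_xml = causal_grammar.process_events_and_fluents(
        fluent_parses, temporal_parses)  # argument list assumed

    uploadComputerResponseToDB(example, orig_xml, "origdata")                     # call shape assumed
    uploadComputerResponseToDB(example, fluent_and_action_xml, "causalgrammar")   # call shape assumed
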
download:

  • creates a unified results/cvpr_db_results/*.csv file for each user (human or algorithm)
  • analyze_results.R merges the per-user cvpr_db_results/*.csv files into a single cvpr_db_results.csv (a minimal Python equivalent is sketched after this list)
  • run it with: R --vanilla < analyze_results.R
  • plotAllResults.sh loops through each table/element in summerdata and generates timeline heatmaps for every agent (human, computer, source, ...)
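
For orientation, a minimal Python equivalent of the merge step performed by analyze_results.R, assuming the per-user CSV files share the same header row (a sketch, not the R script itself):

    import csv
    import glob

    # Merge results/cvpr_db_results/*.csv (one per human or algorithm)
    # into a single cvpr_db_results.csv, keeping one copy of the header.
    paths = sorted(glob.glob("results/cvpr_db_results/*.csv"))
    with open("cvpr_db_results.csv", "w", newline="") as out:
        writer = None
        for path in paths:
            with open(path, newline="") as f:
                reader = csv.reader(f)
                header = next(reader)
                if writer is None:
                    writer = csv.writer(out)
                    writer.writerow(header)
                for row in reader:
                    writer.writerow(row)
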
upanddown:

Simply calls upload and then download.

python dealWithDBResults.py --help

usage: dealWithDBResults.py [-h]
                            [-d | -o EXAMPLES_ONLY | -x EXAMPLES_EXCLUDE | -g EXAMPLES_GREP]
                            [-s] [-i] [-a ACTIONFOLDER] [-n] [--debug]
                            [--database {mysql,sqlite}]
                            {upload,download,upanddown,list}

positional arguments:
  {upload,download,upanddown,list}

optional arguments:
  -h, --help            show this help message and exit
  -d, --dry-run         Do not actually upload data to the db or save
                        downloaded data; only valid for "upload" or
                        "download", does not make sense for "upanddown" or
                        "list"
  -o EXAMPLES_ONLY, --only EXAMPLES_ONLY
                        specific examples to run, versus all found examples
  -x EXAMPLES_EXCLUDE, --exclude EXAMPLES_EXCLUDE
                        specific examples to exclude, out of all found
                        examples
  -g EXAMPLES_GREP, --grep EXAMPLES_GREP
                        class of examples to include
  -s, --simplify        simplify the summerdata grammar to only include
                        fluents that start with the example name[s]
  -i, --ignoreoverlaps  skip the "without overlaps" code
  -a ACTIONFOLDER, --actionfolder ACTIONFOLDER
                        specify the action folder to run against
  -n, --inconsistentok  don't require consistency in parse building
  --debug               Spit out a lot more context information during
                        processing
  --database {mysql,sqlite}
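
For example, a dry run of the upload step against the SQLite database with extra debug output, using only the flags listed above:

python dealWithDBResults.py upload --dry-run --debug --database sqlite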

Analysis

  • hitrate.py, formerly analyzeData-nearesthuman-hitrate.py -- "hitrate for CVPR 2015 submission" (Nov 2015)
  • pr.py calculates average precision, recall, and F1 (a generic sketch of these metrics follows the list)
  • Obsolete: plotPR.R generates a set of precision/recall graphs from the output of analyzeData-nearesthuman-pr.py, which has been removed (it last existed at git hash 3032f8)
  • Obsolete: analyze_results.go by Mingtian reads cvpr_db_results.csv; analyzeData.py referred to the results of the Go code in findDistanceGivenLineOfGoOutput, calling it obsolete. analyzeData.py has been removed (it last existed at git hash 3032f8)
  • Obsolete: plotPR.sh calls analyzeData-nearesthuman-pr.py and then plotPR.R to generate a set of precision/recall graphs; analyzeData-nearesthuman-pr.py has been removed (it last existed at git hash 3032f8)
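
As a reference for the metrics pr.py reports, here is a generic precision/recall/F1 computation over binary labels (this sketch is not pr.py's actual implementation):

    def precision_recall_f1(predicted, actual):
        """predicted, actual: equal-length lists of 0/1 labels for the same items."""
        tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
        fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
        fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        return precision, recall, f1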

evaluateCausalGrammer

  • getFluentChangesForFluent (operates on causal_grammar.process_events_and_fluents xml)

  • getFluentChangesForFluentBetweenFrames (operates on causal_grammar.process_events_and_fluents xml; see the sketch after this list)

  • 108 videos -- cut up from "maybe 10" -- in SIX scenes (9406, 9404, 8145, lounge) -- 2 grammars
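
A sketch of what a "fluent changes" pass over the process_events_and_fluents XML might look like. The element and attribute names used here (fluent, state, frame, value) are hypothetical placeholders, not the actual schema:

    import xml.etree.ElementTree as ET

    def fluent_changes(xml_path, fluent_name):
        """Return (frame, old_value, new_value) triples where the named fluent changes.
        Tag and attribute names below are placeholders for the real schema."""
        root = ET.parse(xml_path).getroot()
        changes = []
        for fluent in root.iter("fluent"):              # hypothetical tag
            if fluent.get("name") != fluent_name:       # hypothetical attribute
                continue
            prev = None
            for state in fluent.iter("state"):          # hypothetical tag
                frame = int(state.get("frame"))         # hypothetical attribute
                value = state.get("value")              # hypothetical attribute
                if prev is not None and value != prev:
                    changes.append((frame, prev, value))
                prev = value
        return changes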