The results in the paper are mainly generated using AMOS data. The following instructions outline how to access and download the data:
- Fill in the access request form (link on this page) to request access to the Google Drive folder containing the AMOS data.
- Install rclone and make sure it is in the `PATH`.
- Install rclone-python: `pip install python-rclone`.
- Create a Google Application Client Id.
  - Take note of the `Client ID` and `Client secret` fields under `Credentials`.
- Walk through the rclone Google Drive setup guide.
  - Choose `No` when prompted to enter advanced config.
  - Choose `Yes` when asked if the target is a Shared Drive (Team Drive).
- At the end of the config process, a block of the following format will be printed:

  ```
  --------------------
  [name]
  type = drive
  client_id = XXXXXX
  client_secret = YYYYYY
  scope = drive.readonly
  token = ZZZZZZ
  --------------------
  ```

  Set the environment variables `GDRIVE_CLIENT_ID`, `GDRIVE_CLIENT_SECRET`, and `GDRIVE_TOKEN_JSON` to XXXXXX, YYYYYY, and ZZZZZZ respectively.
- Try out the config by running `python preproc/AMOS/download.py --test`.
- Specify which AMOS ids (cameras) to download in `preproc/AMOS/download.py` (bottom of the file). `preproc/AMOS/good_cams.txt` contains some hand-picked stable cameras, although this list is by no means complete.
- Start the download by running `python preproc/AMOS/download.py`. The files will be downloaded to `preproc/AMOS/AMOS_files/`.
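Assuming the config block above has been obtained, the environment setup and test run look like the following (the XXXXXX/YYYYYY/ZZZZZZ values are placeholders to be replaced with your own):

```shell
# Values copied from the rclone config block printed at the end of setup.
export GDRIVE_CLIENT_ID="XXXXXX"
export GDRIVE_CLIENT_SECRET="YYYYYY"
export GDRIVE_TOKEN_JSON='ZZZZZZ'

# Verify the configuration before starting the full download.
python preproc/AMOS/download.py --test
```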
The input sequences (AMOS or otherwise) need to be processed by `preproc/process_sequence.py` in order to make sure the necessary metadata is created. The script can also be used to roughly align the input sequence, as well as crop, pad, and resize the images.
- Run the script: `python preproc/process_sequence.py /path/to/frames`.
  - The target path will be recursively searched for folders or zip files containing images.
  - The timestamps of the images are parsed from the file names; the list of supported formats can be extended in `process_sequence.py:try_parse_filenames()`.
    - By default, only the formats `20190521_150342` and `2019-05-21-15-03-42` are supported.
- Scrub through the sequence and verify that the time and date in the UI appear correct (the time zone is assumed to be UTC).
  - The 'lock' button fixes the preview to the current time of day.
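The filename parsing presumably amounts to something like the sketch below; the function name and exact `strptime` format strings are assumptions, and the real logic lives in `process_sequence.py:try_parse_filenames()`:

```python
from datetime import datetime
from typing import Optional

# Hypothetical sketch of the default filename-timestamp parsing.
# Supporting a new naming scheme would mean appending its format here.
FORMATS = [
    "%Y%m%d_%H%M%S",      # e.g. 20190521_150342
    "%Y-%m-%d-%H-%M-%S",  # e.g. 2019-05-21-15-03-42
]

def try_parse_timestamp(stem: str) -> Optional[datetime]:
    """Return the first timestamp matching a known format, else None."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(stem, fmt)
        except ValueError:
            continue
    return None
```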
In the UI:

- Choose a beginning and end index with the `trim` sliders if the beginning or end of the sequence is not usable.
- Scrub through the sequence, and press `spacebar` whenever the image alignment changes drastically. This creates a subsequence with internally more consistent alignment.
- When all large alignment issues have been marked, go through the subsequences with `arrow left` and `arrow right`, and pick the same three points in the image with the `left mouse button`. Markers can be deleted with the `right mouse button`. The scroll wheel can be used to zoom and pan the view.
- Export the alignment parameters by pressing `export warps`; this creates `path/to/frames/out/warps_manual.npy`.
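Picking the same three points in each subsequence pins down a 2×3 affine warp between them. A minimal NumPy sketch of solving such a warp from three point pairs (the actual contents and layout of `warps_manual.npy` are not documented here and may differ):

```python
import numpy as np

def affine_from_points(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Solve the 2x3 affine matrix A with A @ [x, y, 1]^T mapping each
    src point onto the corresponding dst point.

    src, dst: arrays of shape (3, 2) holding three (x, y) point pairs.
    Raises numpy.linalg.LinAlgError if the three points are collinear.
    """
    # Homogeneous coordinates: one row [x, y, 1] per picked point.
    src_h = np.hstack([src, np.ones((3, 1))])  # (3, 3)
    # Solve src_h @ A.T = dst for the (3, 2) matrix A.T.
    coeffs = np.linalg.solve(src_h, dst)
    return coeffs.T                            # (2, 3)
```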
In the UI:

- Specify the `trim` sliders, see above.
- Choose the newly exported `warps_manual.npy` in the `Warps` dropdown.
- Click `fit window` to automatically crop in on the region of the frame that is visible in all aligned frames.
- Specify additional padding or cropping.
- Specify the desired output resolution.
- Specify `skipped ranges` (e.g. `"10,25,100-200"`) to exclude frame ranges or individual frames from the export.
- Click `export frames` to create `path/to/frames/out/NAME_WWWxHHH_XXXhz.zip`.
  - E.g. `muotathal_512x512_1200hz.zip` is a dataset with spatial resolution 512x512 of length 1200 days.
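The `skipped ranges` syntax appears to combine comma-separated single indices with `A-B` ranges. A sketch of how such a spec could be expanded (the tool's exact semantics, e.g. whether range ends are inclusive, may differ):

```python
def parse_skipped_ranges(spec: str) -> set:
    """Expand a spec like "10,25,100-200" into a set of frame indices.

    Single numbers name individual frames; "A-B" is assumed to name the
    inclusive range A..B. Whitespace around entries is ignored.
    """
    skipped = set()
    for part in spec.split(","):
        part = part.strip()
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            skipped.update(range(int(lo), int(hi) + 1))
        else:
            skipped.add(int(part))
    return skipped
```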
The exported dataset can now be used to train a TLGAN model.