How to export timeline data with problematic frames flagged #195

Open
BryceLong13 opened this issue Mar 16, 2023 · 1 comment
@BryceLong13

I found the timeline really helpful for inspecting problematic frames in my videos in the GUI. But I didn't find a way to extract the timeline data where problematic frames are flagged (correct me if I'm wrong). It would be really helpful if I could see which specific frames are bad, so that fine-tuning becomes easier.

@BryceLong13 added the enhancement (New feature or request) label on Mar 16, 2023
@mooch443
Owner

Hi there! I understand that you'd like to extract problematic frame data, including frame indices and error codes, from exported TRex/TGrabs data files to make fine-tuning easier. Unfortunately, neither TRex nor TGrabs provides a built-in feature to extract this data directly.

However, it's possible to work around this:

  1. TRex exports data in .npz format, which includes information about each frame, such as positions, identities, and other relevant data.
  2. You can write a script in Python to load this .npz file using NumPy and analyze the data to identify problematic frames based on your criteria (a quick way to peek at the available fields is shown right after this list).
  3. With the problematic frames identified, you can flag them and create a new timeline data file or a CSV file containing the frame numbers and relevant information.
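
As a quick first peek (point 2 above), you can load one of the exported files and list the fields it contains; the filename below is just an example:

import numpy as np

data = np.load('data/video_fish0.npz', allow_pickle=True)
print(data.files)  # e.g. ['X', 'Y', 'frame', 'time', 'SPEED', 'blobid', ...]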

Given the information you provided, I've adapted a Python script that processes the exported data and emulates the error detection as implemented in TRex's original code. Here's an example of how you can load the .npz files in Python and extract this information (untested):

import numpy as np
import glob
import csv

def check_problematic_frame(frame_idx, prev_frame_idx, current_prob, tdelta, blob_exists, current_speed, settings):
    error_code = 0
    error_code |= settings['FramesSkipped'] * int(prev_frame_idx != frame_idx - 1)
    error_code |= settings['ProbabilityTooSmall'] * int(current_prob is not None and current_prob < settings['track_trusted_probability'])
    error_code |= settings['TimestampTooDifferent'] * int(settings['huge_timestamp_ends_segment'] and tdelta >= settings['huge_timestamp_seconds'])
    error_code |= settings['NoBlob'] * int(not blob_exists)

    # current_speed is not used yet: you will need to determine how to check for segment length / speed in your data and update the error_code accordingly

    return error_code

def process_individual_data(filename, settings):
    # Load the .npz file
    data = np.load(filename, allow_pickle=True)

    # Access relevant data, e.g., X and Y positions, frame number, speed
    X_positions = data['X']
    Y_positions = data['Y']
    frames = data['frame']
    speeds = data['SPEED']

    # Initialize an empty list to store problematic frame indices and error codes
    problematic_frames = []

    for idx in range(1, len(frames)):
        frame_idx = frames[idx]
        prev_frame_idx = frames[idx - 1]
        tdelta = data['time'][idx] - data['time'][idx - 1]
        current_speed = speeds[idx]

        # Determine if a blob exists for the current frame
        blob_exists = not np.isinf(data['blobid'][idx])

        # Estimate current_prob using normalized distance from the previous position to the next position (as it's not currently implemented)
        current_prob = None

        error_code = check_problematic_frame(frame_idx, prev_frame_idx, current_prob, tdelta, blob_exists, current_speed, settings)

        if error_code != 0:
            problematic_frames.append((frame_idx, error_code))

    # Save problematic frames and error codes to a CSV file next to the .npz
    out_name = filename.replace('.npz', '_problematic.csv')
    with open(out_name, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['frame', 'error_code'])
        writer.writerows(problematic_frames)

    print(f"Problematic frames for {filename}: {problematic_frames}")

# List all the individual data files in the data folder
individual_files = glob.glob('data/*_fish*.npz')

# Define the settings according to your use case (the last four entries are placeholders for checks not implemented above)
settings = {
    'FramesSkipped': 1,
    'ProbabilityTooSmall': 2,
    'TimestampTooDifferent': 4,
    'NoBlob': 16,
    'track_trusted_probability': 0.8,
    'huge_timestamp_ends_segment': True,
    'huge_timestamp_seconds': 2.0,
    'track_end_segment_for_speed': True,
    'weird_distance': 50.0,
    'track_segment_max_length': 10.0,
    'frame_rate': 30.0
}

# Process each individual file
for individual_file in individual_files:
    process_individual_data(individual_file, settings)

This script loads the exported data files, emulates the error detection, and saves the problematic frame indices along with their corresponding error codes. Note that the current_prob estimation is not implemented; you could use the normalized distance from the previous position to the next position as an approximation. Additionally, the 'ManualMatch' setting is not considered in this script (it does not apply to most use cases).
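
For example, a rough, hypothetical proxy for current_prob could look like the sketch below; estimate_prob is not part of TRex, it simply maps the displacement between consecutive positions onto [0, 1] using the weird_distance setting:

def estimate_prob(x, y, prev_x, prev_y, settings):
    # Larger jumps between consecutive positions map to a lower pseudo-probability
    dist = np.hypot(x - prev_x, y - prev_y)
    return max(0.0, 1.0 - dist / settings['weird_distance'])

Inside the loop you could then set current_prob = estimate_prob(X_positions[idx], Y_positions[idx], X_positions[idx - 1], Y_positions[idx - 1], settings) instead of leaving it at None.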

Here's a brief summary of the script's workflow:

  1. Load the exported .npz data files for each individual.
  2. Access relevant data, such as X and Y positions, frame numbers, and speed.
  3. Initialize an empty list to store problematic frame indices and their corresponding error codes.
  4. Iterate through the frames and calculate the error codes based on the provided settings.
  5. Save the problematic frame indices along with their error codes if any error code is detected.

To use this script, simply update the settings to match your use case and run it. The script will process each individual file in the 'data' folder, print the problematic frames and their error codes, and write them to a CSV file next to each .npz.
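
If you later want to turn a combined error code back into human-readable reasons, a small helper along these lines (again untested and purely illustrative) could help:

def decode_error(code, settings):
    # Map each bit that is set in the code back to the name of its flag
    flags = ['FramesSkipped', 'ProbabilityTooSmall', 'TimestampTooDifferent', 'NoBlob']
    return [name for name in flags if code & settings[name]]

For instance, decode_error(5, settings) returns ['FramesSkipped', 'TimestampTooDifferent'], since 5 = 1 + 4.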

Let me know if you have any questions or need further assistance! Happy coding!
