
Behavior Senpai v.1.2.0


Behavior Senpai is an application that supports quantitative behavior observation in video-based observation methods. It converts video files into time-series coordinate data using keypoint detection AI, enabling quantitative analysis and visualization of human behavior. A distinctive feature of Behavior Senpai is that it lets users work with multiple AI models without writing any code.

The following AI image processing frameworks/models are supported by Behavior Senpai:

  • YOLOv8
  • MediaPipe Holistic
  • MMPose (RTMPose)

Behavior Senpai performs pose estimation of a person in a video using an AI model selected by the user and outputs time-series coordinate data. (Depending on the purpose and application, this process is variously referred to as "pose estimation", "markerless motion capture", "landmark detection", and so forth.)

What is Behavior Senpai

Behavior Senpai is open source software developed at the Faculty of Design, Kyushu University.

Requirement

In order to use Behavior Senpai, you need a PC that meets the following requirements. Operation has been confirmed on Windows 11 (23H2).

When using CUDA

  • Disk space: 12GB or more
  • RAM: 16GB or more
  • Screen resolution: 1920x1080 or higher
  • GPU: RTX 2060 or better (CUDA 12.1)

Without CUDA

If you do not have a CUDA-compatible GPU, only MediaPipe Holistic can be used.

  • Disk space: 8GB or more
  • RAM: 16GB or more
  • Screen resolution: 1920x1080 or higher

Usage

Running BehaviorSenpai.exe starts the application. If you want to use CUDA, check the "Enable features using CUDA" checkbox the first time you start the application and click the "OK" button.

BehaviorSenpai.exe is a launcher that automates building the Python environment with Rye and starting Behavior Senpai itself. The initial setup performed by BehaviorSenpai.exe takes some time; please wait until the terminal (black window) closes automatically.

To uninstall Behavior Senpai or replace it with the latest version, delete the entire folder containing BehaviorSenpai.exe. To also uninstall Rye, run the following from a terminal:

rye self uninstall

Keypoints

The keypoint IDs handled by Behavior Senpai are the same as the IDs of the corresponding datasets: YOLOv8 follows COCO, and RTMPose follows Halpe26. The IDs of the keypoints are as follows.

Keypoints of body (YOLOv8 and MMPose)

The IDs of the keypoints (landmarks) of the face in MediaPipe Holistic are as follows. See here for a document with all IDs.

Keypoints of face (Mediapipe Holistic)

Keypoints of parts of face (Mediapipe Holistic)

The IDs of the keypoints (landmarks) of the hands in MediaPipe Holistic are as follows.

Keypoints of hands (Mediapipe Holistic)

Interface

Track file

The time-series coordinate data resulting from keypoint detection in app_detect.py is stored as a pickled Pandas DataFrame. Behavior Senpai refers to this data as a "Track file". The Track file is saved in the "trk" folder, which is created in the same directory as the video file on which keypoint detection was performed.

The Track file holds time-series coordinate data in a 3-level multi-index format. The indexes are designated as "frame", "member", and "keypoint", starting from level 0. "frame" is an integer, starting from 0, corresponding to the frame number of the video. "member" and "keypoint" are the identifiers of keypoints detected by the model. The Track file always contains three columns: "x", "y", and "timestamp". "x" and "y" are in pixels, while "timestamp" is in milliseconds.

An illustrative example of a DataFrame stored in a Track file is presented below. Note that the DataFrame may include additional columns such as "z" and "conf", depending on the specifications of the AI model.

                                  x            y    timestamp
frame member keypoint
0     1      0          1365.023560   634.258484          0.0
             1          1383.346191   610.686951          0.0
             2          1342.362061   621.434998          0.0
             ...                ...          ...          ...
             16         1417.897583   893.739258          0.0
      2      0          2201.367920   846.174194          0.0
             1          2270.834473  1034.986328          0.0
             ...                ...          ...          ...
             16         2328.100098   653.919312          0.0
1     1      0          1365.023560   634.258484    33.333333
             1          1383.346191   610.686951    33.333333
...                             ...          ...          ...
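Because the Track file is an ordinary pickled DataFrame, it can be inspected directly with Pandas. Below is a minimal sketch of how the multi-index might be queried; the file path, member ID, and keypoint ID are placeholders, and whether "member" is stored as a string or an integer depends on the AI model used.

import pandas as pd

# Load a Track file (a pickled DataFrame); the path is a placeholder.
trk_df = pd.read_pickle("trk/ABC.pkl")

# Extract the time series of one keypoint of one member via the
# (frame, member, keypoint) multi-index. The member "1" and keypoint 0
# are examples only; adjust them to the actual detection result.
kp_df = trk_df.xs(("1", 0), level=("member", "keypoint"))

# "x" and "y" are in pixels, "timestamp" is in milliseconds.
print(kp_df[["x", "y", "timestamp"]].head())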

Feature file

In Behavior Senpai, data obtained by calculating the positional relationship of multiple keypoints is referred to as a feature. The data processed by app_2point_calc.py or app_3point_calc.py is saved in pickled form as a "Feature file" in the "calc" folder. The file extension is ".pkl", as with the Track file. The Feature file holds time-series data in a 2-level multi-index format, with the indexes designated as "frame" and "member", and the columns including a "timestamp". Note that while the data in the Track file is only the result of keypoint detection, the data in the Feature file consists of features directly tied to the purpose of the behavior observation.
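As with the Track file, a Feature file can be opened directly with Pandas. The sketch below is only an illustration; the file name follows the Folder Structure example further down, and the member ID and column name (which follows the naming rules described in the next section) are assumptions.

import pandas as pd

# Load a Feature file (a pickled DataFrame with a (frame, member) multi-index).
feat_df = pd.read_pickle("calc/XYZ_2p.pkl")

# List the available feature columns; "timestamp" is always among them.
print(feat_df.columns.tolist())

# Extract one member's time series of a single feature.
# The member ID "1" and the column "norm(1-2)" are examples only.
member_df = feat_df.xs("1", level="member")
print(member_df[["timestamp", "norm(1-2)"]].head())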

Column name definition

The Feature files processed by app_2point_calc.py and app_3point_calc.py have rules for column names (to enable parsing). The format of the column names in the DataFrame in the Feature file, except for "timestamp", is as follows.

{calc_code}({target keypoints}){suffix like _x or _y}

{calc_code} contains the following strings, depending on the calculation.

  • component: x and y components of one vector (suffixes like '_x' and '_y')
  • norm: norm of one vector
  • plus: the sum of two vectors (suffixes '_x' and '_y')
  • cross: cross product of two vectors
  • dot: inner product of two vectors
  • norms: product of the norms of two vectors

{target keypoints} contains the IDs of the keypoints to be calculated. In the case of a calculation for a single vector, such as 'component' or 'norm', a hyphenated set of keypoint IDs is written with the starting point of the vector on the left, such as '1-2'. For calculations on two vectors such as 'plus' and 'cross', a comma-separated set of keypoint IDs is written, such as '1-2,1-3'.

As a concrete example, the column name for the cross product of the two vectors starting from keypoint 2, over the three keypoints 1, 2, and 3, is written as follows.

cross(2-1,2-3)
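Because the column names follow this fixed format, they can be parsed mechanically. The following is a minimal sketch of such a parser; the regular expression and the function name are illustrative and are not part of Behavior Senpai itself.

import re

# Pattern for "{calc_code}({target keypoints}){optional suffix}",
# e.g. "cross(2-1,2-3)" or "component(1-2)_x".
COL_PATTERN = re.compile(r"^(?P<code>\w+)\((?P<points>[^)]+)\)(?P<suffix>_[xy])?$")

def parse_feature_column(name):
    m = COL_PATTERN.match(name)
    if m is None:
        return None
    # Each vector is written "start-end"; two vectors are comma-separated.
    vectors = [tuple(vec.split("-")) for vec in m.group("points").split(",")]
    return m.group("code"), vectors, m.group("suffix")

print(parse_feature_column("cross(2-1,2-3)"))    # ('cross', [('2', '1'), ('2', '3')], None)
print(parse_feature_column("component(1-2)_x"))  # ('component', [('1', '2')], '_x')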

Attributes of Track file

The attrs property of the DataFrame stored in Track file (and Feature files) records information such as the original video file name, its frame size, and the name of the AI model used for keypoint detection.

To load a Track file and check the contents recorded in attrs, use Python code like the following. The attrs property is a dictionary:

import pandas as pd

trk_df = pd.read_pickle("path/to/track_file.pkl")
print(trk_df.attrs)

Main contents of attrs include, but are not limited to:

model

The name of the AI image processing framework/model used for keypoint detection. Addition to attrs is done by app_detector.py (detector_proc.py).

  • YOLOv8 x-pose-p6
  • MediaPipe Holistic
  • MMPose RTMPose-x

frame_size

The frame size of the video on which keypoint detection was performed is recorded as a tuple (width, height) in pixels. Addition to attrs is done by app_detector.py (detector_proc.py).

video_name

The file name of the video on which keypoint detection was performed is recorded. Addition to attrs is done by app_detector.py (detector_proc.py).

created

The date and time when the Track file was created is recorded in the format "%Y-%m-%d %H-%M-%S". Addition to attrs is done by app_detector.py (detector_proc.py).
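Since "created" is stored as a plain string in this format, it can be converted back to a datetime object if needed; a minimal sketch, reusing the trk_df loaded above:

from datetime import datetime

# "created" is recorded as a string in the format "%Y-%m-%d %H-%M-%S".
created_dt = datetime.strptime(trk_df.attrs["created"], "%Y-%m-%d %H-%M-%S")
print(created_dt.isoformat())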

next, prev

Long videos captured by a video camera may be split during recording due to camera specifications. Since Track files are paired with video files, they are also split. 'next' and 'prev' record the sequence of split Track files. Addition to attrs is done by app_track_list.py.
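A chain of split Track files could therefore be walked by following "next" from file to file. The sketch below is an illustration built on assumptions: it assumes "next" holds the file name of the following Track file in the same "trk" folder and is empty or absent for the last file, which is not confirmed here.

import os
import pandas as pd

def load_track_chain(first_path):
    # Collect a sequence of split Track files by following attrs["next"].
    # Assumption: "next" is a file name in the same folder; adjust this if
    # it is stored differently in your Track files.
    dfs = []
    folder = os.path.dirname(first_path)
    path = first_path
    while path:
        df = pd.read_pickle(path)
        dfs.append(df)
        next_name = df.attrs.get("next")
        path = os.path.join(folder, next_name) if next_name else None
    # Note: frame numbers restart from 0 in each split file, so further
    # index handling may be needed after concatenation.
    return pd.concat(dfs)

chain_df = load_track_chain("trk/ABC.pkl")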

Security Considerations

As mentioned above, Behavior Senpai handles files in pickle format. Because of the security risks associated with pickle files, only open files that you trust (for example, do not open files of unknown origin published on the Internet). See here for more information.

Annotated Video file

Behavior Senpai can output videos in mp4 format with detected keypoints drawn on them.

Folder Structure

This section explains the default locations of data output by Behavior Senpai. Track files are saved in the "trk" folder, Feature files in the "calc" folder, and videos with keypoints drawn on them in the "mp4" folder. If a Track file is edited and overwritten, the old Track file is saved in the "backup" folder (only one backup is kept). These folders are generated automatically when files are saved.

Below is an example of the folder structure when a folder contains files named "ABC.MP4" and "XYZ.MOV". Output file names include suffixes according to the model or type of calculation. To avoid file read/write failures, use only alphanumeric characters for folder and file names; paths containing Japanese characters in particular can cause problems.

├── ABC.MP4
├── XYZ.MOV
├── calc
│   └── XYZ_2p.pkl
├── mp4
│   └── ABC_mediapipe.mp4
└── trk
    ├── ABC.pkl
    ├── XYZ.pkl
    └── backup
        └── ABC.pkl

When Behavior Senpai loads a Track file, it also loads the corresponding video file if one exists in the parent folder. The file name of the video to load is taken from the "video_name" value in the Track file's attributes (see Attributes of Track file). If the video file is not found, a black background is used as a substitute.

Temporary file

The application's settings and the path of the most recently loaded Track file are saved as a pickled dictionary named "temp.pkl". If this file does not exist, the application generates it automatically with default values. To reset the settings, delete "temp.pkl". The Temporary file is managed by gui_parts.py.
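Since the Temporary file is described as a pickled dictionary, its contents can be inspected with a few lines of Python; the exact keys depend on the Behavior Senpai version and are not listed here.

import pickle

# Inspect the settings stored in temp.pkl (a pickled dict).
with open("temp.pkl", "rb") as f:
    settings = pickle.load(f)
for key, value in settings.items():
    print(key, value)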

Citation

Please acknowledge and cite this software and its authors when results obtained with it are used in publications or published elsewhere.

Nishimura, E. (2024). Behavior Senpai (Version 1.2.0) [Computer software]. Kyushu University, https://doi.org/10.48708/7160651
@misc{behavior-senpai-software,
  title = {Behavior Senpai},
  author = {Nishimura, Eigo},
  year = {2024},
  publisher = {Kyushu University},
  doi = {10.48708/7160651},
  note = {Available at: \url{https://hdl.handle.net/2324/7160651}},
}

Related Documents

Sample Videos for Behavioral Observation Using Keypoint Detection Technology

Quantitative Behavioral Observation Using Keypoint Detection Technology: Towards the Development of a New Behavioral Observation Method through Video Imagery