
Regarding h36m image #81

Open
jenowary opened this issue Apr 3, 2023 · 5 comments

Comments

@jenowary

jenowary commented Apr 3, 2023

Thank you for kindly providing the 2D HRNet model.
Please let me raise an issue about reproducing the 2D detection results.

I could not reproduce the Average Joint Localization Error of 4.4 pixels, and I found that the example image you provided differs from mine.
Red regions indicate inconsistent pixel values.
[image: comparison highlighting the inconsistent pixel regions]

I extract images from the original videos with ffmpeg, using the arguments -hide_banner -loglevel error -nostats -i SrcVideoPath -q:v 1, and I don't know why the image inconsistency occurs.

Can you share how you did it? That would mean a lot to me; thanks in advance.

@Nicholasli1995
Owner


Hi, I did not use the ffmpeg commands. I used the video functionality in OpenCV. Specifically, I used cv2.VideoCapture to initialize a video stream and read the frames:

import cv2
cap = cv2.VideoCapture(video_path)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
cap.release()

@jenowary
Author

Thank you for providing the details. But I've tried the code snippet below and still see differences:

import os
import cv2

def readFrames(videoFile, destination_dir, sequence_name):
    global image_size, frame_step, destination_format
    if not os.path.exists(destination_dir):
        os.makedirs(destination_dir)
    image_counter = 1
    read_counter = 0
    cap = cv2.VideoCapture(videoFile)
    while cap.isOpened():
        ret, cv2_im = cap.read()
        if not ret:
            break
        if read_counter % frame_step == 0:
            cv2.imwrite(os.path.join(destination_dir, sequence_name + '_%06d' % image_counter + '.' + destination_format), cv2_im)
            image_counter += 1
        read_counter += 1
    cap.release()

[image: remaining pixel differences]

Can you give more details, e.g. whether you used cv2.imwrite or PIL.Image.save?

Thanks again for your patient reply.

@Nicholasli1995
Owner


Hi, the method you use to save the image should not cause the problem. Is it possible that the timestamp of the frame you used differs from that of the example image? In addition, how large is the error quantitatively? Is it large enough to affect the produced 2D key-point predictions?
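As a side note, one way to answer the quantitative question is to compare the two extractions directly. A minimal sketch, assuming both versions of a frame have been loaded as equally sized H x W x 3 uint8 arrays (e.g. via cv2.imread); the arrays below are synthetic stand-ins, not real Human3.6M frames:

```python
import numpy as np

def pixel_diff_stats(a, b):
    # Mean absolute per-channel difference, and fraction of pixels
    # that differ in at least one channel.
    d = np.abs(a.astype(np.int16) - b.astype(np.int16))
    return d.mean(), (d.max(axis=-1) > 0).mean()

a = np.zeros((4, 4, 3), dtype=np.uint8)
b = a.copy()
b[0, 0] = 3  # one pixel differs by 3 in every channel
mad, frac = pixel_diff_stats(a, b)
print(mad, frac)  # 0.1875 0.0625
```

If the mean absolute difference is only a few intensity levels, it is most likely JPEG re-encoding noise rather than a frame mismatch.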

@jenowary
Author

Thank you for your analysis.
I think the timestamps are the same. For example, 1002.jpg as provided by this repo corresponds to either the 1002nd or the 1003rd frame; I tried both and it appears to be the 1003rd.
The evaluated 2D error on S9 & S11 is 5.76 pixels, a significant deterioration compared with the reported 4.4 pixels.

But here are some updates.
I noticed that 12 video sequences of subject S9 suffer from drifting joint annotations (ref: Human3.6M erroneous annotations).
After removing these samples from the test set, the error comes down to 4.6 pixels, close to 4.4.

I wonder if this drift-removed evaluation is the same as yours?
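For reference, the drift-removed evaluation could look like the minimal sketch below, assuming per-sequence errors are already computed. The sequence names and error values are placeholders; the actual 12 drifting S9 sequences have to be taken from the erroneous-annotations reference mentioned above:

```python
import numpy as np

# Placeholder (subject, action) pairs for the drifting sequences --
# NOT the real list from the Human3.6M erroneous-annotations report.
DRIFTING = {("S9", "ActionA"), ("S9", "ActionB")}

def mean_error(samples, exclude=()):
    # samples: iterable of (subject, action, mean pixel error) tuples.
    errs = [e for subj, act, e in samples if (subj, act) not in exclude]
    return float(np.mean(errs))

samples = [("S9", "ActionA", 12.0), ("S9", "Walking", 4.1), ("S11", "Eating", 4.9)]
print(mean_error(samples))            # 7.0  (drifting sequence included)
print(mean_error(samples, DRIFTING))  # 4.5  (drifting sequence removed)
```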

@Nicholasli1995
Owner


It was similar. The wrong ground-truth annotations indeed need some extra processing. Instead of removing them, the ground-truth keypoints were moved to the centroid of the predicted keypoints for evaluation.
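One reading of that procedure is a per-frame translation of the ground truth onto the predictions before measuring error. A minimal numpy sketch under that assumption (shapes and values are illustrative: J joints in 2D pixel coordinates):

```python
import numpy as np

def centroid_aligned_error(gt, pred):
    # Translate gt so its centroid coincides with the centroid of pred,
    # then take the mean per-joint Euclidean distance in pixels.
    gt_shifted = gt + (pred.mean(axis=0) - gt.mean(axis=0))
    return float(np.linalg.norm(gt_shifted - pred, axis=1).mean())

gt = np.array([[0.0, 0.0], [2.0, 0.0]])
pred = gt + np.array([10.0, 5.0])  # simulated annotation drift: pure translation
print(centroid_aligned_error(gt, pred))  # 0.0 -- the translation is discounted
```

Under this reading, the global drift of the annotations is discounted while per-joint errors relative to the skeleton centroid are still penalized.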
