Visualization problem #14

Open
Rudy112 opened this issue Nov 11, 2022 · 1 comment
Rudy112 commented Nov 11, 2022

First, thanks for sharing this great work! Here is an issue that I ran into.

I tried to visualize the results by running the following script:

python tools/test.py --cfg experiments/deepfashion2/hrnet/w48_384x288_adam_lr1e-3.yaml TEST.MODEL_FILE models/pose_hrnet-w48_384x288-deepfashion2_mAP_0.7017.pth TEST.USE_GT_BBOX True DATASET.MINI_DATASET True TAG 'experiment description' WORKERS 4 TEST.BATCH_SIZE_PER_GPU 8 TRAIN.BATCH_SIZE_PER_GPU 8
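
For reference, the trailing KEY VALUE pairs on that command line override the matching entries of the YAML config. A minimal, illustrative sketch of that override step (not the repo's actual loader; the file path and keys are taken from the command above):

import yaml

with open('experiments/deepfashion2/hrnet/w48_384x288_adam_lr1e-3.yaml') as f:
    cfg = yaml.safe_load(f)

# each dotted KEY from the command line addresses SECTION.KEY in the YAML
overrides = {
    'TEST.USE_GT_BBOX': True,
    'DATASET.MINI_DATASET': True,
    'TEST.BATCH_SIZE_PER_GPU': 8,
    'TRAIN.BATCH_SIZE_PER_GPU': 8,
}
for dotted_key, value in overrides.items():
    section, key = dotted_key.split('.', 1)
    cfg.setdefault(section, {})[key] = value

print(cfg['TEST']['USE_GT_BBOX'], cfg['DEBUG']['SAVE_BATCH_IMAGES_GT_PRED'])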

The config file is:

AUTO_RESUME: false
CUDNN:
  BENCHMARK: true
  DETERMINISTIC: false
  ENABLED: true
DATA_DIR: ''
GPUS: (1,)
OUTPUT_DIR: 'output'
LOG_DIR: 'log'
WORKERS: 8
PRINT_FREQ: 100
PIN_MEMORY: true

DATASET:
  COLOR_RGB: false
  DATASET: 'deepfashion2'
  DATA_FORMAT: jpg
  FLIP: true
  NUM_JOINTS_HALF_BODY: 8
  PROB_HALF_BODY: 0.3
  ROOT: 'data/deepfashion2/'
  ROT_FACTOR: 15 #45
  SCALE_FACTOR: 0.1 #0.35
  TEST_SET: 'validation'
  TRAIN_SET: 'train'
  MINI_DATASET: True
  SELECT_CAT: [1,2,3,4,5,6,7,8,9,10,11,12,13]
MODEL:
  INIT_WEIGHTS: true
  NAME: pose_hrnet
  NUM_JOINTS: 294
  PRETRAINED: ''
  TARGET_TYPE: gaussian
  IMAGE_SIZE:
  - 288
  - 384
  HEATMAP_SIZE:
  - 72
  - 96
  SIGMA: 2 # 3
  EXTRA:
    PRETRAINED_LAYERS:
    - 'conv1'
    - 'bn1'
    - 'conv2'
    - 'bn2'
    - 'layer1'
    - 'transition1'
    - 'stage2'
    - 'transition2'
    - 'stage3'
    - 'transition3'
    - 'stage4'
    FINAL_CONV_KERNEL: 1
    STAGE2:
      NUM_MODULES: 1
      NUM_BRANCHES: 2
      BLOCK: BASIC
      NUM_BLOCKS:
      - 4
      - 4
      NUM_CHANNELS:
      - 48
      - 96
      FUSE_METHOD: SUM
    STAGE3:
      NUM_MODULES: 4
      NUM_BRANCHES: 3
      BLOCK: BASIC
      NUM_BLOCKS:
      - 4
      - 4
      - 4
      NUM_CHANNELS:
      - 48
      - 96
      - 192
      FUSE_METHOD: SUM
    STAGE4:
      NUM_MODULES: 3
      NUM_BRANCHES: 4
      BLOCK: BASIC
      NUM_BLOCKS:
      - 4
      - 4
      - 4
      - 4
      NUM_CHANNELS:
      - 48
      - 96
      - 192
      - 384
      FUSE_METHOD: SUM
LOSS:
  USE_TARGET_WEIGHT: true
TRAIN:
  BATCH_SIZE_PER_GPU: 8
  SHUFFLE: true
  BEGIN_EPOCH: 0
  END_EPOCH: 210
  OPTIMIZER: adam
  LR: 0.001 #0.001
  LR_FACTOR: 0.1
  LR_STEP:
  - 170
  - 200
  WD: 0.
  GAMMA1: 0.99
  GAMMA2: 0.0
  MOMENTUM: 0.9
  NESTEROV: false
TEST:
  BATCH_SIZE_PER_GPU: 8
  COCO_BBOX_FILE: ''
  DEEPFASHION2_BBOX_FILE: ''
  BBOX_THRE: 1.0
  IMAGE_THRE: 0.0 # threshold for detected bbox to be fed into HRNet
  IN_VIS_THRE: 0.2
  MODEL_FILE: ''
  NMS_THRE: 1.0
  OKS_THRE: 0.9 # the lower threshold for a peak point in a heatmap to be kept
  USE_GT_BBOX: true
  FLIP_TEST: true
  POST_PROCESS: true
  SHIFT_HEATMAP: true
DEBUG:
  DEBUG: True
  SAVE_BATCH_IMAGES_GT: false
  SAVE_BATCH_IMAGES_PRED: false
  SAVE_BATCH_IMAGES_GT_PRED: True
  SAVE_HEATMAPS_GT: false
  SAVE_HEATMAPS_PRED: false

I changed the DEBUG config parameters to True, but it still does not save any images. Image saving only works when I change BATCH_SIZE_PER_GPU to 1. However, the image-saving function is based on a torch grid, which results in a very weird visualization, since the keypoints and the output image are at different scales. Could you please look into this problem? I am using a single RTX 3080 Ti GPU with Ubuntu 18.04.
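
For context on the scale part: upstream HRNet draws the debug grid with a helper like save_batch_image_with_joints in lib/utils/vis.py, and in its validate() the predictions are passed as pred*4, because the decoded keypoints live at HEATMAP_SIZE (72x96) while the grid is built from the 288x384 network input. A rough, self-contained sketch of that helper's shape (names follow upstream HRNet; whether this fork's version behaves differently is exactly the open question here):

import cv2
import torchvision

def save_batch_image_with_joints(batch_image, batch_joints, file_name, nrow=8, padding=2):
    # batch_image: [B, 3, H, W] network input; batch_joints: [B, K, 2] in INPUT-pixel coordinates
    grid = torchvision.utils.make_grid(batch_image, nrow, padding, True)
    ndarr = grid.mul(255).clamp(0, 255).byte().permute(1, 2, 0).cpu().numpy().copy()
    xmaps = min(nrow, batch_image.size(0))
    height = int(batch_image.size(2) + padding)
    width = int(batch_image.size(3) + padding)
    for k in range(batch_image.size(0)):
        x_off = (k % xmaps) * width + padding
        y_off = (k // xmaps) * height + padding
        for joint in batch_joints[k]:
            cv2.circle(ndarr, (int(x_off + joint[0]), int(y_off + joint[1])), 2, (255, 0, 0), 2)
    cv2.imwrite(file_name, ndarr)

# Keypoints decoded from the heatmaps are at HEATMAP_SIZE (72x96); the grid is
# built from the 288x384 input, so they have to be scaled by the stride (4) first,
# e.g. save_batch_image_with_joints(input, preds_local * 4.0, 'val_pred.jpg').
# Passing coordinates at the wrong scale squeezes all joints into the top-left
# corner of each tile, which would match the "weird" visualization described above.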

@BastianSch

Hi,

in lib/core/function.py the loop iterator i from line 142

for i, (input, target, target_weight, meta) in enumerate(val_loader):

is reused in line 195:

for i in range(preds_local.shape[0]):

So I changed the inner index to j:

for j in range(preds_local.shape[0]):
    preds[j] = transform_preds(
        preds_local[j], c[j], s[j],
        [config.MODEL.HEATMAP_SIZE[0], config.MODEL.HEATMAP_SIZE[1]])
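
That shadowing would also explain why images are only written at batch size 1, assuming this fork keeps the upstream structure where the debug images are saved under if i % config.PRINT_FREQ == 0 at the end of each batch. A small self-contained illustration (PRINT_FREQ = 100 as in the config above):

PRINT_FREQ = 100

def run(batch_size, num_batches=300):
    saved = 0
    for i, _ in enumerate(range(num_batches)):   # outer loop over val batches
        for i in range(batch_size):              # bug: inner loop reuses `i`
            pass                                 # per-sample post-processing
        if i % PRINT_FREQ == 0:                  # meant to test the BATCH index
            saved += 1                           # save_debug_images(...) would run here
    return saved

print(run(batch_size=8))   # 0   -> i is stuck at 7 after the inner loop, nothing is saved
print(run(batch_size=1))   # 300 -> i is stuck at 0, images are written every batch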

Did you resolve the issue with the scaling of the keypoints?
