The process of main_train.py never ends #172

Open
syoukera opened this issue Dec 25, 2019 · 0 comments
I built a Docker container from the Dockerfile supplied in this repository. After confirming the output of demo.py, I tried to run the training process:

docker exec serene_joliot python main_train.py

This is the log of that process:

append flipped images to roidb
loading annotations into memory...
Done (t=13.01s)
creating index...
index created!
num_images 40504
COCO_val2014 gt roidb loaded from ./data/cache/COCO_val2014_gt_roidb.pkl
appending ground truth annotations
Reading cached proposals after ***NMS**** from data/proposals/COCO_val2014_rpn_after_nms.pkl
Done!
append flipped images to roidb
filtered 2138 roidb entries: 246574 -> 244436
add bounding box regression targets
bbox target means:
[[0. 0. 0. 0.]
 [0. 0. 0. 0.]]
[0. 0. 0. 0.]
bbox target stdevs:
[[0.1 0.1 0.2 0.2]
 [0.1 0.1 0.2 0.2]]
[0.1 0.1 0.2 0.2]
Creating Iterator with 244436 Images

I let this command run for almost two days, but the output stalled at the final line of the message above.

Does anyone have any idea what might cause this state, or how to find out where the process is stuck?
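Not an answer to the root cause, but one way to see where a hung Python process like this is stuck is to dump the stack of every thread with the standard-library faulthandler module. The sketch below just demonstrates the mechanism by dumping the current process's stacks to a temporary file; in a real session you would add `faulthandler.register(signal.SIGUSR1)` near the top of main_train.py, then send `kill -USR1 <pid>` to the hung process and read the traceback it prints to stderr. (This is a generic diagnostic, not something the repository itself provides.)

```python
import faulthandler
import tempfile

# faulthandler writes directly to a file descriptor, so use a real
# temporary file rather than an in-memory buffer.
with tempfile.TemporaryFile(mode="w+") as f:
    # Dump the stack of every thread in this process.
    faulthandler.dump_traceback(file=f, all_threads=True)
    f.seek(0)
    dump = f.read()

# Each thread appears as "Thread 0x... (most recent call first):"
# followed by its current call stack.
print(dump)
```

If the dump shows all worker threads blocked inside the data iterator (e.g. waiting on a queue), that would point at the prefetch/iterator stage rather than the GPU side.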


Machine spec:
CPU: i7-8700K
GPU: GeForce GTX 1080
