Error running Inference.py script #23

Open

AkkiSony opened this issue Aug 19, 2021 · 8 comments

@AkkiSony
I am working on the Windows platform and have installed TensorFlow 1.15.0 along with Python 3.7.11.

I am getting the error below when running the inference.py script. Please guide me in solving the issue. Thanks in advance.

[screenshot: coral-TPU-error-21]

@YsYusaito

YsYusaito commented Aug 22, 2021

My environment is as follows:
keras 2.2.4
tensorflow 1.15.0
python 3.7.3

Downgrading your Python version may solve the problem.
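For reference, a minimal sketch to confirm an environment matches these versions (assuming keras and tensorflow import cleanly; this is not part of the repo's scripts):

    # Minimal environment check (sketch; not from inference.py or utils.py)
    import sys
    import keras
    import tensorflow as tf

    print("python    ", sys.version.split()[0])  # expect 3.7.x
    print("tensorflow", tf.__version__)          # expect 1.15.0
    print("keras     ", keras.__version__)       # expect 2.2.4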

@AkkiSony
Author

AkkiSony commented Aug 23, 2021

@YsYusaito Thanks for replying. When I tried TensorFlow 1.15.0, I got an error, so I upgraded it to 2.5.0. After upgrading TensorFlow, I overcame that error, but now no objects are detected in my input image.
My current version environment is as follows:
keras-nightly - 2.5.0.dev2021032900
tensorflow 2.5.0
python - 3.7.10
Windows - 10

Can you please let me know if you made any changes to the inference.py or utils.py scripts? If not, I will try to set up an environment matching yours and execute the script again.

Would you mind sharing your inference.py and utils.py scripts?

Thank you very much for your help. :)

@AkkiSony
Author

Hi @YsYusaito,

I created a new virtual environment with the same versions as yours, but when I execute the inference.py script, I get the following error.

[screenshot: coral-TPU-error-26]

Please find below the installed versions in my conda virtual environment on Windows 10.

(yolov3-tflite2) C:\pycoral_venv\Scripts\coral\pycoral1>pip list
Package             Version
------------------- ------------
absl-py 0.13.0
astor 0.8.1
cached-property 1.5.2
gast 0.2.2
google-pasta 0.2.0
grpcio 1.39.0
h5py 3.3.0
importlib-metadata 4.6.4
Keras 2.2.4
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.2
Markdown 3.3.4
numpy 1.21.2
opencv-python 4.5.3.56
opt-einsum 3.3.0
pip 21.2.4
protobuf 3.17.3
PyYAML 5.4.1
scipy 1.7.1
setuptools 57.4.0
six 1.16.0
tensorboard 1.15.0
tensorflow 1.15.0
tensorflow-estimator 1.15.1
termcolor 1.1.0
typing-extensions 3.10.0.0
Werkzeug 2.0.1
wheel 0.37.0
wrapt 1.12.1
zipp 3.5.0

@AkkiSony
Author

AkkiSony commented Aug 23, 2021

I could overcome the issue by following google-coral/tflite#37 (comment)
However, the detection boxes are very small compared to the image size.

When I input images of other classes, I do not get any predictions at all; the objects are not detected. :/

Do I have to modify something in utils.py script?

But thank you again for sharing your environment configurations. :)

@YsYusaito

YsYusaito commented Aug 23, 2021

> I could overcome the issue by following google-coral/tflite#37 (comment)

I'm glad to hear that!

This problem is due to the anchor file.
Originally, inference.py is written for yolov3-tiny, not for yolov3.
Therefore, we have to modify inference.py for yolov3 and use the yolov3 anchor file.

Please correct the corresponding part of inference.py as follows.

    # Retrieve outputs of the network
    out1 = interpreter.get_tensor(output_details[0]['index'])
    out2 = interpreter.get_tensor(output_details[1]['index'])
    out3 = interpreter.get_tensor(output_details[2]['index'])

    # If this is a quantized model, dequantize the outputs
    if args.quant:
        # Dequantize output
        o1_scale, o1_zero = output_details[0]['quantization']
        out1 = (out1.astype(np.float32) - o1_zero) * o1_scale
        o2_scale, o2_zero = output_details[1]['quantization']
        out2 = (out2.astype(np.float32) - o2_zero) * o2_scale
        o3_scale, o3_zero = output_details[2]['quantization']
        out3 = (out3.astype(np.float32) - o3_zero) * o3_scale        

    # Get boxes from outputs of network
    start = time()
    _boxes1, _scores1, _classes1 = featuresToBoxes(out1, anchors[[6, 7, 8]], 
            n_classes, net_input_shape, img_orig_shape, threshold)
    _boxes2, _scores2, _classes2 = featuresToBoxes(out2, anchors[[3, 4, 5]], 
            n_classes, net_input_shape, img_orig_shape, threshold)
    _boxes3, _scores3, _classes3 = featuresToBoxes(out3, anchors[[0, 1, 2]], 
            n_classes, net_input_shape, img_orig_shape, threshold)
    
    inf_time = time() - start
    print(f"Box computation time: {inf_time*1000} ms.")

    # This is needed to be able to append nicely when the output layers don't
    # return any boxes
    if _boxes1.shape[0] == 0:
        _boxes1 = np.empty([0, 2, 2])
        _scores1 = np.empty([0,])
        _classes1 = np.empty([0,])
    if _boxes2.shape[0] == 0:
        _boxes2 = np.empty([0, 2, 2])
        _scores2 = np.empty([0,])
        _classes2 = np.empty([0,])
    if _boxes3.shape[0] == 0:
        _boxes3 = np.empty([0, 2, 2])
        _scores3 = np.empty([0,])
        _classes3 = np.empty([0,])

    boxes = np.append(_boxes1, _boxes2, axis=0)
    boxes = np.append(boxes, _boxes3, axis=0)
    
    scores = np.append(_scores1, _scores2, axis=0)
    scores = np.append(scores, _scores3, axis=0)

    classes = np.append(_classes1, _classes2, axis=0)
    classes = np.append(classes, _classes3, axis=0)

I'm going to publish modified versions of inference.py and utils.py on GitHub when I have time.
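For reference, the anchors indexed above with anchors[[6, 7, 8]] and so on are assumed to be a (9, 2) numpy array ordered from smallest to largest. A minimal sketch of building such an array from a yolov3 anchor line (the values below are the standard COCO yolov3 anchors, shown only as example data):

    import numpy as np

    # Standard COCO yolov3 anchors; example data only
    line = "10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326"
    anchors = np.array([float(v) for v in line.replace(" ", "").split(",")],
                       dtype=np.float32).reshape(-1, 2)  # shape (9, 2)

    print(anchors[[6, 7, 8]])  # largest anchors -> coarsest output grid
    print(anchors[[0, 1, 2]])  # smallest anchors -> finest output grid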

@AkkiSony
Author

AkkiSony commented Aug 23, 2021

@YsYusaito Thanks for your input. But even after I modified the code as per your snippet above, the detections remain unchanged. Is there anything that needs to be modified in the utils.py script as well? Would it be possible for you to upload the scripts to Google Drive and share the link, please?
Thank you again! :)

@AkkiSony
Author

AkkiSony commented Aug 23, 2021

@YsYusaito Can you also please tell me how you generated the anchor file? As I am new to this domain, I created a .txt file by copying the anchors from the cfg file.

But when I googled it, I found that the file can be generated directly using "darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416".

I generated anchor files using the above command. With the new anchor files, the bounding boxes appear to be very large, and hence I cannot see the classes or the score values.
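In case it helps narrow this down, here is a hypothetical sanity check for a generated anchor file (the file name anchors.txt is assumed); yolov3 expects nine width,height pairs sorted from smallest to largest, all fitting inside the 416x416 network input:

    import numpy as np

    # "anchors.txt" is an assumed name for the calc_anchors output file
    text = open("anchors.txt").read()
    values = [float(v) for v in text.replace("\n", ",").split(",") if v.strip()]
    anchors = np.array(values, dtype=np.float32).reshape(-1, 2)

    assert anchors.shape == (9, 2), "yolov3 needs exactly 9 anchor pairs"
    areas = anchors.prod(axis=1)
    assert (np.diff(areas) >= 0).all(), "anchors should be sorted small -> large"
    assert (anchors <= 416).all(), "anchors should fit the 416x416 input"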

@AkkiSony
Author

@YsYusaito I got the objects detected, but the score values of the detected objects are much lower than with the original darknet model.

In terms of inference, I would like to measure the inference time for a single image. I got the following output.

Net forward-pass time: 1664.445161819458 ms.
Box computation time: 0.9758472442626953 ms.

I would like to measure the inference time after the image is loaded into the model, i.e. just the time taken to process the image and perform the detection.

I measured it with Python's time module and found the result strange. I got the inference time shown below.

Inference after loading image: 1677.1337985992432 ms

The above inference time (inference on the Coral USB) seems suspicious to me, because the same model took around 380 ms on the PC without the Coral USB; of course, the accelerated inference time should be less than 380 ms.
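One possible explanation (an assumption on my part, not confirmed in this thread): the very first interpreter.invoke() on the Coral USB also transfers the model onto the Edge TPU, so it is much slower than subsequent calls. A minimal sketch that warms up before timing (the model path and the Windows delegate name edgetpu.dll are assumptions):

    from time import perf_counter
    import tflite_runtime.interpreter as tflite

    # Model path is hypothetical; "edgetpu.dll" is the Edge TPU delegate on Windows
    interpreter = tflite.Interpreter(
        model_path="model_edgetpu.tflite",
        experimental_delegates=[tflite.load_delegate("edgetpu.dll")])
    interpreter.allocate_tensors()

    interpreter.invoke()  # warm-up: first call loads the model onto the Edge TPU

    start = perf_counter()
    interpreter.invoke()  # time only the steady-state forward pass
    print(f"Warm forward-pass time: {(perf_counter() - start) * 1000:.1f} ms")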

Can you share some insights, please?
