Different result between python and C++ #247

Open
hrshovonsmx opened this issue Aug 1, 2023 · 0 comments

Hello,
First of all, thanks a lot for this easy-to-use library. Converting Python code into C++ has been a breeze so far, but there are some issues I would like to discuss.

Unfortunately I can't share the model, as it is company-proprietary. I am posting the code in case the mistake lies there.

TensorFlow Python version: 2.13

TensorFlow C API version: 2.13

cppflow version: latest

Model type: image segmentation (UNet)

Model conversion code (Python):
This code was written to convert multiple models in a loop.

import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
import tensorflow as tf
from efficientnet.tfkeras import EfficientNetB7
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model, model_from_json
from pathlib import Path 
from glob import glob 
import numpy as np 
import json
import skimage.io as skio
tf.keras.backend.clear_session()

import tensorflow.keras.backend as K

model_paths = [SOME_MODEL_PATH]

for model_path in model_paths:
    print(model_path)
    model = tf.keras.models.load_model(model_path,compile=False)
    @tf.function
    def serve(*args, **kwargs):
        outputs = model(*args, **kwargs)
        # Apply postprocessing steps, or add additional outputs.
        ...
        return outputs

    # arg_specs is `[tf.TensorSpec(...), ...]`. kwarg_specs, in this
    # example, is an empty dict since functional models do not use keyword
    # arguments.
    arg_specs, kwarg_specs = model.save_spec()
    savepath = f"op_ocr/{Path(model_path).stem}"
    
    model.save(savepath, signatures={
      'serving_default': serve.get_concrete_function(*arg_specs,
                                                     **kwarg_specs)
    })
    #model.save(savepath)
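
Since the C++ side addresses the graph by raw tensor names (inputsigname + ":0" and "StatefulPartitionedCall:0"), here is a small sketch of how the exported serving signature can be inspected from Python to confirm the exact input/output names, dtypes and shapes. The path op_ocr/MODEL_STEM is a placeholder for one of the SavedModel directories written by the loop above.

import tensorflow as tf

# Placeholder path: one of the SavedModel directories written by the conversion loop.
saved_path = "op_ocr/MODEL_STEM"

loaded = tf.saved_model.load(saved_path)
serving_fn = loaded.signatures["serving_default"]

# High-level specs of the serving signature (dtypes, shapes, names).
print(serving_fn.structured_input_signature)
print(serving_fn.structured_outputs)

# Low-level graph tensor names that the cppflow call has to match,
# e.g. "serving_default_...:0" inputs and "StatefulPartitionedCall:0" outputs.
print([t.name for t in serving_fn.inputs])
print([t.name for t in serving_fn.outputs])

The same information is also available from the command line with saved_model_cli show --dir op_ocr/MODEL_STEM --all.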

Inference code (C++):

The input is a vector of CV_32FC mats. In my case there are two input types: 3-channel RGB (8-bit), or 3-channel RGB plus a 1-channel NIR band (all channels 16-bit). The division factor is 255.f for the 8-bit inputs and 65535.f for the 16-bit inputs.

TF_CONV_DTYPE_RGB is TF_UINT8
TF_CONV_DTYPE_NIR is TF_UINT16

In both cases, some segmentation results are slightly different from Python.

The converted model was also tested in Python; its results are the same as those of the Keras h5 model.

for (size_t i = 0; i < input.size(); i++)
{
    cppflow::tensor img_tensor;
    if (dtype == TF_CONV_DTYPE_RGB)
    {
        // 8-bit RGB: copy the raw bytes of the (continuous) Mat into the tensor.
        std::vector<uint8_t> img_data;
        img_data.assign(input[i].data, input[i].data + input[i].total() * num_channels);
        img_tensor = cppflow::tensor(img_data, {input_dim, input_dim, num_channels});
    }
    else if (dtype == TF_CONV_DTYPE_NIR)
    {
        // 16-bit RGB+NIR: flatten to a single-row, single-channel Mat and copy it into a vector.
        Mat imgData = input[i].clone();
        std::vector<uint16_t> img_data = imgData.reshape(1, 1);
        // Alternative: img_data.assign((uint16_t *)imgData.data, (uint16_t *)imgData.data + imgData.total() * num_channels);
        img_tensor = cppflow::tensor(img_data, {input_dim, input_dim, num_channels});
    }

    // Same preprocessing as on the Python side: cast to float, scale by the division factor, add batch dimension.
    img_tensor = cppflow::cast(img_tensor, dtype, TF_FLOAT);
    img_tensor = img_tensor / division_factor;
    img_tensor = cppflow::expand_dims(img_tensor, 0);

    // Run the serving_default signature.
    auto inf_out = (*modelpts)({{inputsigname + ":0", img_tensor}}, {"StatefulPartitionedCall:0"})[0];
    //auto final_out = cppflow::arg_max(inf_out, 3);
    //auto final_8bit = cppflow::cast(final_out, TF_INT64, TF_UINT8);

    // Copy the NHWC float output back into a multi-channel OpenCV Mat.
    std::vector<float> output_vector = inf_out.get_data<float>();
    Mat op = Mat(input_dim, input_dim, CV_32FC(num_classes));
    memcpy(op.data, output_vector.data(), output_vector.size() * sizeof(float));
    output.push_back(op);
}
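
To quantify how large "slightly different" is, here is a minimal comparison sketch, assuming an 8-bit RGB test image. model_path, test_image.png and cpp_output.bin (the raw float32 output dumped from the C++ loop above for the same image) are placeholders; for the 16-bit RGB+NIR inputs the division factor would be 65535.0.

import numpy as np
import tensorflow as tf
import skimage.io as skio

# Placeholders: the original Keras model and one test image.
model_path = "SOME_MODEL_PATH"
image_path = "test_image.png"
division_factor = 255.0  # 65535.0 for the 16-bit RGB+NIR inputs

model = tf.keras.models.load_model(model_path, compile=False)

# Mirror the C++ preprocessing: HWC integer image -> float32 / factor -> batch of 1.
img = skio.imread(image_path)
img_tensor = tf.expand_dims(tf.cast(img, tf.float32) / division_factor, 0)
py_output = model(img_tensor).numpy()

# Raw float32 output of the C++ loop for the same image, dumped to disk (placeholder file name).
cpp_output = np.fromfile("cpp_output.bin", dtype=np.float32).reshape(py_output.shape)

print("max abs diff:", np.max(np.abs(py_output - cpp_output)))
print("argmax mismatches:", np.sum(np.argmax(py_output, -1) != np.argmax(cpp_output, -1)))

The second print shows whether the per-pixel differences are large enough to change the predicted class, which is what matters for the segmentation masks.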