
unable to save the model in torchscript format #277

Open
IamExperimenting opened this issue Jun 19, 2022 · 2 comments

Comments


IamExperimenting commented Jun 19, 2022

Hi team,

Hi team,

First, thanks for your repository. I would like to save the model in TorchScript format rather than the traditional way. When I try to script the model using

torch.jit.script(model)

I get the error below:

---------------------------------------------------------------------------
NotSupportedError                         Traceback (most recent call last)
/tmp/ipykernel_127656/979748029.py in <module>
      7 enabled_precisions = {torch.float, torch.half} # Run with fp16
      8 
----> 9 trt_ts_module = torch_tensorrt.compile(model, inputs=inputs, enabled_precisions=enabled_precisions)
     10 
     11 input_data = input_data.to('cuda').half()

~/miniconda3/envs/tensorrt/lib/python3.7/site-packages/torch_tensorrt/_compile.py in compile(module, ir, inputs, enabled_precisions, **kwargs)
    112                 "Module was provided as a torch.nn.Module, trying to script the module with torch.jit.script. In the event of a failure please preconvert your module to TorchScript"
    113             )
--> 114             ts_mod = torch.jit.script(module)
    115         return torch_tensorrt.ts.compile(ts_mod, inputs=inputs, enabled_precisions=enabled_precisions, **kwargs)
    116     elif target_ir == _IRType.fx:

~/miniconda3/envs/tensorrt/lib/python3.7/site-packages/torch/jit/_script.py in script(obj, optimize, _frames_up, _rcb, example_inputs)
   1264         obj = call_prepare_scriptable_func(obj)
   1265         return torch.jit._recursive.create_script_module(
-> 1266             obj, torch.jit._recursive.infer_methods_to_compile
   1267         )
   1268 

~/miniconda3/envs/tensorrt/lib/python3.7/site-packages/torch/jit/_recursive.py in create_script_module(nn_module, stubs_fn, share_types, is_tracing)
    452     if not is_tracing:
    453         AttributeTypeIsSupportedChecker().check(nn_module)
--> 454     return create_script_module_impl(nn_module, concrete_type, stubs_fn)
    455 
    456 def create_script_module_impl(nn_module, concrete_type, stubs_fn):

~/miniconda3/envs/tensorrt/lib/python3.7/site-packages/torch/jit/_recursive.py in create_script_module_impl(nn_module, concrete_type, stubs_fn)
    464     """
    465     cpp_module = torch._C._create_module_with_type(concrete_type.jit_type)
--> 466     method_stubs = stubs_fn(nn_module)
    467     property_stubs = get_property_stubs(nn_module)
    468     hook_stubs, pre_hook_stubs = get_hook_stubs(nn_module)

~/miniconda3/envs/tensorrt/lib/python3.7/site-packages/torch/jit/_recursive.py in infer_methods_to_compile(nn_module)
    733     stubs = []
    734     for method in uniqued_methods:
--> 735         stubs.append(make_stub_from_method(nn_module, method))
    736     return overload_stubs + stubs
    737 

~/miniconda3/envs/tensorrt/lib/python3.7/site-packages/torch/jit/_recursive.py in make_stub_from_method(nn_module, method_name)
     64     # In this case, the actual function object will have the name `_forward`,
     65     # even though we requested a stub for `forward`.
---> 66     return make_stub(func, method_name)
     67 
     68 

~/miniconda3/envs/tensorrt/lib/python3.7/site-packages/torch/jit/_recursive.py in make_stub(func, name)
     49 def make_stub(func, name):
     50     rcb = _jit_internal.createResolutionCallbackFromClosure(func)
---> 51     ast = get_jit_def(func, name, self_name="RecursiveScriptModule")
     52     return ScriptMethodStub(rcb, ast, func)
     53 

~/miniconda3/envs/tensorrt/lib/python3.7/site-packages/torch/jit/frontend.py in get_jit_def(fn, def_name, self_name, is_classmethod)
    262         pdt_arg_types = type_trace_db.get_args_types(qualname)
    263 
--> 264     return build_def(parsed_def.ctx, fn_def, type_line, def_name, self_name=self_name, pdt_arg_types=pdt_arg_types)
    265 
    266 # TODO: more robust handling of recognizing ignore context manager

~/miniconda3/envs/tensorrt/lib/python3.7/site-packages/torch/jit/frontend.py in build_def(ctx, py_def, type_line, def_name, self_name, pdt_arg_types)
    300                        py_def.col_offset + len("def"))
    301 
--> 302     param_list = build_param_list(ctx, py_def.args, self_name, pdt_arg_types)
    303     return_type = None
    304     if getattr(py_def, 'returns', None) is not None:

~/miniconda3/envs/tensorrt/lib/python3.7/site-packages/torch/jit/frontend.py in build_param_list(ctx, py_args, self_name, pdt_arg_types)
    335             if arg is not None:
    336                 ctx_range = build_expr(ctx, arg).range()
--> 337                 raise NotSupportedError(ctx_range, _vararg_kwarg_err)
    338 
    339     # List of Tuple of args and type as inferred by profile directed typing

NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
  File "/home/iamalien/Desktop/my_files/semantic_segmentation_example/semantic-segmentation-pytorch/sage_example/code/mit_semseg/models/models.py", line 29
    def forward(self, feed_dict, *, segSize=None):
                                            ~~~~ <--- HERE
        # training
        if segSize is None:

@hangzhaomit @Tete-Xiao @davidbau @devinaconley @eugenelawrence @MarcoForte @zhoubolei @yagi-3 @arjo129 @jeremyfix

Could you please help me save the model in TorchScript format?
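For anyone hitting the same error: the traceback points at the bare `*` in `forward(self, feed_dict, *, segSize=None)` in `mit_semseg/models/models.py`. The TorchScript frontend rejects keyword-only parameters with defaults, so one possible workaround is to edit that signature so `segSize` becomes an ordinary positional-or-keyword parameter. The sketch below is a stdlib-only illustration of the difference (the class and helper names `Original`, `Rewritten`, and `has_keyword_only` are made up for this example, not part of the repo):

```python
import inspect

class Original:
    # Mirrors the failing signature: the bare * makes segSize keyword-only,
    # which is the construct torch.jit.script rejects.
    def forward(self, feed_dict, *, segSize=None):
        return feed_dict if segSize is None else (feed_dict, segSize)

class Rewritten:
    # segSize is now an ordinary parameter with a default; this shape of
    # signature does not trip the keyword-only check in the JIT frontend.
    def forward(self, feed_dict, segSize=None):
        return feed_dict if segSize is None else (feed_dict, segSize)

def has_keyword_only(fn):
    """Return True if fn declares any keyword-only parameters."""
    return any(p.kind is inspect.Parameter.KEYWORD_ONLY
               for p in inspect.signature(fn).parameters.values())

print(has_keyword_only(Original.forward))   # True: this is what scripting rejects
print(has_keyword_only(Rewritten.forward))  # False
```

If editing the model source is not an option, `torch.jit.trace` with example inputs may also sidestep this particular frontend check, since tracing records tensor operations rather than parsing the Python signature (with the usual caveat that tracing does not capture data-dependent control flow such as the `if segSize is None` branch).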

@cclauss
Contributor

cclauss commented Jun 19, 2022

Please remove me from this @mention. I have never used TorchScript and am not interested in doing so.

@IamExperimenting
Author

Hi Team,

Could you please help fix this issue?
@hangzhaomit @Tete-Xiao @davidbau @devinaconley @eugenelawrence @MarcoForte @zhoubolei @yagi-3 @arjo129 @jeremyfix
