Failure to Trace MiDaS Model #10

Open
rdcdt1 opened this issue Feb 28, 2024 · 1 comment
Labels
assigned We're actively working on this issue and hope to provide an update soon

Comments

rdcdt1 commented Feb 28, 2024

I'm trying to use AI Hub on MiDaS (a very popular model for image-to-depth-map estimation), but from what I understand it can't be traced.
Here is the code I use:

import torch
import urllib.request
import qai_hub as hub

# Load MiDaS model
model_type = "DPT_Large"     # MiDaS v3 - Large     (highest accuracy, slowest inference speed)
#model_type = "DPT_Hybrid"   # MiDaS v3 - Hybrid    (medium accuracy, medium inference speed)
#model_type = "MiDaS_small"  # MiDaS v2.1 - Small   (lowest accuracy, highest inference speed)
midas = torch.hub.load("intel-isl/MiDaS", model_type)

# Trace MiDaS model
input_shape = (1, 3, 384, 384)  # Adjust input shape as needed
example_input = torch.rand(input_shape)
traced_midas = torch.jit.trace(midas, example_input)

# Optimize model for the chosen device
device = hub.Device("Samsung Galaxy S23 Ultra")
compile_job = hub.submit_compile_job(
    model=traced_midas,
    name="MyMiDaSModel",
    device=device,
    input_specs=dict(image=input_shape),
)

# Run the model on a hosted device
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)

Here is the log of the execution:

C:\Users\iphone/.cache\torch\hub\intel-isl_MiDaS_master\midas\backbones\vit.py:22: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
  gs_old = int(math.sqrt(len(posemb_grid)))
Traceback (most recent call last):
  File "C:\Users\iphone\midas_qualcomm\test.py", line 14, in <module>
    traced_midas = torch.jit.trace(midas, example_input)
  File "C:\Users\iphone\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\jit\_trace.py", line 794, in trace
    return trace_module(
  File "C:\Users\iphone\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\jit\_trace.py", line 1056, in trace_module
    module._c._create_method_from_trace(
  File "C:\Users\iphone\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\iphone\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1488, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "C:\Users\iphone/.cache\torch\hub\intel-isl_MiDaS_master\midas\dpt_depth.py", line 166, in forward
    return super().forward(x).squeeze(dim=1)
  File "C:\Users\iphone/.cache\torch\hub\intel-isl_MiDaS_master\midas\dpt_depth.py", line 114, in forward
    layers = self.forward_transformer(self.pretrained, x)
  File "C:\Users\iphone/.cache\torch\hub\intel-isl_MiDaS_master\midas\backbones\vit.py", line 13, in forward_vit
    return forward_adapted_unflatten(pretrained, x, "forward_flex")
  File "C:\Users\iphone/.cache\torch\hub\intel-isl_MiDaS_master\midas\backbones\utils.py", line 99, in forward_adapted_unflatten
    nn.Unflatten(
  File "C:\Users\iphone\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\flatten.py", line 110, in __init__
    self._require_tuple_int(unflattened_size)
  File "C:\Users\iphone\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\flatten.py", line 133, in _require_tuple_int
    raise TypeError("unflattened_size must be tuple of ints, " +
TypeError: unflattened_size must be tuple of ints, but found element of type Tensor at pos 0

Did I do something wrong?

Contributor

bhushan23 commented Feb 29, 2024

Hi @rdcdt1,
It looks like the Unflatten op is being handed a torch.Tensor instead of integers. This is unusual, though, since C:\Users\iphone/.cache\torch\hub\intel-isl_MiDaS_master\midas\backbones\utils.py seems to have the following initialization:
[screenshot: unflatten initialization in midas/backbones/utils.py]
which should be an integer.

Could you please add a breakpoint before line 99 in utils.py and check the unflatten input size being provided? This could also be an issue with the torch version (I wasn't able to find the supported torch version for MiDaS).
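
For reference, here is a minimal sketch of one way to do that check without editing the cached hub file: monkey-patch nn.Unflatten.__init__ so it prints the sizes it receives during tracing. The model type and input shape are taken from the snippet above; the patch itself is purely illustrative and not part of MiDaS or AI Hub.

import torch
import torch.nn as nn

# Illustrative debug hook: wrap nn.Unflatten.__init__ so the sizes passed to it
# are printed before PyTorch validates them.
_orig_unflatten_init = nn.Unflatten.__init__

def _debug_unflatten_init(self, dim, unflattened_size):
    # During tracing these may arrive as 0-dim Tensors rather than Python ints,
    # which is exactly what the TypeError above complains about.
    print("nn.Unflatten received:", dim,
          [type(s).__name__ for s in unflattened_size])
    _orig_unflatten_init(self, dim, unflattened_size)

nn.Unflatten.__init__ = _debug_unflatten_init

midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
example_input = torch.rand(1, 3, 384, 384)
torch.jit.trace(midas, example_input)  # reproduces the failure, now with the debug print

If the printed element types are Tensor rather than int, that confirms the shape values are being traced as tensors before they reach nn.Unflatten.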

@kory kory changed the title ai-hub need trace but some model aren't traceable Failure to Trace MiDaS Model Feb 29, 2024
@mestrona-3 mestrona-3 added the assigned We're actively working on this issue and hope to provide an update soon label Mar 21, 2024