
Cannot use the inference branch in my own code #36

Open
Bing-Xiong opened this issue Aug 12, 2022 · 1 comment

Comments

@Bing-Xiong

I am using the HAWP inference branch. I can run python -m hawp.predict test.jpg --show from the terminal to get results. However, I want to call HAWP from my own code so I can combine it with other algorithms. I tried:

from hawp import show
from hawp import predicting

wireframe_parser = predicting.WireframeParser(visualize_image=True)
wireframe_painter = show.painters.WireframePainter()
predict, _, meta = wireframe_parser.images("./test.jpg")
print(predict)

and I got the error:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "G:\Anaconda\envs\pytorch_gpu\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "G:\Anaconda\envs\pytorch_gpu\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "G:\Anaconda\envs\pytorch_gpu\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "G:\Anaconda\envs\pytorch_gpu\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "G:\Anaconda\envs\pytorch_gpu\lib\runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "G:\Anaconda\envs\pytorch_gpu\lib\runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "G:\Anaconda\envs\pytorch_gpu\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "E:\LBW\line_3d_construction\ALGO\hawp_video_test.py", line 14, in <module>
    predict, _, meta = wireframe_parser.images("./test.jpg")
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\hawp\predicting.py", line 63, in images
    yield from self.dataset(data, **kwargs)
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\hawp\predicting.py", line 40, in dataset
    yield from self.dataloader(dataloader)
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\hawp\predicting.py", line 43, in dataloader
    for batch_i, item in enumerate(dataloader):
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 359, in __iter__
    return self._get_iterator()
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 305, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 918, in __init__
    w.start()
  File "G:\Anaconda\envs\pytorch_gpu\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "G:\Anaconda\envs\pytorch_gpu\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "G:\Anaconda\envs\pytorch_gpu\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "G:\Anaconda\envs\pytorch_gpu\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "G:\Anaconda\envs\pytorch_gpu\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "G:\Anaconda\envs\pytorch_gpu\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError: 
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
Traceback (most recent call last):
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 990, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "G:\Anaconda\envs\pytorch_gpu\lib\queue.py", line 178, in get
    raise Empty
_queue.Empty

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "E:/LBW/line_3d_construction/ALGO/hawp_video_test.py", line 14, in <module>
    predict, _, meta = wireframe_parser.images("./test.jpg")
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\hawp\predicting.py", line 63, in images
    yield from self.dataset(data, **kwargs)
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\hawp\predicting.py", line 40, in dataset
    yield from self.dataloader(dataloader)
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\hawp\predicting.py", line 43, in dataloader
    for batch_i, item in enumerate(dataloader):
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 521, in __next__
    data = self._next_data()
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 1186, in _next_data
    idx, data = self._get_data()
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 1142, in _get_data
    success, data = self._try_get_data()
  File "G:\Anaconda\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 1003, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 26680) exited unexpectedly

Is there any way to use the inference branch from my own code rather than through a command in the terminal?

Thanks!

@cherubicXN
Owner

Hi,

I think the problem is the multiprocessing used by the data loader. How about changing num_workers in the data loader to 0?
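For context on why this fails: on Windows, Python starts child processes with the "spawn" method, which re-imports the main module. If the module spawns DataLoader workers at import time (i.e. at top level, without a main guard), each child tries to spawn workers again, producing the "bootstrapping phase" RuntimeError shown in the traceback. Setting num_workers to 0 avoids worker processes entirely; alternatively, the calling script can use the standard main-guard idiom. A minimal stdlib sketch of that idiom (this is generic Python, not HAWP's API):

```python
import multiprocessing as mp

def worker(x):
    # Stand-in for a DataLoader worker task.
    return x * x

def main():
    # Spawning child processes here is safe because main() is only
    # called from inside the __main__ guard below; when Windows
    # re-imports this module in a child, the guard is False and no
    # new processes are started.
    with mp.Pool(processes=2) as pool:
        return pool.map(worker, range(4))

if __name__ == '__main__':
    print(main())  # guard required on Windows ("spawn" start method)
```

In the script from the issue, the same fix would mean moving the wireframe_parser.images(...) call (and anything that iterates it) under an if __name__ == '__main__': block in hawp_video_test.py.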
