TypeError: cannot pickle '_thread.RLock' object #1309

Closed
wzyloveywh opened this issue Mar 31, 2023 · 3 comments
@wzyloveywh

When I run the voice_activity_detection.ipynb notebook in Jupyter, the last code cell in the "Training" section raises this error: TypeError: cannot pickle '_thread.RLock' object

How should I solve this problem?

TypeError Traceback (most recent call last)
Cell In[9], line 3
1 import pytorch_lightning as pl
2 trainer = pl.Trainer(devices=1, accelerator="gpu", max_epochs=2)
----> 3 trainer.fit(model)

File D:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py:770, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
751 r"""
752 Runs the full optimization routine.
753
(...)
767 datamodule: An instance of :class:`~pytorch_lightning.core.datamodule.LightningDataModule`.
768 """
769 self.strategy.model = model
--> 770 self._call_and_handle_interrupt(
771 self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
772 )

File D:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py:723, in Trainer._call_and_handle_interrupt(self, trainer_fn, *args, **kwargs)
721 return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
722 else:
--> 723 return trainer_fn(*args, **kwargs)
724 # TODO: treat KeyboardInterrupt as BaseException (delete the code below) in v1.7
725 except KeyboardInterrupt as exception:

File D:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py:811, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
807 ckpt_path = ckpt_path or self.resume_from_checkpoint
808 self._ckpt_path = self.__set_ckpt_path(
809 ckpt_path, model_provided=True, model_connected=self.lightning_module is not None
810 )
--> 811 results = self._run(model, ckpt_path=self.ckpt_path)
813 assert self.state.stopped
814 self.training = False

File D:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py:1236, in Trainer._run(self, model, ckpt_path)
1232 self._checkpoint_connector.restore_training_state()
1234 self._checkpoint_connector.resume_end()
-> 1236 results = self._run_stage()
1238 log.detail(f"{self.__class__.__name__}: trainer tearing down")
1239 self._teardown()

File D:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py:1323, in Trainer._run_stage(self)
1321 if self.predicting:
1322 return self._run_predict()
-> 1323 return self._run_train()

File D:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py:1345, in Trainer._run_train(self)
1342 self._pre_training_routine()
1344 with isolate_rng():
-> 1345 self._run_sanity_check()
1347 # enable train mode
1348 self.model.train()

File D:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py:1413, in Trainer._run_sanity_check(self)
1411 # run eval step
1412 with torch.no_grad():
-> 1413 val_loop.run()
1415 self._call_callback_hooks("on_sanity_check_end")
1417 # reset logger connector

File D:\Anaconda3\lib\site-packages\pytorch_lightning\loops\base.py:204, in Loop.run(self, *args, **kwargs)
202 try:
203 self.on_advance_start(*args, **kwargs)
--> 204 self.advance(*args, **kwargs)
205 self.on_advance_end()
206 self._restarting = False

File D:\Anaconda3\lib\site-packages\pytorch_lightning\loops\dataloader\evaluation_loop.py:155, in EvaluationLoop.advance(self, *args, **kwargs)
153 if self.num_dataloaders > 1:
154 kwargs["dataloader_idx"] = dataloader_idx
--> 155 dl_outputs = self.epoch_loop.run(self._data_fetcher, dl_max_batches, kwargs)
157 # store batch level output per dataloader
158 self._outputs.append(dl_outputs)

File D:\Anaconda3\lib\site-packages\pytorch_lightning\loops\base.py:199, in Loop.run(self, *args, **kwargs)
195 return self.on_skip()
197 self.reset()
--> 199 self.on_run_start(*args, **kwargs)
201 while not self.done:
202 try:

File D:\Anaconda3\lib\site-packages\pytorch_lightning\loops\epoch\evaluation_epoch_loop.py:88, in EvaluationEpochLoop.on_run_start(self, data_fetcher, dl_max_batches, kwargs)
86 self._reload_dataloader_state_dict(data_fetcher)
87 # creates the iterator inside the fetcher but returns self
---> 88 self._data_fetcher = iter(data_fetcher)
89 # add the previous fetched value to properly track is_last_batch with no prefetching
90 data_fetcher.fetched += self.batch_progress.current.ready

File D:\Anaconda3\lib\site-packages\pytorch_lightning\utilities\fetching.py:178, in AbstractDataFetcher.__iter__(self)
176 def __iter__(self) -> "AbstractDataFetcher":
177 self.reset()
--> 178 self.dataloader_iter = iter(self.dataloader)
179 self._apply_patch()
180 self.prefetching()

File D:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py:438, in DataLoader.__iter__(self)
436 return self._iterator
437 else:
--> 438 return self._get_iterator()

File D:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py:384, in DataLoader._get_iterator(self)
382 else:
383 self.check_worker_number_rationality()
--> 384 return _MultiProcessingDataLoaderIter(self)

File D:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py:1048, in _MultiProcessingDataLoaderIter.__init__(self, loader)
1041 w.daemon = True
1042 # NB: Process.start() actually take some time as it needs to
1043 # start a process and pass the arguments over via a pipe.
1044 # Therefore, we only add a worker to self._workers list after
1045 # it started, so that we do not call .join() if program dies
1046 # before it starts, and __del__ tries to join but will get:
1047 # AssertionError: can only join a started process.
-> 1048 w.start()
1049 self._index_queues.append(index_queue)
1050 self._workers.append(w)

File D:\Anaconda3\lib\multiprocessing\process.py:121, in BaseProcess.start(self)
118 assert not _current_process._config.get('daemon'), \
119 'daemonic processes are not allowed to have children'
120 _cleanup()
--> 121 self._popen = self._Popen(self)
122 self._sentinel = self._popen.sentinel
123 # Avoid a refcycle if the target function holds an indirect
124 # reference to the process object (see bpo-30775)

File D:\Anaconda3\lib\multiprocessing\context.py:224, in Process._Popen(process_obj)
222 @staticmethod
223 def _Popen(process_obj):
--> 224 return _default_context.get_context().Process._Popen(process_obj)

File D:\Anaconda3\lib\multiprocessing\context.py:327, in SpawnProcess._Popen(process_obj)
324 @staticmethod
325 def _Popen(process_obj):
326 from .popen_spawn_win32 import Popen
--> 327 return Popen(process_obj)

File D:\Anaconda3\lib\multiprocessing\popen_spawn_win32.py:93, in Popen.__init__(self, process_obj)
91 try:
92 reduction.dump(prep_data, to_child)
---> 93 reduction.dump(process_obj, to_child)
94 finally:
95 set_spawning_popen(None)

File D:\Anaconda3\lib\multiprocessing\reduction.py:60, in dump(obj, file, protocol)
58 def dump(obj, file, protocol=None):
59 '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60 ForkingPickler(file, protocol).dump(obj)

TypeError: cannot pickle '_thread.RLock' object
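
A note on what the traceback shows: the error is raised while torch.utils.data.DataLoader starts its worker processes. On Windows, workers are created with the "spawn" start method, which pickles the dataset and everything it references; any object that holds a `_thread.RLock` (loggers, open handles, model wrappers) cannot be pickled and fails exactly like this. Below is a minimal, self-contained sketch of the mechanism and the usual workaround of keeping data loading in the main process. `LockedDataset` is a hypothetical dataset used only for illustration; this is a general PyTorch-on-Windows pattern, not a confirmed fix for this specific notebook.

```python
# Minimal sketch of the failure mode and a common workaround.
# Assumption: the hypothetical LockedDataset stands in for whatever object
# in the notebook ends up holding a thread lock.
import pickle
import threading

import torch
from torch.utils.data import DataLoader, Dataset


class LockedDataset(Dataset):
    """Hypothetical dataset that accidentally captures an unpicklable object."""

    def __init__(self):
        self._lock = threading.RLock()  # an RLock cannot be pickled

    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return torch.tensor(idx)


if __name__ == "__main__":
    # The root cause in isolation: pickling a thread lock always fails.
    try:
        pickle.dumps(threading.RLock())
    except TypeError as err:
        print(err)  # cannot pickle '_thread.RLock' object

    # With num_workers > 0 on Windows, DataLoader spawns workers and pickles
    # the dataset, reproducing the same TypeError during iteration.
    # Workaround: num_workers=0 keeps loading in the main process, so nothing
    # needs to be pickled.
    loader = DataLoader(LockedDataset(), batch_size=2, num_workers=0)
    for batch in loader:
        print(batch)
```

If that is the cause here, the usual options are to set num_workers to 0 wherever the dataloaders are configured, or to make sure nothing unpicklable (loggers, locks, open handles) is attached to the model or dataset before trainer.fit(model) is called.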

@github-actions

Thank you for your issue. Give us a little time to review it.

PS. You might want to check the FAQ if you haven't done so already.

This is an automated reply, generated by FAQtory

@hbredin
Member

hbredin commented Mar 31, 2023

To maximise the probability of someone answering your question:

  • if your issue is a bug report, please provide a minimum reproducible example, e.g. a link to a self-contained Google Colab notebook (i.e. containing everything needed to reproduce the bug: installation of pyannote.audio, downloads of models or test data, etc.)

  • if your issue is a feature request, please read this first and update your request accordingly.

@stale

stale bot commented Sep 27, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label on Sep 27, 2023
stale bot closed this as completed on Oct 28, 2023