
ValueError: path while running the extractor #2820

Open
kshtjkumar opened this issue May 8, 2024 · 30 comments
Labels
question General question regarding SI

Comments

@kshtjkumar

I am loading the file from a hard disk (drive E:) and running the extractor. Is there something wrong with how my code assigns the path for the sorter?

from pathlib import Path

recording_ecog = spre.bandpass_filter(recording_rhs, freq_min=300, freq_max=6000)  # bandpass filter
recording_notch_ecog = spre.notch_filter(recording_ecog, q=50)  # notch filter
rec_ecog_ref = spre.common_reference(recording_notch_ecog, operator="median", reference="global")  # re-reference the data
output_folder = Path(r"C:\Users\garim\mountainsort5_output789")
sorting_rec = ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder)
print("Sorter found", len(sorting_rec.get_unit_ids()), "units")
sorting_rec = sorting_rec.remove_empty_units()
print("Sorter found", len(sorting_rec.get_unit_ids()), "non-empty units")



---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[27], line 6
      4 rec_ecog_ref = spre.common_reference(recording_notch_ecog, operator="median", reference="global")  #rereferencing the data
      5 output_folder = Path(r"C:\Users\garim\mountainsort5_output789")
----> 6 sorting_rec = ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder)
      7 print("Sorter found", len(sorting_rec.get_unit_ids()), "units")
      8 sorting_rec = sorting_rec.remove_empty_units()

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:147, in run_sorter(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, docker_image, singularity_image, delete_container_files, with_output, **sorter_params)
    140             container_image = singularity_image
    141     return run_sorter_container(
    142         container_image=container_image,
    143         mode=mode,
    144         **common_kwargs,
    145     )
--> 147 return run_sorter_local(**common_kwargs)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:170, in run_sorter_local(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, with_output, **sorter_params)
    167 SorterClass = sorter_dict[sorter_name]
    169 # only classmethod call not instance (stateless at instance level but state is in folder)
--> 170 output_folder = SorterClass.initialize_folder(recording, output_folder, verbose, remove_existing_folder)
    171 SorterClass.set_params_to_folder(recording, output_folder, sorter_params, verbose)
    172 SorterClass.setup_recording(recording, output_folder, verbose=verbose)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\basesorter.py:141, in BaseSorter.initialize_folder(cls, recording, output_folder, verbose, remove_existing_folder)
    139 rec_file = output_folder / "spikeinterface_recording.json"
    140 if recording.check_serializablility("json"):
--> 141     recording.dump(rec_file, relative_to=output_folder)
    142 elif recording.check_serializablility("pickle"):
    143     recording.dump(output_folder / "spikeinterface_recording.pickle", relative_to=output_folder)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\core\base.py:569, in BaseExtractor.dump(self, file_path, relative_to, folder_metadata)
    557 """
    558 Dumps extractor to json or pickle
    559
   (...)
    566     This means that file and folder paths in extractor objects kwargs are changed to be relative rather than absolute.
    567 """
    568 if str(file_path).endswith(".json"):
--> 569     self.dump_to_json(file_path, relative_to=relative_to, folder_metadata=folder_metadata)
    570 elif str(file_path).endswith(".pkl") or str(file_path).endswith(".pickle"):
    571     self.dump_to_pickle(file_path, folder_metadata=folder_metadata)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\core\base.py:602, in BaseExtractor.dump_to_json(self, file_path, relative_to, folder_metadata)
    599     relative_to = Path(file_path).parent if relative_to is True else Path(relative_to)
    600     relative_to = relative_to.resolve().absolute()
--> 602 dump_dict = self.to_dict(
    603     include_annotations=True,
    604     include_properties=False,
    605     relative_to=relative_to,
    606     folder_metadata=folder_metadata,
    607     recursive=True,
    608 )
    609 file_path = self._get_file_path(file_path, [".json"])
    611 file_path.write_text(
    612     json.dumps(dump_dict, indent=4, cls=SIJsonEncoder),
    613     encoding="utf8",
    614 )

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\core\base.py:406, in BaseExtractor.to_dict(self, include_annotations, include_properties, relative_to, folder_metadata, recursive)
    404     relative_to = Path(relative_to).resolve().absolute()
    405     assert relative_to.is_dir(), "'relative_to' must be an existing directory"
--> 406     dump_dict = _make_paths_relative(dump_dict, relative_to)
    408 if folder_metadata is not None:
    409     if relative_to is not None:

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\core\base.py:983, in _make_paths_relative(d, relative)
    981 relative = str(Path(relative).resolve().absolute())
    982 func = lambda p: os.path.relpath(str(p), start=relative)
--> 983 return recursive_path_modifier(d, func, target="path", copy=True)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\core\core_tools.py:831, in recursive_path_modifier(d, func, target, copy)
    829 if isinstance(v, dict) and is_dict_extractor(v):
    830     nested_extractor_dict = v
--> 831     recursive_path_modifier(nested_extractor_dict, func, copy=False)
    832 # deal with list of extractor objects (e.g. concatenate_recordings)
    833 elif isinstance(v, list):

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\core\core_tools.py:831, in recursive_path_modifier(d, func, target, copy)
    829 if isinstance(v, dict) and is_dict_extractor(v):
    830     nested_extractor_dict = v
--> 831     recursive_path_modifier(nested_extractor_dict, func, copy=False)
    832 # deal with list of extractor objects (e.g. concatenate_recordings)
    833 elif isinstance(v, list):

    [... skipping similar frames: recursive_path_modifier at line 831 (1 times)]

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\core\core_tools.py:831, in recursive_path_modifier(d, func, target, copy)
    829 if isinstance(v, dict) and is_dict_extractor(v):
    830     nested_extractor_dict = v
--> 831     recursive_path_modifier(nested_extractor_dict, func, copy=False)
    832 # deal with list of extractor objects (e.g. concatenate_recordings)
    833 elif isinstance(v, list):

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\core\core_tools.py:824, in recursive_path_modifier(d, func, target, copy)
    821 kwargs = dc["kwargs"]
    823 # change in place (copy=False)
--> 824 recursive_path_modifier(kwargs, func, copy=False)
    826 # find nested and also change inplace (copy=False)
    827 nested_extractor_dict = None

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\core\core_tools.py:847, in recursive_path_modifier(d, func, target, copy)
    845     continue
    846 if isinstance(v, (str, Path)):
--> 847     dc[k] = func(v)
    848 elif isinstance(v, list):
    849     dc[k] = [func(e) for e in v]

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\core\base.py:982, in _make_paths_relative.<locals>.<lambda>(p)
    980 def _make_paths_relative(d, relative) -> dict:
    981     relative = str(Path(relative).resolve().absolute())
--> 982     func = lambda p: os.path.relpath(str(p), start=relative)
    983     return recursive_path_modifier(d, func, target="path", copy=True)

File ~\.conda\envs\spike\lib\ntpath.py:747, in relpath(path, start)
    745 path_drive, path_rest = splitdrive(path_abs)
    746 if normcase(start_drive) != normcase(path_drive):
--> 747     raise ValueError("path is on mount %r, start on mount %r" % (
    748         path_drive, start_drive))
    750 start_list = [x for x in start_rest.split(sep) if x]
    751 path_list = [x for x in path_rest.split(sep) if x]

ValueError: path is on mount 'E:', start on mount 'C:'
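
[Editor's note] The final frame shows the root cause: on Windows, ntpath.relpath cannot express a relative path between two different drive letters. A minimal reproduction of just that failure, with illustrative paths:

import os

# A relative path cannot cross drive letters on Windows, so this raises
# ValueError: path is on mount 'E:', start on mount 'C:'
os.path.relpath(r"E:\data\recording.rhs", start=r"C:\Users\garim\mountainsort5_output789")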
@zm711 (Collaborator) commented May 8, 2024

@kshtjkumar, which version are you using? This was an error for Windows computers that was fixed previously. Could you post the exact paths you're using? (Feel free to put ... if there are parts you want to edit for privacy.) E.g.

recording = se.read_xx(r'E:\Users\...\experiment1\...\file.xx')

zm711 added the question (General question regarding SI) label May 8, 2024
@kshtjkumar (Author)

Current version in this notebook: 0.99.1

@zm711 (Collaborator) commented May 8, 2024

I would update to at least 0.100.x; I believe the Windows fix landed around that time. Otherwise you have to sort on the same drive where the data is, so you could test E: -> E:.
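
[Editor's note] A minimal sketch of that workaround, assuming a hypothetical E:\sorting folder and the variables from the snippets above: keep the sorter output on the same drive as the raw data so relative paths can be computed.

from pathlib import Path

# Hypothetical output location on the same drive (E:) as the recording,
# so os.path.relpath never has to cross drive letters.
output_folder = Path(r"E:\sorting\mountainsort5_output789")
sorting_rec = ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder)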

@kshtjkumar (Author) commented May 8, 2024

file2 ="E:\CEAF_PC_BACKUP\monica_EEG_GSstudy\L37_1_19_1_24_240120_102532\L37_1_19_1_24_240120_102532_merged.rhs"
#reader = se.read_intan(file,stream_name ='RHD2000 amplifier channel',use_names_as_ids=True)
reader2 = se.read_intan(file2,stream_name ='RHD2000 amplifier channel',use_names_as_ids=True)

recording_rhs = reader2 #recording file
print(reader2.channel_ids)

recording_rhs.annotate(is_filtered = False)

channel_ids = recording_rhs.get_channel_ids()
fs = recording_rhs.get_sampling_frequency()
num_chan = recording_rhs.get_num_channels()
num_segments = recording_rhs.get_num_segments()

print("Channel_ids = ", channel_ids)
print("Sampling_frequency = ", fs)
print("Number of Channels = ", num_chan)
print("Number of segments = ", num_segments)
print('Total_rec_duration = ', recording_rhs.get_total_duration())

#ecog = ['D-000','D-002', 'D-004', 'D-006']
ecog = ['B-000', 'B-002', 'B-004', 'B-006']#,'D-000','D-002', 'D-004', 'D-006']
recording_rhs = recording_rhs.channel_slice(ecog) #Using only specific channels for recording

@kshtjkumar (Author)

> I would update to at least 0.100.x; I believe the Windows fix landed around that time. Otherwise you have to sort on the same drive where the data is, so you could test E: -> E:.

I don't mind updating, but then I will have to change a lot of syntax, and also the waveform extraction, for the newer version.

@zm711 (Collaborator) commented May 8, 2024

For version 0.100.x you won't have to change anything; the syntax changes with version 0.101.0.

@kshtjkumar (Author)

file2 ="E:\CEAF_PC_BACKUP\monica_EEG_GSstudy\L37_1_19_1_24_240120_102532\L37_1_19_1_24_240120_102532_merged.rhs"
#reader = se.read_intan(file,stream_name ='RHD2000 amplifier channel',use_names_as_ids=True)
reader2 = se.read_intan(file2,stream_name ='RHD2000 amplifier channel',use_names_as_ids=True)

recording_rhs = reader2 #recording file
print(reader2.channel_ids)

recording_rhs.annotate(is_filtered = False)

channel_ids = recording_rhs.get_channel_ids()
fs = recording_rhs.get_sampling_frequency()
num_chan = recording_rhs.get_num_channels()
num_segments = recording_rhs.get_num_segments()

print("Channel_ids = ", channel_ids)
print("Sampling_frequency = ", fs)
print("Number of Channels = ", num_chan)
print("Number of segments = ", num_segments)
print('Total_rec_duration = ', recording_rhs.get_total_duration())

#ecog = ['D-000','D-002', 'D-004', 'D-006']
ecog = ['B-000', 'B-002', 'B-004', 'B-006']#,'D-000','D-002', 'D-004', 'D-006']
recording_rhs = recording_rhs.channel_slice(ecog) #Using only specific channels for recording

Hi, here is my reader path. Can you help with the changes?

@zm711 (Collaborator) commented May 8, 2024

My only recommendation for the path is to use a raw string instead. Backslashes in Windows paths can be interpreted as escape sequences, so using a raw string protects you.

file2 =r"E:\CEAF_PC_BACKUP\monica_EEG_GSstudy\L37_1_19_1_24_240120_102532\L37_1_19_1_24_240120_102532_merged.rhs"

Notice the r for raw; this prevents any accidental escaping.
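
[Editor's note] A quick illustration of the escaping problem, with a throwaway path:

# Without the r prefix, backslash sequences are interpreted as escapes:
print("C:\new\temp")   # \n becomes a newline and \t becomes a tab
print(r"C:\new\temp")  # raw string: prints C:\new\temp exactly as written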

@zm711 (Collaborator) commented May 8, 2024

Also, as an FYI, we've updated Intan for neo 0.13.1, so there is now a distinction between the RHD and RHS amplifier streams. That code may break with a future version of neo, just in case you update.
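
[Editor's note] If you do update, one way to avoid hard-coding a stream name that may change is to list the available streams first. A minimal sketch, assuming a spikeinterface version that exposes get_neo_streams (the exact names returned depend on your neo version):

from spikeinterface.extractors import get_neo_streams

# Ask neo which streams this Intan file exposes, then pick the
# amplifier stream from whatever names this neo version reports.
stream_names, stream_ids = get_neo_streams("intan", file2)
print(stream_names, stream_ids)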

@kshtjkumar (Author)

> My only recommendation for the path is to use a raw string instead. Backslashes in Windows paths can be interpreted as escape sequences, so using a raw string protects you.
>
> file2 = r"E:\CEAF_PC_BACKUP\monica_EEG_GSstudy\L37_1_19_1_24_240120_102532\L37_1_19_1_24_240120_102532_merged.rhs"
>
> Notice the r for raw; this prevents any accidental escaping.

Tried this; it didn't work. Same mounting error.

@zm711 (Collaborator) commented May 9, 2024

After updating to spikeinterface 0.100.x?

@kshtjkumar (Author)

> After updating to spikeinterface 0.100.x?

Updated to 0.100.1. Here is the error:


Exception in thread Thread-5:
Traceback (most recent call last):
  File "C:\Users\garim\.conda\envs\spike\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\process.py", line 323, in run
    self.terminate_broken(cause)
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\process.py", line 463, in terminate_broken
    work_item.future.set_exception(bpe)
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 561, in set_exception
    raise InvalidStateError('{}: {!r}'.format(self._state, self))
concurrent.futures._base.InvalidStateError: CANCELLED: <Future at 0x292d9d21f90 state=cancelled>
---------------------------------------------------------------------------
SpikeSortingError                         Traceback (most recent call last)
Cell In[4], line 6
      4 rec_ecog_ref = spre.common_reference(recording_notch_ecog, operator="median", reference="global")  #rereferencing the data
      5 output_folder = Path(r"C:\Users\garim\mountainsort5_output78558599")
----> 6 sorting_rec = ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder)
      7 print("Sorter found", len(sorting_rec.get_unit_ids()), "units")
      8 sorting_rec = sorting_rec.remove_empty_units()

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:175, in run_sorter(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, docker_image, singularity_image, delete_container_files, with_output, **sorter_params)
    168             container_image = singularity_image
    169     return run_sorter_container(
    170         container_image=container_image,
    171         mode=mode,
    172         **common_kwargs,
    173     )
--> 175 return run_sorter_local(**common_kwargs)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:225, in run_sorter_local(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, with_output, **sorter_params)
    223 SorterClass.set_params_to_folder(recording, output_folder, sorter_params, verbose)
    224 SorterClass.setup_recording(recording, output_folder, verbose=verbose)
--> 225 SorterClass.run_from_folder(output_folder, raise_error, verbose)
    226 if with_output:
    227     sorting = SorterClass.get_result_from_folder(output_folder, register_recording=True, sorting_info=True)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\basesorter.py:293, in BaseSorter.run_from_folder(cls, output_folder, raise_error, verbose)
    290         print(f"{sorter_name} run time {run_time:0.2f}s")
    292 if has_error and raise_error:
--> 293     raise SpikeSortingError(
    294         f"Spike sorting error trace:\n{log['error_trace']}\n"
    295         f"Spike sorting failed. You can inspect the runtime trace in {output_folder}/spikeinterface_log.json."
    296     )
    298 return run_time

SpikeSortingError: Spike sorting error trace:
Traceback (most recent call last):
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\basesorter.py", line 258, in run_from_folder
    SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose)
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\external\mountainsort5.py", line 191, in _run_from_folder
    recording_cached = create_cached_recording(
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\mountainsort5\util\create_cached_recording.py", line 18, in create_cached_recording
    si.BinaryRecordingExtractor.write_recording(
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\core\binaryrecordingextractor.py", line 148, in write_recording
    write_binary_recording(recording, file_paths=file_paths, dtype=dtype, **job_kwargs)
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\core\recording_tools.py", line 137, in write_binary_recording
    executor.run()
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\core\job_tools.py", line 401, in run
    for res in results:
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\tqdm\notebook.py", line 250, in __iter__
    for obj in it:
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\tqdm\std.py", line 1181, in __iter__
    for obj in iterable:
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\process.py", line 575, in _chain_from_iterable_of_lists
    for element in iterable:
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 621, in result_iterator
    yield _result_or_cancel(fs.pop())
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 319, in _result_or_cancel
    return fut.result(timeout)
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 458, in result
    return self.__get_result()
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

Spike sorting failed. You can inspect the runtime trace in C:\Users\garim\mountainsort5_output78558599/spikeinterface_log.json.

@zm711 (Collaborator) commented May 9, 2024

We fixed the drive error! So that's one down. This is now a multiprocessing error.

What did you set n_jobs to?

And did you install the most recent release, 0.100.6 for example?

@kshtjkumar (Author)

Not sure what n_jobs is; I can try 0.100.6 too!

@zm711 (Collaborator) commented May 9, 2024

If you haven't messed with it, then it should default to 1, except in run_sorter, where it defaults to all cores, which could be a problem. Try 0.100.6, and if it doesn't work we may need to strategize a little. Windows is always a bit tricky to get working for these things, but we will do our best to figure this out!
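
[Editor's note] One way to pin the worker count regardless of a sorter's parameter list is the global job kwargs. A minimal sketch, assuming a spikeinterface version that provides set_global_job_kwargs; whether a given sorter wrapper honors the global value may depend on the version:

import spikeinterface.core as si

# Set the default job kwargs used by spikeinterface's parallel writers,
# including (in principle) the binary caching step inside run_sorter.
si.set_global_job_kwargs(n_jobs=1)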

@kshtjkumar (Author)

This is after 0.100.6:

Exception in thread Thread-7:
Traceback (most recent call last):
  File "C:\Users\garim\.conda\envs\spike\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\process.py", line 323, in run
    self.terminate_broken(cause)
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\process.py", line 463, in terminate_broken
    work_item.future.set_exception(bpe)
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 561, in set_exception
    raise InvalidStateError('{}: {!r}'.format(self._state, self))
concurrent.futures._base.InvalidStateError: CANCELLED: <Future at 0x292dc8e4580 state=cancelled>
---------------------------------------------------------------------------
SpikeSortingError                         Traceback (most recent call last)
Cell In[9], line 6
      4 rec_ecog_ref = spre.common_reference(recording_notch_ecog, operator="median", reference="global")  #rereferencing the data
      5 output_folder = Path(r"C:\Users\garim\mountainsort5_output785585959")
----> 6 sorting_rec = ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder)
      7 print("Sorter found", len(sorting_rec.get_unit_ids()), "units")
      8 sorting_rec = sorting_rec.remove_empty_units()

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:175, in run_sorter(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, docker_image, singularity_image, delete_container_files, with_output, **sorter_params)
    168             container_image = singularity_image
    169     return run_sorter_container(
    170         container_image=container_image,
    171         mode=mode,
    172         **common_kwargs,
    173     )
--> 175 return run_sorter_local(**common_kwargs)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:225, in run_sorter_local(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, with_output, **sorter_params)
    223 SorterClass.set_params_to_folder(recording, output_folder, sorter_params, verbose)
    224 SorterClass.setup_recording(recording, output_folder, verbose=verbose)
--> 225 SorterClass.run_from_folder(output_folder, raise_error, verbose)
    226 if with_output:
    227     sorting = SorterClass.get_result_from_folder(output_folder, register_recording=True, sorting_info=True)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\basesorter.py:293, in BaseSorter.run_from_folder(cls, output_folder, raise_error, verbose)
    290         print(f"{sorter_name} run time {run_time:0.2f}s")
    292 if has_error and raise_error:
--> 293     raise SpikeSortingError(
    294         f"Spike sorting error trace:\n{log['error_trace']}\n"
    295         f"Spike sorting failed. You can inspect the runtime trace in {output_folder}/spikeinterface_log.json."
    296     )
    298 return run_time

SpikeSortingError: Spike sorting error trace:
Traceback (most recent call last):
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\basesorter.py", line 258, in run_from_folder
    SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose)
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\external\mountainsort5.py", line 191, in _run_from_folder
    recording_cached = create_cached_recording(
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\mountainsort5\util\create_cached_recording.py", line 18, in create_cached_recording
    si.BinaryRecordingExtractor.write_recording(
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\core\binaryrecordingextractor.py", line 148, in write_recording
    write_binary_recording(recording, file_paths=file_paths, dtype=dtype, **job_kwargs)
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\core\recording_tools.py", line 137, in write_binary_recording
    executor.run()
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\spikeinterface\core\job_tools.py", line 401, in run
    for res in results:
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\tqdm\notebook.py", line 250, in __iter__
    for obj in it:
  File "C:\Users\garim\.conda\envs\spike\lib\site-packages\tqdm\std.py", line 1181, in __iter__
    for obj in iterable:
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\process.py", line 575, in _chain_from_iterable_of_lists
    for element in iterable:
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 621, in result_iterator
    yield _result_or_cancel(fs.pop())
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 319, in _result_or_cancel
    return fut.result(timeout)
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 458, in result
    return self.__get_result()
  File "C:\Users\garim\.conda\envs\spike\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

Spike sorting failed. You can inspect the runtime trace in C:\Users\garim\mountainsort5_output785585959/spikeinterface_log.json.


@zm711 (Collaborator) commented May 9, 2024

Does anything appear in your terminal? When mountainsort5 runs it usually starts printing stuff. Do you get to that point, or does it break earlier?

@kshtjkumar (Author)

> Does anything appear in your terminal? When mountainsort5 runs it usually starts printing stuff. Do you get to that point, or does it break earlier?

[Screenshot 2024-05-10 020839]

@zm711 (Collaborator) commented May 9, 2024

Okay, cool, it is failing at the write_binary step. Could you try:

ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder, n_jobs=1)

And see what it does?

This might be pretty slow but it will help us diagnose things!

@kshtjkumar (Author)

It did work, but it is very, very slow!

@zm711 (Collaborator) commented May 9, 2024

Cool. Okay, so the problem is in the multiprocessing. Could you try setting n_jobs=2 or n_jobs=3? We need to check whether the problem is that you were defaulting to too many jobs, or whether all multiprocessing is broken in your setup.

@kshtjkumar (Author)

So basically, last time when I tried the n_jobs argument it also gave this error; the previous run was executed without mentioning n_jobs, so by default it took that as 1. Here is the error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[22], line 7
      4 rec_ecog_ref = spre.common_reference(recording_notch_ecog, operator="median", reference="global")  #rereferencing the data
      6 output_folder = Path(r"C:\Users\garim\mountainsort5_output78558588877559")
----> 7 sorting_rec = ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder, n_jobs = 2)
      8 print("Sorter found", len(sorting_rec.get_unit_ids()), "units")
      9 sorting_rec = sorting_rec.remove_empty_units()

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:175, in run_sorter(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, docker_image, singularity_image, delete_container_files, with_output, **sorter_params)
    168             container_image = singularity_image
    169     return run_sorter_container(
    170         container_image=container_image,
    171         mode=mode,
    172         **common_kwargs,
    173     )
--> 175 return run_sorter_local(**common_kwargs)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\runsorter.py:223, in run_sorter_local(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, with_output, **sorter_params)
    221 # only classmethod call not instance (stateless at instance level but state is in folder)
    222 output_folder = SorterClass.initialize_folder(recording, output_folder, verbose, remove_existing_folder)
--> 223 SorterClass.set_params_to_folder(recording, output_folder, sorter_params, verbose)
    224 SorterClass.setup_recording(recording, output_folder, verbose=verbose)
    225 SorterClass.run_from_folder(output_folder, raise_error, verbose)

File ~\.conda\envs\spike\lib\site-packages\spikeinterface\sorters\basesorter.py:180, in BaseSorter.set_params_to_folder(cls, recording, output_folder, new_params, verbose)
    178         bad_params.append(p)
    179 if len(bad_params) > 0:
--> 180     raise AttributeError("Bad parameters: " + str(bad_params))
    182 params.update(new_params)
    184 # custom check params

AttributeError: Bad parameters: ['n_jobs']

@zm711 (Collaborator) commented May 9, 2024

Sorry, then what did you change between when it worked and when it didn't?

@kshtjkumar (Author)

I just removed the n_jobs argument.

@zm711 (Collaborator) commented May 9, 2024

I mean from when it failed due to the broken process pool to when it actually worked. Did it just randomly work, or did you make a specific change?

@kshtjkumar (Author) commented May 9, 2024

No, after updating to 0.100.6 I ran this command:

ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder, n_jobs=1)

but it gave the error:

AttributeError: Bad parameters: ['n_jobs']

So I ran this command:

ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder)

This is the one which is very slow.

@zm711 (Collaborator) commented May 9, 2024

If you type

si.get_global_job_kwargs

what does it say? Because the error was saying it wasn't letting you change n_jobs in the run_sorter function. Could you also try

ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder, verbose=True)

@kshtjkumar (Author)

> If you type
>
> si.get_global_job_kwargs
>
> what does it say? Because the error was saying it wasn't letting you change n_jobs in the run_sorter function. Could you also try
>
> ss.run_sorter("mountainsort5", rec_ecog_ref, output_folder=output_folder, verbose=True)

si.get_global_job_kwargs()
{'n_jobs': 1,
 'chunk_duration': '1s',
 'progress_bar': True,
 'mp_context': None,
 'max_threads_per_process': 1}

si.get_global_job_kwargs
<function spikeinterface.core.globals.get_global_job_kwargs()>

@zm711 (Collaborator) commented May 9, 2024

Sorry, that was my typo; you were right to type si.get_global_job_kwargs().

I'm still trying to figure out what is causing the multiprocessing to fail sometimes.
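
[Editor's note] One avenue for narrowing this down is to vary the global job kwargs one at a time. A sketch using only the keys shown in the get_global_job_kwargs() output above; whether any of these settings fixes the broken process pool on this machine is untested:

import spikeinterface.core as si

# Hypothetical diagnostics: try fewer jobs, no notebook progress bar,
# and an explicit spawn context, changing one setting at a time.
si.set_global_job_kwargs(n_jobs=2, progress_bar=False, mp_context="spawn")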

@kshtjkumar (Author)

Sure!
