Initial Usage Error(s): NumPy array is not writable, 0 spikes extracted #679
Comments
@kipkeller Can you please confirm that you're using the latest version of Kilosort4 (v4.0.5), and reinstall if not (
OK - did this:
You might need to do
Did: pip install kilosort. Can I just pip install qtpy? Should I specify a version or source?
Yeah, try that, just
OK - now running, with a few glitches to attend to tomorrow. Then: I could set this to some value, but what? Is there info somewhere on what all these settings mean and suggested values?
Yes, parameters are documented in
Looks good this morning. Thanks for all your help!
Describe the issue:
Upon "LOAD"-ing the data:
C:\ProgramData\anaconda3\envs\KS4\lib\site-packages\kilosort\io.py:498: UserWarning:
The given NumPy array is not writable, and PyTorch does not support non-writable tensors.
This means writing to this tensor will result in undefined behavior.
You may want to copy the array to protect its data or make it writable before converting it to a tensor.
This type of warning will be suppressed for the rest of this program.
(Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\utils\tensor_numpy.cpp:212.)
X[:, self.nt : self.nt+nsamp] = torch.from_numpy(data).to(self.device).float()
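The fix the warning itself suggests, copying the array so PyTorch gets a writable buffer, can be sketched as follows (a minimal standalone illustration of the NumPy side, not the Kilosort source):

```python
import numpy as np

# Arrays backed by a read-only source (e.g. np.memmap(..., mode="r"))
# are not writable; torch.from_numpy() warns on them because the
# resulting tensor would share the read-only buffer.
data = np.arange(12, dtype=np.int16).reshape(3, 4)
data.flags.writeable = False

# Copying produces a fresh, writable array that can be handed to
# torch.from_numpy() without triggering the warning.
safe = data.copy()
```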
If I proceed anyway and "RUN", I get:
Preprocessing filters computed in 0.20s; total 0.20s
computing drift
Re-computing universal templates from data.
C:\ProgramData\anaconda3\envs\KS4\lib\site-packages\threadpoolctl.py:1223: RuntimeWarning:
Found Intel OpenMP ('libiomp') and LLVM OpenMP ('libomp') loaded at
the same time. Both libraries are known to be incompatible and this
can cause random crashes or deadlocks on Linux when loaded in the
same Python program.
Using threadpoolctl may cause crashes or deadlocks. For more information and possible workarounds, please see
https://github.com/joblib/threadpoolctl/blob/master/multiple_openmp.md
warnings.warn(msg, RuntimeWarning)
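The linked threadpoolctl page discusses workarounds for the duplicate-OpenMP clash; one commonly cited escape hatch (which that page warns can silently produce incorrect results, so it is a last resort, not a fix) is:

```python
import os

# Tell Intel's OpenMP runtime to tolerate a second OpenMP library.
# Per the threadpoolctl docs this may cause silently wrong results;
# it must be set before importing numpy/torch to take effect.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
```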
0%| | 0/155 [00:00<?, ?it/s]
0%| | 0/155 [00:10<?, ?it/s]
100%|################################################################################| 155/155 [00:19<00:00, 7.81it/s]
drift computed in 20.98s; total 21.18s
Extracting spikes using templates
Re-computing universal templates from data.
0%| | 0/155 [00:00<?, ?it/s]
100%|################################################################################| 155/155 [00:17<00:00, 8.66it/s]
0 spikes extracted in 18.44s; total 39.61s
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\KS4\lib\site-packages\kilosort\gui\sorter.py", line 82, in run
st, tF, Wall0, clu0 = detect_spikes(ops, self.device, bfile, tic0=tic0,
File "C:\ProgramData\anaconda3\envs\KS4\lib\site-packages\kilosort\run_kilosort.py", line 397, in detect_spikes
raise ValueError('No spikes detected, cannot continue sorting.')
ValueError: No spikes detected, cannot continue sorting.
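"0 spikes extracted" often points to a data-layout mismatch (wrong dtype or channel count) rather than truly silent channels. A hedged first check on the binary, using the n_chan_bin and data_dtype values from the settings dump below; the synthetic temp file is a stand-in for C:/DATA/Spikes/tempData.dat so the sketch is runnable:

```python
import os
import tempfile

import numpy as np

n_chan, dtype = 128, np.int16  # n_chan_bin and data_dtype from the dump

# Stand-in file with random int16 data in place of the real recording.
path = os.path.join(tempfile.mkdtemp(), "tempData.dat")
rng = np.random.default_rng(0)
rng.integers(-200, 200, size=(30000, n_chan), dtype=dtype).tofile(path)

raw = np.memmap(path, dtype=dtype, mode="r")
# If the total sample count is not a multiple of n_chan, the dtype or
# channel count does not match how the file was written.
assert raw.size % n_chan == 0, "file size inconsistent with n_chan/dtype"
x = raw.reshape(-1, n_chan)  # samples-by-channels, the usual interleaved layout
# Near-zero std on every channel would mean no signal reached the file.
print("per-channel std range:", x.std(axis=0).min(), x.std(axis=0).max())
```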
******* DUMP SETTINGS:
{'data_file_path': WindowsPath('C:/DATA/Spikes/tempData.dat'), 'results_dir': WindowsPath('C:/DATA/Spikes/kilosort4'), 'probe': {'xc': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 500., 500.,
500., 500., 500., 500., 500., 500., 500., 500., 500., 500., 500.,
500., 500., 500., 500., 500., 500., 500., 500., 500., 500., 500.,
500., 500., 500., 500., 500., 500., 500., 500., 500., 500., 500.,
500., 500., 500., 500., 500., 500., 500., 500., 500., 500., 500.,
500., 500., 500., 500., 500., 500., 500., 500., 500., 500., 500.,
500., 500., 500., 500., 500., 500., 500.], dtype=float32), 'yc': array([ 0., 20., 40., 60., 80., 100., 120., 140., 160.,
180., 200., 220., 240., 260., 280., 300., 320., 340.,
360., 380., 400., 420., 440., 460., 480., 500., 520.,
540., 560., 580., 600., 620., 640., 660., 680., 700.,
720., 740., 760., 780., 800., 820., 840., 860., 880.,
900., 920., 940., 960., 980., 1000., 1020., 1040., 1060.,
1080., 1100., 1120., 1140., 1160., 1180., 1200., 1220., 1240.,
1260., 0., 20., 40., 60., 80., 100., 120., 140.,
160., 180., 200., 220., 240., 260., 280., 300., 320.,
340., 360., 380., 400., 420., 440., 460., 480., 500.,
520., 540., 560., 580., 600., 620., 640., 660., 680.,
700., 720., 740., 760., 780., 800., 820., 840., 860.,
880., 900., 920., 940., 960., 980., 1000., 1020., 1040.,
1060., 1080., 1100., 1120., 1140., 1160., 1180., 1200., 1220.,
1240., 1260.], dtype=float32), 'kcoords': array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 2., 2., 2., 2.,
2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2.,
2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2.,
2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2.,
2., 2., 2., 2., 2., 2., 2., 2., 2.], dtype=float32), 'chanMap': array([ 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60,
61, 62, 63, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
42, 43, 44, 45, 46, 47, 16, 17, 18, 19, 20, 21, 22,
23, 24, 25, 26, 27, 28, 29, 30, 31, 0, 1, 2, 3,
4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 112,
113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125,
126, 127, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106,
107, 108, 109, 110, 111, 80, 81, 82, 83, 84, 85, 86, 87,
88, 89, 90, 91, 92, 93, 94, 95, 64, 65, 66, 67, 68,
69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79]), 'n_chan': 128}, 'probe_name': 'chanMap128.mat', 'data_dtype': 'int16', 'n_chan_bin': 128, 'fs': 30000.0, 'batch_size': 60000, 'nblocks': 1, 'Th_universal': 9.0, 'Th_learned': 8.0, 'tmin': 0.0, 'tmax': inf, 'nt': 61, 'artifact_threshold': inf, 'nskip': 25, 'whitening_range': 32, 'binning_depth': 5.0, 'sig_interp': 20.0, 'nt0min': None, 'dmin': None, 'dminx': None, 'min_template_size': 10.0, 'template_sizes': 5, 'nearest_chans': 10, 'nearest_templates': 100, 'templates_from_data': True, 'n_templates': 6, 'n_pcs': 6, 'Th_single_ch': 6.0, 'acg_threshold': 0.2, 'ccg_threshold': 0.25, 'cluster_downsampling': 20, 'cluster_pcs': 64, 'duplicate_spike_bins': 15}
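The long xc/yc/kcoords/chanMap arrays in the dump follow a regular pattern: two shanks of 64 sites at x = 0 and x = 500 with 20 µm vertical pitch, and channels mapped in reversed blocks of 16 per shank. A compact reconstruction, inferred from the dump for illustration (not the contents of chanMap128.mat itself):

```python
import numpy as np

n_per_shank, pitch = 64, 20.0

# Two shanks at x = 0 and x = 500, sites every 20 um along y.
xc = np.repeat([0.0, 500.0], n_per_shank).astype(np.float32)
yc = np.tile(np.arange(n_per_shank) * pitch, 2).astype(np.float32)
kcoords = np.repeat([1.0, 2.0], n_per_shank).astype(np.float32)

# Channel map: per shank, four blocks of 16 channels in reverse block order
# (48-63, 32-47, 16-31, 0-15, then the same pattern offset by 64).
chanMap = np.concatenate([np.arange(16) + b * 16 + s * 64
                          for s in (0, 1) for b in (3, 2, 1, 0)])
```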
Provide environment info:
populated config files :
conda version : 24.1.0
conda-build version : 3.27.0
python version : 3.11.5.final.0
solver : libmamba (default)
virtual packages : __archspec=1=x86_64
__conda=24.1.0=0
__cuda=12.2=0
__win=0=0
base environment : C:\ProgramData\anaconda3 (writable)
conda av data dir : C:\ProgramData\anaconda3\etc\conda
conda av metadata url : None
channel URLs : https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : C:\ProgramData\anaconda3\pkgs
C:\Users\wehr\.conda\pkgs
C:\Users\wehr\AppData\Local\conda\conda\pkgs
envs directories : C:\ProgramData\anaconda3\envs
C:\Users\wehr\.conda\envs
C:\Users\wehr\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/24.1.0 requests/2.31.0 CPython/3.11.5 Windows/10 Windows/10.0.19045 solver/libmamba conda-libmamba-solver/24.1.0 libmambapy/1.5.6 aau/0.4.3 c/jUGHLyl0bHLouE7eQQlcaQ s/WWJUok-a5hEuXJiceMfI9A e/UhGP7xmY_2PAV0B0v2HiFA
administrator : True
netrc file : None
offline mode : False
Operating system information:
OS Name Microsoft Windows 10 Pro
Version 10.0.19045 Build 19045