Display leadfield as heatmap within Brainstorm #585

Open
tmedani opened this issue Oct 18, 2022 · 19 comments

@tmedani
Collaborator

tmedani commented Oct 18, 2022

Hello all,

help wanted @ftadel

I'm trying to add a feature to brainstorm that allows the visualization of the leadfield as a heatmap on the cortical surface and on the volume (mesh and ideally on the MRIs).

This function will follow an implementation similar to view_leadfields, with similar options, but instead of plotting the arrows, we will use the magnitude of the LF vectors and plot the result as a heatmap, either on the surface or in the volume of the brain.
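The computation described here is essentially a per-source vector norm on one (possibly re-referenced) row of the leadfield matrix. A minimal Python sketch (Brainstorm itself is MATLAB; the function and variable names here are hypothetical, and the gain matrix is assumed to store one [x y z] triplet per source, as in Brainstorm's Gain field):

```python
import numpy as np

def leadfield_heatmap(gain, i_chan, i_ref=None):
    """Per-source sensitivity for one channel of a leadfield matrix.

    gain   : (n_channels, 3 * n_sources) leadfield, one [x y z] triplet per source
    i_chan : index of the channel to display
    i_ref  : optional reference channel; if given, the sensitivity is
             computed on the re-referenced row (chan - ref)
    """
    row = gain[i_chan]
    if i_ref is not None:
        row = row - gain[i_ref]
    # One 3-vector per source; the heatmap value is its Euclidean norm
    vec = row.reshape(-1, 3)
    return np.linalg.norm(vec, axis=1)
```

The resulting vector (one value per source/vertex) is what would then be passed to the surface or volume display.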

The purpose is to be able to display the sensitivity maps of the electrodes, as in this example (coarse example, not smoothed).

These maps can be useful for estimating the sensitivity of an sEEG grid, for example, and also for brain stimulation purposes.

Currently, some users get similar maps with Ansys software (which is not dedicated to neuroscience).

I have basic scripts, but they have not yet been adapted for Brainstorm.

@ftadel
Member

ftadel commented Oct 19, 2022

This doesn't seem like a complicated task.
We can edit the view_leadfields.m function:

  • Add a DisplayMode parameter to indicate whether to display the leadfield as arrows or as a heatmap
  • In the case of the heatmap, load the cortex surface and compute the heatmap (one value per vertex)
  • Display the map with view_surface_matrix.m

If you provide me with code to compute the heatmap, I can do the rest of the implementation next week.

@tmedani
Collaborator Author

tmedani commented Oct 31, 2022

Hi @ftadel,

Sorry for the late reply.

Indeed this process can be included within the view_leadfields.m function, but it may require more edits and changes.

As discussed, here is an example with the data (subject exported from Brainstorm) and a script that uses this data.

Thanks for your great help.

Best,
Takfarinas

@ftadel
Member

ftadel commented Oct 31, 2022

Thanks.
I have a few more questions before working on a more formal implementation in Brainstorm.

How relevant is it to have the leadfield sensitivity displayed on top of the FEM mesh?
At the moment, the tetrahedral meshes are not so easy to explore in the Brainstorm figures. If you move the slice in any direction, it recomputes a new surface, which is slow and difficult to control.

We briefly discussed representing this as an overlay in the MRI viewer instead of on the FEM mesh.
The user would right-click on the leadfield > "Display sensitivity (3D MRI)" or "Display sensitivity (MRI Viewer)"; the display function would ask to select electrodes (just like the current function to view the leadfield as arrows), and then interpolate the same value that you compute in your script, but over the grid of MRI voxels. Once the volume is computed, you can navigate in the volume in real time.

At the moment, the interpolation of source results is done with a 3x3x3 voxel resolution. This was to optimize the computation time without degrading much the expected spatial resolution of the source results (in the context of non-invasive MEG/EEG source modeling, we didn't really need a 1mm resolution). If it is useful for you to have a 1x1x1 voxel resolution, I could add this as a new configuration option and work on optimizing the interpolation in a different way (or just accept that the interpolation takes a few seconds, like the current display of the FEM mesh anyway).
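The voxel-grid interpolation at a configurable resolution could be sketched as follows. This is an illustrative Python version using a simple nearest-voxel accumulation (not Brainstorm's actual interpolation code), and all names are hypothetical:

```python
import numpy as np

def sources_to_volume(src_pos, src_val, dims, voxel_size=3):
    """Nearest-voxel interpolation of source values onto an MRI grid.

    src_pos    : (n, 3) source coordinates in voxel units of the full MRI
    src_val    : (n,) sensitivity values, one per source
    dims       : full MRI dimensions, e.g. (256, 256, 256)
    voxel_size : interpolation step (3 -> 3x3x3 blocks, 1 -> full resolution)
    """
    grid_dims = tuple(int(np.ceil(d / voxel_size)) for d in dims)
    vol = np.zeros(grid_dims)
    count = np.zeros(grid_dims)
    # Assign each source to the voxel block containing it
    idx = np.floor(src_pos / voxel_size).astype(int)
    for (i, j, k), v in zip(idx, src_val):
        vol[i, j, k] += v
        count[i, j, k] += 1
    # Average where several sources fall in the same voxel block
    np.divide(vol, count, out=vol, where=count > 0)
    return vol
```

With voxel_size=1 each source lands in its own MRI voxel; with voxel_size=3 the grid is 27 times smaller, which is the trade-off between resolution and computation time discussed above.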

The display would be similar to the current display of source maps on the MRI (3D or MRI viewer), but with a higher spatial resolution.

If this makes sense, could it simply replace the display on the FEM mesh that you propose in your script, or would you need both?

@ftadel
Member

ftadel commented Oct 31, 2022

Last option: we could make this a process that computes the sensitivity map and saves it in the database as a volume, which can later be exported as a .nii file or displayed with the existing interface (3D or MRI viewer). We could compute all the electrodes at once, and would maybe only have to ask for the reference as an input parameter.

@tmedani
Collaborator Author

tmedani commented Oct 31, 2022

Thank you, Francois. The sensitivity display on the MRI is indeed very important; I would even say more important than on the mesh at this stage. The display on the mesh can be optional; we can discuss this with @jcmosher.

> How relevant is it to have the leadfield sensitivity displayed on top of the FEM mesh?
> At the moment, the tetrahedral meshes are not so easy to explore in the Brainstorm figures. If you move the slice in any direction, it recomputes a new surface, which is slow and difficult to control.

We may need the display at the mesh level when the resolution of the computation goes below 1 mm, for example when we integrate the electrode body and the contacts in the mesh (width less than 1 mm). In that case, the mesh will be refined around the electrode, and the sensitivity will be affected by all the material around it.
So I believe that at some point we will need to display the sensitivity on the mesh.

> We briefly discussed representing this as an overlay in the MRI viewer instead of on the FEM mesh.
> The user would right-click on the leadfield > "Display sensitivity (3D MRI)" or "Display sensitivity (MRI Viewer)"; the display function would ask to select electrodes (just like the current function to view the leadfield as arrows), and then interpolate the same value that you compute in your script, but over the grid of MRI voxels. Once the volume is computed, you can navigate in the volume in real time.

This is great; we can have the evolution of the sensitivity across the pairs of channels (with regard to a fixed reference).

> At the moment, the interpolation of source results is done with a 3x3x3 voxel resolution. This was to optimize the computation time without degrading much the expected spatial resolution of the source results (in the context of non-invasive MEG/EEG source modeling, we didn't really need a 1mm resolution). If it is useful for you to have a 1x1x1 voxel resolution, I could add this as a new configuration option and work on optimizing the interpolation in a different way (or just accept that the interpolation takes a few seconds, like the current display of the FEM mesh anyway).

Yes, for EEG/MEG this resolution can be sufficient, but for sEEG we will need a finer resolution.

> Last option: we could make this a process that computes the sensitivity map and saves it in the database as a volume, which can later be exported as a .nii file or displayed with the existing interface (3D or MRI viewer). We could compute all the electrodes at once, and would maybe only have to ask for the reference as an input parameter.

This would be great indeed, and could be used to obtain a "full head sensitivity" by summing all (or a group) of the electrodes computed relative to one distant reference.

I think it's a great idea!!

We could also combine the MRI sensitivity view with the brain surfaces and deep structures, and then overlay it with the VTA computed from the sensitivity (this may be a second step, later).

Thanks

@ftadel
Member

ftadel commented Nov 1, 2022

I added menus to display the leadfield sensitivity:
4cc7c85

It works for a surface source space (display on the cortex surface) or a volume source model (display as "MRI 3D" or "MRI Viewer").


Function view_leadfields.m was renamed to view_leadfield_vectors.m.
The new function is named view_leadfield_sensitivity.m.

There is one thing that probably needs to be fixed: the units.
What units do you want this displayed in? (Instead of the current "arbitrary units", with a multiplication factor that is not displayed here.)
Would a relative sensitivity in % make sense?

I added the option to interpolate with a 1-voxel resolution instead of using only the 3-voxel value by default.
https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation#Display:_MRI_3D

@ftadel
Member

ftadel commented Nov 1, 2022

I modified the functions in order to use the channel display of all the other figures.
This also allows adding the SEEG electrode display.


@tmedani
Collaborator Author

tmedani commented Nov 7, 2022

This looks awesome already. Thanks @ftadel

Regarding the display of the units, I prefer the normalized values, but from the last discussion, it seems using the units is better.

> Would a relative sensitivity in % make sense?

Relative to what? Sorry, I didn't get it.

Are you still considering the idea of saving the results as a volume (3D) in the database?

I think it may be interesting when analyzing the sensitivity of all the sEEG electrodes (summing all the sensitivities computed with respect to a distant reference); we could use it to detect blind spots, for example.


@ftadel
Member

ftadel commented Nov 11, 2022

> Regarding the display of the units, I prefer the normalized values, but from the last discussion, it seems using the units is better.

Done in: d64af79


> Are you still considering the idea of saving the results as a volume (3D) in the database?

I haven't thought about it yet, but it's on my todo list.
I think we decided that the isosurface display was a higher priority, didn't we?

> I think it may be interesting when analyzing the sensitivity of all the sEEG electrodes (summing all the sensitivities computed with respect to a distant reference); we could use it to detect blind spots, for example.

Can you give me more precise directions, if I need to implement something specific?


@tmedani
Collaborator Author

tmedani commented Nov 12, 2022

> Are you still considering the idea of saving the results as a volume (3D) in the database?
>
> I haven't thought about it yet, but it's on my todo list.
> I think we decided that the isosurface display was a higher priority, didn't we?

The isosurface can be used in any case. The volume enclosed by the surface can be interpreted as the VTA from the DBS point of view, or as the sensitivity area from the sEEG point of view (with a fixed threshold).

> I think it may be interesting when analyzing the sensitivity of all the sEEG electrodes (summing all the sensitivities computed with respect to a distant reference); we could use it to detect blind spots, for example.
>
> Can you give me more precise directions, if I need to implement something specific?

Sure, the idea here is to estimate/visualize the overall sensitivity of a grid of sensors within the brain.

To do that, we need to use a distant reference (not the adjacent contact); the distant reference can be the electrode/contact farthest from the ROI or, ideally, an electrode located outside of the brain (on the scalp, when available).

Then for all the contacts, we can compute the sensitivity with regard to this distant reference:

For example, with the electrode S' and the distant reference O'1, we compute one map per pair: S'1-O'1, S'3-O'1, S'5-O'1, and so on.

Then, when we sum the sensitivities of all the contacts (S'1 + S'2 + S'3 + ... + S'8), we get a "whole sensitivity" for this electrode S'.

When we do this for all the sensor grids (all the electrodes and all the contacts), we can obtain the whole sensitivity map within the brain (summing all the contacts). Ideally, one should have full coverage, or at least coverage of the SOZ area, for better SOZ localization.
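The summation scheme described above (re-reference each contact S'1..S'8 against the distant reference O'1, take per-source magnitudes, then sum) could be sketched like this; an illustrative Python version with hypothetical names, not Brainstorm code, assuming the leadfield stores one [x y z] triplet per source:

```python
import numpy as np

def grid_sensitivity(gain, contact_idx, i_ref):
    """Summed sensitivity of a group of contacts w.r.t. a distant reference.

    gain        : (n_channels, 3 * n_sources) leadfield
    contact_idx : indices of the contacts to sum (e.g. S'1..S'8)
    i_ref       : index of the distant reference channel (e.g. O'1)
    """
    total = np.zeros(gain.shape[1] // 3)
    for i in contact_idx:
        # Re-reference the contact, then take one magnitude per source
        vec = (gain[i] - gain[i_ref]).reshape(-1, 3)
        total += np.linalg.norm(vec, axis=1)
    return total
```

Running this over all electrodes and all contacts would give the whole-brain map described above; low values would mark the potential blind spots.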

I believe having this feature within BST would be great: it can give the sensitivity from simulation alone, and could be useful for pre-implantation planning.

Please let me know if my explanations make sense.

Thanks :)


@ftadel
Member

ftadel commented Nov 14, 2022

> Sure, the idea here is to estimate/visualize the overall sensitivity of a grid of sensors within the brain.

I started to implement the sum of all the channels when selecting iChannel=0 (press left arrow after display).
2c73800

It works well with an average reference, but I'm not sure what to do with the results for other references.
Can you check the code?
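For the average-reference case mentioned above, the re-referencing amounts to subtracting the mean row of the leadfield before summing the per-source magnitudes over all channels. A minimal Python sketch (hypothetical names, not the committed MATLAB code):

```python
import numpy as np

def avgref_sum_sensitivity(gain):
    """Sensitivity summed over all channels, in average reference.

    gain : (n_channels, 3 * n_sources) leadfield, one [x y z] triplet per source
    """
    g = gain - gain.mean(axis=0)           # average-reference the leadfield
    vec = g.reshape(g.shape[0], -1, 3)     # (n_channels, n_sources, 3)
    return np.linalg.norm(vec, axis=2).sum(axis=0)
```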

@tmedani
Collaborator Author

tmedani commented Nov 16, 2022

That's awesome @ftadel

The average reference can be useful, but I believe that for such a comparison a distant reference would be better.

From the simulation point of view, we can add a virtual reference channel on the mesh node farthest from the channels; this option could be a good solution. We can discuss it with the team.

I will review the code as well :)

@ftadel
Member

ftadel commented Nov 16, 2022

First experiments for displaying as isosurface here:
b54d6f6

I used iso2mesh's qmeshcut.
The function surfvolume that you used to compute the VTA does not work with the output from qmeshcut; I still need to figure out why. Then I will add the VTA to the legend of the figure.

I posted an issue on the iso2mesh repo to get some help with it:
fangq/iso2mesh#70


@ftadel
Member

ftadel commented Nov 27, 2022

I closed the iso2mesh issue: fangq/iso2mesh#70

I committed my last experiments for the computation of the VTA: 84ddaa1

If you want to test it, it requires a full version of iso2mesh to be installed on your computer.
You should not use it as a plugin for now, because Brainstorm removes the old TetGen binaries after installing it.
If you really want to use the function surfvolume, we'd need to edit the iso2mesh plugin definition, either to keep the old TetGen binaries or to link them to the TetGen 1.5 binaries.

It is also very slow to split, then fix, then measure the surfaces with iso2mesh.
There might be faster solutions, like directly counting the number of voxels inside these surfaces with inpolyhd.

If it stays this slow, we can't recompute it at each change of the target or reference; we'd need to add a button or a menu entry to request it.
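A voxel-counting estimate of the enclosed volume, as an alternative to measuring the cut surfaces, could be as simple as thresholding the interpolated volume and multiplying by the voxel volume. This is an illustrative Python sketch, not the iso2mesh-based implementation in the commit, and the names are hypothetical:

```python
import numpy as np

def vta_from_volume(vol, threshold, voxel_mm3=1.0):
    """Estimate the volume enclosed by an isosurface by voxel counting.

    vol       : 3D array of sensitivity values on the MRI grid
    threshold : isosurface level (same units as vol)
    voxel_mm3 : volume of one voxel in mm^3
    """
    # Count voxels at or above the threshold and scale by the voxel volume
    return np.count_nonzero(vol >= threshold) * voxel_mm3
```

This avoids the mesh split/repair steps entirely, at the cost of quantizing the boundary to the voxel grid, which should be acceptable at a 1x1x1 voxel resolution.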

@tmedani
Collaborator Author

tmedani commented Nov 30, 2022

Hi Francois,

Thanks for the updates.

I'm having some issues with my computer; I will test it in the coming days, and we may add some other features as well.

Thanks
