
Phonemic receptive field estimation #7

Open
prlabu opened this issue Mar 16, 2021 · 1 comment

Comments


prlabu commented Mar 16, 2021

Note: this is a question rather than an issue. Please let me know if there's a better platform than this to post my question.

I am using mTRF-Toolbox with intracranial depth electrodes. It's been a huge help - great work.

I would like to look at certain channels' responses to natural speech, and specifically to phonemes. I have phonetic transcriptions (that is, the onset and offset of each phoneme) of the speech. What would be the best way to estimate a channel's response to each phoneme with mTRF-Toolbox? I was thinking of binarizing the phoneme onset timestamps and then convolving with a Gaussian filter to obtain a continuous stimulus. Does that sound reasonable? Thanks!
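For concreteness, the preprocessing described above (binarize onsets, then Gaussian-smooth) might look like the following NumPy/SciPy sketch. mTRF-Toolbox itself is MATLAB, so this is purely illustrative; the sampling rate `fs`, epoch length, and onset times are assumed values, not from the thread.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical parameters -- illustrative only, not from the toolbox
fs = 100                         # stimulus/neural sampling rate in Hz
duration_s = 2.0                 # length of the example epoch in seconds
onsets_s = [0.25, 0.80, 1.40]    # phoneme onset times in seconds

# Binarize: an impulse train with a 1 at each phoneme-onset sample
n_samples = int(duration_s * fs)
impulses = np.zeros(n_samples)
impulses[(np.array(onsets_s) * fs).astype(int)] = 1.0

# Smooth with a Gaussian kernel to get a continuous stimulus
# (sigma given in samples; 10 ms here, an arbitrary choice)
sigma_samples = 0.010 * fs
stimulus = gaussian_filter1d(impulses, sigma=sigma_samples)
```

The resulting `stimulus` vector, sampled at the same rate as the neural data, could then be used as the input feature for a forward TRF model.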

diliberg (Collaborator) commented

Hi Latané,

I suggest having a look at the approach we used in these studies for forward TRF models and EEG:
https://www.cell.com/current-biology/pdfExtended/S0960-9822(15)01001-5
https://www.sciencedirect.com/science/article/pii/S1053811920310715

as well as the earlier work with STRFs and ECoG from Nima Mesgarani.
https://science.sciencemag.org/content/343/6174/1006/tab-pdf

You could use impulses at the phoneme onsets or steps, and your Gaussian filter idea could work too. I personally used steps to indicate the occurrence and duration of each phoneme. Your choice depends on your hypothesis or observation (e.g., the acoustic onset of a phoneme is not the same as the moment it is "perceived").
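The two encodings mentioned above (onset impulses vs. duration steps) can be sketched as follows. This is a minimal NumPy illustration, assuming hypothetical (onset, offset) annotations and a sampling rate of 100 Hz; none of these values come from the thread or the toolbox.

```python
import numpy as np

# Hypothetical phoneme annotations as (onset_s, offset_s) pairs
phonemes = [(0.10, 0.18), (0.18, 0.30), (0.35, 0.44)]
fs = 100                    # assumed sampling rate in Hz
n = int(0.5 * fs)           # 500 ms example epoch

impulse_code = np.zeros(n)  # 1 only at each phoneme's acoustic onset
step_code = np.zeros(n)     # 1 for the full duration of each phoneme
for on, off in phonemes:
    i0 = int(round(on * fs))
    i1 = int(round(off * fs))
    impulse_code[i0] = 1.0
    step_code[i0:i1] = 1.0
```

Either vector (or a phoneme-by-time matrix, one such row per phonetic category) could serve as the stimulus representation in a forward model.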

Note that my considerations are valid for forward models.

(Please skip this part unless you have actually read those papers in depth.
In relation to the above-mentioned papers, Di Liberto et al., 2015 and 2021: note that the measure rFS - rS (or simply FS-S) makes sense in the EEG domain, but it is less relevant if the spatial resolution is sufficient to separate acoustic from phonological responses. In that case, FS-S makes sense if you combine multiple electrodes.)

I hope this helps!
Cheers,
Giovanni
