
Assess the quality of artifact correction/rejection

Assessing the performance of the artifact detection/correction procedures implemented in our Apps is an important step. To gather ideas on how to do this, we sent the following message to the mailing lists of several analysis tools:

Dear XXX,

I performed an artifact detection/correction procedure and I am looking for a good way to visualize its results, but also to assess its performance. I imagine something akin to a script displaying a summary plot per participant, showing all trials × channels and the scores of one or several measures computed before and after artifact detection/correction. More precisely, I would like to find a way to assess the quality of the artifact detection/correction, or the quality of the corrected data, for instance through an index.

Do you know a good application/function/script/favorite procedure that would allow me to do this? A paper reference would of course be most welcome!

Many thanks in advance,

Best

Message sent to: eeglablist@sccn.ucsd.edu
Date: Feb 12, 2021
Commentary: —
Answer: —

Message sent to: fieldtrip@science.ru.nl
Date: Feb 16, 2021
Commentary: I had to subscribe to the mailing list.
Answer:

  • see https://www.fieldtriptoolbox.org/workshop/madrid2019/tutorial_erp/#inspect-cleaned-data

  • “For the FieldTrip functions, the closest thing to what you are asking for would be a variance score for which you could define a threshold (e.g., 3× the variance) above which a trial/channel is considered noisy. The idea would be to start from ft_rejectvisual while automating the quality check.” (see the sketch below)

  • “As for artifact detection, if you have bio signals (e.g., EOG/EMG/ECG), you could run an ICA and use a correlation score between the components and these bio signals to automatically identify the components to remove.” (see the sketch below)
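A minimal Python sketch of these two suggestions, using NumPy and MNE-Python rather than FieldTrip itself; the variance factor, the number of ICA components, and the file name are assumptions to adapt to the data:

```python
import numpy as np
import mne

def flag_noisy(epochs, factor=3.0):
    """Flag trials and channels whose variance exceeds `factor` times the
    median variance (in the spirit of ft_rejectvisual, but automated)."""
    data = epochs.get_data()            # shape: (n_trials, n_channels, n_times)
    var = data.var(axis=2)              # variance per trial x channel
    noisy = var > factor * np.median(var)
    bad_trials = np.where(noisy.any(axis=1))[0]
    bad_channels = [epochs.ch_names[i] for i in np.where(noisy.any(axis=0))[0]]
    return bad_trials, bad_channels

# ICA + correlation with bio channels to find components to remove.
# This requires EOG/ECG channels in the recording.
raw = mne.io.read_raw_fif("sub-01_raw.fif", preload=True)  # hypothetical file
ica = mne.preprocessing.ICA(n_components=20, random_state=97)
ica.fit(raw)
eog_inds, eog_scores = ica.find_bads_eog(raw)  # correlation with EOG channel(s)
ecg_inds, ecg_scores = ica.find_bads_ecg(raw, method="correlation")
ica.exclude = sorted(set(eog_inds) | set(ecg_inds))
raw_clean = ica.apply(raw.copy())
```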

Message sent to: https://neuroimage.usc.edu/forums/ (the Brainstorm forum)
Date: —
Commentary: Maybe ask Guio directly.
Answer: —

Message sent to: mne_analysis@nmr.mgh.harvard.edu
Date: Feb 11, 2021
Commentary: Max sent an email (not exactly the same as above), but maybe ask on the forum instead.
Answer: —

In Gonzalez-Moreno et al., 2014, an SNR was computed to compare the quality of artifact correction by MaxFilter SSS and MaxFilter tSSS. Maybe in our case we could compute an SNR before and after MaxFilter (and also before and after the bad channel detection)? It would be better if this step were an App. This App would take as input the .fif file before MaxFilter and the .fif file after MaxFilter, and would return a report.

To compute this SNR (a code sketch follows the list):

  1. Select only MEG channels and exclude bad channels

  2. Create events

  3. Create epochs based on the events

  4. Compute, for each epoch, its mean amplitude across all channels and time points

  5. Compute the mean of all the mean amplitudes of each epoch

  6. Compute the standard error of that mean

  7. SNR = (result of step 5) / (result of step 6)
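A minimal MNE-Python sketch of these seven steps; the file names, the epoch window, and the use of mne.find_events are assumptions to adapt to the data:

```python
import numpy as np
import mne

def compute_snr(fif_path, tmin=-0.2, tmax=0.5):
    """SNR as defined above: the mean of the per-epoch mean amplitudes
    divided by the standard error of that mean."""
    raw = mne.io.read_raw_fif(fif_path, preload=True)
    # Step 1: keep only MEG channels, excluding channels marked as bad
    raw.pick("meg", exclude="bads")
    # Step 2: create events (here from the stimulus channel; adapt as needed)
    events = mne.find_events(raw)
    # Step 3: create epochs based on the events
    epochs = mne.Epochs(raw, events, tmin=tmin, tmax=tmax, preload=True)
    data = epochs.get_data()                 # (n_epochs, n_channels, n_times)
    # Step 4: mean amplitude of each epoch across all channels and time points
    epoch_means = data.mean(axis=(1, 2))
    # Step 5: mean of the per-epoch means
    grand_mean = epoch_means.mean()
    # Step 6: standard error of that mean
    sem = epoch_means.std(ddof=1) / np.sqrt(len(epoch_means))
    # Step 7: SNR
    return grand_mean / sem

# Compare the SNR before and after MaxFilter (hypothetical file names)
snr_before = compute_snr("sub-01_raw.fif")
snr_after = compute_snr("sub-01_raw_tsss.fif")
print(f"SNR before MaxFilter: {snr_before:.2f} / after: {snr_after:.2f}")
```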

Mar 18, 2021

  • Compute SNR on a subset of channels?

  • Select only magnetometers or gradiometers?

  • Maybe, when the data already contains events, it is best to create the epochs from those existing events (see the variation below)
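A possible variation on the compute_snr() sketch above, restricting the selection to magnetometers and reusing existing events; the file name is hypothetical:

```python
import mne

raw = mne.io.read_raw_fif("sub-01_raw_tsss.fif", preload=True)  # hypothetical file
raw.pick("mag", exclude="bads")   # magnetometers only ("grad" for gradiometers)
# Reuse events already stored as annotations instead of creating new ones
events, event_id = mne.events_from_annotations(raw)
```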

Jun 4, 2021

  • Several Apps already include a function that computes the SNR as described here. For now, this function is not used.

  • It would be interesting to add this function to helper.py once we agree on the way to compute the SNR.