Guidelines for MEG Pre-processing
!!! Note: some of the points below refer to older Maxfilter versions (2.0 or 2.1) !!!
Typically pre-processing with Maxfilter requires three steps:
- finding bad channels,
- applying SSS to remove noise,
- transforming the data to a different co-ordinate frame.
Each step is detailed below along with code that you can use. NB you need to input your file names in the appropriate places. For full details of maxfilter and the options used below please see the Maxfilter manual.
Contributions from Jason Taylor, Danny Mitchell, Dan Wakeman, Rik Henson, Marie Smith ...
Step 1. Identifying Bad Channels with Maxfilter
In the first step, we call maxfilter to establish which channels in the data set are bad by using the -autobad option. Maxfilter scans the first 20 seconds of your recording (or however long you waited before turning on cHPI) and returns the output to a log file. To save time we tell maxfilter to skip the remainder of the file. The -ctc and -cal options refer to the cross-talk correction and fine calibration data specific to our MEG system. Inputting them to maxfilter will produce a better result.
maxfilter -f <rawdata_file> -o <output_file> -ctc /neuro/databases/ctc/ct_sparse.fif -cal /neuro/databases/sss/sss_cal.dat -autobad 20 -skip 21 999999 -v | tee <bad_chans_log_file>
If you have not used cHPI, then bad channel detection can be combined with Step 2. Using -autobad 1 rather than -autobad 20 will check for bad channels on a second by second basis throughout the recording, rather than declaring channels to be permanently bad based on the first 20 seconds of data. Also note that autobad only works on data segments where the trigger channel has a value of zero. If substantial portions of the data have non-zero values on the trigger channel then the temporal extension of SSS may provide more robust removal of sensor artefacts (see below).
Rather than having to search through the log file for the bad channels you can run the following command:
cat <bad_chans_log_file> | sed -n '/Static/p' | cut -f 5- -d ' ' | tee <bad_chans_log.txt>
Alternatively, you can use mne_browse_raw or the EEGLAB data browser to scan through all of your raw data for bad channels. Finally, remember to add any bad channels that you noted during the recording to the list.
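The autobad-detected channels and the channels you noted by hand can be merged into a single list for the -bad option. A minimal sketch, assuming the channel numbers and the file name bad_chans_log.txt are hypothetical placeholders for your own:

```shell
# Channels you noted by hand during the recording (hypothetical example):
manual_bad="0813 1712"

# Channels found by -autobad in Step 1, as extracted to bad_chans_log.txt
# by the command above (treated as optional here, in case the file is absent):
autobad_bad=$(cat bad_chans_log.txt 2>/dev/null)

# Merge the two lists and drop duplicates, ready to paste after -bad:
all_bad=$(echo $manual_bad $autobad_bad | tr ' ' '\n' | sort -u | xargs)
echo "-bad $all_bad"
```

The deduplication matters because a channel you noted during acquisition will often also be flagged by -autobad, and listing it twice is pointless.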
Step 2. Applying Signal Space Separation
In this second call to maxfilter there are a number of different things that you can do. Just pick and choose from the following commands to do what you want with your data. All of the following options can be put in one call to maxfilter, and it doesn't matter which order you type them in.
The simplest call runs spatial SSS correction alone, with your specified list of bad channels:
[SSS] maxfilter -f <rawdata_file> -o <output_file> -ctc /neuro/databases/ctc/ct_sparse.fif -cal /neuro/databases/sss/sss_cal.dat -autobad off -bad <list of bad channels> -v | tee <log_file>
NB: due to a bug in maxfilter you must not run -autobad together with -movecomp or -headpos (see below). Sometimes artefacts remain in the data after running SSS, e.g. sensor jumps; these can be dealt with by using the temporal extension (-st) to maxfilter. To use this option you must specify the temporal buffer over which it works. The default buffer is 4 seconds, which corresponds to a high-pass filter of 0.25 Hz. If you are keen to keep lower frequencies you can set the buffer time to be longer (e.g. 10 s for 0.1 Hz), at the cost of processing time.
If sensor artefacts still remain, -st could be made less conservative, by reducing the threshold at which potential artefacts are removed, by setting -corr to be less than the default of 0.98. (However this would increase the risk of also projecting out signal of interest, and values of <0.8 are not recommended.)
[SSS, -ST] maxfilter -f <rawdata_file> -o <output_file> -ctc /neuro/databases/ctc/ct_sparse.fif -cal /neuro/databases/sss/sss_cal.dat -autobad off -bad <list of bad channels> -st 10 -v | tee <log_file>
The list of bad channels would be given as something like -bad 1143 1123 1412.

The reason for using cHPI (continuous head position tracking) during recordings is so that we can monitor and correct for movements of the participant's head within a recording session. To monitor head movements and output them to a file we use the -headpos option; to correct for movements we use the -movecomp option. With both options we specify the log file that will store the head position parameters (to be visually inspected later) and set the estimation interval with -hpistep (normally 200 ms, but 500 and 1000 ms are also available). You can also have maxfilter subtract the cHPI signals (very high frequencies) from your data by using the -hpisubt option (either amp or off).
[SSS, -ST, -movecomp] maxfilter -f <rawdata_file> -o <output_file> -ctc /neuro/databases/ctc/ct_sparse.fif -cal /neuro/databases/sss/sss_cal.dat -autobad off -bad <list of bad channels> -st 10 -movecomp -hpistep 200 -hp <head_pos_log_file> -hpisubt amp -v | tee <log_file>
During head movement compensation or estimation, maxfilter continually accesses the cHPI data. If HPI failed at some point during your run, head movement compensation/estimation will fail and the resulting data block will be skipped (i.e. set to zero). To force maxfilter to use the last known HPI position you can use the -movecomp inter option. But be careful if this happens early on in your file, as you will not really be performing movement correction any more.
[SSS, -ST, -movecomp inter] maxfilter -f <rawdata_file> -o <output_file> -ctc /neuro/databases/ctc/ct_sparse.fif -cal /neuro/databases/sss/sss_cal.dat -autobad off -bad <list of bad channels> -st 10 -movecomp inter -hpistep 200 -hp <head_pos_log_file> -hpisubt amp -v | tee <log_file>
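To inspect the head position log for moments where HPI fitting deteriorated, a quick check can be done in the shell. This sketch assumes the -hp log holds ten whitespace-separated columns per sample (time, q1-q6, g-value, error, velocity), which is the layout tools such as MNE's read_head_pos expect; the sample lines written below are fabricated stand-ins for a real log:

```shell
# Two fabricated sample lines standing in for a real -hp log
# (columns: time q1 q2 q3 q4 q5 q6 g-value error velocity):
printf '10.0 0 0 0 0 0 0 0.99 0.001 0.1\n' >  head_pos_log_file
printf '11.0 0 0 0 0 0 0 0.95 0.002 0.2\n' >> head_pos_log_file

# Flag times where the HPI goodness of fit (column 8) drops below 0.98 --
# segments where movement compensation may be unreliable:
awk '$8 < 0.98 { print "low HPI fit at t =", $1, "s" }' head_pos_log_file
```

Run against your own -hp log, this gives a fast first pass before visually inspecting the full head position traces.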
Choice of Origin
The outcome of Maxfilter depends on the coordinate system origin being used. By default maxfilter will determine this origin for you by fitting a sphere to your digitized points. However, you may run into problems with this. The most common are that the "sphere fit extends outside of the sensors" or that the origin is found to be 0,0,0 (as a result of a bug in maxfilter - see here for more details). Both errors can be fixed in the same way: by manually specifying the origin that you want to use. This origin can either be a default setting (x=0, y=0, z=40mm), or can be established by other sphere fitting tools (see an example of this in matlab).
[SSS, -ST, -movecomp, origin] maxfilter -f <rawdata_file> -o <output_file> -ctc /neuro/databases/ctc/ct_sparse.fif -cal /neuro/databases/sss/sss_cal.dat -autobad off -bad <list of bad channels> -st 10 -movecomp -hpistep 200 -hp <head_pos_log_file> -hpisubt amp -frame head -origin <x> <y> <z> -v | tee <log_file>
Step 3. Transforming to a different co-ordinate frame
Unlike EEG, the MEG sensors are not affixed to the head of the participant, making it more difficult to average blocks of data within one subject, or across all of your subjects. A solution to this is to use the maxfilter option -trans to transform your data either to one specified raw fif file (e.g. the first data block for a subject) or to the default device co-ordinate scheme. Due to a maxfilter bug it is not possible to perform -trans at the same time as movement compensation, so you should perform it on an already SSS'd file.
To transform to a single file (e.g. if your experiment has 5 blocks, you could transform all of them to the position of the third block):
[Trans file] maxfilter -f <output_file_from_step2> -o <new_output_file> -ctc /neuro/databases/ctc/ct_sparse.fif -cal /neuro/databases/sss/sss_cal.dat -autobad off -trans <chosen_fif_file> -force -v | tee <trans_log_file>
To transform to the default position (e.g. if you want to compare across subjects). NB: due to another bug you must specify the frame and origin in this call. The origin can be the one given by maxfilter in Step 2, the default values (0,0,40), or one established by any other program. It has been observed that applying a small correction to the sphere origin results in fewer artefacts being introduced into the data during this step.
[Trans default] maxfilter -f <output_file_from_step2> -o <new_output_file> -ctc /neuro/databases/ctc/ct_sparse.fif -cal /neuro/databases/sss/sss_cal.dat -autobad off -trans default -frame head -origin <x> <y-13> <z+6> -force -v | tee <trans_log_file>
Head position transformation is not easy to get one's head around (ha)
-trans default -frame head -origin 0 0 40
- Move the point (0,0,40) in head space (40mm above the actual origin, defined as the point where lines connecting the fiducial points would meet) to the device origin, AND align the head and device coordinate systems.
-trans default -frame head -origin x y z
- Move the point (x,y,z) in head coordinates to the origin of device space, AND align the head and device coordinate systems. Note: the Maxfilter manual suggests that the optimal position is not (0,0,0) in device space; rather it is around (0,13,-6).
-trans my_target_file.fif
- Move the origin of the head in the current file to the origin of the head in my_target_file.fif (that is, to where the head origin was relative to the sensors at the first measurement in that file).
Miscellaneous points
Other things you can do with maxfilter include downsampling your data (e.g. -ds 4) and setting the output format (e.g. -format float). Beware of low-pass filtering your data inside maxfilter, due to another known bug.
Large (>2GB) files
Even the latest version of maxfilter has problems writing files >2GB. If your raw files are bigger than this, you need to reduce them with one of the following options: 1) downsample, as in the example above; 2) use the maxfilter -skip option to exclude time periods with data you do not need; or, as a last resort, 3) write int16 rather than float32 data using the maxfilter -format short option, though you will lose resolution (and may get warnings of saturated values for some channels).
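The three options above take the following concrete forms. This is a hedged sketch for a hypothetical rawdata.fif: the commands are built as strings and printed so you can inspect them first, then run the one you want (e.g. with $downsample) once checked:

```shell
# 1) Downsample by a factor of 4:
downsample="maxfilter -f rawdata.fif -o reduced.fif -ds 4 -v"

# 2) Skip a time period you do not need (here a hypothetical 600-1200 s segment):
skip_segment="maxfilter -f rawdata.fif -o reduced.fif -skip 600 1200 -v"

# 3) Last resort: write int16 rather than float32 (loses resolution):
short_format="maxfilter -f rawdata.fif -o reduced.fif -format short -v"

printf '%s\n' "$downsample" "$skip_segment" "$short_format"
```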
More on batching
Batching Maxfilter in Matlab is relatively easy. More options for batching can be found here: maxperl
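If you prefer the shell, batching is a short loop. The subject names and file naming scheme below are hypothetical; the maxfilter commands are echoed rather than executed so the loop can be checked first (remove the echo to run for real):

```shell
# Hypothetical layout: one raw file per block, named <subject>_block<N>_raw.fif
for subj in subj01 subj02; do
  for block in 1 2 3; do
    raw="${subj}_block${block}_raw.fif"
    out="${subj}_block${block}_sss.fif"
    # Echo the Step 2 call for inspection; delete "echo" to execute:
    echo maxfilter -f "$raw" -o "$out" \
      -ctc /neuro/databases/ctc/ct_sparse.fif \
      -cal /neuro/databases/sss/sss_cal.dat \
      -autobad off -st 10 -v
  done
done
```

In practice you would also splice in each file's own bad-channel list (from Step 1) rather than using identical options for every block.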