As with other imaging modalities, motion induces artifacts that can significantly corrupt optical neuroimaging data. While multiple methods have been developed for detecting motion in individual NIRS measurement channels, the large number of measurements in multichannel fNIRS and high-density diffuse optical tomography (HD-DOT) systems creates an opportunity for detection methods that integrate over the entire field of view. Here, we leverage the inherent covariance among multiple NIRS measurements after pre-processing to quantify motion artifacts by calculating the global variance in the temporal derivative (GV-TD) across all measurements: from the temporal derivative of each time-course, the method computes the root mean square across all measurements at each time point. This calculation is fast and automated, and it identifies motion using global aspects of the data rather than individual channels. To test its performance, we designed an experimental paradigm that intermixed controlled epochs of motion artifact with relatively motion-free epochs during a block-design hearing-words language task using a previously described HD-DOT system. We categorized 348 blocks by sorting them according to the maximum of their GV-TD time-courses. Our results show that with modest thresholding of the data, wherein we keep data below 0.66 of the full-data-set average GV-TD, we obtain a ~50% increase in the signal-to-noise ratio. With noisier data, we expect the performance gains to increase. Further, the impact on resting-state functional connectivity may also be more significant. In summary, a censoring threshold based on the GV-TD metric provides a fast and direct way to identify motion artifacts.
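To make the GV-TD computation concrete, the following Python sketch implements the derivative-then-RMS calculation described above. The function name, the sampling-interval argument, and the thresholding shown in the usage example are illustrative assumptions, not details taken from the paper; the paper's exact censoring procedure may differ.

```python
import numpy as np


def gv_td(timecourses, dt=1.0):
    """Global variance in the temporal derivative (GV-TD) -- minimal sketch.

    Parameters
    ----------
    timecourses : ndarray, shape (n_measurements, n_timepoints)
        Pre-processed NIRS measurement time-courses.
    dt : float
        Sampling interval (assumed argument; units are illustrative).

    Returns
    -------
    ndarray, shape (n_timepoints - 1,)
        RMS of the temporal derivative across all measurements at each
        time point.
    """
    # Temporal derivative of each measurement channel
    deriv = np.diff(timecourses, axis=1) / dt
    # Root mean square across measurements at every time point
    return np.sqrt(np.mean(deriv ** 2, axis=0))


if __name__ == "__main__":
    # Hypothetical usage: censor a block whose peak GV-TD exceeds a
    # threshold tied to the data-set average (illustrative only).
    rng = np.random.default_rng(0)
    data = rng.standard_normal((100, 500))   # 100 channels, 500 samples
    gv = gv_td(data)
    threshold = 0.66 * gv.mean()             # illustrative threshold choice
    keep_block = gv.max() < threshold
    print(f"max GV-TD = {gv.max():.3f}, keep block: {keep_block}")
```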