Open Access
18 September 2018

Functional near-infrared spectroscopy-based affective neurofeedback: feedback effect, illiteracy phenomena, and whole-connectivity profiles
Lucas R. Trambaiolli, Claudinei E. Biazoli, André M. Cravo, Tiago H. Falk, João R. Sato
Abstract

Background: Affective neurofeedback constitutes a suitable approach to control abnormal neural activities associated with psychiatric disorders and might consequently relieve symptom severity. However, different aspects of neurofeedback remain unclear, such as its neural basis, performance variation, and the feedback effect.

Aim: First, we aimed to propose a functional near-infrared spectroscopy (fNIRS)-based affective neurofeedback system based on the self-regulation of frontal and occipital networks. Second, we evaluated the effect of three different feedback approaches on performance: real, fixed, and random feedback. Third, we investigated different demographic, psychological, and physiological predictors of performance.

Approach: Thirty-three healthy participants performed a task in which an amorphous figure changed its shape according to the elicited affect (positive or neutral). During the task, the participants randomly received three different feedback approaches: real feedback, with no change to the classifier output; fixed feedback, keeping the feedback figure unmodified; and random feedback, in which the classifier output was multiplied by an arbitrary value, producing feedback different from that expected by the subject. We then applied a multivariate comparison of the whole-connectivity profiles according to the affective states and feedback approaches, and used a pretask resting-state block to predict performance.

Results: Participants were able to control this feedback system with a performance of 70.00% ± 24.43% (p < 0.01) during the real feedback trials. No significant differences were found when comparing the average performances of the feedback approaches. However, the whole functional connectivity profiles presented significant Mahalanobis distances (p ≪ 0.001) when comparing both affective states and all feedback approaches. Finally, task performance was positively correlated with the pretask resting-state whole functional connectivity (r = 0.512, p = 0.009).

Conclusions: Our results suggest that fNIRS might be a feasible tool for developing a neurofeedback system based on the self-regulation of affective networks. This finding enables future investigations using fNIRS-based affective neurofeedback in psychiatric populations. Furthermore, functional connectivity profiles proved to be a good predictor of performance and suggested an increased effort to maintain task control in the presence of feedback distractors.

1.

Introduction

We can describe neurofeedback and brain–computer interfaces (BCI) as a group of devices and protocols that use neurophysiological signals to detect mental states and use this information to promote more realistic interactions between humans and machines.1,2 To this end, the participant must achieve self-control of their neural response patterns directly and consciously, without the interference of external stimuli.1 In affective neurofeedback specifically, this self-control targets areas or networks related to different affective states, such as basic emotions or valence states.3,4 Generally, affective neurofeedback protocols are based on electrophysiological asymmetries in frontal areas using electroencephalography (EEG) or on distinct hemodynamic patterns in specific cortical and/or subcortical areas using functional magnetic resonance imaging (fMRI).4 Among a wide range of applications, affective neurofeedback constitutes a suitable approach to control abnormal neural activities associated with psychiatric disorders and might consequently relieve symptom severity.5 Previous studies, including healthy subjects and patients with schizophrenia, major depressive disorder, personality disorders, addiction, and obsessive-compulsive disorder, among others, show that voluntary control of neural activity in regions of interest is feasible.6,7 Furthermore, in some cases, this control was associated with clinical improvement.7,8

Recently, functional near-infrared spectroscopy (fNIRS) has been proposed as a source of neurophysiological information for neurofeedback systems,9 including affective neurofeedback applications.10 fNIRS uses low-energy light emitters and detectors to indirectly measure local neural activity based on changes in oxyhemoglobin (O2Hb) and deoxyhemoglobin (HHb) concentrations at the cortical surface,11 including the prefrontal cortex (PFC), a cortical area commonly related to affect induction and processing.12,13 For neurofeedback applications, fNIRS has several advantages: (1) it has relatively simple data acquisition protocols; (2) it reduces the discomfort and anxiety of patients during experimental preparation and data acquisition; (3) it can be acquired in comfortable seating positions; and (4) it can be extended to naturalistic settings. Moreover, it bears the additional advantages of portability, simplicity, and computational economy of its features, opening doors for robust neurofeedback applications.9,12,14,15

In this context, we first aimed to evaluate the efficacy of an fNIRS-based affective neurofeedback system. Previous studies reported neurofeedback experiments based on the self-regulation of activity in the orbitofrontal cortex (OFC) and PFC using fMRI16,17 and fNIRS,10 but these were limited to a univariate approach. Additionally, recent meta-analyses demonstrated that the occipital area is a core region for affective elicitation18,19 and might be related to the vividness and effectiveness of the self-regulation strategy.20,21 Thus, our system focuses on these frontal and occipital networks to induce self-regulation based on positive affect. This approach was already demonstrated using frontal areas and fMRI neurofeedback;17 however, it has not yet been reported using fNIRS or including the occipital network.

While the previous literature explores different topics of affective neurofeedback implementation, two important aspects should be carefully observed: the feedback approach and the illiteracy phenomenon. The feedback approach refers to the information continuously presented to participants about their performance.22 This immediate response is used to keep the user’s interest and attention, besides allowing the brain to develop fast and practical strategies to adjust and improve neural activity control.23 To our knowledge, no other studies have evaluated the feedback effect in affective neurofeedback experiments. However, motor imagery experiments suggest that feedback can lead to distraction and reduced attention24 or generate frustration and stress.25 Thus, the feedback might have a substantial impact on affective neurofeedback protocols, since such distraction or frustration would hinder the elicitation and maintenance of the targeted emotions or affective states.

Therefore, our second aim was to evaluate the effect of three different feedback approaches on performance: real, fixed, and random feedback. High control performance with real feedback would be expected due to the possibility of reinforcing or correcting the elicitation strategies.22 On the other hand, although the absence of feedback (fixed condition) does not allow corrections to the strategies in use, its reduction of distractors could lead to performances close to the real condition.24 Finally, random feedback might be initially encouraging26 but lead to frustration and irritation25 over time. Thus, we would expect low control performance in this condition.

The illiteracy phenomenon states that, even with long periods of training, clear instructions, and improvements in the experimental protocol, some participants are expected to present poor control performance.27 These are examples of “nonperformers” or “illiterates,” who can compose up to 50% of potential neurofeedback users.25,28 In recent years, there has been intense debate about possible predictors of this inability to control different neurofeedback protocols.29 While some studies evaluated the influence of the mental strategy used to control the neurofeedback system,30,31 others correlated psychological aspects with performance, such as self-confidence,32 frustration,25 and concentration.33 Using neurophysiological data, neurofeedback performance has been predicted from functional28,34–38 and structural39 resting-state measures. Therefore, our third aim was to evaluate possible predictors of affective neurofeedback performance. Based on previous findings, we expected the pretask resting state to be a promising predictor of performance.

However, instead of performing univariate analyses to understand the feedback effect and the illiteracy phenomenon, we applied here a connectivity-based multivariate analysis, using the connectivity information from all recorded areas simultaneously. This approach was inspired by the concept of functional connectivity fingerprints,40,41 which is based on the idea that individuals have a functional connectivity profile that is both unique and reliable, similar to a fingerprint.41 Hence, with this whole-connectivity approach, we were able to evaluate the contribution of the entire set of functional connections to the aspect of interest (here, the performance in different affective states and feedback approaches).40

2.

Methods

2.1.

Participants

Thirty-three healthy participants (17 females), aged between 20 and 35 years (mean age of 25.58±3.26 years) and all undergraduate or graduate students, were recruited. The subjects had no diagnosis of neurological (ICD-10: G00-G99) and/or psychiatric diseases (ICD-10: F00-F99) and had normal or corrected-to-normal vision.

Ethical approval was obtained from the local ethics committee and all participants provided written consent prior to participation.

2.2.

Data Acquisition

The recording was performed using the NIRScout System (NIRx Medical Technologies, LLC, Los Angeles, California) with an array of optodes (12 light sources/emitters and 12 detectors) covering the orbitofrontal, prefrontal, temporal, and occipital areas. Optodes were arranged in an elastic band, with nine source–detector pairs positioned over the fronto-temporal regions and three source–detector pairs over the occipital region (Fig. 1). We selected these regions considering the core role of the PFC, OFC, and occipital cortex in the elicitation of affective states.18,19 Four positions of the International 10–20 System were adopted as reference points during the setup: detectors 1 and 9 were positioned approximately over the T7 and T8 positions, respectively, whereas Fpz and Oz were in the center of channels 5–5 and 11–11, respectively (Fig. 1). The source–detector distance was 30 mm for contiguous optodes, and wavelengths of 760 and 850 nm were used. The differential pathlength factor (DPF) was set to 7.25 and 6.38 for the 760- and 850-nm wavelengths, respectively, in both fronto-temporal and occipital regions.42 Signals from these 32 channels were measured at a sampling rate of 5.2083 Hz (the maximum sampling rate of the equipment—62.5 Hz—divided by the number of sources—12) using the NIRStar 14.0 software (NIRx Medical Technologies, LLC, Los Angeles, California).

Fig. 1

(a) Schematic representation of the channel configuration. Different colors represent the regions of interest of each NIRS channel: yellow for the lateral orbitofrontal cortex (lOFC), green for the medial orbitofrontal cortex (mOFC), blue for the medial prefrontal cortex (mPFC), pink for the lateral prefrontal cortex (lPFC), purple for the occipital/striate cortex, and orange for the occipital/primary visual cortex. Pictures of the probe used during data acquisition are provided in (b) for the left lateral, (c) frontal, (d) right lateral, and (e) back views. In all subfigures, red squares represent sources, blue circles represent detectors, and dotted lines represent the NIRS channels.


2.3.

Experimental Configuration

Each subject performed the neurofeedback task (see Sec. 2.4). In addition, they completed an 11-point Likert mood scale immediately before and after the session to quantify sleepiness, agitation, strength, confusion, agility, apathy, satisfaction, worry, perspicacity, stress, attention, capacity, happiness, hostility, interest, and retraction.43

For the experiment, subjects were seated in a padded chair with armrests, positioned 1 m in front of the monitor. They were asked to remain relaxed, with hands within sight resting on the table or on the armrests of the chair. They were also asked to avoid eye movements as well as any body movement. The recording room was completely darkened, and the subjects wore earplugs.

2.4.

Neurofeedback Task

During the neurofeedback task, subjects were asked to use their mental states to transform an amorphous figure into a perfect circle. For this purpose, they were instructed to imagine/remember personal experiences (autobiographical memory) with a positive affective context during “positive trials” or to remain relaxed (not thinking about particularly emotional content, here called neutral affect) during “neutral trials.” The trial label varied according to the color of the figure presented on the screen (blue and yellow, respectively). The session consisted of two blocks of 5 min of continuous resting state (before and after the neurofeedback test), two training blocks used to train the classifier, and two test blocks with visual feedback about the subject’s performance [Fig. 2(a)].

Fig. 2

(a) Block structure of the experimental protocol, (b) example of trials distribution and (c) screen events during a classifier training block, and (d) example of trials distribution and (e) screen events during a feedback test block.


Visual stimuli were created and presented using the Psychophysics Toolbox extensions.44–46

2.4.1.

Classifier training blocks (no feedback)

Each classifier training block consisted of 10 trials (5 for positive affect and 5 for neutral affect) presented in a random order [Fig. 2(b)]. For the first 5 s of each trial, a white cross was displayed in the center of a black screen. This interval is essential to restore the fNIRS baseline level after the previous trial. A blue (indicating a trial for positive affect elicitation) or yellow (indicating a neutral trial) amorphous figure then appeared in the center of the display, and the instruction was to perform the corresponding affective task. To match the length of the test-block trials (see Sec. 2.4.2), the figure remained on screen for 32 s (2 s of initial instruction + 30 s of feedback) [Fig. 2(c)].

After the figure disappeared, a self-evaluation screen was presented; participants were instructed to blink and move during this period but not in the other phases. Because of this instruction, the screen had no preset duration, allowing participants to proceed to the next trial whenever they felt comfortable. At this point, users were asked whether they were able to elicit the proper neutral or positive experience, assigning a score on a 1-to-9 scale.

For each trial, real-time signal processing of both O2Hb and HHb concentrations from all NIRS channels was carried out every 1 s. Data from the previous 2 s (beginning with the instruction period) were merged with the current second to compose a 3-s moving window [Fig. 3(a)]. The signals of each NIRS channel were initially filtered using a simple moving average filter, used as an online low-pass filter with a cutoff of 1 Hz.47–49 After this, the variation of O2Hb and HHb concentrations was calculated by the Beer–Lambert law, using the pretest resting block as the concentration reference. Finally, inspired by voxel normalization approaches for fMRI-based neurofeedback,50–52 for each channel, the moving O2Hb and HHb concentration values were corrected by the averaged concentrations from the same channel during the previous neutral condition. This approach was used for both positive and neutral conditions (although, for the neutral condition, this procedure is only possible from the second trial onward, since the first trial is needed for normalization).
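The per-window preprocessing above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the moving-average window length (fs / cutoff samples) and the neutral-trial correction being a subtraction are both assumptions, as neither detail is stated explicitly in the text.

```python
import numpy as np

def moving_average_lowpass(x, fs=5.2083, cutoff=1.0):
    """Simple moving-average filter used as an online low-pass filter.
    The window length (fs / cutoff samples, rounded) is an assumption."""
    n = max(1, int(round(fs / cutoff)))
    return np.convolve(x, np.ones(n) / n, mode="same")

def correct_by_neutral(window_conc, neutral_mean):
    """Correct a channel's moving-window concentration values by the mean
    concentration of the same channel during the previous neutral trial
    (assumed here to be a subtraction)."""
    return window_conc - neutral_mean
```

A constant signal passes through the filter unchanged (away from the edges), and the neutral correction simply re-references each channel to its own recent neutral baseline.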

Fig. 3

Schematic representation of (a) the real-time signal processing, during both block of classifier training and block of feedback test and (b) the feedback logic. (c) An illustrative example of the window movement according to the trial timeline.


At the end of each block of classifier training, the resulting 300×64 matrix was then used to train a linear discriminant analysis (LDA) classifier to recognize the two classes (positive and neutral affect). In this matrix, rows correspond to 300 examples resulting from 30 moving windows for each trial (5 trials per class, 10 in total). Columns correspond to 64 features: the mean concentrations of O2Hb and HHb from each of the 32 channels. The use of mean concentration as input was due to its simplicity and discriminative power in BCI experiments.14

LDA was used based on its extensive application in BCI and neurofeedback protocols, allowing further comparisons with other experiments.53,54 Also, considering the most common classifiers, LDA seems to consider all discriminative information available, allowing the interpretation of all areas/connections evoked during the task.55 Thus, although studies such as Ref. 15 present other classifiers as more accurate, we used the LDA algorithm based on its informative power. The LDA was implemented using the BCILAB toolbox56 and applying the default settings.
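As a minimal sketch of this training step, the snippet below uses scikit-learn's LDA with default settings on synthetic data shaped like the 300 × 64 matrix described above; the data are random and purely illustrative (the injected class shift is only there so the demo has something to learn).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical data: 10 trials x 30 moving windows = 300 rows, and 64 columns
# (mean O2Hb and HHb concentration for each of the 32 channels).
X = rng.normal(size=(10 * 30, 64))
y = np.repeat([0, 1], 150)          # 0 = neutral, 1 = positive (5 trials each)
X[y == 1] += 0.5                    # inject a separable effect for the demo

lda = LinearDiscriminantAnalysis()  # default settings, as in the study
lda.fit(X, y)
```

After fitting, `lda.predict` would be applied to each new moving window during the test blocks, and `lda.coef_` exposes the per-feature weights analyzed later in the paper.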

2.4.2.

Feedback test blocks

Each feedback test block consisted of 11 trials presented in a random order, totaling 22 trials at the end of the experiment (10 trials with real, 6 with fixed, and 6 with random feedback) [Fig. 2(d)].

Each test trial started with a baseline period comprising a white cross in the center of a black screen [see Fig. 2(e)]. After 5 s, the cross disappeared and a blue or yellow amorphous figure appeared, indicating the target task. The figure remained unchanged for 2 s, after which its shape began to change according to the output of the classifier. For these trials, real-time signal processing followed the same steps described in Sec. 2.4.1 and presented in Fig. 3(a). Each moving window was then classified in real time using the LDA model created in the training blocks, and the output ranged from −1 for a definite neutral affect classification to +1 for a definite positive affect classification.

According to the feedback of interest, each output of the classifier could be multiplied by a different numeric value [Fig. 3(b)]. During trials with real feedback, the outputs of the classifier were always multiplied by one, with no change to the original value. During trials with fixed feedback, the outputs were always multiplied by zero, keeping the feedback figure unmodified (in other words, keeping the target figure as the feedback figure). Finally, during trials with random feedback, each output was multiplied by a random value between −1 and +1, producing an output possibly different from that expected by the subject. To facilitate the understanding of the feedback logic, the Appendix presents examples of signal traces and the resulting feedback for a randomly chosen subject during each feedback condition.

2.5.

Offline Analysis

2.5.1.

Quantifying performance

To compute performance, we first quantified the number of trials correctly performed during each feedback condition. Here, we considered a trial as correctly performed if the sum of the classifier outputs over its 30 moving windows was in accordance with the trial label, that is, if Σ(n=1..30) y_n < 0 for neutral trials or Σ(n=1..30) y_n ≥ 0 for positive trials, where y_n denotes the classifier output for the n’th moving window. Then, the number of trials correctly performed was divided by the total number of trials with the respective feedback. For example, if 8 of 10 trials were successfully executed, real feedback would result in a performance of 80%; if 2 of 6 trials were correctly performed, fixed feedback would result in a performance of 33.33%; and if 4 of 6 trials were successfully completed, random feedback would result in 66.67%.
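The trial-scoring rule above can be sketched as follows, assuming y_n is the raw classifier output for each of the 30 windows (the tie-breaking of a zero sum toward the positive class follows the inequality as reconstructed here):

```python
import numpy as np

def trial_correct(outputs, label):
    """A trial counts as correct when the summed classifier outputs over its
    30 moving windows agree with the trial label."""
    total = float(np.sum(outputs))
    return total < 0 if label == "neutral" else total >= 0

def performance(trials):
    """trials: list of (outputs, label) pairs for one feedback condition;
    returns the percentage of correctly performed trials."""
    correct = sum(trial_correct(outputs, label) for outputs, label in trials)
    return 100.0 * correct / len(trials)
```

With, e.g., two correct and two incorrect trials, this yields 50%, matching the worked percentages in the text.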

Differences in performance among the three feedback conditions were evaluated by paired-samples t-tests, with Bonferroni correction for three multiple comparisons. Possible relations between the accuracies of each feedback type were also evaluated using Spearman’s correlation, with the p-values Bonferroni corrected for multiple comparisons (three pairs of feedback). Finally, to investigate the efficacy of the experiment as a neurofeedback system, we considered the real feedback performance as the general task performance. The significance of task performance was then evaluated by a one-sample t-test against chance level (50%).
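These tests can be sketched with SciPy on hypothetical per-subject accuracies; the values below are random and purely illustrative of the statistical machinery, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical per-subject accuracies (%) for the three feedback conditions.
real = rng.normal(70, 20, 33).clip(0, 100)
fixed = rng.normal(66, 20, 33).clip(0, 100)
rand_fb = rng.normal(66, 13, 33).clip(0, 100)

# Task efficacy: real-feedback performance against the 50% chance level.
t, p = stats.ttest_1samp(real, 50.0)

# Pairwise feedback comparisons, Bonferroni-corrected for three tests.
pairs = [(real, fixed), (real, rand_fb), (fixed, rand_fb)]
p_corr = [min(stats.ttest_rel(a, b).pvalue * 3, 1.0) for a, b in pairs]

# Spearman correlation between accuracies of two feedback types.
rho, p_rho = stats.spearmanr(real, fixed)
```

The Bonferroni correction is applied here simply by multiplying each raw p-value by the number of comparisons and capping at 1.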

2.5.2.

fNIRS preprocessing

The offline preprocessing was performed using MATLAB (MathWorks, Massachusetts) with the nirsLAB v2014.12 toolbox (NIRx Medical Technologies, LLC, Los Angeles, California). Each participant’s raw data were digitally bandpass filtered by a linear-phase FIR filter (0.01 to 0.2 Hz) to remove noise due to the heartbeat (0.8 to 1.2 Hz), respiration (0.3 Hz), and Mayer waves (∼0.1 Hz).57–59 Then, each wavelength was detrended over its whole-length record (without segmentation), and the variation in concentration of O2Hb and HHb was calculated by the Beer–Lambert law (DPF set to 7.25 and 6.38, respectively).42
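A sketch of this offline pipeline with SciPy is shown below; the FIR order (101 taps) is an assumption, as the text does not state it, and the raw channel is a random placeholder.

```python
import numpy as np
from scipy.signal import firwin, filtfilt, detrend

fs = 5.2083                          # fNIRS sampling rate (Hz)
raw = np.random.default_rng(0).normal(size=2000)   # placeholder raw channel

# Linear-phase FIR band-pass (0.01-0.2 Hz) attenuating cardiac, respiratory,
# and Mayer-wave components; 101 taps is an assumed filter order.
b = firwin(101, [0.01, 0.2], pass_zero=False, fs=fs)
bandpassed = filtfilt(b, 1.0, raw)   # zero-phase offline filtering

# Linear detrend over the whole-length record (no segmentation).
clean = detrend(bandpassed, type="linear")
```

`filtfilt` runs the filter forward and backward, which keeps the offline result free of phase distortion; the online moving-average filter of Sec. 2.4.1 cannot do this, which is why the two pipelines differ.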

2.5.3.

Evaluating literacy

To evaluate possible factors related to the task performance, we exclusively considered performances achieved during the real feedback. Therefore, we first tested the gender effect on literacy by a two-sample t-test. Then, the age effect was evaluated using a Spearman’s correlation coefficient.

The whole-connectivity profiles extracted from the 5 min of resting state recorded before and after the experiment were also tested as possible predictors of literacy. For this, the connectivity between a pair of regions was evaluated by the magnitude squared coherence of the two corresponding NIRS channels,60 calculated here using 20-s Hamming windows with 50% overlap. This procedure was repeated for all combinations of channels and for both O2Hb and HHb, generating two 32×32 matrices for each block of resting state (pre- and posttask). Then, we averaged all connectivity values in each matrix, resulting in a single whole-connectivity score for each block and each chromophore. These values were then correlated with performance using Spearman’s correlation coefficient, and the respective p-values were Bonferroni corrected for four multiple comparisons (2 resting-state blocks × 2 chromophores).
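The connectivity-score computation might be sketched as follows; collapsing coherence across all frequencies is an assumption (the frequency band actually averaged is not stated), and the resting-state data here are random placeholders.

```python
import numpy as np
from scipy.signal import coherence

fs = 5.2083
win = int(round(20 * fs))             # 20-s Hamming window, 50% overlap
rng = np.random.default_rng(3)
data = rng.normal(size=(32, 1560))    # hypothetical 5-min resting block, 32 ch

n = data.shape[0]
conn = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        f, cxy = coherence(data[i], data[j], fs=fs, window="hamming",
                           nperseg=win, noverlap=win // 2)
        conn[i, j] = conn[j, i] = cxy.mean()   # collapse across frequencies

whole_connectivity = conn.mean()      # single score per block and chromophore
```

The resulting scalar per block and chromophore is what gets correlated with task performance via Spearman's coefficient.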

Finally, the influence of psychological factors was also explored. The difference between the mood scores after and before the neurofeedback test (Δmood) was correlated with the task performance by a Spearman’s correlation and the p-values were Bonferroni corrected for multiple comparisons (16 Δmood scores).

2.5.4.

Brain connectivity during different affective states and feedback approaches

The connectivity between a pair of regions was evaluated by the magnitude squared coherence of the two corresponding NIRS channels,60 calculated here using 20-s Hamming windows with 50% overlap. This procedure was repeated for both O2Hb and HHb, as well as for all possible combinations of NIRS channels, generating two 32×32 matrices for each affective state (neutral and positive) and feedback type (real, fixed, and random).

The difference between the whole-connectivity matrices for each pair of affective states or feedback approaches was assessed using the Mahalanobis distance, which treats the data from each matrix as a multidimensional dataset and compares it to another multidimensional dataset. The Mahalanobis distance has shown greater sensitivity than other distance measures because it considers mean and variance differences in connectivity matrices.40 Its output is a number representing how distant one dataset is from the other: the smaller the distance between the matrices, the greater the similarity between the connectivity patterns being compared.61

To obtain each Mahalanobis distance, first, for each subject, we took the connectivity matrices for each trial in condition A (e.g., positive affect during real feedback) and condition B (e.g., neutral affect during real feedback). Then, we computed the Mahalanobis distance between the matrices of conditions A and B. After this, we randomly shuffled the labels of the trials and calculated the Mahalanobis distance between the permuted A and B. We repeated this step 10³ times to generate a permutation distribution. We then calculated the z-score of the real distance between A and B relative to the distribution of permutations (this was accomplished by assigning a p-value to the real distance relative to the permutation distribution, and then using the inverse normal distribution to transform the p-value into a z-score). Up to this point, all analyses were performed at the subject level; the next step was simply to calculate one-sample t-tests for each affect and feedback condition. The resulting p-values were Bonferroni corrected for 6 multiple comparisons (3 feedback conditions × 2 chromophores) when comparing the affective task effect and for 12 multiple comparisons (2 affective states × 3 feedback combinations × 2 chromophores) when comparing the feedback effect.
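A sketch of this permutation z-score procedure on synthetic low-dimensional data is given below. In practice, the per-trial 32 × 32 matrices would be flattened into vectors; using a pooled-covariance Mahalanobis distance with a pseudo-inverse is an assumption here, as the exact distance formulation is not specified in the text.

```python
import numpy as np
from scipy.stats import norm

def mahalanobis_between_sets(A, B):
    """Distance between the means of two multivariate samples, using the
    pooled covariance (pseudo-inverse for numerical safety)."""
    diff = A.mean(axis=0) - B.mean(axis=0)
    pooled = (np.cov(A, rowvar=False) + np.cov(B, rowvar=False)) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

def permutation_zscore(A, B, n_perm=1000, seed=0):
    """z-score the real A-vs-B distance against a label-shuffling null."""
    rng = np.random.default_rng(seed)
    real = mahalanobis_between_sets(A, B)
    pooled = np.vstack([A, B])
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        null.append(mahalanobis_between_sets(pooled[idx[:len(A)]],
                                             pooled[idx[len(A):]]))
    # p-value of the real distance under the permutation distribution,
    # then transformed to a z-score via the inverse normal distribution.
    p = (np.sum(np.array(null) >= real) + 1) / (n_perm + 1)
    return norm.ppf(1 - p)
```

The per-subject z-scores produced this way are then submitted to one-sample t-tests at the group level, as described above.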

3.

Results

The general performance (median ± standard deviation), calculated considering only the real feedback, was 70.00% ± 24.43%, significantly greater than chance level (p<0.01). This level of performance demonstrates that the paradigm implemented here worked as an affective neurofeedback system. As can be seen in Fig. 4(a), more than half of the participants reached performances above 50%. Moreover, for 19 participants, performance was higher than or equal to 70%, and 5 of them reached 100%.

Fig. 4

In (a), each subject’s performance for real (blue), fixed (green), and random (red) feedback. The dotted line represents 50% level of performance and the continuous line the 70% level. In (b), the distribution of subjects according to their performances in positive (y-axis) and neutral trials (x-axis) during the real feedback. The diameter of the circle is proportional to the number of participants for which that level of performance was observed. The distribution of participants according to their accuracies and pretask resting-state connectivity scores is presented in (c) for HHb-based connectivity scores and in (d) for O2Hb-based connectivity scores. In both cases, red lines represent the trend.


A more detailed exploration of these performances is obtained by considering the performance for the neutral and positive trials independently. As can be seen in Fig. 4(b), 6 subjects obtained accuracies under 50% for both classes, whereas 16 subjects reached more than 50% for both trial types.

In a second step, we compared the effect of the three different feedback approaches on performance. In addition to the real feedback, both the fixed (66.67%±20.57%, p<0.001) and random feedback conditions (66.67%±13.06%, p<0.001) were significantly different from chance level. Although a positive correlation was found between the real and fixed feedback performance curves (r=0.745, p<0.001), there were no significant differences when comparing the averages of each pair of feedback conditions.

Additionally, we evaluated possible factors related to the illiteracy phenomenon. No significant gender or age effects on performance were found. However, Fig. 4(c) shows that the pretask resting-state connectivity score calculated using HHb concentration was positively correlated with performance (r=0.511, p=0.010). In contrast, no significant correlation with performance was found using the O2Hb concentration [Fig. 4(d)] or when evaluating posttask resting-state connectivity scores. Finally, no significant correlations were observed between Δmood scores (VAMS questionnaire) and subjects’ performance.

To evaluate the relevance of each feature in our neurofeedback setup, we first normalized the weights assigned by the LDA for all subjects during both training blocks. For each trained classifier (more details in Sec. 2.4.1), we first subtracted the minimum absolute weight from all features’ weights in that feature set. All resulting values were then divided by the maximum absolute weight. With weights now ranging from 0 to 1, we averaged these values across all participants. Figure 5 shows these averaged and normalized weights for features calculated using O2Hb and HHb concentrations. Although some channels contribute more to identifying one class, it is notable that the most relevant features contributing to both classes are distributed around the lateral and medial portions of the OFC (with particular attention to the left mOFC) and the medial parts of the PFC and occipital areas, and are based on HHb concentration.
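The weight-normalization steps can be sketched as below; strictly, this mapping yields values in [0, 1) rather than exactly up to 1, and all variable names (and the number of classifiers) are illustrative.

```python
import numpy as np

def normalize_weights(w):
    """Per-classifier normalization described in the text: subtract the
    minimum absolute weight, then divide by the maximum absolute weight."""
    a = np.abs(np.asarray(w, dtype=float))
    return (a - a.min()) / a.max()

# Averaging across hypothetical classifiers (e.g., one per subject and
# training block; 66 random 64-feature weight vectors stand in here).
weight_sets = [np.random.default_rng(s).normal(size=64) for s in range(66)]
mean_normalized = np.mean([normalize_weights(w) for w in weight_sets], axis=0)
```

The averaged vector `mean_normalized` corresponds to one panel of Fig. 5: one relevance value per feature, comparable across chromophores and channels.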

Fig. 5

Average weights assigned by the LDA classifier for each feature using HHb and O2Hb concentrations. In (a), the average weights from all features across both classes; in (b) and (c), the average weights relevant to classifying positive and neutral trials, respectively. Hotter colors indicate higher relevance, while cooler colors indicate lower relevance.


Lastly, we examined the whole functional connectivity matrices related to the positive and neutral conditions according to the delivered feedback. As can be seen in Figs. 6(a), 6(b), 7(a), and 7(b), HHb matrices presented the highest coherence values, with stronger connections among neighboring and homologous contralateral channels. Although the connectivity patterns remain remarkably similar for both positive and neutral trials and for the three feedback conditions, all distance-based comparisons (task effect and feedback effect) reached statistical significance with p ≪ 0.001, as shown in Fig. 8. To better visualize the differences in connectivity matrices due to the affective states, we plotted the difference matrices, calculated by subtracting the positive class from the neutral one [Figs. 6(c) and 7(c)]. Notably, the differences were generally positive, indicating that the neutral class has stronger overall connectivity. Moreover, the real feedback presented the lowest variation among all the feedback conditions.

Fig. 6

Coherence matrices comparing all-to-all channels using O2Hb concentrations. Graphs in the (a) first and (b) second columns correspond to neutral and positive trials, respectively, and (c) the last column corresponds to the difference between them. Matrices in each row correspond to real, fixed, and random feedback, respectively. Hotter colors indicate higher coherences (or differences), while cooler colors indicate lower coherences (or differences).


Fig. 7

Coherence matrices comparing all-to-all channels using HHb concentrations. Graphs in the (a) first and (b) second columns correspond to neutral and positive trials, respectively, and (c) the last column corresponds to the difference between them. Matrices in each row correspond to real, fixed, and random feedback, respectively. Hotter colors indicate higher coherences (or differences), while cooler colors indicate lower coherences (or differences).


Fig. 8

Bar graph with mean and standard error for z-scored Mahalanobis distances between each comparison, during (a) task and (b) feedback effect evaluation. Asterisks represent significant difference from zero (p<0.05). Blue bars correspond to O2Hb data, whereas red bars to HHb data.


4.

Discussion

Here, three objectives were set out: (i) to evaluate whether it is possible to develop an fNIRS-based affective neurofeedback system using the self-control of network activities, including the OFC, PFC, and occipital cortex; (ii) to test the feedback effect on performance and on the subjects’ multivariate functional connectivity; and (iii) to investigate possible demographic, psychological, or physiological predictors of performance.

4.1.

Performance and Literacy

Considering only the general performance (during real feedback), we found that our volunteers were able to self-control the activity of the targeted networks using neutral and positive affective states. Moreover, the majority of our participants reached performances above the 70% threshold suggested by the BCI/neurofeedback community as sufficient for device control and communication.62

Subjects performing below this threshold would probably improve after a few training sessions, as already observed in longitudinal experiments using different BCI and neurofeedback approaches.63–66 However, some of them are expected to retain poor performance even after exhaustive training sessions or subject-specific improvements of the system (e.g., with specific features or different classifiers).27 These are examples of BCI/neurofeedback illiterates, commonly described in the literature, whose best option to attain proficiency would be switching to another neurofeedback approach.27,29

Here, we found the pretask resting-state connectivity to be positively correlated with task performance: the greater the connectivity during the resting state, the greater the performance on the affective neurofeedback tends to be. This correlation is expected considering recent studies reporting that the default mode network (DMN, also known as the “resting-state network”) and the affective workspace network (AWN) partly overlap.18,67–69 Also, considering that this connectivity score is based on the same regions used as input to the classifier, highly interconnected cortices are expected to produce precise and clear activation patterns. Furthermore, this result agrees with previous studies reporting resting-state connectivity as a relevant predictor of performance in different BCI/neurofeedback tasks and recording modalities.37,70,71
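The prediction analysis above amounts to correlating a per-subject resting-state connectivity score with neurofeedback performance. The sketch below illustrates this with one plausible scalar summary (the mean of the upper triangle of each subject's coherence matrix); the paper's exact score definition may differ, and the matrices and performances here are synthetic.

```python
import numpy as np
from scipy.stats import pearsonr

def connectivity_score(coh_matrix):
    """Summarize a resting-state coherence matrix as the mean of its
    upper triangle (one plausible scalar summary, assumed here)."""
    iu = np.triu_indices(coh_matrix.shape[0], k=1)
    return coh_matrix[iu].mean()

# Synthetic stand-ins: 12 subjects, 8-channel coherence matrices,
# and task performance in percent correct.
rng = np.random.default_rng(1)
resting_matrices = rng.uniform(0, 1, size=(12, 8, 8))
performances = rng.uniform(50, 100, size=12)

scores = np.array([connectivity_score(m) for m in resting_matrices])
r, p = pearsonr(scores, performances)  # paper reports r = 0.512, p = 0.009 on real data
```

With the real data, a positive and significant r would support using a short pretask resting-state block as a screening step for neurofeedback aptitude.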

To the best of our knowledge, this is only the second work using fNIRS-based real-time affective neurofeedback, and the first to consider a multiregional approach. Our results agree with previous findings of hemodynamic-based neurofeedback, mainly with fMRI targeting the OFC72,73 or the PFC.10,16,17 As expected, relevant features were located around the OFC, PFC, and occipital cortex, regions recently listed as fundamental to affective induction and processing.18,19 The mOFC, in particular, was the most relevant feature of all. This is reasonable considering that this region guides internal responses to affective contexts,68 a core aspect of achieving affective-based self-regulation. Also, the occipital cortex plays an important role in improving the vividness and effectiveness of autobiographical memory evocation.20,21

Concerning the predominance of HHb changes among the relevant features: although O2Hb and HHb present similar results in comparative studies of LDA classification,14 the best feature set might vary according to the application. For example, motor imagery studies have found O2Hb features to be the most robust,57,74 whereas in a similar affective experiment our group also reported a predominance of HHb features among the most relevant ones.75,76 Previous studies describe different advantages for each measure: O2Hb shows higher retest reliability,77 whereas HHb, despite its higher variability, is more closely related to the fMRI blood oxygen-level-dependent signal.78

Thus, these results add one more protocol option to be applied in clinical populations,6,7 with the advantages that fNIRS is portable and easy to set up,9,12 enables applications outside the lab,79,80 and supports a new multivariate approach.

4.2.

Feedback Effect on Performance

The performance of a neurofeedback user is usually variable during the learning process.81 Such variability tends to decrease as the user approaches complete control of the system. However, difficulties in properly controlling the neurofeedback system might cause frustration and disappointment, impairing the learning of affective protocols. Moreover, for potential therapeutic applications, users will need to self-regulate their network activities during stressful situations, such as anxious or depressive states.6 In this context, it is important to simulate distracting or stressing stimuli to evaluate their possible consequences on neurofeedback tasks.

Performance during fixed feedback, which might be considered a mild task distractor, was highly correlated with performance during real feedback, and no significant difference between these feedback conditions was observed. These results are in line with previous EEG findings.24,25 We argue that this is an important result for the potential therapeutic application of an affective neurofeedback system: in a real situation, where a patient would need to apply the affective neurofeedback training to relieve a given situational symptom, they would need to do so without any type of feedback. This scenario is indeed simulated by the fixed feedback condition.

Contrary to our expectations, the random feedback did not differ from the other feedback approaches. However, this finding is consistent with previous results in other nonaffective tasks, such as motor imagery BCI with EEG.26,82 Since this was every participant’s first experience with an affective neurofeedback system, a possible explanation is that the random feedback, acting as a strong task distractor, increased task engagement.26 Exposure to longer periods of random feedback would possibly lead to frustration and demotivation.82

4.3.

Task and Feedback Effects on fNIRS Whole-Connectivity

The significant distances in all performed comparisons suggest that the accessed neural networks have different connectivity patterns for each condition (affective states and feedback approaches).40,61 Significant differences between positive and neutral connectivity profiles were already expected, given that the classifier used in our neurofeedback was able to find distinct neural patterns during the two affective states.54

It is notable that the connectivity maps for both affective states overlap substantially. This functional overlap may be related to two neural networks that share some brain areas and connectivity pathways. Positive affect processing may be related to the AWN,18,19,68 which involves subcortical areas classically listed as “emotional centers,” such as the amygdala and ventral striatum, as well as cortical areas implicated in affective processing, such as the lOFC, the ventrolateral prefrontal cortex, the ventromedial prefrontal cortex, the dorsomedial prefrontal cortex (dmPFC), and the lateral portions of the right temporal/occipital cortex.18,19,83 On the other hand, the neutral affective state might reasonably be related to increased activity in the DMN,84,85 which includes areas such as the mPFC, the posterior cingulate cortex, and the inferior parietal lobule.86,87

Two of these overlapping areas play an important role in both tasks used during this affective neurofeedback experiment: the dmPFC and the occipital cortex. The dmPFC is strongly engaged during the remembering of personal events (autobiographical memory)69 as well as during spontaneous thinking.88 In addition, the occipital cortex is crucial to the quality of autobiographical memory because it regulates subjective vividness during imagination.20,21 Moreover, fNIRS studies have found high connectivity between contralateral occipital areas during the resting state.89,90 The higher connectivity found in neutral trials for all feedback approaches may also be explained by a DMN characteristic: previous studies found that brain activity decreases during different tasks when compared to passive mental states, suggesting the existence of a baseline neural activity in the absence of an external attentional focus.91–93

Finally, in both O2Hb and HHb maps (Figs. 6 and 7), the neutral-to-positive differences increased according to the presented feedback (real feedback showing the lowest differences and random feedback the highest). Consistently, an fNIRS study observed reduced prefrontal oxygenation when attentional distractors were presented during an affective task.94 Negative distractors may lead to changes exclusively in brain activity but not in task performance, possibly reflecting a compensatory brain effort to maintain similar performance levels despite competing stimuli.95

4.4.

Innovation and Limitations

As previously mentioned, to our knowledge this is the first study to propose fNIRS-based affective neurofeedback based on the self-regulation of frontal and occipital networks. This is a relevant result considering that neurofeedback is a noninvasive and nonpharmacological approach to the treatment of psychiatric disorders.6,7 Our protocol might be especially valuable for disorders such as major depressive disorder and obsessive–compulsive disorder, considering the self-control of critical areas of their neurocircuitry.96 In addition, we pioneered the use of a connectivity analysis to evaluate the feedback effect and the illiteracy phenomena related to affective neurofeedback protocols.

However, this study has some notable limitations. First, we adopted an unbalanced number of trials for real, fixed, and random feedback. This imbalance is a consequence of the protocol length, which might take up to 60 min including the system setup and the task execution. Thus, to avoid the influence of fatigue and stress related to the long duration,97 we presented more trials with real feedback than with fixed and random feedback, a choice based on the use of the real feedback to validate the neurofeedback protocol.

Additionally, this study has a limited sample size with controlled characteristics, such as health history and educational level. Although sufficient to validate the neurofeedback protocol and to allow an initial evaluation of the feedback effect and the illiteracy phenomena, this aspect should be improved in future studies.

Therefore, the next steps of this research should consider a balanced number of trials as well as an increased sample size, for example, including psychiatric patients or participants with different educational levels. These aspects would add variability to validate our results and deepen the understanding of self-control in neurofeedback protocols.97–99 Also, future studies should correlate self-evaluation scores with effective performance or connectivity patterns; these data would provide more information regarding the system–user relationship from the participants’ perspective.

5.

Conclusion

In conclusion, our results suggest that fNIRS might be a feasible tool for developing affective neurofeedback systems based on the self-regulation of frontal and occipital networks. Additionally, it seems possible to predict performance using a short pretask resting-state period, suggesting that the general background connectivity underlies the self-control capacity. Finally, although no significant performance differences were found among the real, fixed, and random feedback conditions, offline analyses of the functional connectivity profiles suggest a neural basis for an increased effort to maintain task control in the presence of distractors.

Appendices

Appendix

As previously described, each trial starts with a black screen with a cross during the first 5 s. This period is essential for both O2Hb and HHb concentrations to recover to their baseline levels. Then, during the following 2 s, the orientation screen indicates the trial class (a yellow figure for neutral trials or a blue figure for positive trials). The next second is used to compose the first moving window [for more details refer to Fig. 3(c)], meaning that the first feedback is provided 8 s after the beginning of the trial. Consequently, the 30th, and last, feedback screen appears 37 s after the start of the trial.

The feedback presentation follows the logic described in Sec. 2.4.2: first, the classifier output is converted according to the feedback condition; then, the figure format is shaped according to the trial class, as shown in Table 1. To facilitate the understanding of this logic, Figs. 9–11 show examples of signal traces and the resulting feedback of a randomly chosen subject during each feedback condition.

Table 1

Simplified rules for screen updates according to the trial class.

Converted output | Neutral trial | Positive trial
Less than 0 | More shaped figure | More deformed figure
Equal to 0 | Same figure | Same figure
Greater than 0 | More deformed figure | More shaped figure

Fig. 9

Example of signal traces from subject 8 during a positive trial with real feedback, including the classifier output from each moving window and the converted output according to the respective feedback condition. Continuous green and dotted red lines represent HHb and O2Hb concentrations, respectively.


Fig. 10

Example of signal traces from subject 8 during a neutral trial with fixed feedback, including the classifier output from each moving window and the converted output according to the respective feedback condition. Continuous green and dotted red lines represent HHb and O2Hb concentrations, respectively.


Fig. 11

Example of signal traces from subject 8 during a neutral trial with random feedback, including the classifier output from each moving window and the converted output according to the respective feedback condition. Continuous green and dotted red lines represent HHb and O2Hb concentrations, respectively.


In Fig. 9, we can see an example of a positive trial with real feedback. In this case, the classifier output is identical to the converted output, because during real feedback conditions the classifier output is always multiplied by one. The figure format is therefore faithful to the participant’s self-regulation. Also, following the rules for positive trials in Table 1, positive converted outputs lead to more shaped figures, whereas negative converted outputs lead to more deformed figures.

On the other hand, Fig. 10 shows an example of a neutral trial with fixed feedback. In this case, due to the fixed feedback condition, all converted outputs are equal to zero. Therefore, following Table 1, the same figure is presented on all feedback screens.

Finally, Fig. 11 shows an example of a neutral trial with random feedback. In this case, the converted outputs are predominantly different from the classifier output and, consequently, from the self-regulation patterns. Following the rules for neutral trials in Table 1, negative converted outputs lead to more shaped figures, whereas positive ones lead to more deformed figures.
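The conversion and screen-update rules described above can be sketched as follows. The Table 1 mapping is taken directly from the text; the range of the random multiplier is an assumption for illustration, as the paper only states that the output is multiplied by an arbitrary value.

```python
import random

def convert_output(classifier_output, condition, rng=random):
    """Convert the classifier output according to the feedback condition.

    real: output passed through unchanged (multiplied by one);
    fixed: forced to zero, so the figure never changes;
    random: multiplied by an arbitrary value, so the feedback can
    contradict the participant's self-regulation.
    (The range of the random multiplier here is an assumption.)
    """
    if condition == "real":
        return classifier_output * 1.0
    if condition == "fixed":
        return 0.0
    if condition == "random":
        return classifier_output * rng.uniform(-2.0, 2.0)
    raise ValueError(f"unknown feedback condition: {condition}")

def update_figure(converted, trial_class):
    """Screen-update rule from Table 1."""
    if converted == 0:
        return "same figure"
    toward_shape = converted > 0
    if trial_class == "neutral":  # Table 1 inverts the mapping for neutral trials
        toward_shape = not toward_shape
    return "more shaped figure" if toward_shape else "more deformed figure"
```

For example, a positive converted output during a positive trial shapes the figure, while the same output during a neutral trial deforms it, matching the rows of Table 1.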

Disclosures

All authors declare no conflicts of interest.

Acknowledgments

This work was supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Grant No. 88881.134039/2016-01) and the Fundação de Amparo à Pesquisa do Estado de São Paulo (Grant No. 2015/17406-5). We are grateful to Jackson Cionek (Brainsupport Brazil) and Guilherme A. Z. Moraes (NIRx) for the technological support.

References

1. 

N. Birbaumer et al., “Neurofeedback and brain–computer interface: clinical applications,” Int. Rev. Neurobiol., 86 107 –117 (2009). https://doi.org/10.1016/S0074-7742(09)86008-X Google Scholar

2. 

R. Sitaram et al., “Closed-loop brain training: the science of neurofeedback,” Nat. Rev. Neurosci., 18 (2), 86 –100 (2017). https://doi.org/10.1038/nrn.2016.164 NRNAAN 1471-003X Google Scholar

3. 

C. Mühl et al., “A survey of affective brain–computer interfaces: principles, state-of-the-art, and challenges,” Brain Comput. Interfaces, 1 (2), 66 –84 (2014). https://doi.org/10.1080/2326263X.2014.912881 Google Scholar

4. 

G. Liberati, S. Federici and E. Pasqualotto, “Extracting neurophysiological signals reflecting users’ emotional and affective responses to BCI use: a systematic literature review,” NeuroRehabilitation, 37 (3), 341 –358 (2015). https://doi.org/10.3233/NRE-151266 Google Scholar

5. 

S. Kim and N. Birbaumer, “Real-time functional MRI neurofeedback: a tool for psychiatry,” Curr. Opin. Psychiatry, 27 (5), 332 –336 (2014). https://doi.org/10.1097/YCO.0000000000000087 COPPE8 Google Scholar

6. 

D. C. Hammond, “Neurofeedback with anxiety and affective disorders,” Child Adolesc. Psychiatr. Clin. N. Am., 14 (1), 105 –123 (2005). https://doi.org/10.1016/j.chc.2004.07.008 Google Scholar

7. 

T. Fovet, R. Jardri and D. Linden, “Current issues in the use of fMRI-based neurofeedback to relieve psychiatric symptoms,” Curr. Pharm. Des., 21 (23), 3384 –3394 (2015). https://doi.org/10.2174/1381612821666150619092540 Google Scholar

8. 

D. E. Linden et al., “Real-time self-regulation of emotion networks in patients with depression,” PLoS One, 7 (6), e38115 (2012). https://doi.org/10.1371/journal.pone.0038115 POLNCL 1932-6203 Google Scholar

9. 

N. Naseer and K. S. Hong, “fNIRS-based brain–computer interfaces: a review,” Front. Hum. Neurosci., 9 3 (2015). https://doi.org/10.3389/fnhum.2015.00003 Google Scholar

10. 

K. Sakatani et al., “NIRS-based neurofeedback learning systems for controlling activity of the prefrontal cortex,” Adv. Exp. Med. Biol., 789 449 –454 (2013). https://doi.org/10.1007/978-1-4614-7411-1 AEMBAP 0065-2598 Google Scholar

11. 

A. Villringer et al., “Near-infrared spectroscopy (NIRS): a new tool to study hemodynamic changes during activation of brain function in human adults,” Neurosci. Lett., 154 (1–2), 101 –104 (1993). https://doi.org/10.1016/0304-3940(93)90181-J NELED5 0304-3940 Google Scholar

12. 

H. Doi, S. Nishitani and K. Shinohara, “NIRS as a tool for assaying emotional function in the prefrontal cortex,” Front. Hum. Neurosci., 7 770 (2013). https://doi.org/10.3389/fnhum.2013.00770 Google Scholar

13. 

R. C. A. Bendall, P. Eachus and C. Thompson, “A brief review of research using near-infrared spectroscopy to measure activation of the prefrontal cortex during emotional processing: the importance of experimental design,” Front. Hum. Neurosci., 10 529 (2016). https://doi.org/10.3389/fnhum.2016.00529 Google Scholar

14. 

N. Naseer et al., “Determining optimal feature-combination for LDA classification of functional near-infrared spectroscopy signals in brain–computer interface application,” Front. Hum. Neurosci., 10 237 (2016). https://doi.org/10.3389/fnhum.2016.00237 Google Scholar

15. 

N. Naseer et al., “Analysis of different classification techniques for two-class functional near-infrared spectroscopy-based brain–computer interface,” Comput. Intell. Neurosci., 2016 1 –11 (2016). https://doi.org/10.1155/2016/5480760 Google Scholar

16. 

S. J. Johnston et al., “Neurofeedback: a promising tool for the self-regulation of emotion networks,” NeuroImage, 49 (1), 1066 –1072 (2010). https://doi.org/10.1016/j.neuroimage.2009.07.056 NEIMEF 1053-8119 Google Scholar

17. 

S. Johnston et al., “Upregulation of emotion areas through neurofeedback with a focus on positive mood,” Cognit. Affective Behav. Neurosci., 11 (1), 44 –51 (2011). https://doi.org/10.3758/s13415-010-0010-1 Google Scholar

18. 

K. A. Lindquist et al., “The brain basis of emotion: a meta-analytic review,” Behav. Brain Sci., 35 (3), 121 –143 (2012). https://doi.org/10.1017/S0140525X11000446 BBSCDH 0140-525X Google Scholar

19. 

K. A. Lindquist et al., “The brain basis of positive and negative affect: evidence from a meta-analysis of the human neuroimaging literature,” Cereb. Cortex, 26 (5), 1910 –1922 (2016). https://doi.org/10.1093/cercor/bhv001 53OPAV 1047-3211 Google Scholar

20. 

X. Cui et al., “Vividness of mental imagery: individual variability can be measured objectively,” Vision Res., 47 (4), 474 –478 (2007). https://doi.org/10.1016/j.visres.2006.11.013 VISRAM 0042-6989 Google Scholar

21. 

A. Köchel et al., “Affective perception and imagery: a NIRS study,” Int. J. Psychophysiol., 80 (3), 192 –197 (2011). https://doi.org/10.1016/j.ijpsycho.2011.03.006 IJPSEE 0167-8760 Google Scholar

22. 

N. Birbaumer, S. Ruiz and R. Sitaram, “Learned regulation of brain metabolism,” Trends Cogn. Sci., 17 (6), 295 –302 (2013). https://doi.org/10.1016/j.tics.2013.04.009 TCSCFK 1364-6613 Google Scholar

23. 

E. A. Curran and M. J. Stokes, “Learning to control brain activity: a review of the production and control of EEG components for driving brain–computer interface (BCI) systems,” Brain Cognit., 51 (3), 326 –336 (2003). https://doi.org/10.1016/S0278-2626(03)00036-8 Google Scholar

24. 

D. J. McFarland, L. M. McCane and J. R. Wolpaw, “EEG-based communication and control: short-term role of feedback,” IEEE Trans. Rehabil. Eng., 6 (1), 7 –11 (1998). https://doi.org/10.1109/86.662615 IEEREN 1063-6528 Google Scholar

25. 

C. Guger et al., “How many people are able to operate an EEG-based brain–computer interface (BCI)?,” IEEE Trans. Neural Syst. Rehabil. Eng., 11 (2), 145 –147 (2003). https://doi.org/10.1109/TNSRE.2003.814481 Google Scholar

26. 

M. Gonzalez-Franco et al., “Motor imagery based brain–computer interface: a study of the effect of positive and negative feedback,” in Conf. Proc. IEEE Engineering in Medicine and Biology Society (EMBC), 6323 –6326 (2011). https://doi.org/10.1109/IEMBS.2011.6091560 Google Scholar

27. 

B. Z. Allison, C. Neuper, “Could anyone use a BCI?,” Brain–Computer Interfaces, 35 –54 Springer Verlag, London (2010). Google Scholar

28. 

B. Blankertz et al., “Neurophysiological predictor of SMR-based BCI performance,” NeuroImage, 51 (4), 1303 –1309 (2010). https://doi.org/10.1016/j.neuroimage.2010.03.022 NEIMEF 1053-8119 Google Scholar

29. 

O. Alkoby et al., “Can we predict who will respond to neurofeedback? A review of the inefficacy problem and existing predictors for successful EEG neurofeedback learning,” Neuroscience, 378 155 –164 (2018). https://doi.org/10.1016/j.neuroscience.2016.12.050 Google Scholar

30. 

W. Nan et al., “Individual alpha neurofeedback training effect on short term memory,” Int. J. Psychophysiol., 86 (1), 83 –87 (2012). https://doi.org/10.1016/j.ijpsycho.2012.07.182 IJPSEE 0167-8760 Google Scholar

31. 

S. E. Kober et al., “Learning to modulate one’s own brain activity: the effect of spontaneous mental strategies,” Front. Hum. Neurosci., 7 695 (2013). https://doi.org/10.3389/fnhum.2013.00695 Google Scholar

32. 

M. Witte et al., “Control beliefs can predict the ability to up-regulate sensorimotor rhythm during neurofeedback training,” Front. Hum. Neurosci., 7 478 (2013). https://doi.org/10.3389/fnhum.2013.00478 Google Scholar

33. 

E. M. Hammer et al., “Psychological predictors of SMR-BCI performance,” Biol. Psychol., 89 (1), 80 –86 (2012). https://doi.org/10.1016/j.biopsycho.2011.09.006 Google Scholar

34. 

T. Dickhaus et al., “Predicting BCI performance to study BCI illiteracy,” BMC Neurosci., 10 (Suppl. 1), P84 (2009). https://doi.org/10.1186/1471-2202-10-S1-P84 1471-2202 Google Scholar

35. 

E. Weber et al., “Predicting successful learning of SMR neurofeedback in healthy participants: methodological considerations,” Appl. Psychophysiol. Biofeedback, 36 (1), 37 –45 (2011). https://doi.org/10.1007/s10484-010-9142-x Google Scholar

36. 

M. Ahn et al., “High theta and low alpha powers may be indicative of BCI-Illiteracy in motor imagery,” PLoS One, 8 (11), e80886 (2013). https://doi.org/10.1371/journal.pone.0080886 POLNCL 1932-6203 Google Scholar

37. 

D. Scheinost et al., “Resting state functional connectivity predicts neurofeedback response,” Front. Behav. Neurosci., 8 338 (2014). https://doi.org/10.3389/fnbeh.2014.00338 Google Scholar

38. 

F. Wan et al., “Resting alpha activity predicts learning ability in alpha neurofeedback,” Front. Hum. Neurosci., 8 500 (2014). https://doi.org/10.3389/fnhum.2014.00500 Google Scholar

39. 

S. Halder et al., “Prediction of brain–computer interface aptitude from individual brain structure,” Front. Hum. Neurosci., 7 105 (2013). https://doi.org/10.3389/fnhum.2013.00105 Google Scholar

40. 

Z. Shehzad et al., “A multivariate distance-based analytic framework for connectome-wide association studies,” NeuroImage, 93 (Pt. 1), 74 –94 (2014). https://doi.org/10.1016/j.neuroimage.2014.02.024 NEIMEF 1053-8119 Google Scholar

41. 

E. S. Finn et al., “Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity,” Nat. Neurosci., 18 (11), 1664 –1671 (2015). https://doi.org/10.1038/nn.4135 NANEFN 1097-6256 Google Scholar

42. 

M. Essenpreis et al., “Spectral dependence of temporal point spread functions in human tissues,” Appl. Opt., 32 (4), 418 –425 (1993). https://doi.org/10.1364/AO.32.000418 APOPAI 0003-6935 Google Scholar

43. 

R. A. Stern, VAMS: Visual Analog Mood Scales: Professional Manual, Psychological Assessment Resources, Odessa, Ukraine (1997). Google Scholar

44. 

D. H. Brainard, “The psychophysics toolbox,” Spat. Vision, 10 (4), 433 –436 (1997). https://doi.org/10.1163/156856897X00357 SPVIEU 0169-1015 Google Scholar

45. 

D. G. Pelli, “The VideoToolbox software for visual psychophysics: transforming numbers into movies,” Spat. Vision, 10 437 –442 (1997). https://doi.org/10.1163/156856897X00366 SPVIEU 0169-1015 Google Scholar

46. 

M. Kleiner, D. Brainard and D. G. Pelli, “What’s new in psychtoolbox-3,” Perception, 36 14 (2007). https://doi.org/10.1068/v070821 PCTNBA 0301-0066 Google Scholar

47. 

G. Gratton and P. M. Corballis, “Removing the heart from the brain: compensation for the pulse artifact in the photon migration signal,” Psychophysiology, 32 (3), 292 –299 (1995). https://doi.org/10.1111/psyp.1995.32.issue-3 PSPHAF 0048-5772 Google Scholar

48. 

C. J. Soraghan et al., “A dual-channel optical brain–computer interface in a gaming environment,” in CGAMES—9th Int. Conf. on Computer Games: AI, Animation, Mobile, Educational and Serious Games, 1 –5 (2006). Google Scholar

49. 

F. Matthews et al., “Hemodynamics for brain–computer interfaces,” IEEE Signal Process. Mag., 25 (1), 87 –94 (2008). https://doi.org/10.1109/MSP.2008.4408445 ISPRE6 1053-5888 Google Scholar

50. 

J. R. Sato et al., “Real-time fMRI pattern decoding and neurofeedback using FRIEND: an FSL-integrated BCI toolbox,” PLoS One, 8 (12), e81658 (2013). https://doi.org/10.1371/journal.pone.0081658 POLNCL 1932-6203 Google Scholar

51. 

J. Moll et al., “Voluntary enhancement of neural signatures of affiliative emotion using FMRI neurofeedback,” PLoS One, 9 (5), e97343 (2014). https://doi.org/10.1371/journal.pone.0097343 POLNCL 1932-6203 Google Scholar

52. 

S. M. LaConte, S. J. Peltier and X. P. Hu, “Real-time fMRI using brain-state classification,” Hum. Brain Mapp., 28 (10), 1033 –1044 (2007). https://doi.org/10.1002/hbm.20326 HBRME7 1065-9471 Google Scholar

53. 

D. Garrett et al., “Comparison of linear, nonlinear, and feature selection methods for EEG signal classification,” IEEE Trans. Neural Syst. Rehabil. Eng., 11 (2), 141 –144 (2003). https://doi.org/10.1109/TNSRE.2003.814441 Google Scholar

54. 

F. Lotte et al., “A review of classification algorithms for EEG-based brain–computer interfaces,” J. Neural Eng., 4 R1 (2007). https://doi.org/10.1088/1741-2560/4/2/R01 1741-2560 Google Scholar

55. 

J. R. Sato et al., “Evaluating SVM and MLDA in the extraction of discriminant regions for mental state prediction,” NeuroImage, 46 (1), 105 –114 (2009). https://doi.org/10.1016/j.neuroimage.2009.01.032 NEIMEF 1053-8119 Google Scholar

56. 

C. A. Kothe and S. Makeig, “BCILAB: a platform for brain–computer interface development,” J. Neural. Eng., 10 (5), 056014 (2013). https://doi.org/10.1088/1741-2560/10/5/056014 1741-2560 Google Scholar

57. 

N. Naseer and K. S. Hong, “Classification of functional near-infrared spectroscopy signals corresponding to the right-and left-wrist motor imagery for development of a brain–computer interface,” Neurosci. Lett., 553 84 –89 (2013). https://doi.org/10.1016/j.neulet.2013.08.021 NELED5 0304-3940 Google Scholar

58. 

N. Naseer, M. J. Hong and K. S. Hong, “Online binary decision decoding using functional near-infrared spectroscopy for the development of brain–computer interface,” Exp. Brain Res., 232 (2), 555 –564 (2014). https://doi.org/10.1007/s00221-013-3764-1 EXBRAP 0014-4819 Google Scholar

59. 

M. A. Yücel et al., “Mayer waves reduce the accuracy of estimated hemodynamic response functions in functional near-infrared spectroscopy,” Biomed. Opt. Express, 7 (8), 3078 –3088 (2016). https://doi.org/10.1364/BOE.7.003078 BOEICL 2156-7085 Google Scholar

60. 

F. T. Sun, L. M. Miller and M. D’esposito, “Measuring interregional functional connectivity using coherence and partial coherence analyses of fMRI data,” NeuroImage, 21 (2), 647 –658 (2004). https://doi.org/10.1016/j.neuroimage.2003.09.056 NEIMEF 1053-8119 Google Scholar

61. 

R. B. Mars et al., “Connectivity profiles reveal the relationship between brain areas for social cognition in human and monkey temporoparietal cortex,” Proc. Natl. Acad. Sci. U. S. A., 110 (26), 10806 –10811 (2013). https://doi.org/10.1073/pnas.1302956110 Google Scholar

62. 

D. J. McFarland et al., “BCI meeting 2005-workshop on BCI signal processing: feature extraction and translation,” IEEE Trans. Neural Syst. Rehabil. Eng., 14 (2), 135 –138 (2006). https://doi.org/10.1109/TNSRE.2006.875637 Google Scholar

63. G. Pfurtscheller et al., "Current trends in Graz brain–computer interface (BCI) research," IEEE Trans. Rehabil. Eng. 8(2), 216–219 (2000). https://doi.org/10.1109/86.847821

64. J. A. Pineda et al., "Learning to control brain rhythms: making a brain–computer interface possible," IEEE Trans. Neural Syst. Rehabil. Eng. 11(2), 181–184 (2003). https://doi.org/10.1109/TNSRE.2003.814445

65. S. Enriquez-Geppert et al., "Modulation of frontal-midline theta by neurofeedback," Biol. Psychol. 95, 59–69 (2014). https://doi.org/10.1016/j.biopsycho.2013.02.019

66. V. Kaiser et al., "Cortical effects of user training in a motor imagery based brain–computer interface measured by fNIRS and EEG," NeuroImage 85(Pt. 1), 432–444 (2014). https://doi.org/10.1016/j.neuroimage.2013.04.097

67. L. F. Barrett et al., "The experience of emotion," Annu. Rev. Psychol. 58, 373–403 (2007). https://doi.org/10.1146/annurev.psych.58.110405.085709

68. L. F. Barrett and E. Bliss-Moreau, "Affect as a psychological primitive," Adv. Exp. Soc. Psychol. 41, 167–218 (2009). https://doi.org/10.1016/S0065-2601(08)00404-8

69. L. F. Barrett and A. B. Satpute, "Large-scale brain networks in affective and social neuroscience: towards an integrative functional architecture of the brain," Curr. Opin. Neurobiol. 23(3), 361–372 (2013). https://doi.org/10.1016/j.conb.2012.12.012

70. Y. Zhang et al., "Prediction of SSVEP-based BCI performance by the resting-state EEG network," J. Neural Eng. 10(6), 066017 (2013). https://doi.org/10.1088/1741-2560/10/6/066017

71. R. Zhang et al., "Efficient resting-state EEG network facilitates motor imagery performance," J. Neural Eng. 12(6), 066024 (2015). https://doi.org/10.1088/1741-2560/12/6/066024

72. M. Hampson et al., "Real-time fMRI biofeedback targeting the orbitofrontal cortex for contamination anxiety," J. Vis. Exp. 59, e3535 (2012). https://doi.org/10.3791/3535

73. D. Scheinost et al., "Orbitofrontal cortex neurofeedback produces lasting changes in contamination anxiety and resting-state connectivity," Transl. Psychiatry 3, e250 (2013). https://doi.org/10.1038/tp.2013.24

74. M. Mihara et al., "Neurofeedback using real-time near-infrared spectroscopy enhances motor imagery related cortical activation," PLoS One 7(3), e32234 (2012). https://doi.org/10.1371/journal.pone.0032234

75. L. R. Trambaiolli et al., "Decoding affective states across databases using functional near-infrared spectroscopy," bioRxiv 228007 (2017). https://doi.org/10.1101/228007

76. L. R. Trambaiolli et al., "Predicting affective valence using cortical hemodynamic signals," Sci. Rep. 8(1), 5406 (2018). https://doi.org/10.1038/s41598-018-23747-y

77. M. M. Plichta et al., "Event-related functional near-infrared spectroscopy (fNIRS): are the measurements reliable?," NeuroImage 31(1), 116–124 (2006). https://doi.org/10.1016/j.neuroimage.2005.12.008

78. J. Steinbrink et al., "Illuminating the BOLD signal: combined fMRI-fNIRS studies," Magn. Reson. Imaging 24(4), 495–505 (2006). https://doi.org/10.1016/j.mri.2005.12.034

79. T. H. Falk et al., "Taking NIRS-BCIs outside the lab: towards achieving robustness against environment noise," IEEE Trans. Neural Syst. Rehabil. Eng. 19(2), 136–146 (2011). https://doi.org/10.1109/TNSRE.2010.2078516

80. J. B. Balardin et al., "Imaging brain function with functional near-infrared spectroscopy in unconstrained environments," Front. Hum. Neurosci. 11, 258 (2017). https://doi.org/10.3389/fnhum.2017.00258

81. A. Zuberer, D. Brandeis, and R. Drechsler, "Are treatment effects of neurofeedback training in children with ADHD related to the successful regulation of brain activity? A review on the learning of regulation of brain activity and a contribution to the discussion on specificity," Front. Hum. Neurosci. 9, 135 (2015). https://doi.org/10.3389/fnhum.2015.00135

82. A. Barbero and M. Grosse-Wentrup, "Biased feedback in brain–computer interfaces," J. Neuroeng. Rehabil. 7(1), 34 (2010). https://doi.org/10.1186/1743-0003-7-34

83. T. D. Wager et al., "The neuroimaging of emotion," in The Handbook of Emotion, pp. 249–271, Guilford Press, New York (2008).

84. R. L. Buckner, J. R. Andrews-Hanna, and D. L. Schacter, "The brain's default network," Ann. N. Y. Acad. Sci. 1124, 1–38 (2008). https://doi.org/10.1196/annals.1440.011

85. J. R. Andrews-Hanna et al., "Functional-anatomic fractionation of the brain's default network," Neuron 65(4), 550–562 (2010). https://doi.org/10.1016/j.neuron.2010.02.005

86. L. Q. Uddin et al., "Functional connectivity of default mode network components: correlation, anticorrelation and causality," Hum. Brain Mapp. 30(2), 625–637 (2009). https://doi.org/10.1002/hbm.20531

87. C. G. Davey, J. Pujol, and B. J. Harrison, "Mapping the self in the brain's default mode network," NeuroImage 132, 390–397 (2016). https://doi.org/10.1016/j.neuroimage.2016.02.022

88. T. T. Raij and T. J. J. Riekki, "Dorsomedial prefrontal cortex supports spontaneous thinking per se," Hum. Brain Mapp. 38(6), 3277–3288 (2017). https://doi.org/10.1002/hbm.23589

89. R. C. Mesquita, M. A. Franceschini, and D. A. Boas, "Resting state functional connectivity of the whole head with near-infrared spectroscopy," Biomed. Opt. Express 1(1), 324–336 (2010). https://doi.org/10.1364/BOE.1.000324

90. S. L. Novi, R. B. Rodrigues, and R. C. Mesquita, "Resting state connectivity patterns with near-infrared spectroscopy data of the whole head," Biomed. Opt. Express 7, 2524–2537 (2016). https://doi.org/10.1364/BOE.7.002524

91. D. A. Gusnard et al., "Medial prefrontal cortex and self-referential mental activity: relation to a default mode of brain function," Proc. Natl. Acad. Sci. U. S. A. 98(7), 4259–4264 (2001). https://doi.org/10.1073/pnas.071043098

92. M. E. Raichle et al., "A default mode of brain function," Proc. Natl. Acad. Sci. U. S. A. 98(2), 676–682 (2001). https://doi.org/10.1073/pnas.98.2.676

93. M. Amft et al., "Definition and characterization of an extended social-affective default network," Brain Struct. Funct. 220(2), 1031–1049 (2015). https://doi.org/10.1007/s00429-013-0698-0

94. S. Ozawa and K. Hiraki, "Distraction decreases prefrontal oxygenation: a NIRS study," Brain Cognit. 113, 155–163 (2017). https://doi.org/10.1016/j.bandc.2017.02.003

95. S. Ozawa, G. Matsuda, and K. Hiraki, "Negative emotion modulates prefrontal cortex activity during a working memory task: a NIRS study," Front. Hum. Neurosci. 8, 46 (2014). https://doi.org/10.3389/fnhum.2014.00046

96. J. L. Price and W. C. Drevets, "Neurocircuitry of mood disorders," Neuropsychopharmacology 35(1), 192–216 (2010). https://doi.org/10.1038/npp.2009.104

97. J. Sulzer et al., "Real-time fMRI neurofeedback: progress and challenges," NeuroImage 76, 386–399 (2013). https://doi.org/10.1016/j.neuroimage.2013.03.033

98. R. T. Thibault, M. Lifshitz, and A. Raz, "The self-regulating brain and neurofeedback: experimental science and clinical promise," Cortex 74, 247–261 (2016). https://doi.org/10.1016/j.cortex.2015.10.024

99. M. Arns et al., "Neurofeedback: one of today's techniques in psychiatry?," Encephale 43(2), 135–145 (2017). https://doi.org/10.1016/j.encep.2016.11.003

Biographies for the authors are not available.

© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE) 2329-423X/2018/$25.00
Lucas R. Trambaiolli, Claudinei E. Biazoli, André M. Cravo, Tiago H. Falk, and João R. Sato "Functional near-infrared spectroscopy-based affective neurofeedback: feedback effect, illiteracy phenomena, and whole-connectivity profiles," Neurophotonics 5(3), 035009 (18 September 2018). https://doi.org/10.1117/1.NPh.5.3.035009
Received: 16 February 2018; Accepted: 10 August 2018; Published: 18 September 2018
Cited by 20 scholarly publications.
KEYWORDS: Near infrared spectroscopy, Spectroscopy, Matrices, Control systems, Neurophotonics, Mahalanobis distance, Brain