


Distraction Detection and Monitoring Using Eye
Tracking in Virtual Reality

Mahdi Zarour, Hamdi Ben Abdessalem and Claude Frasson

Département d’Informatique et de Recherche Opérationnelle


Université de Montréal, Montréal, Canada H3C 3J7
{mahdi.zarour, hamdi.ben.abdessalem}@umontreal.ca,
[email protected]

Abstract. Effective learning is highly affected by attention levels. Hence, Intelligent Tutoring Systems and other learning technologies should be able to monitor the attention levels of learners and detect distraction in real time to improve the learning process. We study the feasibility of detecting and monitoring the visual distraction of participants, while they complete cognitive tasks, using Eye Tracking in a Virtual Reality environment. We also investigate the possibility of improving the attention of participants using relaxation in Virtual Reality. The Eye Tracking distraction model we developed correctly predicts the distraction state of participants with an F1-score of 86%. We also found that the most appropriate window size to detect distraction ranges from three to six seconds. Furthermore, results suggest that our relaxation method significantly decreased the visual distraction of the participants.

Keywords: Eye Tracking, Virtual Reality, Distraction, Attention, Human Interaction.

1 Introduction

1.1 Distraction and Intelligent Tutoring Systems

Attention, concentration, and distraction are interrelated cognitive processes that determine our ability to focus and efficiently complete tasks. Attention serves as the basis for concentration, which is the ability to sustain attention on a particular task or thought. Distraction, on the other hand, diverts attention away from the primary task, disrupts our ability to concentrate, and can lead to reduced productivity and effectiveness [1].
Intelligent Tutoring Systems (ITSs) are educational technologies that provide personalized instruction and feedback to students. They are designed to adapt to individual learners' needs, providing personalized feedback and scaffolding to optimize learning outcomes [2].
Studies have shown that attention directly influences learning outcomes, with students who are better able to concentrate achieving higher academic performance [3]. Research has also shown that distractions negatively impact learning outcomes, as they divert attentional resources from the learning process and impair information processing [4]. Thus, one critical aspect of improving ITSs' effectiveness is attention management, which involves monitoring and guiding learners' attention and minimizing distractions to ensure they remain focused on the learning task [5].

1.2 Virtual Reality

Virtual Reality (VR) is an advanced technology that simulates environments realistically. It lies at the intersection of many fields, including electronic engineering, simulation, and computer graphics [6]. Many VR headsets now come equipped with Eye Tracking technology.
While numerous Eye Tracking experiments have been conducted in real-world settings, little has been done in Virtual Reality. Consumer-grade Eye Tracking devices constrain the user to always look ahead, and devices that allow the user to freely rotate the head are high-priced. With current VR technology, however, multiple consumer-grade VR headsets are equipped with Eye Tracking comparable to that of high-priced dedicated devices. Moreover, research has indicated higher learning performance and engagement in VR compared to classic methods [7, 8], suggesting that current and new learning methods and platforms, including ITSs, could soon target this technology. These results suggest that VR could help advance research on attention by providing simulation environments that generalize to real-life conditions.

1.3 Eye Tracking

Lee et al. developed a system to monitor the concentration level of learners in real time by analyzing pupillary response and eye blinking patterns using a simple commercial eye tracker and a web camera [9]. A machine learning model was first trained to discriminate between the "concentrated" and "not-concentrated" states. The predictions of the model were then averaged over periods of one second and used to build a real-time concentration monitoring system. Although the system performed reasonably well with one-second periods, other window lengths were not investigated and could significantly improve the performance of the proposed system.
Hutt et al. studied the feasibility of integrating cheap eye trackers into ITSs to monitor the attention of learners using an extensive list of eye movement features [10]. The developed machine learning model achieved an F1-score of 59% on a two-state classification problem (attentive vs. mind wandering, MW) in a participant-independent setting. They also used Cohen's d values to rank features by their contribution. Even though the model performance was higher than chance level, there is still room for improvement, and a distraction detection system with higher accuracy would be more useful.
Together, these results suggest that Eye Tracking could be used to effectively detect and monitor the distraction of individuals.

1.4 Relaxation and Attention Restoration

Attention Restoration Theory (ART) is a psychological framework suggesting that exposure to natural environments facilitates the restoration of voluntary attention capacity, reducing mental fatigue and enhancing cognitive functioning [11]. Gao et al. assessed ART with VR and EEG and showed that the experience had positive restorative effects on individuals' attentional fatigue and negative mood [12].
In [13], the authors used relaxation in Virtual Reality to study the possibility of decreasing negative emotions in the elderly, including frustration, anxiety, and apathy. The preliminary results showed a decrease in anxiety and frustration, an increase in memorization performance, and an improvement in cognitive abilities, particularly in attention exercises.
Together, these results suggest it is possible to improve the performance of individuals on tasks by restoring their attention and reducing their mental fatigue using relaxation. In addition to monitoring and detecting distraction, providing ITSs with a tool to improve the attention of learners would lead to more effective learning.

While Eye Tracking has previously seen extensive use in attention research, the same is not true of Virtual Reality. Virtual Reality environments are a convenient way to study attention, as they make it easy to manipulate attention levels experimentally.
The purpose of this study is to develop a means to detect and monitor distraction, and to develop methods to improve the attention of participants (by reducing their distraction) while they complete cognitive tasks in a Virtual Reality environment, using Eye Tracking technology. We put forward two hypotheses: 1) it is possible to detect and monitor the visual distraction levels of participants using Eye Tracking in VR while they complete cognitive tasks; 2) the visual distraction of participants will be lower after relaxation than before the relaxation period.

2 Experimentation

We built an experiment in VR to detect distraction and obtain data from an Eye Tracking headset. The experiment was divided into two parts. The first part was used to develop a means to detect and monitor levels of visual distraction using Eye Tracking, while the second part was used to investigate the effects of relaxation on the visual distraction levels of participants.

2.1 Hardware

For this experiment, we used an HTC VIVE Pro Eye VR headset with one right controller. The VR headset comes with an integrated Eye Tracking system that can track the eyes at a maximal frequency of 120 Hz (90 Hz in our experiment) with an accuracy between 0.5° and 1.1°. All participants used the same hardware to complete the experiment.

2.2 Experiment

We implemented the experiment using Unity, a popular game development engine that uses the C# programming language. The experiment took place in a room in Virtual Reality and consisted of two parts. Figure 1 shows an overview of the entire experiment and details of each part.

Fig. 1. Overview of the experiment

All participants completed the same tasks with the same values. The duration of the experiment was 40 minutes for each participant.
To detect distraction, we used Eye Tracking data obtained through the HTC SRanipal SDK, including the gaze origin (a three-dimensional point), the gaze direction (a three-dimensional vector), the pupil position (a two-dimensional vector indicating the position on the sensor area), the pupil size in millimeters (a real number), and the eye openness (a real number). Eye Tracking data were recorded manually in a CSV file using the obtained values and Unix-format timestamps from Unity.
In the first part of the experiment, we used mental tasks to induce concentration in participants. To induce a state of visual distraction, we introduced distractors and hints during the experiment in order to divert participants' attention away from the main task towards other objects in the environment. Participants were advised to look left or right for a supposed hint. The hints had two objectives: first, to keep participants distracted from the primary task, and second, to help participants find the correct answer in case they were unable to. The correct answer was always randomly hidden among the hints; we informed participants of this but instructed them to try to solve the problems without help.
Participants performed three types of cognitive tasks, a total of nine times. In the mental arithmetic task, they had to mentally solve an arithmetic problem of additions and subtractions with five or six operands. In the anagram task, participants had to rearrange a set of eight or nine letters to obtain a valid English word. In the memorization task, participants had to memorize a sequence of seven or eight digits in reverse order; the digits were presented one by one with a one-second between-digit interval. The mental arithmetic and anagram tasks have previously been used to study attention [14, 15]. The memorization task has not been used in attention research, but previous research suggests that attention is linked with memory [16], which makes it an interesting option.

In the second part of the experiment, we used the environment developed in [13, 17] to study the feasibility of improving concentration and decreasing distraction levels using relaxation in Virtual Reality. Participants were invited to travel in a virtual train, which has been shown to reduce negative emotions. Participants first completed a set of six cognitive tasks before going for relaxation in the virtual train, then completed a second, similar set of cognitive tasks. Relaxation was used to decrease the distraction of participants. The visual distraction levels of the participants before and after relaxation were later calculated and compared.
Participants started the experiment by putting on the Virtual Reality headset. All participants performed Eye Tracking calibration using the VIVE calibration tool. At the start of the experiment, a gray screen was displayed for one minute and participants were instructed not to think about anything; data from this period were used as a reference for later analysis. The experiment consisted of the two parts described previously. Participants were seated and interacted with the environment only through the VR controller.
Each task in the first part proceeded as follows. The problem was first presented on the screen and participants were instructed to solve it. After 30 seconds, the problem was hidden and a red window (distractor) was displayed over the screen instructing participants to look left to see a hint; at the same time, two suggestions were presented on the left side of the room. After five seconds, the red window disappeared and the problem reappeared (except for the memorization task, where nothing reappeared). After 15 seconds, the problem was hidden and a red window was displayed over the screen instructing the participant to look right to see a hint; at the same time, two suggestions were presented on the right side of the room. After 15 seconds, a keyboard appeared where participants could enter and validate their answers. Participants did not have a time limit and could re-submit their answer until it was correct. Figure 2 shows an example of the red window and hints displayed for a given task to distract the participants.

Fig. 2. Example of the red window and hints for a given task.

In the second part, during relaxation, participants went on a virtual train tour that lasted approximately six minutes. The train was moving, and participants could hear the rail wheels as well as relaxing music playing through the sound output system of the VR headset. They could also see non-player characters on the train, including a family seated next to the player. Participants visited three locations aboard the train: a forest, a frozen mountain, and a desert. A detailed description of the environment is reported in [13]. We later analyzed the data from the six cognitive tasks and compared the data before and after relaxation in order to assess the participants' attentional state during these two phases.

3 Analysis and Results

31 participants (M = 16, F = 15) aged between 17 and 44 (mean = 23, std = 5) undertook the experiment at BMU (Beam Me Up Labs Inc., Montréal, Quebec, Canada). All participants came from Canadian universities and were either current students or graduates, except one participant who was a CEGEP student.
The average duration of the experiment was 40 minutes per participant. We discarded the data of one participant who did not finish the experiment. The data from the remaining 30 participants (M = 15, F = 15) were used for the rest of the study. The scikit-learn Python library [18] was used for all computational analyses, and the matplotlib Python library [19] was used to create all plots.

3.1 Feature Extraction

Collected Eye Tracking data were analyzed using a custom Python script. When returning eye movement values at a given timestamp, the SRanipal SDK also returns a value indicating the validity of the data. Data are considered valid if the validity value is 31 and invalid otherwise. Thus, we dropped invalid eye movement samples for all participants.
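For illustration, a minimal sketch of this filtering step is shown below, assuming the CSV columns are named timestamp and validity (the actual column names in our logs are an implementation detail):

import pandas as pd

VALID_FLAG = 31  # SRanipal validity value indicating fully valid data

def load_valid_samples(csv_path: str) -> pd.DataFrame:
    """Load one participant's Eye Tracking log and keep only valid samples."""
    df = pd.read_csv(csv_path)
    # Keep rows whose validity flag equals 31; drop everything else.
    df = df[df["validity"] == VALID_FLAG]
    # Sort by Unix timestamp so later velocity computations see ordered data.
    return df.sort_values("timestamp").reset_index(drop=True)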
Hutt et al. [10] and Benedek et al. [20] used extensive lists of eye movement features in their studies of attention using Eye Tracking. Here, we extracted and used the most relevant features.
Imaoka et al. [21] studied the feasibility of using an HTC VIVE Virtual Reality headset to assess eye saccades, and their results suggested that the VIVE Pro Eye can function as an assessment tool for saccadic eye movements. Saccades and fixations were computed using a velocity-based identification algorithm [22]: similarly to [22], we considered eye movements with a velocity higher than 300 degrees per second to be saccades and eye movements with a velocity lower than 100 degrees per second to be fixations. We then extracted the saccade count, average saccade amplitude, fixation count, and average fixation duration.
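A minimal sketch of this velocity-threshold identification, assuming unit gaze direction vectors sampled at about 90 Hz (the function and variable names are illustrative, not those of our actual script):

import numpy as np

SACCADE_DEG_S = 300.0   # above this angular velocity: saccade
FIXATION_DEG_S = 100.0  # below this angular velocity: fixation

def angular_velocity(directions: np.ndarray, timestamps: np.ndarray) -> np.ndarray:
    """Angular velocity (deg/s) between consecutive unit gaze direction vectors."""
    dots = np.sum(directions[:-1] * directions[1:], axis=1)
    angles = np.degrees(np.arccos(np.clip(dots, -1.0, 1.0)))
    return angles / np.diff(timestamps)  # timestamps in seconds

def classify_samples(directions: np.ndarray, timestamps: np.ndarray) -> np.ndarray:
    """Label each inter-sample interval as saccade, fixation, or neither."""
    v = angular_velocity(directions, timestamps)
    labels = np.full(v.shape, "other", dtype=object)
    labels[v > SACCADE_DEG_S] = "saccade"
    labels[v < FIXATION_DEG_S] = "fixation"
    return labels

Consecutive intervals with the same label can then be collapsed into saccade and fixation events, from which the counts, amplitudes, and durations follow.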
Blinks were also computed using a simple threshold method, where samples with an eye openness lower than 45% were considered blinks. The blink count and average blink duration were then extracted.
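The sketch below illustrates this thresholding, collapsing consecutive sub-threshold samples into blink events; the 0.45 threshold is the one discussed in Section 4, and the helper name is illustrative:

import numpy as np

OPENNESS_THRESHOLD = 0.45  # eye openness below this value counts as a blink sample

def blink_features(openness: np.ndarray, timestamps: np.ndarray):
    """Return (blink count, average blink duration in seconds) for one segment."""
    is_blink = openness < OPENNESS_THRESHOLD
    durations, start = [], None
    for i, blinking in enumerate(is_blink):
        if blinking and start is None:
            start = timestamps[i]                    # blink onset
        elif not blinking and start is not None:
            durations.append(timestamps[i] - start)  # blink offset
            start = None
    if start is not None:                            # blink continues to segment end
        durations.append(timestamps[-1] - start)
    return len(durations), (float(np.mean(durations)) if durations else 0.0)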
Furthermore, the vergence angle of the eyes, i.e., the angle at which the two gaze rays converge, was computed from the gaze direction vectors; the average vergence angle and its variance were then extracted. Finally, we extracted the average pupil diameter and the pupil diameter variance.
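A sketch of the vergence computation, under the simplifying assumption that the vergence angle can be approximated as the angle between the left- and right-eye unit gaze direction vectors:

import numpy as np

def vergence_features(left_dirs: np.ndarray, right_dirs: np.ndarray):
    """Average vergence angle (degrees) and its variance over one segment."""
    dots = np.sum(left_dirs * right_dirs, axis=1)
    angles = np.degrees(np.arccos(np.clip(dots, -1.0, 1.0)))
    return float(np.mean(angles)), float(np.var(angles))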
To extract features for data samples, we used segments of data of length T. To compute saccades, fixations, blinks, and pupil diameters, the averaged data of both eyes were used. In total, 10 features were extracted from the Eye Tracking data.

3.2 Eye Tracking Model for Detecting Distraction

To study the feasibility of detecting distraction using Eye Tracking, we created a dataset of eye movement features. For each task, four samples, each with a time window T of five seconds, were created. The first two samples covered the data from second 10 to second 15 and from second 15 to second 20 and were labeled "not distracted". The other two samples covered the two five-second windows following the appearance of the red window (onset of the distractor) and were labeled "distracted". These labels are justified by the task structure described in Section 2.2. A total of 36 samples was created for each participant.
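As an illustration, a minimal sketch of this sample creation for one task, assuming task-relative timestamps in seconds in a column t and a hypothetical extract_features helper that computes the 10 features of Section 3.1 over a window:

def make_task_samples(df, distractor_onset: float, T: float = 5.0):
    """Create the four labeled samples (0 = not distracted, 1 = distracted) for one task."""
    windows = [
        (10.0, 10.0 + T, 0),                                  # "not distracted"
        (10.0 + T, 10.0 + 2 * T, 0),                          # "not distracted"
        (distractor_onset, distractor_onset + T, 1),          # "distracted"
        (distractor_onset + T, distractor_onset + 2 * T, 1),  # "distracted"
    ]
    samples = []
    for start, end, label in windows:
        segment = df[(df["t"] >= start) & (df["t"] < end)]
        samples.append((extract_features(segment), label))    # hypothetical helper
    return samples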
We trained multiple machine learning models on this classification task. The models were trained and tested using the leave-p-groups-out cross-validation (LPGOCV) method to evaluate them on data from groups never observed during training, where each group corresponded to the data of one participant. In each iteration, the models were trained on a subset of n-p groups and tested on the remaining p groups. This procedure was repeated until the models had been tested once on every possible combination of p groups, and the evaluation metrics were averaged. The maximal value p=4 (87% of the data for training and 13% for testing) was used, as larger values of p required significantly more time to complete. To improve model performance, a feature scaler was fitted on the training data. The average F1-score, recall, and precision over the two classes were used as metrics to evaluate the performance of the models.
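A sketch of this evaluation protocol with scikit-learn, assuming X is the feature matrix, y the labels, and groups the participant identifiers; placing the scaler inside a Pipeline ensures it is re-fitted on the training folds only. Note that p=4 over 30 groups generates a very large number of splits, which is why larger values of p quickly become impractical.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeavePGroupsOut, cross_validate

def evaluate_lpgocv(X, y, groups, p=4):
    """Leave-p-groups-out CV; returns (mean, std) for each metric."""
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_validate(model, X, y, groups=groups,
                            cv=LeavePGroupsOut(n_groups=p),
                            scoring=("f1", "recall", "precision"))
    return {m: (float(np.mean(scores[f"test_{m}"])), float(np.std(scores[f"test_{m}"])))
            for m in ("f1", "recall", "precision")}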

Table 1 shows the average scores and standard deviations after evaluating the models using LPGOCV with p=4. Logistic Regression, Random Forest, and Multilayer Perceptron achieved the highest F1-score of 86% for correctly detecting distraction, while K-Nearest Neighbors (KNN) achieved the lowest F1-score, 84%.

Table 1. Results after model evaluation using LPGOCV with p=4

Model                  F1 Distraction (Mean, Std)  Recall Distraction (Mean, Std)  Precision Distraction (Mean, Std)
KNN                    84.84, 4.96                 84.96, 9.08                     85.79, 6.75
Logistic Regression    86.43, 4.66                 86.50, 7.62                     87.03, 5.99
RBF SVM                86.40, 5.21                 86.48, 8.24                     87.07, 6.39
Random Forest          86.74, 3.95                 86.79, 6.72                     87.23, 5.44
Naive Bayes            85.74, 4.84                 85.83, 8.32                     86.49, 6.37
Multilayer Perceptron  86.43, 4.87                 86.51, 8.14                     87.13, 6.24
Random Baseline        49.92, 4.66                 50.00, 5.88                     50.01, 4.20

3.3 Investigation of Dataset Creation Parameters

The time window T represents the length in seconds of the data segments used to create the samples. Lee et al. [9] used a T value of one second to monitor attention but did not investigate other values. We compared the effects of different T values on the distraction models developed in Section 3.2. Moreover, due to the structure of the experiment used to collect the data, we investigated another parameter, the moment, which represents the time in seconds at which the samples labeled "not distracted" were created. Figure 3 shows an overview of the different phases of a task in the first part of the experiment and illustrates the different parameters for the creation of samples.

Fig. 3. Overview of tasks from part one, and illustration of the sample-creation parameters

For T, we tried values in the set St = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 30}. For the moment parameter, we tried three moments: at the beginning of the tasks (labeled "beginning"), ten seconds after the beginning of the tasks (labeled "middle"), and 20 seconds after the beginning of the tasks (labeled "ending"). The T values were investigated first, and the best T value was then used to investigate the moment parameter.

We compared the different parameter values on the distraction detection task from Section 3.2 using a Logistic Regression model. Figure 4 shows the F1-scores for the different T values. The window lengths 3, 4, 5, and 6 gave the highest F1-score, 86%.

Fig. 4. F1-scores for different values of T.

Window length values T from 3 to 6 inclusive had the highest average F1-score; T=3 was selected for the subsequent analysis. Eye movement data extracted starting from the middle of the focus phase (moment=middle) resulted in the highest F1-score of 86%, followed by the ending moment with 82% and the beginning moment with 77%. We used the parameters T=3 and moment=middle in the rest of the study.
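A hedged sketch of this two-step sweep, reusing the evaluate_lpgocv helper sketched in Section 3.2 and assuming a hypothetical build_dataset(T, moment) function that regenerates the samples for the given parameters; the moment used during the T sweep is assumed here to be "middle":

T_VALUES = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 30)
MOMENTS = ("beginning", "middle", "ending")

# Step 1: sweep the window length T with the moment provisionally fixed.
f1_by_T = {}
for T in T_VALUES:
    X, y, groups = build_dataset(T=T, moment="middle")  # hypothetical helper
    f1_by_T[T] = evaluate_lpgocv(X, y, groups)["f1"][0]
best_T = max(f1_by_T, key=f1_by_T.get)

# Step 2: sweep the moment parameter using the best T from step 1.
f1_by_moment = {}
for moment in MOMENTS:
    X, y, groups = build_dataset(T=best_T, moment=moment)
    f1_by_moment[moment] = evaluate_lpgocv(X, y, groups)["f1"][0]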

3.4 Improvement of Participants’ Attention

To investigate the effects of relaxation in VR on attention levels, we compared the eye movement data of participants while they completed the cognitive tasks before and after relaxation. This approach was used by Frasson and Ben Abdessalem [17] to investigate the effects of relaxation on negative emotions in the elderly. We split all the data from the second part into three-second segments and created eye movement samples. The samples were then classified, with label "0" if not distracted or label "1" if distracted, using a Logistic Regression model trained on the dataset from Section 3.2 with parameters T=3 and moment=middle.
Figure 5 shows the average visual distraction levels by participant and phase (before relaxation and after relaxation) using the classification labels. For 20 participants, the average level of distraction before relaxation exceeded the level after relaxation. Distraction decreased by 15% on average after relaxation compared to before.

Fig. 5. Distraction levels by participant and phase.

A one-tailed paired t-test was performed to compare the levels of distraction of participants before and after relaxation. The results from pre-relaxation (M = 0.14, SD = 0.10) and post-relaxation (M = 0.12, SD = 0.11) indicate that the relaxation in Virtual Reality resulted in a decrease in visual distraction levels; t(29) = 3.03, p = .002.
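A minimal sketch of this test with SciPy, assuming pre and post hold each participant's mean distraction level before and after relaxation; alternative="greater" encodes the one-tailed hypothesis that distraction is higher before relaxation (available in SciPy 1.6+):

import numpy as np
from scipy import stats

def compare_phases(pre: np.ndarray, post: np.ndarray):
    """One-tailed paired t-test: H1 is mean(pre) > mean(post)."""
    result = stats.ttest_rel(pre, post, alternative="greater")
    return result.statistic, result.pvalue  # e.g., t(29) = 3.03, p = .002 here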

4 Discussion

The purpose of this study was to find a means to detect and monitor distraction, and to find methods to improve the attention of individuals while they complete cognitive tasks in a Virtual Reality environment, using Eye Tracking technology. To do so, we designed a two-step experiment. In the first step, we experimentally manipulated the visual attention of participants using cognitive tasks and distractors. In the second step, participants completed a set of cognitive tasks, tried relaxation intended to improve their attention, and then completed another set of cognitive tasks. We then compared the attention levels of participants before and after relaxation to investigate the effects of our method.
We started by developing a tool to detect visual distraction by training machine learning models on samples containing eye movement features. The best model achieved an F1-score of 86% in a participant-independent setting with LPGOCV (p=4). These results suggest it is possible to effectively discriminate between the "distracted" and "not distracted" states using Eye Tracking in Virtual Reality, even for participants never seen before.
All eye features were computed manually using methods from the literature and were not the main focus of this study, suggesting that the distraction detection performance could be further improved. Saccades and fixations were computed using a simple velocity-based algorithm [22]. Blinks were computed using the same method as saccades and fixations, by collapsing successive blink points and non-blink points. We chose a high eye openness threshold for blinks (0.45, Section 3.1) because of the temporal resolution of the Eye Tracking device (90 Hz): blinks happen rapidly and the device could not always capture the eye openness during blinks, which resulted in the recorded eye openness value being high most of the time. The VIVE Pro Eye was previously validated as a tool to assess saccades and fixations [21]. In contrast, the computation of other eye movement features, such as eye blinks, using the VIVE Pro Eye has not been validated.
We investigated the dataset creation parameters T and moment, as in [23]. The classification models were robust to the different parameter values. While moment=middle resulted by far in the highest F1-score, T values between 3 and 6 all performed similarly. We selected T=3 for the subsequent analyses because it offers a good tradeoff between performance and window size: a small window makes it possible to monitor attention levels closely and detect changes quickly.
To investigate the second hypothesis, we used the attention model from Section 3.2 to retrospectively monitor the visual distraction levels before and after relaxation, using the monitoring method from [9]. The results revealed lower distraction levels after relaxation, and a paired t-test confirmed that relaxation significantly decreased the visual distraction levels. These findings agree with [13], which found that the performance of participants in attention exercises improved after relaxation compared to before.

5 Conclusion and Future Works

We developed an immersive Virtual Reality environment in order to manipulate the visual attention of individuals. We then used Eye Tracking data to create a model able to detect the visual distraction levels of participants while they solved cognitive problems. The model was validated and later used to retrospectively monitor the distraction levels of participants while we tried to improve their concentration using relaxation in VR. Our findings suggest that it is possible to effectively detect the visual distraction levels of learners using Eye Tracking, and to decrease their distraction levels using relaxation in Virtual Reality. Modern learning systems could greatly benefit from integrating these methods to monitor the attentional state of learners and adapt the proposed content accordingly.
Our results suggest that relaxation in VR could help reduce the visual distraction of individuals. These results are a good indicator that relaxation can improve the attention of individuals, but a study of the cognitive state of participants using tools such as EEG is necessary to confirm this theory.

Several aspects of attention have been studied using Eye Tracking: mind wandering in [10], and visual distraction in this work. While the work on mind wandering was promising, it may also have revealed the limitations of Eye Tracking for studying internal cognition [16]. Combining EEG and Eye Tracking may solve the existing problems in attention classification, for example by focusing Eye Tracking on the visual (external) aspect of attention and EEG on the internal aspect.

6 Limitations

Despite extensive searching, we found no freely available tool to extract eye movement features from the HTC VIVE Pro Eye headset. We therefore developed our own extraction method within our budget constraints, which implies that utilizing specialized tools could yield even better models.

Acknowledgements. We acknowledge NSERC-CRD (Natural Sciences and Engineering Research Council – Collaborative Research and Development), Prompt, and BMU (Beam Me Up) for funding this work.

References

1. Ophir, E., Nass, C., Wagner, A.D.: Cognitive control in media multitaskers. Proc. Natl. Acad. Sci. U.S.A. 106, 15583–15587 (2009). https://doi.org/10.1073/pnas.0903620106
2. Woolf, B., Burleson, W., Arroyo, I., Dragon, T., Cooper, D., Picard, R.: Affect-aware tutors: recognising and responding to student affect. IJLT. 4, 129 (2009). https://doi.org/10.1504/IJLT.2009.028804
3. Sharot, T., Phelps, E.A.: How arousal modulates memory: Disentangling the effects of attention and retention. Cognitive, Affective, & Behavioral Neuroscience. 4, 294–306 (2004). https://doi.org/10.3758/CABN.4.3.294
4. Sana, F., Weston, T., Cepeda, N.J.: Laptop multitasking hinders classroom learning for both users and nearby peers. Computers & Education. 62, 24–31 (2013). https://doi.org/10.1016/j.compedu.2012.10.003
5. Roll, I., Aleven, V., McLaren, B.M., Koedinger, K.R.: Improving students' help-seeking skills using metacognitive feedback in an intelligent tutoring system. Learning and Instruction. 21, 267–280 (2011). https://doi.org/10.1016/j.learninstruc.2010.07.004
6. Zheng, J.M., Chan, K.W., Gibson, I.: Virtual reality. IEEE Potentials. 17, 20–23 (1998). https://doi.org/10.1109/45.666641
7. Allcoat, D., von Mühlenen, A.: Learning in virtual reality: Effects on performance, emotion and engagement. Research in Learning Technology. 26 (2018). https://doi.org/10.25304/rlt.v26.2140
8. Ghali, R., Abdessalem, H.B., Frasson, C.: Improving intuitive reasoning through assistance strategies in a virtual reality game. Presented at The Thirtieth International FLAIRS Conference, Marco Island, Florida, USA (2017).

9. Lee, G., Ojha, A., Lee, M.: Concentration Monitoring for Intelligent Tutoring System Based on Pupil and Eye-blink. In: Proceedings of the 3rd International Conference on Human-Agent Interaction. pp. 291–294. ACM, Daegu Kyungpook, Republic of Korea (2015). https://doi.org/10.1145/2814940.2815000
10. Hutt, S., Mills, C., Bosch, N., Krasich, K., Brockmole, J., D'Mello, S.: "Out of the Fr-Eye-ing Pan": Towards Gaze-Based Models of Attention during Learning with Technology in the Classroom. In: Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization. pp. 94–103. ACM, Bratislava, Slovakia (2017). https://doi.org/10.1145/3079628.3079669
11. Kaplan, S.: The restorative benefits of nature: Toward an integrative framework. Journal of Environmental Psychology. 15, 169–182 (1995). https://doi.org/10.1016/0272-4944(95)90001-2
12. Gao, Zhang, Zhu, Gao, Qiu: Exploring Psychophysiological Restoration and Individual Preference in the Different Environments Based on Virtual Reality. IJERPH. 16, 3102 (2019). https://doi.org/10.3390/ijerph16173102
13. Ben Abdessalem, H., Boukadida, M., Bruneau, M.-A., Robert, P., Belleville, S., David, R., Frasson, C.: Immersion en train thérapeutique pour la relaxation de patients Alzheimer [Therapeutic train immersion for the relaxation of Alzheimer patients]. French Journal of Psychiatry. 1, S152 (2019). https://doi.org/10.1016/j.fjpsy.2019.10.422
14. Huang, M.X., Li, J., Ngai, G., Leong, H.V., Bulling, A.: Moment-to-Moment Detection of Internal Thought from Eye Vergence Behaviour. (2019). https://doi.org/10.48550/ARXIV.1901.06572
15. Myrden, A., Chau, T.: A Passive EEG-BCI for Single-Trial Detection of Changes in Mental State. IEEE Trans. Neural Syst. Rehabil. Eng. 25, 345–356 (2017). https://doi.org/10.1109/TNSRE.2016.2641956
16. Chun, M.M., Golomb, J.D., Turk-Browne, N.B.: A Taxonomy of External and Internal Attention. Annu. Rev. Psychol. 62, 73–101 (2011). https://doi.org/10.1146/annurev.psych.093008.100427
17. Frasson, C., Abdessalem, H.: Contribution of Virtual Reality Environments and Artificial Intelligence for Alzheimer. MRAJ. 10 (2022). https://doi.org/10.18103/mra.v10i9.3054
18. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Müller, A., Nothman, J., Louppe, G., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, É.: Scikit-learn: Machine Learning in Python. (2012). https://doi.org/10.48550/ARXIV.1201.0490
19. Hunter, J.D.: Matplotlib: A 2D Graphics Environment. Comput. Sci. Eng. 9, 90–95 (2007). https://doi.org/10.1109/MCSE.2007.55
20. Benedek, M., Stoiser, R., Walcher, S., Körner, C.: Eye Behavior Associated with Internally versus Externally Directed Cognition. Front. Psychol. 8, 1092 (2017). https://doi.org/10.3389/fpsyg.2017.01092
21. Imaoka, Y., Flury, A., de Bruin, E.D.: Assessing Saccadic Eye Movements With Head-Mounted Display Virtual Reality Technology. Front. Psychiatry. 11, 572938 (2020). https://doi.org/10.3389/fpsyt.2020.572938
22. Salvucci, D.D., Goldberg, J.H.: Identifying fixations and saccades in eye-tracking protocols. In: Proceedings of the Symposium on Eye Tracking Research & Applications - ETRA '00. pp. 71–78. ACM Press, Palm Beach Gardens, Florida, United States (2000). https://doi.org/10.1145/355017.355028

23. Atyabi, A., Fitzgibbon, S.P., Powers, D.M.W.: Multiplication of EEG Samples through Replicating, Biasing, and Overlapping. In: Zanzotto, F.M., Tsumoto, S., Taatgen, N., Yao, Y. (eds.) Brain Informatics. pp. 209–219. Springer, Berlin, Heidelberg (2012). https://doi.org/10.1007/978-3-642-35139-6_20

