Giulio Jacucci, Oswald Barral, Pedram Daee, Markus Wenzel, Baris Serim, Tuukka Ruotsalo, Patrik Pluchino, Jonathan Freeman, Luciano Gamberini, Samuel Kaski, Benjamin Blankertz.
Abstract
The use of implicit relevance feedback from neurophysiology could deliver effortless information retrieval. However, both computing neurophysiologic responses and retrieving documents are characterized by uncertainty because of noisy signals and incomplete or inconsistent representations of the data. We present a first-of-its-kind, fully integrated information retrieval system that makes use of online implicit relevance feedback generated from brain activity, as measured through electroencephalography (EEG), and eye movements. The findings of the evaluation experiment (N = 16) show that we are able to compute online neurophysiology-based relevance feedback with performance significantly better than chance in complex data domains and realistic search tasks. We contribute by demonstrating how to integrate this inherently noisy implicit relevance feedback, combined with scarce explicit feedback, into interactive intent modeling. Although experimental measures of task performance did not allow us to demonstrate how the classification outcomes translated into search task performance, the experiment proved that our approach is able to generate relevance feedback from brain signals and eye movements in a realistic scenario, thus providing promising implications for future work in neuroadaptive information retrieval (IR).
Year: 2019 PMID: 31763361 PMCID: PMC6853416 DOI: 10.1002/asi.24161
Source DB: PubMed Journal: J Assoc Inf Sci Technol ISSN: 2330-1635 Impact factor: 2.687
Figure 1. From target to relevance detection. The classical row-column speller (a) essentially reduces to detecting flashing. The center speller (b) relies on recognizing a target shape/color. In contrast, the task of searching for relevant terms (c) is incomparably more complex. [Color figure can be viewed at http://wileyonlinelibrary.com]
Figure 2. A screenshot of the user interface displaying the intent model view.
Figure 3. A screenshot of the user interface displaying the document view.
Figure 4. Summary of the system as a control loop during the online phase.
Figure 5. Components of the system.
Figure 6. Individual classification performance in the calibration phase in terms of area under the ROC curve (AUROC), and improvement over the random baseline at the levels of p < 0.05 (*) and p < 0.001 (**). The horizontal lines represent the mean (solid) and random (dashed) performance.
Figure 7. Individual classification performance in terms of area under the ROC curve (AUROC). Left: offline prediction in the "calibration phase." Middle: neurophysiologic prediction in the "online phase." Right: intent model prediction in the "online phase." Smaller black dots and dashed lines indicate mean classification performance. The dashed horizontal line represents random classification. Participants for whom calibration did not outperform random predictions are presented in gray.
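Figures 6 and 7 report classifier quality as the area under the ROC curve (AUROC), compared against a chance-level baseline of 0.5. The sketch below is not the authors' code; it is a minimal, self-contained illustration of how AUROC can be computed for binary relevance predictions using the Mann-Whitney U rank formulation, where AUROC equals the probability that a randomly chosen relevant item is scored above a randomly chosen non-relevant one.

```python
# Minimal AUROC sketch (illustrative only, not the authors' pipeline).
# labels: 1 = relevant, 0 = non-relevant; scores: classifier outputs.

def auroc(labels, scores):
    """AUROC = P(score of relevant > score of non-relevant),
    counting ties as 0.5. Chance level is 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both relevant and non-relevant examples")
    # Count pairwise "wins" of relevant over non-relevant scores.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation yields 1.0; chance-level scores hover around 0.5.
print(auroc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # -> 1.0
```

In practice, per-participant significance against the random baseline (the * and ** markers in Figure 6) would be assessed with a statistical test such as a permutation test over the label assignments.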