
Mental state and emotion detection from musically stimulated EEG.

Avinash L Tandle1, Manjusha S Joshi2, Ambrish S Dharmadhikari3, Suyog V Jaiswal4.   

Abstract

This literature survey clarifies the different approaches used to study the impact of musical stimuli on the human brain using the EEG modality. It examines the field through the key aspects of such studies: the experimental protocol, the EEG machine, the number of channels investigated, the features extracted, the categories of emotions, the brain areas, the brainwaves, the statistical tests, and the machine learning algorithms used for classification and validation of the developed models. This article comments on the particular strengths and weaknesses of these approaches. Finally, the review concludes with a suitable method for studying the impact of musical stimuli on the brain and discusses the implications of such studies.


Keywords:  EEG; Emotion; Machine learning; Music

Year:  2018        PMID: 30499008      PMCID: PMC6429168          DOI: 10.1186/s40708-018-0092-z

Source DB:  PubMed          Journal:  Brain Inform        ISSN: 2198-4026


Introduction

The human brain is a spectacularly complex organ, and little is known about how it processes emotion. Discovering how the brain processes emotion will impact not only artificial emotional intelligence and human–computer interfaces but also has many clinical implications for diagnosing affective diseases and neurological disorders. Several multidisciplinary, collaborative research efforts across the globe use different modalities of brain research to investigate how the brain processes emotion. There are many ways to evoke emotion; music is an excellent elicitor of emotion [1]. While listening to music, subjects show physiological responses such as shivering, a racing heart, goosebumps, laughter, a lump in the throat, sensual arousal and sweating [2]. Listening to music engages many mental processes, for example perception, multimodal integration, attention, memory retrieval, syntactic processing, processing of meaningful information, action, emotion and social cognition [3]. Thus, music is a potent stimulus for evoking emotions and investigating the processing functions of the human brain. Modalities of brain research can be categorised by how they measure the neuronal activity of the brain. Direct imaging measures the electrical or magnetic signals generated by neuronal activity, e.g. EEG (electroencephalography) and MEG (magnetoencephalography), whereas indirect imaging, e.g. fMRI (functional magnetic resonance imaging) and PET (positron emission tomography), measures neuronal activity through the oxygen consumption of neurons.
Indirect imaging has excellent spatial resolution (around 4 mm for PET and 2 mm for fMRI) but low temporal resolution (1–2 min for PET and 4–5 s for fMRI) [4], along with other disadvantages: the subject has to take a radionuclide dye (PET), the scanner is claustrophobic and noisy, it is mostly used for clinical research purposes, and both the machine cost ($20,00,000–800,000) and the scanning cost ($800–1500) are high [4]. Direct imaging has reasonably good spatial resolution (about 10 mm for EEG) and excellent temporal resolution (1 ms for EEG), and it has several practical advantages for stimulus-based experiments [4]: it is non-ionising, simple to operate, portable and silent, causes no claustrophobia, is comparatively inexpensive (machine cost $1000–$10,000, scan cost about $100) [4], and makes it easy to design stimulation experiments and to build HCI (human–computer interface) research and applications.
This article reviews the clinical and engineering literature to quantify the impact of musical stimuli. The aspects evaluated across the literature are: type of population and sample; EEG recording environment and recording machine; stimulus type, stimulus duration and emotion model; feature extraction transform and features extracted; brainwaves investigated; statistical tests and machine learning algorithms used; and assessment of the model. The paper is written as a summary of reviews, an analysis of the surveyed aspects and a synthesis of the reviewed aspects, and it is organised as follows: Sect. 2 covers the structural information of the brain, Sect. 3 describes literature selection and analysis, Sect. 4 gives a summary of the reviews, and Sects. 5, 6 and 7 present the discussion, suggested approach and conclusion, respectively.

Functional structure of the brain

Before studying EEG signals, we need to understand the structure of the brain. The human brain is divided into three major parts: the cerebrum, the cerebellum and the brain stem. The cerebrum is subdivided into the frontal lobe, parietal lobe, temporal lobe, occipital lobe, insular lobe and limbic lobe (see Fig. 1). Each part is associated with particular mental functions; for example, the parietal lobe perceives pain and taste sensations and is involved in problem-solving activities. The temporal lobe is concerned with hearing and memory. The occipital lobe mainly contains the regions used for vision-related tasks. The frontal lobe is principally associated with emotions, problem solving, speech and movement [6, 7]. An adult human brain holds, on average, 100 billion neurons [8]. Neurons process and transmit information through electrical and chemical signals, generating the neuronal oscillations called brainwaves, which EEG records. Table 1 shows the electrical and functional characteristics of these waves. The frequency range of EEG signals is 0.5–100 Hz, and the amplitude range is 10–100 μV [9]. The delta wave has the highest amplitude and lowest frequency, whereas gamma waves have the highest frequency and lowest amplitude. Across the reviews, the boundaries of the frequency bands vary by about ± 0.5–1 Hz.
Fig. 1

Functional diagram of brain diagram is adopted from [5]

Table 1

Electrical characteristics of significant brainwaves

Brainwave | Frequency range (Hz) | Amplitude (μV) | Mental function
δ | 0–4 | 10–100 | Unconsciousness during a deep dreamless sleep
θ | 4–8 | 10–50 | Subconscious mind, focused attention, emotion responses
α | 8–12 | 5–25 | Relaxed mental state
β1 | 12–16 | 0.1–1 | Intense focused mental activity
β2 | 16–30 | < 0.1 | Anxious alertness
γ | 30–99 | ≪ 0.1 | Hyper brain activity

Amplitude values measured during data collections
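The band decomposition of Table 1 can be reproduced numerically. The following is a minimal sketch, not code from any reviewed study: assuming a NumPy environment and the band edges of Table 1 (delta taken from 0.5 Hz, the lower limit of the EEG range quoted above), it computes per-band power from the FFT of one synthetic EEG epoch.

```python
import numpy as np

# Band edges (Hz) as listed in Table 1 (delta starts at 0.5 Hz, the EEG floor).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta1": (12, 16), "beta2": (16, 30), "gamma": (30, 99)}

def band_powers(signal, fs):
    """Return absolute power per brainwave band from an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Synthetic 2 s epoch at 256 Hz: a 10 Hz (alpha-band) oscillation plus weak noise.
fs = 256
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

powers = band_powers(epoch, fs)
assert max(powers, key=powers.get) == "alpha"  # 10 Hz falls in the 8-12 Hz band
```

In practice the same decomposition is applied per channel and per epoch before any asymmetry or classification step.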


Literature selection and analysis

The keywords used to select the articles were "EEG", "Music" and "Emotion", searched on repositories such as PubMed, IEEE Xplore and ScienceDirect and managed with the Mendeley research tool. A library of twenty-two quality papers from 2001 to 2018 was created using Mendeley [10]. The research methodology most commonly followed is shown in Fig. 2. The articles were analysed with respect to the general steps observed in an experiment: participants, stimulus, EEG machine, channels and montages, preprocessing, feature extraction, statistical testing and machine learning.
Fig. 2

Experimental approach adopted in reviews

Summary of reviews

This section summarises the findings and outcomes of all the selected articles. For musical stimuli known to vary in affective valence (positive versus negative) and intensity (intense versus calm), the author found a pattern of asymmetrical frontal EEG activity: greater relative left frontal EEG activity for joyful and happy musical excerpts and greater relative right frontal EEG activity for fearful and sad musical excerpts. The author additionally found that EEG asymmetry distinguished the intensity of emotion [11]. For distinct stimulus excerpts of jazz, rock-pop, classical music and environmental sounds, the author found that positive emotional attributions were accompanied by an increase in left temporal activation, and negative ones by a more bilateral pattern with dominance of the right fronto-temporal cortex; the author also found that female participants showed greater valence-related contrasts than males [12]. In another study, pleasant and unpleasant feelings were evoked by consonant and dissonant musical excerpts, and the author found that pleasant music was associated with an increase in frontal mid-line θ power [13]. In [14], an EEG-based emotion classification algorithm was explored using four types of musical excerpts; the hemispheric asymmetry power indices of brain activation were extracted as features. The author of [15] examined the connection between EEG signals and music-induced emotional responses using four emotional music excerpts (Oscar film tracks) and found that the low-frequency bands correlate with the evoked emotions. In [16], the author investigated the spatial and spectral patterns of emotions evoked by musical excerpts and found that the patterns most relevant to emotion were reproducible across subjects.
In [17], the author identified 30 subject-independent features that were most associated with emotion processing across subjects and investigated the feasibility of using fewer electrodes to characterise EEG dynamics during music listening. For rock-pop, electronic, jazz and broadband-noise stimuli, the author of [18] examined the relation between subjects' EEG responses and self-reported liked or disliked music; activity in particular frequency bands suggested a relationship between music preference and emotional arousal phenomena. In [19], the author found that the beta and theta frequency bands perform better than the other frequency bands. The author of [20] investigated liking and disliking under three familiarity conditions: music regardless of familiarity, familiar music and unfamiliar music. Familiar music gave the highest classification accuracy compared with the familiarity-independent and unfamiliar conditions. Among the musician and non-musician subjects who participated in [21], musicians had significantly lower frontal activity during music listening and music imagining than in the resting state. In [22], the author classified euphoric versus neutral, happy versus melancholic, and familiar versus new musical excerpts, investigated the brain networks related to happy, melancholic and neutral music, and related inter-/intra-regional connectivity patterns to the self-reported assessment of the musical excerpts. In [23], thirty participants of three different age groups (15–25 years, 26–35 years and 36–50 years) took part; the brain signals of the 26–35-year age group gave the best emotion recognition accuracy with respect to the self-reported emotions. The author of [24] proposed a novel user identification framework using EEG signals recorded while listening to music. The authors of [25] quantified emotional arousal corresponding to different musical clips.
The author of [26] suggests that unfamiliar songs are the most appropriate for constructing an emotion recognition system. The author of [27] explores the impact of the Indian instrumental music Raag Bhairavi using frontal theta asymmetry. The authors of [28, 29] propose a frontal theta asymmetry model for estimating the valence of evoked emotions and also suggest electrode reduction for neuromarketing applications. The author of [30] proposes frontal theta as a biomarker of depression.
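Several of the studies summarised above [11, 27–30] rest on a hemispheric asymmetry index, conventionally computed as the difference of log band powers at homologous right and left electrodes (e.g. F4 and F3), so that a positive value indicates relatively greater right-hemisphere power. As a minimal sketch (the numeric band powers below are purely illustrative, not taken from any reviewed study):

```python
import math

def asymmetry_index(power_left, power_right):
    """Conventional asymmetry index: ln(right) - ln(left) band power.

    Positive -> relatively more right-hemisphere power in this band.
    """
    if power_left <= 0 or power_right <= 0:
        raise ValueError("band powers must be positive")
    return math.log(power_right) - math.log(power_left)

# Illustrative alpha-band powers (uV^2) at F3 (left) and F4 (right).
ai = asymmetry_index(power_left=12.0, power_right=18.0)
assert ai > 0  # more right-frontal power in this toy example
```

The log transform makes the index independent of the overall power scale, which is why it is preferred over a raw power difference when comparing across subjects.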

Participants and their handedness

Handedness

The human brain has two nearly identical anatomical hemispheres, but each hemisphere has functional specialisations. Handedness, by a simplistic definition, is the dominant hand used in day-to-day activity [31]. Each hemisphere has specific dominant functions, such as the language abilities located in the left hemisphere of a right-handed person [32]. The brain is cross-wired: in the majority of people, the left hemisphere controls the right side of the body and vice versa. In research involving the brain and stimuli, we first need to know the handedness of participants, as it is an indicator of the dominant hemisphere. Because the dominant hemisphere has specialised functions, observations, findings and interpretations differ according to dominance. Many functions change hemisphere according to dominance in a particular person; for example, left-handed people tend to have language processing in the right hemisphere, and right-handed people in the left hemisphere [33]. The brain patterns of right- and left-handed persons are different [34]. This section analyses the nature of the subjects considered in the reviews (see Table 2).

Participants

The number of subjects ranges from 5 to 79, with a median of 20, and most of the researchers considered unbalanced numbers of males and females (see Table 2). When few subjects participate in a study, the outcome of the hypothesis test is always questionable. In 78% of the research, the authors reported right-handed subjects without using any handedness inventory; only 22% used the Edinburgh handedness inventory [37, 38]. In 95% of the investigations the researchers recruited normal participants, but few of them verified the normalcy. Most researchers selected participants who were students or working staff from the same background. The authors of [23] investigated the impact of the musical stimulus on different age groups. The authors of [21] studied the effect of the musical stimulus by recruiting musician and non-musician subjects. The authors of [30, 35] investigated the impact of the musical stimulus on mentally depressed subjects.
Table 2

Analysis on the basis of participant and handedness

References | Participants | Handedness inventory
[11] | 59 (29 males, 30 females), right-handed | Edinburgh
[12] | 16, right-handed | Edinburgh
[13] | 22 non-musicians | Edinburgh
[14] | 5 normal | NA
[15] | 26 normal | NA
[16] | 26 normal | NA
[17] | 26 normal | NA
[35] | 79 depressed | NA
[18] | 9 right-handed normal | NA
[19] | 5 right-handed normal | NA
[21] | 6 musicians (4 men, 2 women); 5 healthy non-musicians (4 men, 1 woman) | NA
[20] | 9 right-handed normal | NA
[36] | 13 right-handed normal | NA
[22] | 19 non-musicians (11 females, 8 males) | NA
[23] | 30 men and women of three different age groups (15–25 years, 26–35 years and 36–50 years) | NA
[24] | 60 normal | NA
[25] | 5 (3 males, 2 females) | NA
[26] | 15 normal | NA
[27–29] | 41 normal, right-handed | Edinburgh
[30] | 23 depressed, 17 normal, right-handed | Edinburgh

NA not available

Musical stimulus type, duration and emotions

Different genres of musical excerpts, pleasant and unpleasant, were selected to evoke different types of emotions; the stimuli chosen were classical, rock, hip-hop, jazz, metal, African drums, Oscar film tracks and environmental sounds (refer to Table 3). The authors of [13, 18] used noise along with pleasant stimuli to elicit negative emotion. The authors of [20] used familiar, unfamiliar and familiarity-independent music. The stimulus durations range from 2 s to 10 min, with a median of 30 s, and different excerpts were interleaved with a time gap. Self-reports of the evoked emotions were collected from the participating subjects. The emotions investigated were positive and negative emotions such as fear, happiness, sadness, anger, tiredness, liking, disliking, anxiety and depression. Some authors used a feel tracer to measure the arousal effect of the stimulus.
Table 3

Analysis on the basis of stimulus and emotions

References | Stimulus duration/type | Emotions
[11] | 60 s / excerpts varying in affective valence and intensity (i.e. intense vs. calm) | Fear, Joy, Happy, Sad
[12] | 15 s / jazz, rock-pop, classical music and environmental sounds | Positive, Negative
[13] | 1 min / consonant: 10 excerpts of joyful instrumental dance tunes; dissonant: electronically manipulated counterparts of the consonant excerpts | Pleasant, Unpleasant
[14–17] | 30 s / four types of musical excerpts | Joy, Anger, Sadness, Pleasure
[35] | 5 min / West-African Djembe drums and electronic hand drums | Depression
[18] | 15 s / rock-pop, electronic, jazz and classical (15 excerpts per genre) and 15 excerpts of broadband noise | Like, Dislike
[19] | 3 min / 16 pieces of music | Exciting, Relaxing
[21] | 2.5 min / Largo, D-flat major, "Going Home" | NA
[20] | 60 musical excerpts: LD (regardless of familiarity), LDF (familiar music), LDUF (unfamiliar music) | NA
[36] | 15 s / 10 film music excerpts | Anger, Fear, Happiness, Sadness, Tiredness
[22] | 60 s / Iranian music along with other classical excerpts | Valence, Arousal
[23] | 1 min / rap, metal, rock and hip-hop genres | Happy
[24] | 20 s / four music genres: electronic, classical and rock | Anger, Happiness, Calm, Sadness, Scare
[25] | 30 s / 8 cross-cultural instrumental pieces | Joy, Sorrow, Anxiety, Calm
[26] | 2 s / familiar and unfamiliar musical stimuli | Like, Dislike
[27–29] | 10 min / instrumental Raag Bhairavi | Like, Dislike
[30] | 10 min / instrumental Raag Bhairavi | Like, Dislike, Depression

EEG machine and channel investigated

Twelve different EEG machines are used in the reviewed articles (refer to Tables 4 and 5). All the EEG machines were surveyed on the features: compliance certification, PC interface, filters, number of channels, sampling frequency, compatible toolbox and electrode type. Almost all machines were FDA (Food and Drug Administration) or CE (Conformité Européenne) certified, and the compatible toolboxes were mostly MS Excel, MATLAB or LabVIEW. The 10–20 system of electrode placement was used in most of the reviews (refer to Fig. 3). The number of electrodes used in the reviewed articles ranges from 1 to 63, with a median of 21.5. In total, 75% of the articles reported referential montages taking A1 and A2 as reference electrodes. The authors of [11, 35] used the vertex electrode Cz as reference. The authors of [20] used the frontal mid-line electrode Fz as well as the A1 and A2 reference electrodes. The authors of [18] used a Laplacian montage.
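The referential A1/A2 montage reported in 75% of the articles amounts to subtracting a reference signal from every channel. A minimal sketch of that step, assuming a NumPy array of channels-by-samples (the channel names and random data below are illustrative only):

```python
import numpy as np

def rereference(data, channels, refs=("A1", "A2")):
    """Subtract the mean of the reference electrodes from every channel.

    data: array of shape (n_channels, n_samples); channels: channel names.
    """
    ref_idx = [channels.index(r) for r in refs]
    reference = data[ref_idx].mean(axis=0)  # linked-ears reference signal
    return data - reference                 # broadcast across all channels

channels = ["F3", "F4", "A1", "A2"]
rng = np.random.default_rng(1)
data = rng.standard_normal((4, 1000))
reref = rereference(data, channels)

# After linked-ears re-referencing, A1 and A2 become mirror images.
assert np.allclose(reref[2], -reref[3])
```

A Laplacian montage, by contrast, subtracts from each electrode the average of its spatial neighbours rather than a single common reference.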
Table 4

EEG machine and sampling frequency

References | EEG machine | Sampling frequency (Hz)
[11] | Electro-Cap, Inc. | 512
[12] | Electro-Cap International, Eaton, OH | 100
[13] | Electro-Cap International Inc., Eaton, USA | 500
[14–17] | NeuroScan Inc. | 500
[35] | BioSemi Active II amplifier | 2048
[18] | g.MOBIlab | 256
[19] | ESI NeuroScan | 500
[21] | Elekta-Neuromag | NA
[20] | g.MOBIlab | 256
[36] | BioSemi | 512
[22] | Electro-Cap International Inc., Eaton, USA | 128
[23] | Emotiv | 256
[24] | Emotiv neuro-headset | 128
[25] | Recorders and Medicare Systems | 256
[26] | Waveguard cap | 256
[27–29] | Neuromax 32 Medicaid | 256
[30] | Neuromax 32 Medicaid | 256
Table 5

Channels and Montages

References | Channels investigated | Montage
[11] | F3, F4, P3, P4 | Referential, Cz
[12] | Fp1, Fpz, Fp2, F7, F3, Fz, F4, F8, FT7, FC3, FC4, FT8, T7, C3, Cz, C4, T8, TP7, CP3, CP4, TP8, P7, P3, Pz, P4, P8, O1, O2 | Referential, ear
[13] | AF4, F4, F8, FC4; AF3, F3, F7, FC3; C3, C5, CP3, CP5; C4, C6, CP4, CP6; P3, P5, PO3, PO7; P4, P6, PO4, PO8 | NA
[14–17] | Fp1–Fp2, F7–F8, F3–F4, FT7–FT8, FC3–FC4, T3–T4, T5–T6, C3–C4, TP7–TP8, CP3–CP4, P3–P4, O1–O2 | Referential
[35] | Fp1–Fp2, F3–F4, F7–F8 | Referential, Cz
[18] | AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4 | Referential, Laplacian
[19] | Fp1, F7, F3, FT7, FC3, T7, P7, C3, TP7, CP3, P3, O1, AF3, F5, F7, FC5, FC1, C5, C1, CP5, CP1, P5, P1, PO7; Fp2, F8, F4, FT8, FC4, T8, P8, C4, TP8, CP4, P4, O2, AF4, F6, F8, FC6, FC2, C6, C2, CP6, CP2, P6, P2, PO8, PO6, PO4, CB2 | Referential
[21] | NA | Referential
[20] | AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4 | Referential, Laplacian
[36] | 128 electrodes | Referential
[22] | AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4 | Referential
[23] | Fp1 | NA
[24] | AF3, AF4, F3, F4, F7, F8, FC5, FC6, P7, P8, T7, T8, O1, O2 | NA
[25] | NA | Referential, Fz
[26] | Fp1, Fp2, F3, F4, F7, F8, Fz, C3, C4, T3, T4, Pz | Referential, Cz
[27–29] | FP1, F7, F3, FP2, F8, F4 | Referential
[30] | FP1, F7, F3, FP2, F8, F4 | Referential
Fig. 3

10–20 System of electrode placement


Preprocessing for artefact and feature extraction

Most of the articles reported manual, offline artefact removal; a few articles used filters and the Laplacian montage method [19]. A notch filter was also used to remove powerline noise before feature extraction. Most of the articles used the FFT, either DFT or STFT (56.25%); 12.5% of the researchers used the wavelet transform, and 6.25% applied DFA and time-domain analysis. The author of [18] applied time–frequency transforms (Zhao–Atlas–Marks, STFT, Hilbert–Huang spectrum) (refer to Table 6).
Table 6

Analysis on the basis preprocessing for artefact removal and feature extraction transform

References | Preprocessing approach | Feature extraction
[11] | Offline manual | FFT
[12] | Offline manual | Time domain
[13] | Offline manual | FFT
[14] | 0–100 Hz filter, 60 Hz notch filter and offline manual | STFT
[15] | 0–100 Hz filter, 60 Hz notch filter and offline manual | STFT
[16] | 0–100 Hz filter, 60 Hz notch filter and offline manual | STFT
[17] | 0–100 Hz filter, 60 Hz notch filter and offline manual | STFT
[35] | Offline manual | FFT
[18] | Offline manual | Time–frequency transforms (Zhao–Atlas–Marks, STFT, Hilbert–Huang spectrum)
[19] | Offline manual | STFT
[21] | Offline manual | Wavelet
[20] | Offline manual | TF
[36] | Filter, offline manual, PCA | FBCSP
[22] | Offline manual | DTF
[23] | Offline manual | Hybrid domain
[24] | Offline manual | Wavelet
[25] | Offline manual | DFA
[26] | EEGLAB tool, ICA | FFT
[27–29] | Instrumental Raag Bhairavi | FFT
[30] | Instrumental Raag Bhairavi | FFT

FFT fast Fourier transform, STFT short-time Fourier transform, TF time–frequency, DFA detrended fluctuation analysis, ICA independent component analysis, FBCSP filter-bank common spatial patterns
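The "offline manual" artefact removal reported in most rows of Table 6 is, in automated pipelines, often approximated by discarding epochs whose peak amplitude exceeds a threshold, since eye blinks and movement artefacts are far larger than the 10–100 μV EEG range quoted earlier. A minimal sketch under that assumption (NumPy; the 100 μV threshold is a common rule of thumb, not a value taken from the reviewed articles):

```python
import numpy as np

def clean_epochs(epochs, threshold_uv=100.0):
    """Keep only epochs whose absolute amplitude stays under the threshold.

    epochs: array of shape (n_epochs, n_samples), in microvolts.
    """
    keep = np.abs(epochs).max(axis=1) < threshold_uv
    return epochs[keep], keep

rng = np.random.default_rng(2)
epochs = 15.0 * rng.standard_normal((5, 256))  # plausible EEG amplitudes
epochs[1] += 500.0                             # simulate a blink-like artefact
clean, keep = clean_epochs(epochs)

assert clean.shape[0] == 4 and not keep[1]     # the artefact epoch is dropped
```

Manual inspection remains the reference standard; thresholding merely makes the rejection reproducible.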

Brainwave and location investigated and statistical test

Of the researchers, 31.25% investigated all brainwaves (δ, θ, α, β and γ) together; the remainder selected a few of them or independently studied a single band. Across the reviews, δ, θ, α, β and γ were all investigated. Almost all researchers investigated the frontal hemisphere only. The author of [20] investigated all regions of the brain and correlated the waves with memory processing. Twenty-five per cent of the reviews conducted statistical tests, namely ANOVA, the t test and the z test; most of the authors considered a significance level of 0.05. Seventy-five per cent of the reviews directly applied a machine learning algorithm (refer to Table 7).
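The paired t test used in some of these studies (e.g. to compare band power at homologous left and right electrodes across the same subjects) is simple to compute directly: t = mean(d) / (std(d)/√n) over the per-subject differences d, compared against the critical value at the 0.05 significance level. A minimal sketch with purely illustrative numbers, not data from any reviewed study:

```python
import numpy as np

def paired_t(x, y):
    """Paired t statistic for two repeated measurements on the same subjects."""
    d = np.asarray(x, float) - np.asarray(y, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1  # statistic and degrees of freedom

# Illustrative alpha power (uV^2) at F3 vs F4 for 8 subjects.
f3 = [11.2, 9.8, 12.5, 10.1, 13.0, 9.5, 11.8, 10.6]
f4 = [12.9, 11.0, 13.8, 11.5, 14.2, 10.4, 13.1, 11.9]
t, df = paired_t(f4, f3)

# Two-sided critical value for df = 7 at the 0.05 level is about 2.365.
assert df == 7 and t > 2.365  # a significant right > left difference here
```

In practice a library routine (e.g. a standard paired t test implementation) would also return the p-value; the point here is only the arithmetic behind the tests listed in Table 7.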
Table 7

Brainwave, location investigated and statistical test

References | No. of bands / Brainwaves / Location investigated / Brain model | Statistical test
[11] | 2 / α, θ / Frontal / Asymmetry | ANOVA
[12] | – / – / Frontal / Asymmetry | ANOVA
[13] | 4 / other waves and θ / Frontal / Asymmetry | ANOVA, paired t test
[14] | 1 / α / Entire / Asymmetry | NA
[15] | 5 / δ, θ, α, β, γ / Entire / Asymmetry | NA
[16] | 5 / δ, θ, α, β, γ / Entire / Asymmetry | NA
[17] | 5 / δ, θ, α, β, γ / Entire / Asymmetry | NA
[35] | 2 / θ, α / Frontal / Asymmetry | z test
[18] | 4 / θ, α, β, γ | NA
[19] | 5 / δ, θ, α, β, γ / Entire / Asymmetry | NA
[21] | 5 / δ, θ, α, β, γ / Entire / Asymmetry | z test
[20] | 5 / δ, θ, α, β, γ / Entire / Asymmetry | NA
[36] | 5 / δ, θ, α, β, γ / Entire | NA
[22] | 4 / θ, α, β, γ / Entire / Asymmetry | NA
[23] | NA | NA
[24] | 4 / θ, α, β, γ | NA
[25]3/\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\alpha$$\end{document}α ,\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\beta$$\end{document}β,\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\gamma$$\end{document}γNA
[26]5/\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\delta$$\end{document}δ,\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\theta ,\alpha$$\end{document}θ,α ,\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\beta$$\end{document}β,\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\gamma$$\end{document}γANOVA
[2729]1/\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\theta$$\end{document}θ/Frontalt test
[30]1/\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\theta$$\end{document}θ/Frontalt test

ANOVA—Analysis of Variance

Brainwave, location investigated and statistical test

Machine learning algorithms

In all, 72% of the reviewed studies employed supervised learning algorithms, namely k-NN, SVM, MLP, LDA, QDA and HMM, with subjects' self-reports serving as ground truth. Twenty-eight per cent used statistical tests, namely the t test, ANOVA and z test. Forty per cent used SVM, alone or alongside other classifiers, to classify emotions. Classification accuracy is the most commonly used metric. No study reported unsupervised machine learning algorithms (see Table 8).
Table 8

Machine learning algorithms and model evaluation attributes

Reference | Machine learning algorithm | Model evaluation attributes
[11] | NA | p value
[12] | NA | p value
[13] | NA | p value
[14] | MLP | CA
[15] | SVM | CA
[16] | SVM | CA
[17] | SVM, MLP | NA
[35] | NA | NA
[18] | SVM, QDA, k-NN | NA
[19] | k-NN, SVM | NA
[21] | NA | p value
[20] | k-NN, SVM | NA
[36] | NA | NA
[22] | SVM | NA
[23] | k-NN, SVM, MLP | CA
[24] | SVM, HMM | CA
[25] | NA | NA
[26] | NA | p value
[27, 28] | NA | p value
[29] | k-NN, LDA | CA
[30] | NA | p value

CA classification accuracy, MLP multi-layer perceptron, SVM support vector machine, k-NN k-nearest neighbour, LDA linear discriminant analysis, QDA quadratic discriminant analysis, HMM hidden Markov model


Discussion and recommendations

Participants

Most studies in the engineering domain, especially articles indexed on IEEE Xplore, consider very few subjects, approximately 11 on average. To test a hypothesis, a minimum of 30 subjects is required [39]. If scholars recruit subjects of both sexes, the numbers of each should be equal. Most authors required normal subjects without confirming their normalcy, and homogeneous populations were considered. Such work is a multidisciplinary study involving human factors and experimental psychology [40], yet most studies by engineering fraternities were conducted without clinical guidance. Handedness was either not considered or, where it was, the evaluation method was not reported.

Musical stimulus and dimension of emotion

The reviews use various genres of musical stimuli, with excerpts of different emotional character employed to evoke different emotions among the subjects. Most reviews employed familiar musical stimuli, although [26] empirically showed that unfamiliar excerpts are the most suitable for constructing an emotion identification system. Various sets of emotions were considered for classification; a larger number of target emotions makes recognition more difficult, and some emotions may overlap [41]. In most surveys, a one-dimensional emotion model was used. The FeelTrace instrument used to investigate arousal is not reliable [42]. No review reports automatic prediction of both valence and arousal in two dimensions for the same excerpt of musical stimuli. High-frequency brainwaves such as beta and gamma were used to correlate with the arousal of emotion [43], while low-frequency waves such as alpha and theta were used for its valence [11, 13]. Arousal and valence for the same excerpt of stimulus were plotted on the same graph, as shown in Fig. 4.
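As a minimal sketch of such a two-dimensional model, the following assumes zero-centred valence and arousal indices (e.g. arousal derived from beta/gamma power and valence from alpha/theta measures, as described above); the function, thresholds and quadrant labels are illustrative conventions, not taken from the reviewed studies.

```python
def quadrant(valence: float, arousal: float) -> str:
    """Map one excerpt's (valence, arousal) estimate to a quadrant of the
    2-D emotion model. Indices are assumed zero-centred; the labels are
    conventional quadrant names, not the paper's."""
    if valence >= 0:
        return "happy/excited" if arousal >= 0 else "calm/content"
    return "angry/afraid" if arousal >= 0 else "sad/bored"

# Hypothetical per-excerpt indices for two musical stimuli
print(quadrant(0.4, 0.7))    # high valence, high arousal
print(quadrant(-0.3, -0.2))  # low valence, low arousal
```

Plotting both coordinates per excerpt, rather than a single dimension, is what distinguishes the recommended 2D model from the one-dimensional schemes used in most surveys.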
Fig. 4

Recommended 2D model


Emotional processing in depression

Emotions are broadly classified as positive and negative for the sake of understanding their processing in the brain. Broadly, positive emotions are processed in the left anterior hemisphere (prefrontal cortex) of the brain and negative emotions in the right [44]. In depression, the hypothesis is that hypo-arousal of the left anterior hemisphere or hyper-arousal of the right leads to depressive symptoms [45]. EEG findings support this: in cases of depression the left anterior hemisphere is relatively inactive compared with the right [27], indicating that patients with depression process stimuli differently from people without depression.

EEG machine and montages

While selecting an EEG machine, the following features should be considered:

- Minimum 256 Hz sampling frequency
- CE and FDA approvals
- Compatibility with MS-Excel and LabVIEW
- Quick technical support
- DC operated

Montages are sensible and efficient arrangements of electrode pairs, called channels, that show EEG activity over the whole scalp, permit assessment of activity on the two sides of the brain (lateralisation) and aid in localising recorded activity to a specific brain region [46].

Bipolar montage: each waveform signifies the difference between two adjacent electrodes. This class of montage is designated longitudinal bipolar (LB) or transverse bipolar (TB). Longitudinal bipolar montages measure the activity between two electrodes placed longitudinally on the scalp, e.g. Fp1-F7, Fp1-F3, C3-P3, Fp2-F8, Fp2-F4 and F3-C3. Transverse bipolar montages measure activity between two electrodes placed crosswise, e.g. Fp1-Fp2, F7-F3, Fp2-F8, F3-Fz.

Referential montage: the difference between the signal from an individual electrode and that of a designated reference electrode is measured. The reference electrode has no standard position, but its position differs from that of the recording electrode. Mid-line positions are often used to avoid amplifying signals in one hemisphere relative to the other. Another commonly used reference is the ear (left ear for the left hemisphere, right ear for the right), e.g. Fp1-A1, Fp2-A2, F7-A1, F8-A2, Fp1-Cz, Fp2-Cz, F7-Cz, F8-Cz and so forth.

Laplacian montage: the difference between an electrode and a weighted average of the surrounding electrodes represents a channel.
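The montage arithmetic above is simple enough to sketch in code. This is a minimal illustration with invented sample values, not data from any study:

```python
# Hypothetical raw samples (microvolts) per electrode, recorded against a
# common system reference; three samples per channel for illustration.
raw = {
    "Fp1": [10.0, 12.0, 11.0],
    "F7":  [9.0, 10.5, 10.0],
    "A1":  [1.0, 1.5, 1.0],   # left ear reference electrode
}

def bipolar(a, b):
    """Bipolar channel: sample-wise difference of two adjacent electrodes."""
    return [x - y for x, y in zip(raw[a], raw[b])]

def referential(ch, ref):
    """Referential channel: electrode minus a reference (here the left ear)."""
    return [x - r for x, r in zip(raw[ch], raw[ref])]

fp1_f7 = bipolar("Fp1", "F7")      # longitudinal bipolar, left frontal chain
fp1_a1 = referential("Fp1", "A1")  # left-hemisphere electrode vs left ear
```

The same subtraction pattern extends to any of the channel pairs listed above; a Laplacian channel would subtract a weighted mean of neighbours instead of a single reference.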

Preprocessing for artefacts

EEG recording is exceedingly susceptible to various forms and sources of noise. The morphology and electrical characteristics of artefacts can cause significant difficulties in the analysis and interpretation of EEG data. Table 9 shows various types of artefacts; the morphology of external artefacts is easily distinguishable from actual EEG [47]. Recording for a long duration with many electrodes under an artefact-free recording protocol is the best strategy for preventing and minimising all types of artefacts [27].
Table 9

Various artefacts in EEG signal recording

Category | Artefact | Source (cause) | Frequency / Amplitude | Morphology | Prevention
Physiological | Cardiac | Heart | ≤ 1 Hz / 1–10 mV | Epilepsy-like | Proper montage selection; monitoring during recording; offline visual inspection; low-pass filter (LPF); data rejection
Physiological | EOG | Eye | 0.5–3 Hz / 100 mV | Tumour, delta wave | Artefact-free recording protocol; online monitoring; offline visual inspection; LPF; transforms (ICA, PCA, EOG subtraction)
Physiological | Muscle | Muscle | ≥ 100 Hz / low | Beta frequency | Artefact-free recording protocol; online monitoring; offline visual inspection; high-pass filter (HPF); data rejection
Physiological | Physical movement | Physical movement | Very low / very high | Different from actual EEG | Artefact-free recording protocol; online monitoring; offline visual inspection; data rejection
External | Transmission line | Transmission line | 50–60 Hz / low | Different from actual EEG | Notch filter; DC power supply
External | Phone | Mobile and landline phones | High / high | Different | Artefact-free recording protocol
External | Electrode | Electrode and sweating | Very low / high | NA | Artefact-free recording protocol; LPF
External | Impedance | Electrode with impedance > 5 kΩ | NA / approximately 100 μV | Different | NA

Artefact-free recording protocol:
- Educate participants about eye and physical movements
- Do not permit electronic gadgets in the EEG recording lab
- Record in an acoustically isolated room, with dimmed light, at ambient temperature
- Reject all slots of EEG signal containing muscle, ocular or movement artefacts
- Ask participants to wash their hair to remove oil from the scalp
- Use a proper montage
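The notch filter listed against transmission-line artefacts can be sketched with a standard second-order IIR notch biquad; the coefficients follow the common RBJ cookbook formulation, and the test signals are synthetic, for illustration only.

```python
import math

def notch_filter(x, fs, f0, q=5.0):
    """Second-order IIR notch (RBJ biquad) centred at f0 Hz, applied as a
    direct-form difference equation; suppresses 50/60 Hz mains interference."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1.0, -2 * math.cos(w0), 1.0
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for xn in x:
        yn = (b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1, y2, y1 = x1, xn, y1, yn
        y.append(yn)
    return y

fs = 256  # Hz, the minimum sampling rate recommended above
t = [i / fs for i in range(fs)]  # 1-second epoch
mains = [math.sin(2 * math.pi * 50 * ti) for ti in t]  # 50 Hz interference
theta = [math.sin(2 * math.pi * 5 * ti) for ti in t]   # 5 Hz theta-like wave

rms = lambda s: (sum(v * v for v in s) / len(s)) ** 0.5
```

Applied to the synthetic signals, the filter strongly attenuates the 50 Hz component while passing the 5 Hz theta-band component almost unchanged.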

Feature extraction

There are three methods of analysing EEG signals: time domain, frequency domain and time–frequency domain [9].

- Time domain: all real-world signals are presented in the time domain. This method is suitable for visualising the real-world signal and for estimating its voltage, PSD (power spectral density) and energy; it is mostly used for epilepsy analysis.
- Frequency domain: analysis concerns frequency rather than time. It gives the PSDs of the various EEG rhythms and is suitable for studying brainwaves over a stipulated time period.
- Time–frequency domain: these procedures examine a signal in both time and frequency simultaneously and are appropriate for event-related emotion recognition.

Brainwave and location

In the existing literature, the frontal region is the most explored, as it is associated with emotion processing, and a few researchers investigated a single exclusive wave correlating with the evoked emotion. As mentioned in Sect. 1, a musical stimulus creates many psychological changes in subjects; examining only the frontal region, or only a few waves, is not enough to create a model of evoked emotion. Various lobes and many waves, and the interrelationships between them, need to be explored.

Machine learning algorithm

SVM is a supervised machine learning algorithm that can be used for classification or regression problems and is well suited to classifying evoked emotions. SVM utilises the kernel trick to transform the data and then finds an optimal boundary between the possible outputs. Nonlinear kernels can capture much more complex relationships between data points without explicit, difficult feature transformations [48]. Its features include:

- High prediction speed
- Fast training speed
- High accuracy
- Interpretable results
- Good performance with small numbers of observations
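The kernel trick the paragraph describes can be illustrated without a full SVM solver. Below is a toy kernel perceptron, a simpler stand-in for SVM (it lacks the max-margin objective) that still finds a nonlinear boundary purely through kernel evaluations; the XOR-style data and the gamma value are invented for the demonstration.

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF kernel: an implicit mapping to an infinite-dimensional feature space."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

class KernelPerceptron:
    """Minimal kernelised classifier illustrating the kernel trick: a nonlinear
    boundary is learned without ever computing the feature transform explicitly.
    (A toy stand-in for SVM, which instead optimises a max-margin objective.)"""
    def __init__(self, kernel=rbf, epochs=20):
        self.kernel, self.epochs = kernel, epochs
    def fit(self, X, y):  # y in {-1, +1}
        self.X, self.y = X, y
        self.alpha = [0] * len(X)          # per-sample mistake counts
        for _ in range(self.epochs):
            for i, x in enumerate(X):
                if self.predict(x) != y[i]:
                    self.alpha[i] += 1
        return self
    def decision(self, x):
        return sum(a * yi * self.kernel(xi, x)
                   for a, yi, xi in zip(self.alpha, self.y, self.X))
    def predict(self, x):
        return 1 if self.decision(x) >= 0 else -1

# XOR-like toy data: not linearly separable, but separable with an RBF kernel
X = [(0, 0), (1, 1), (0, 1), (1, 0)]
y = [-1, -1, 1, 1]
clf = KernelPerceptron().fit(X, y)
```

Replacing `rbf` with a dot product would make the classifier linear and unable to fit this data, which is precisely the gap the kernel trick closes.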

Model performance metrics

Healthcare and engineering models have different obligations, so the assessment metrics should differ, and a model should not be judged by a single metric; the reviews mostly use classification accuracy to assess models. Model performance is represented in the form of the confusion matrix shown in Eq. (1). Assume the inadequate model of Eq. (3) has zero true-positive and false-positive counts; its classification accuracy by Eq. (2) is still 83.33%. Accuracy is therefore not a reliable metric for model assessment. Apart from classification accuracy, there are many metrics for model assessment, such as sensitivity, specificity, precision, NPV (negative predictive value), FDR (false discovery rate), F1 score, FPR (false-positive rate), FNR (false-negative rate), MCC (Matthews correlation coefficient), informedness (Youden index), markedness and ROC (receiver operating characteristic). Metrics such as recall, specificity, precision and accuracy are biased [49]. ROC graphs depict the trade-off between hit rates and false-alarm rates of classifiers and have been used for a long time [50, 51]. Because ROC decouples model performance from class skew and error costs, it is among the best measures of classification performance, and ROC graphs are useful for building models and evaluating their performance [52]. When the positive class is small, F1 and ROC give a precise assessment of models [53, 54].
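The accuracy pitfall described above can be reproduced numerically. The confusion-matrix counts below are chosen to match the text's degenerate case (TP = FP = 0, accuracy 83.33%); the specific values of FN and TN are illustrative.

```python
def metrics(tp, fp, fn, tn):
    """Confusion-matrix metrics, with divide-by-zero cases returned as 0.0."""
    total = tp + fp + fn + tn
    acc = (tp + tn) / total
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0   # sensitivity / hit rate
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, f1, mcc

# Degenerate model: never predicts positive, yet accuracy looks high
acc, f1, mcc = metrics(tp=0, fp=0, fn=10, tn=50)
print(acc, f1, mcc)  # accuracy 0.8333..., F1 0.0, MCC 0.0
```

F1 and MCC both collapse to zero for this model, exposing what the 83.33% accuracy figure hides.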

Suggested approach

As this is interdisciplinary, collaborative research, involving the medical fraternity (psychiatry or neurology background) and a music expert will satisfy Brouwer's recommendations I, II and VI [40]. Recording EEG in three consecutive sessions, pre-stimulus, during stimulus and post-stimulus, allows comparison against baseline changes, and post hoc selection of data satisfies Brouwer's recommendation III. The remaining recommendations IV and V are met by recording EEG under the artefact-removal protocol described in Sect. 4.4 and Table 9, and by analysing the data with proper statistical tests and machine learning algorithms (refer to Fig. 6 for the suggested approach). Comparing left and right hemispheric activity gives vivid results, and the resulting model is called the asymmetry model (refer to Fig. 5). Most reviews compared left-brain activity with right-brain activity and found that a mathematical relationship between the two for a stimulus is more significant.
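The left–right comparison behind the asymmetry model can be sketched as a band-power asymmetry index. The ln(right) − ln(left) formulation is a common convention rather than necessarily the one used in the reviewed studies, and the electrode power values are hypothetical.

```python
import math

# Hypothetical alpha band powers (uV^2) at homologous frontal electrodes
alpha_power = {"F3": 4.0, "F4": 5.0, "F7": 3.5, "F8": 3.5}
pairs = [("F3", "F4"), ("F7", "F8")]  # (left, right) electrode pairs

def asymmetry(left_pw, right_pw):
    """Asymmetry index ln(right) - ln(left). Because alpha power is inversely
    related to cortical activation, a positive value suggests relatively
    greater left-hemisphere activation."""
    return math.log(right_pw) - math.log(left_pw)

scores = {f"{l}/{r}": asymmetry(alpha_power[l], alpha_power[r])
          for l, r in pairs}
```

With these invented values, the F3/F4 pair yields a positive index (relative left activation) while the symmetric F7/F8 pair yields zero, which is the kind of left-versus-right relationship the asymmetry model quantifies.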
Fig. 6

Suggested approach

Fig. 5

Asymmetry model


Conclusion

We have summarised, analysed and discussed research articles from 2001–2018 matching the keywords music, EEG and emotion. We have outlined the different approaches considered for mental-state and emotion detection with musical stimuli, and drawn attention to various aspects of current research such as the emotion model, statistical tests, machine learning algorithms and model performance metrics. We have recommended best practices for scholars; this will provide inputs for new researchers in this area.