| Literature DB >> 31897864 |
Xiaofeng Yang1,2, Zhe Wang1, Hongxia Deng1, Haifang Li3, Rong Yao1, Peng Gao1, Saddam Naji Abdu Nasher1.
Abstract
Images are powerful tools for conveying human emotions, with different images stimulating diverse emotions. Numerous factors affect the emotions an image evokes, and much prior research has focused on low-level features such as color and texture. Inspired by the success of deep convolutional neural networks (CNNs) in visual recognition, we apply a data augmentation method for small data sets to obtain a sufficiently large training set. In this paper, we use low-level features (color and texture) of the image to assist the extraction of high-level features (image object category features and deep emotion features), which are learned automatically by deep networks, to obtain more effective image sentiment features. We then use a stacked sparse auto-encoding network to recognize the emotions evoked by the image. Finally, high-level semantic descriptive phrases covering image emotions and objects are output. Our experiments are carried out on the IAPS and GAPED data sets of the dimensional space and the ArtPhoto data set of the discrete space. Compared with traditional manual feature extraction methods and other existing models, our method achieves superior test performance.
Keywords: Deep learning; Image semantic; Stack sparse auto-encoding; Transfer-learning
Year: 2020 PMID: 31897864 DOI: 10.1007/s10916-019-1498-8
Source DB: PubMed Journal: J Med Syst ISSN: 0148-5598 Impact factor: 4.460
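The stacked sparse auto-encoding network named in the abstract is built from sparse auto-encoder layers trained with a KL-divergence sparsity penalty. The paper's actual architecture and hyperparameters are not given in this record, so the following is only a minimal single-layer sketch under assumed sizes; the random matrix `X` stands in for fused low-level (color/texture) plus deep CNN feature vectors.

```python
import numpy as np

# Illustrative stand-in for fused feature vectors (NOT the paper's data):
# 64 samples, each a 20-dim concatenation of low-level and deep features.
rng = np.random.default_rng(0)
X = rng.random((64, 20))

n_in, n_hid = X.shape[1], 8
W1 = rng.normal(0.0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)
rho, beta, lr = 0.05, 0.1, 0.5   # target sparsity, penalty weight, step size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(300):
    H = sigmoid(X @ W1 + b1)      # sparse hidden code
    Xhat = sigmoid(H @ W2 + b2)   # reconstruction of the input
    rho_hat = np.clip(H.mean(axis=0), 1e-6, 1 - 1e-6)
    # Loss = mean squared reconstruction error + KL sparsity penalty.
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    losses.append(np.mean((Xhat - X) ** 2) + beta * kl)

    # Backpropagation, including the sparsity-penalty gradient on H.
    dXhat = 2.0 * (Xhat - X) / X.size
    dZ2 = dXhat * Xhat * (1.0 - Xhat)
    dH = dZ2 @ W2.T + beta * (-rho / rho_hat
                              + (1 - rho) / (1 - rho_hat)) / X.shape[0]
    dZ1 = dH * H * (1.0 - H)
    W2 -= lr * (H.T @ dZ2); b2 -= lr * dZ2.sum(axis=0)
    W1 -= lr * (X.T @ dZ1); b1 -= lr * dZ1.sum(axis=0)
```

To stack layers, the trained hidden code `H` of one layer becomes the input `X` of the next; a classifier on the top-level code then predicts the evoked emotion category or dimension.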