| Literature DB >> 36004855 |
Nada Ali Hakami1, Hanan Ahmed Hosni Mahmoud2.
Abstract
Consumer behavior variants are evolving through advanced packing models, which can make consumer behavior detection considerably more difficult. New techniques superior to customary models are needed to observe consumer behaviors efficiently, as machine learning models are no longer effective at identifying complex consumer behavior variants. Deep learning models offer a capable solution for detecting all consumer behavior variants. In this paper, we propose a new deep learning model that classifies consumer behavior variants using an ensemble architecture. The new model incorporates two pretrained learning algorithms in an optimized fashion and comprises four main phases: data gathering, deep neural modeling, model training, and deep learning model evaluation. The ensemble model is tested on the Facemg, BIG-D15, and TwitD databases. The experimental results show that the ensemble model can classify consumer behavior with high precision, outperforming recent models in the literature. The ensemble model achieved 98.78% accuracy on the Facemg database, exceeding most machine learning consumer behavior detection models by more than 8%.
Keywords: consumer behavior; deep learning; social media activities
Year: 2022 PMID: 36004855 PMCID: PMC9404982 DOI: 10.3390/bs12080284
Source DB: PubMed Journal: Behav Sci (Basel) ISSN: 2076-328X
Figure 1. The ensemble consumer behavior classification model.
The description of the Vgg19 network [33].
| Layer Type | Properties |
|---|---|
| Input Layer | 512 × 512 representations |
| Two Convolutional Layers | 64 × 3 × 3 |
| Maxpool | Max pooling |
| Four Convolutional Layers | 128 × 3 × 3 |
| Maxpool | Max pooling |
| Four Convolutional Layers | 256 × 3 × 3 |
| Maxpool | Max pooling |
| Four Convolutional Layers | 512 × 3 × 3 |
| Maxpool | Max pooling |
| Four Convolutional Layers | 256 × 3 × 3 |
| Maxpool | Max pooling |
| Three Fully Connected Layers | 4096 |
| Softmax | Softmax |
| Output Layer | Consumer class |
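The spatial resolution in the table halves at each max-pooling stage; a quick sanity check of the feature-map sizes (pure arithmetic, assuming stride-2 pooling as in the standard Vgg19):

```python
def pooled_sizes(input_size, num_pool_stages):
    """Feature-map side length after each stride-2 max-pooling
    stage; standard Vgg19 pooling halves it every time."""
    sizes = [input_size]
    for _ in range(num_pool_stages):
        sizes.append(sizes[-1] // 2)
    return sizes

# The table lists five Maxpool stages after a 512 x 512 input.
print(pooled_sizes(512, 5))  # [512, 256, 128, 64, 32, 16]
```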
The description of DenseNet201.
| Layer Type | Properties |
|---|---|
| Input Layer | 512 × 512 feature representation map |
| Two Convolutional Layers | 112 × 7 × 3 |
| Average pooling | Average |
| Dense block 56 × 56 | 6 × 3 × 3 |
| Average pooling | Average |
| Dense block 28 × 28 | 12 × 3 × 3 |
| Transition layer 28 × 28 | 1 × 3 × 3 + Maxpooling |
| Dense block 14 × 14 | 12 × 3 × 3 |
| Transition layer 14 × 14 | 1 × 2 × 2 + Maxpooling |
| Dense block 7 × 7 | 48 × 3 × 3 |
| Softmax | Softmax |
| Output Layer | Consumer class |
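Each dense block concatenates every layer's output with all preceding feature maps, so the channel count grows linearly with the number of layers. A minimal sketch of that bookkeeping (the 64 input channels and growth rate of 32 are the standard DenseNet defaults, not figures from the table):

```python
def dense_block_channels(in_channels, num_layers, growth_rate):
    """Channels leaving a dense block: each of its layers appends
    growth_rate new feature maps via concatenation."""
    return in_channels + num_layers * growth_rate

# Standard DenseNet defaults: 64 input channels, growth rate 32.
# The first (6-layer) dense block then emits:
print(dense_block_channels(64, 6, 32))  # 256
```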
The description of the ensemble model.
| Block 1 | Block 2 |
|---|---|
| Input Layer | Input Layer |
| Two Convolutional Layers | Two Convolutional Layers |
| Maxpool | Average pooling |
| Four Convolutional Layers | Dense block 56 × 56 |
| Maxpool | Average pooling |
| Four Convolutional Layers | Dense block 28 × 28 |
| Maxpool | Transition layer 28 × 28 |
| Four Convolutional Layers | Dense block 14 × 14 |
| Maxpool | Transition layer 14 × 14 |
| Four Convolutional Layers | Dense block 7 × 7 |
| Maxpool | |
| Three Fully Connected Layers | |
| Softmax | Softmax |
| Feature Map Merger | |
| Fully Connected Layers | |
| Softmax classifier | |
| Output layer | Consumer Variant |
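The Feature Map Merger step can be read as concatenating the two backbones' feature vectors and feeding the result to a fully connected softmax classifier. A minimal NumPy sketch under that assumption (all array shapes here are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def merge_and_classify(feat_vgg, feat_dense, weights, bias):
    """Concatenate the two backbones' features (the Feature Map
    Merger), then apply one fully connected layer and softmax."""
    merged = np.concatenate([feat_vgg, feat_dense])
    return softmax(weights @ merged + bias)

rng = np.random.default_rng(0)
feat_vgg, feat_dense = rng.normal(size=8), rng.normal(size=8)
w, b = rng.normal(size=(3, 16)), np.zeros(3)  # 3 illustrative classes
probs = merge_and_classify(feat_vgg, feat_dense, w, b)
print(probs.sum())  # probabilities sum to 1
```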
Figure 2. The performance of Vgg19, DenseNet201, and the ensemble models for the Facemg database.
Figure 3. The performance of Vgg19, DenseNet201, and the ensemble models for the BIG-D15 database.
Figure 4. The performance of Vgg19, DenseNet201, and the ensemble models for the TwitD database.
Parameters of the Facemg, BIG-D15, and TwitD databases.
| Database Parameters | Facemg | BIG-D15 | TwitD |
|---|---|---|---|
| Batch size | 64 | 32 | 32 |
| Dropout rate | 0.4 | 0.4 | 0.4 |
| Epoch count | 80 | 70 | 80 |
| Learning rate | 0.006 | 0.006 | 0.006 |
| Loss function | Cross entropy | Cross entropy | Cross entropy |
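The table's hyperparameters and the cross-entropy loss can be captured in a short sketch (the config dictionary simply restates the Facemg column; the loss function is the standard definition, not code from the paper):

```python
import numpy as np

# Hyperparameters for the Facemg database, restated from the table.
FACEMG_CONFIG = {
    "batch_size": 64,
    "dropout_rate": 0.4,
    "epochs": 80,
    "learning_rate": 0.006,
    "loss": "cross_entropy",
}

def cross_entropy(probs, true_class, eps=1e-12):
    """Standard cross-entropy for one sample: the negative log of
    the probability the model assigns to the true class."""
    return -np.log(probs[true_class] + eps)

# A confident, correct prediction yields a loss near zero.
print(cross_entropy(np.array([0.01, 0.98, 0.01]), 1))
```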
Statistics metrics for the compared models for the Facemg dataset.
| Metric | Vgg19 | DenseNet201 | Our Ensemble Model (Features Merging) |
|---|---|---|---|
| TP + TN | 0.91 | 0.915 | 0.98 |
| FP + FN | 0.09 | 0.085 | 0.02 |
| Kappa coefficient (inter-rater reliability) | 0.309 | 0.411 | 0.514 |
| Mean square error | 0.412 | 0.413 | 0.211 |
Statistics metrics for the compared models for the BIG-D15 dataset.
| Metric | Vgg19 | DenseNet201 | Our Ensemble Model (Features Merging) |
|---|---|---|---|
| TP + TN | 0.91 | 0.915 | 0.98 |
| FP + FN | 0.09 | 0.085 | 0.02 |
| Kappa coefficient (inter-rater reliability) | 0.309 | 0.411 | 0.514 |
| Mean square error | 0.412 | 0.413 | 0.211 |
Statistics metrics for the compared models for the TwitD dataset.
| Metric | Vgg19 | DenseNet201 | Our Ensemble Model (Features Merging) |
|---|---|---|---|
| TP + TN | 0.91 | 0.915 | 0.98 |
| FP + FN | 0.09 | 0.085 | 0.02 |
| Kappa coefficient (inter-rater reliability) | 0.309 | 0.411 | 0.514 |
| Mean square error | 0.412 | 0.413 | 0.211 |
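The Kappa coefficient in the tables compares observed agreement with the agreement expected by chance, kappa = (p_o − p_e) / (1 − p_e). A minimal sketch computing it from a confusion matrix (the counts below are made up for illustration and do not reproduce the tables' values):

```python
def cohen_kappa(cm):
    """Cohen's kappa from a square confusion matrix
    (rows = actual class, columns = predicted class)."""
    n = sum(sum(row) for row in cm)
    p_o = sum(cm[i][i] for i in range(len(cm))) / n  # observed agreement
    p_e = sum(                                       # chance agreement
        sum(cm[i]) * sum(row[i] for row in cm)
        for i in range(len(cm))
    ) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2-class confusion matrix: accuracy 0.7, kappa 0.4.
cm = [[20, 5], [10, 15]]
print(round(cohen_kappa(cm), 3))  # 0.4
```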