Muhammad Haseeb Arshad, Muhammad Bilal, Abdullah Gani.
Abstract
Nowadays, Human Activity Recognition (HAR) is widely used in a variety of domains, and vision- and sensor-based data enable cutting-edge technologies to detect, recognize, and monitor human activities. Several reviews and surveys on HAR have already been published, but the constantly growing literature means the status of HAR research needs periodic updating. Hence, this review aims to provide insights into the current state of the literature on HAR published since 2018. The ninety-five articles reviewed in this study are classified to highlight application areas, data sources, techniques, and open research challenges in HAR. The majority of existing research appears to have concentrated on daily living activities, followed by user activities based on individual and group-based activities. However, there is little literature on detecting real-time activities such as suspicious activity, surveillance, and healthcare. A major portion of existing studies has used Closed-Circuit Television (CCTV) videos and mobile sensor data. Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and Support Vector Machines (SVM) are the most prominent techniques utilized for HAR in the literature reviewed. Lastly, the limitations and open challenges that need to be addressed are discussed.
Keywords: CCTV; computer vision; human activity recognition; machine learning; sensors
Year: 2022 PMID: 36080922 PMCID: PMC9460866 DOI: 10.3390/s22176463
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Steps performed for the selection of articles.
Figure 2. Taxonomy of HAR.
Figure 3. Frequency of application areas targeted by the existing literature on HAR.
Summary of literature on static activities.
| Ref. | Year | Description |
|---|---|---|
| [ | 2018 | The proposed data integration framework has two components: data collection from various sensors and a codebook-based feature learning approach to encode the data into an effective feature vector. A non-linear SVM is used as the main classification method in the proposed framework. |
| [ | 2018 | Features were extracted from raw data collected with a smartphone sensor, processed with KPCA and LDA, and trained with DBN for activity recognition. |
| [ | 2018 | PCA is used to reduce dimensionality and extract significant features, which are then compared using a machine learning classifier to raw data and PCA-based features for HAR. |
| [ | 2019 | Introduced SSAL, based on the ST approach to automate and reduce annotation efforts for HAR. |
| [ | 2019 | Proposed a method based on DLSTM and DNN for accurate HAR with smartphone sensors. |
| [ | 2019 | Proposed a three-stage framework for recognizing and forecasting activities with LSTM: post-activity recognition, in-progress recognition, and in-advance prediction. |
| [ | 2019 | Trained LR, NB, SVM, DT, RF and ANN (vanilla feed-forward) on data collected with low-resolution (4 × 16) thermal sensors. |
| [ | 2019 | Proposed new time and frequency domain features to improve algorithms’ classification accuracy and compare four algorithms in terms of accuracy: ANN, KNN, QSVM, EBT. |
| [ | 2020 | Proposed a pattern-based RCAM for extracting and preserving diverse patterns of activity and solving problem of imbalanced dataset. |
| [ | 2020 | Proposed a method for predicting activities that used a 2-axis accelerometer and MLP, J48, and LR classifiers. |
| [ | 2020 | Argued that images, rather than accelerometers, should be used as HAR sensors because they contain more information, and claimed that a CNN operating on images will not overburden modern devices. |
| [ | 2020 | Proposed a hybrid feature selection process in which SFFS extracts features and SVM classifies activities. |
| [ | 2021 | Proposed a one-dimensional CNN framework with three convolutional heads to improve representation ability and automatic feature selection. |
| [ | 2021 | Using CNN and LSTM, a framework CNN-LSTM Model was proposed for multiclass wearable user identification while performing various activities. |
| [ | 2021 | Proposed a CPC framework based on CNN and LSTM for monitoring construction equipment activity. |
| [ | 2022 | Proposed a hybrid model that combines one-dimensional CNN with bidirectional LSTM (1D-CNN-BiLSTM) to recognize individual actions using wearable sensors. |
| [ | 2022 | Employed a GRU network that captures salient moments through temporal attention, minimizing model parameters for HAR in the absence of i.i.d. data. |
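Several of the sensor-based studies summarized above extract hand-crafted time- and frequency-domain features from sliding windows of accelerometer data before classification. A minimal NumPy sketch of this common preprocessing step (the window size, step, and particular features are illustrative assumptions, not taken from any single cited paper):

```python
import numpy as np

def window_features(signal, window=128, step=64):
    """Extract simple time- and frequency-domain features from a 1-D
    accelerometer signal using overlapping sliding windows."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        spectrum = np.abs(np.fft.rfft(w))
        feats.append([
            w.mean(),                        # time domain: mean
            w.std(),                         # time domain: standard deviation
            np.abs(np.diff(w)).mean(),       # mean absolute first difference
            spectrum[1:].max(),              # dominant non-DC frequency magnitude
            (spectrum ** 2).sum() / window,  # spectral energy
        ])
    return np.array(feats)

# Synthetic example: periodic "walking" vs. a near-static signal
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 200)
walking = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(200)
static = 0.05 * rng.standard_normal(200)
fw = window_features(walking)   # shape: (windows, features) = (2, 5)
fs = window_features(static)
```

Feature vectors of this kind would then be fed to a classifier such as an SVM, random forest, or ANN, mirroring the pipelines summarized in the table above.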
Summary of literature on dynamic activities.
| Ref. | Year | Description |
|---|---|---|
| [ | 2018 | Proposed Coarse-to-Fine framework that uses Microsoft Kinect to capture activity sequences in 3D skeleton form, groups them into two forms, and then classifies them using the BLSTM-NN classifier. |
| [ | 2018 | A deep NN was applied to multichannel time series collected from various body-worn sensors for HAR. The deep CNN-IMU architecture finds basic and complex attributes of human movements and categorizes them into actions. |
| [ | 2018 | Proposed online activity recognition with three temporal sub-windows for predicting activity start time based on an activity’s end label and comparing results of NA, SVM, and C4.5 with different changes. |
| [ | 2019 | The proposed FR-DCNN for HAR improves the effectiveness and extends the information collected from the IMU sensor by building a DCNN classifier with a signal processing algorithm and a data compression module. |
| [ | 2019 | Proposed an approach for detecting real-time human activities employing three methods: YOLO object detection, the Kalman Filter, and homography. |
| [ | 2019 | The I3D network included a tracking module, and GNNs are used for actor and object interactions. |
| [ | 2019 | Proposed an ensemble ELM algorithm for classifying daily living activities, which used Gaussian random projection to initialize base input weights. |
| [ | 2019 | Proposed AttnSense, which combines CNN and GRU, for multimodal HAR to capture signal dependencies in the spatial and temporal domains. |
| [ | 2019 | Proposed a three-dimensional deep learning technique that detects multiple human activities using a new data representation. |
| [ | 2019 | The proposed approach has several modules: the first stage generates dense spatiotemporal data using Mask R-CNN, the second module uses a deep 3D-CNN for classification and localization, and classification is then performed with a TRI-3D network. |
| [ | 2019 | Proposed a strategy based on three classifiers (TF-IDF, TF-IDF + Sigmoid, TF-IDF + Tanh) for utilizing statistical data about individual and aggregate activities. |
| [ | 2019 | Demonstrated AdaFrame, which included LSTM to select relevant frames for fast video recognition and time savings. |
| [ | 2020 | In the proposed adaptive time-window-based algorithm, MGD was used to detect signals, define time window size, and then adjust window size to detect activity. |
| [ | 2020 | Implemented CPAM to detect real-time activities in a range calculated by SEP algorithm and reduce energy consumption. |
| [ | 2020 | Proposed a framework for future frame generation as well as an online temporal action localization solution. The framework contains four deep neural networks: PR for background reduction, AR for activity type prediction, F2G for future frame generation, and an LSTM to recognize actions based on the input received from AR and PR. |
| [ | 2020 | A HAR framework is proposed that is based on features, quadratic discriminant analysis, and features processed by the maximum entropy Markov model. |
| [ | 2021 | The proposed method improved GBO performance by selecting features and classifying them with SVM using an FS method called GBOGWO. |
| [ | 2021 | The AGRR network has been proposed to solve HOI problems with a large combination space and non-interactive pair domains. |
| [ | 2021 | The ACAR-Net model is proposed to support actor interaction-based indirect relation reasoning. |
| [ | 2021 | In the proposed ASRF framework, an ASB is used to classify video frames, a BRB is used to regress action boundaries, and a loss function is used to smooth action probabilities. |
| [ | 2022 | The proposed system uses an SVM to identify daily living activities by adjusting the sliding-window size, reducing features, and embedding inertial and pressure sensors. |
| [ | 2022 | Tried to determine whether the currently performed action is a continuation of a previously performed activity or is novel, in three steps: sensor correlation (SC), temporal correlation (TC), and determination of the activity activating the sensor. |
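Segmenting a continuous sensor stream into activity episodes is a recurring first step in the dynamic-activity pipelines above (e.g., the adaptive time-window algorithm). A minimal threshold-based sketch (the magnitude measure, threshold, and minimum length are illustrative assumptions, far simpler than the MGD-based adaptive windowing in the cited work):

```python
import numpy as np

def segment_activity(magnitude, threshold, min_len=5):
    """Return (start, end) index pairs where the signal magnitude
    stays above a threshold for at least min_len samples."""
    active = magnitude > threshold
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                      # activity episode begins
        elif not a and start is not None:
            if i - start >= min_len:
                segments.append((start, i))  # episode ends, keep if long enough
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments

# Synthetic stream: rest, a burst of movement, rest again
mag = np.concatenate([np.full(20, 0.1), np.full(15, 1.5), np.full(20, 0.1)])
print(segment_activity(mag, threshold=0.5))  # → [(20, 35)]
```

Each detected segment would then be handed to a classifier, while adaptive variants additionally adjust the window size to match the detected activity.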
Summary of literature on surveillance.
| Ref. | Year | Description |
|---|---|---|
| [ | 2019 | Extracted static sparse features from each frame via a feature pyramid and sparse dynamic features from successive frames to improve feature-extraction speed, then combined them in AdaBoost classification. |
| [ | 2019 | CNN extracted features from videos after background reduction, fed these features to DDBN, and compared CNN extracted features with labelled video features to classify suspicious activities. |
| [ | 2020 | Identified object movement, performed video synchronization, and ensured proper detail alignment in CCTV videos for traffic and violence monitoring with Lucas–Kanade model. |
| [ | 2021 | Proposed a DPCA-SM framework for detecting suspicious activity in a shopping mall from extracted frames trained with VGG, along with applications for tracing people’s routes and identifying measures in a store setting. |
| [ | 2021 | Proposed an effective approach to detect and recognize multiple human actions using TDMap HOG, comparing the existing HOG and the generated HOG using a CNN model. |
| [ | 2021 | Proposed an efficient method for automatically detecting abnormal behavior in both indoor and outdoor academic settings and alerting the appropriate authorities. The proposed system processes video with VGG, and an LSTM network differentiates normal and abnormal frames. |
| [ | 2021 | To detect normal and unusual activity in a surveillance system, an SSD algorithm with bounding boxes, explicitly trained with a transfer-learning approach, is combined with DS-GRU. |
| [ | 2022 | For dealing with untrimmed multi-scale multi-instance video streams with a wide field of view, a real-time activity detection system based on Argus++ is proposed. Argus++ combined Mask R-CNN and ResNet101. |
Summary of literature on suspicious activities.
| Ref. | Year | Description |
|---|---|---|
| [ | 2019 | PCANet and CNN were used to overcome issues with manual detection of anomalies in videos and false alarms. In video frames, abnormal events are determined with PCA and SVM. |
| [ | 2020 | CCTV footage is fed into a CNN model, which detects shoplifting, robbery, or a break-in in a retail store and immediately alerts the shopkeeper. |
| [ | 2020 | Pretrained CNN model VGG16 was used to obtain features from videos, then a feature classifier LSTM was used to detect normal and abnormal behavior in an academic setting and alert the appropriate authorities. |
| [ | 2021 | The proposed system offered a framework for analyzing video statistics obtained from a CCTV digital camera installed in a specific location. |
| [ | 2021 | Three-dimensional CNN ResNet with spatio-temporal features was used to recognize and detect smoking events. |
| [ | 2021 | A pretrained model was used to estimate human poses, and deep CNN was built to detect anomalies in examination halls. |
| [ | 2021 | The GMM was combined with the UAM to distinguish between normal and abnormal activities such as hitting, slapping, punching, and so on. |
| [ | 2021 | Deep learning was used to detect suspicious activities automatically, saving time and effort spent manually monitoring videos. |
| [ | 2022 | A two-stream neural network using AIoT was proposed to recognize anomalies in Big Video Data. BD-LSTM classified anomaly classes of data stored on the cloud, and the researchers explored different modeling choices to obtain better results. |
| [ | 2022 | Created a larger benchmark dataset than was previously available and proposed an algorithm to address the problems of continuous learning and few-shot learning. YOLOv4 detects objects in frames, and a kNN-based RNN model avoids catastrophic forgetting. |
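The suspicious-activity systems above typically separate frames showing unusual motion from normal ones before, or alongside, a heavier learned model. A crude frame-differencing sketch of that idea (the score and threshold are illustrative assumptions; the reviewed systems use learned features such as VGG16 + LSTM instead):

```python
import numpy as np

def motion_scores(frames):
    """Mean absolute pixel difference between consecutive frames:
    a cheap proxy for the amount of motion in each transition."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def flag_abnormal(frames, threshold):
    """Indices of frame transitions whose motion score exceeds threshold."""
    return [i for i, s in enumerate(motion_scores(frames)) if s > threshold]

# Synthetic 8x8 grayscale "video": static scene, then a sudden change
rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, (8, 8))
video = [scene] * 5 + [scene + 5.0] * 3
print(flag_abnormal(video, threshold=1.0))  # → [4]
```

Flagged transitions would then be passed to a classifier or trigger an alert, as in the shopkeeper- and authority-alerting systems summarized above.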
Summary of literature on healthcare.
| Ref. | Year | Description |
|---|---|---|
| [ | 2019 | Deep learning was used to create a multi-sensory framework that combined SRU and GRU. SRU is concerned with multimodal input data, whereas GRU is concerned with accuracy issues. |
| [ | 2020 | SDRs were used to create a dataset of radio-wave signals, and an RF machine learning model was developed to provide near-real-time classification between sitting and standing. |
| [ | 2019 | Gaussian-kernel-based PCA extracts significant features from sensor data, and activities are recognized using a deep CNN. |
| [ | 2022 | Stacked predictions from “CNN-net”, “CNNLSTM-net”, “ConvLSTM-net”, and “StackedLSTM-net” models based on one-dimensional CNN and LSTM layers, then trained a blender on them for the final prediction. |
| [ | 2022 | Used a model based on handcrafted features and RF on data collected with two smartphones. |
Summary of literature on individual user-based HAR.
| Ref. | Year | Description |
|---|---|---|
| [ | 2018 | SVM and the N-cut algorithm were used to label video segments, and the CRF was used to detect anomalous events. |
| [ | 2018 | A deep convolutional framework was used to develop a unified framework for detecting abnormal behavior with LSTM in RGB images. YOLO was used to determine the actions of individuals in video frames, and VGG-16 then classifies them. |
| [ | 2018 | Proposed a HOME FAST spatiotemporal feature extraction approach based on optical flow information to detect anomalies. The proposed approach obtained low-level features with the KLT feature extractor and supplied them to a DCNN for categorization. |
| [ | 2019 | Proposed an algorithm used adaptive transformation to conceal the affected area and the pyramid L-K optical flow method to extract abnormal behavior from videos. |
| [ | 2019 | By combining extracted hidden patterns of text with available metadata, a deep learning architecture RNN was proposed to detect abusive behavioral norms. |
| [ | 2019 | SVM was used to determine abnormal behavior using extracted feature vectors and vector trajectories from the computed optical flow field of determined joint points with LK method. |
| [ | 2019 | The proposed LSTM-FCN detects aggressive driving sessions as time series classification to solve the problem of driver behavior. |
| [ | 2019 | A method that combined CNN with HOF and HOG was proposed to detect anomalies in surveillance video frames. |
| [ | 2020 | A deep learning model was used to detect abnormal behavior in videos automatically, and experiments with 2D CNN-LSTM, 3D CNN, and I3D models were conducted. |
| [ | 2020 | Proposed performing instance segmentation on video streams and predicting actions with a DBN based on RBMs, aiming to present an algorithm that can depict anomalies in a real-time video feed. |
| [ | 2021 | Proposed a method for detecting abnormal behavior that is both accurate and effective: a VGG16 network is transferred to a fully convolutional network to extract features, and an LSTM is then used for prediction. |
| [ | 2021 | Proposed a method in the ABAW competition that used a pre-trained JAA model and AU local features. |
| [ | 2021 | Proposed a strategy for recognizing and detecting anomalies in human actions and extracting effective features using a CPRTSA based Deep Maxout Network. |
| [ | 2021 | The algorithms were classified into two types: the first employs data mining and knowledge discovery, whereas the second employs a deep CNN to detect collective abnormal behavior. The researchers used a variation of DBSCAN, kNN feature selection, and ensemble learning for behavior identification. |
| [ | 2021 | Residual LSTM was introduced to learn static and temporal person-level residual features, and GLIL was proposed to model person-level and group-level activity for group activity recognition. |
Summary of literature on group-based HAR.
| Ref. | Year | Description |
|---|---|---|
| [ | 2018 | A two-stream convolutional network with density heat-maps and optical flow information was proposed to classify abnormal crowd behavior and generate a large-scale video dataset. An LSTM was used to handle long-term dependencies. |
| [ | 2018 | For abnormal event detection in surveillance videos, an algorithm is proposed based on image descriptors derived from the HMM, using HOFO as the feature extractor, together with a classification method. |
| [ | 2018 | The proposed descriptor is based on spatiotemporal 3D patches and can be used in conjunction with sHOT to detect abnormal behavior. Then one class SVM classifies behaviors. |
| [ | 2018 | HOG-LBP and HOF were calculated from extracted candidate regions and passed to two distinct one-class SVM models to detect abnormal events after redundant information was removed. |
| [ | 2018 | Particle velocities are extracted using the optical flow method, and motion foreground is extracted using the crowded motion segmentation method. The distance to the camera is calculated using linear interpolation, and crowd behavior is analysed using the contrast of three descriptors. |
| [ | 2019 | Reviewed crowd analysis fundamentals and three main approaches: crowd video analysis, crowd spatiotemporal analysis, and social media analysis. |
| [ | 2019 | Presented a deep CRM component that learns to generate activity maps, a multi-stage refinement component that reduces incorrect activity map predictions, and an aggregation component that recognizes group activities based on refined data. |
| [ | 2019 | Presented a contextual relationship-based learning model that uses a deep NN to recognize a group of people’s activities in a video sequence. Action-poses are classified with pre-trained CNN, then passed to RNN and GRU. |
| [ | 2019 | A Gaussian average model was proposed to overcome the disadvantage of slow convergence speed, and a predictive neural network was used to detect abnormal behavior by determining the difference between predictive and real frames. |
| [ | 2019 | Extracted optical flow motion features, generated a trajectory oscillation pattern, and proposed a method for detecting crowd congestion. |
| [ | 2019 | A method for detecting crowd panic states based on entropy and enthalpy was proposed, with enthalpy describing the system’s state and entropy measuring the degree of disorder in the system. The crowded movement area is represented as a texture with LIC. |
| [ | 2019 | The CrowdVAS-Net framework is proposed, which extracts features from videos using DCNN and trains these features with a RF classifier to differentiate between normal and abnormal behavior. |
| [ | 2020 | The proposed MPF framework is built on the L1 and L2 norms, with a structure-context descriptor for self-weighted structural properties, group detection, and multiview feature-point clustering. |
| [ | 2020 | MII is generated from frames based on optical flow and angle difference and used to train CNN, provide visual appearance and distinguish between normal and unusual crowd motion with one class SVM. |
| [ | 2021 | The proposed method employs a two-stream convolutional architecture to obtain the motion field from video using dense optical flow and to solve the problem of capturing information from still frames. |
| [ | 2021 | Extracted dynamic features based on optical flow and used an optical flow framework with U-Net and Flownet based on GAN and transfer learning to distinguish between normal and abnormal crowd behavior. |
| [ | 2022 | CCGLSTM with STCC and GCC is proposed to recognize group activity and build a spatial and temporal gate to control memory and capture relevant motion for group activity recognition. |
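Many of the crowd-analysis studies above extract particle velocities with Lucas-Kanade optical flow. A minimal NumPy sketch of its core least-squares step (assuming a single global translation, and omitting the pyramidal, per-window refinements used in the cited works):

```python
import numpy as np

def lucas_kanade(frame1, frame2):
    """Estimate one global (vx, vy) translation between two grayscale
    frames by least squares on the optical-flow constraint
    Ix*vx + Iy*vy + It = 0 (the core of the Lucas-Kanade method)."""
    I1 = np.asarray(frame1, dtype=float)
    I2 = np.asarray(frame2, dtype=float)
    Ix = np.gradient(I1, axis=1)       # horizontal spatial gradient
    Iy = np.gradient(I1, axis=0)       # vertical spatial gradient
    It = I2 - I1                       # temporal gradient
    sl = (slice(2, -2), slice(2, -2))  # drop borders (gradient/wrap artifacts)
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vx, vy)

# Synthetic example: a horizontal intensity ramp shifted one pixel right
I1 = np.tile(np.linspace(0, 1, 32), (32, 1))
I2 = np.roll(I1, 1, axis=1)
vx, vy = lucas_kanade(I1, I2)   # vx ≈ 1.0, vy ≈ 0.0
```

In the crowd pipelines above, dense per-pixel or per-window versions of this estimate supply the motion features that classifiers then use to separate normal from abnormal behavior.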
Figure 4. Frequency of data sources used by the existing literature on HAR.
Vision-based (CCTV, Kinect Device, YouTube, Smart Phone, Camera Images, Social Media Images) and sensor-based (Mobile Sensor, Wearable Device Sensor) data sources used in the literature.
| Ref. | CCTV | Kinect Device | YouTube | Smart Phone | Camera Images | Social Media Images | Mobile Sensor | Wearable Device Sensor |
|---|---|---|---|---|---|---|---|---|
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | ✔ | ||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | ✔ | ||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | ✔ | ||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | ✔ | ||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | ✔ | ||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | |||||||
| [ | ✔ | ✔ | ||||||
| [ | ✔ | |||||||
Figure 5. Frequency of techniques/algorithms used in the existing literature on HAR.
Techniques/algorithms used in the literature.
| Ref. | SVM | KNN | RF | DT | CNN | RNN | LSTM | HMM | PCA | DBN | K-Means | VGG | Lucas-Kanade | Gaussian Model | I3D | LR | GRU | HOG | Others |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ✔ | ✔ | ||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ✔ | ✔ | ✔ | |||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | ✔ | ||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ✔ | ✔ | ✔ | |||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | ✔ | ||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | ✔ | ✔ | |||||||||||||||
| [ | ✔ | ✔ | ✔ | ||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | ✔ | ||||||||||||||||
| [ | ✔ | ✔ | ✔ | ||||||||||||||||
| [ | ✔ | ✔ | ✔ | ✔ | ✔ | ||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | |||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ✔ | ✔ | ||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | ✔ | ✔ | |||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | ✔ | ||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ✔ | ✔ | ||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | |||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | ✔ | ✔ | |||||||||||||||
| [ | ✔ | ✔ | ✔ | ||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | ✔ | ✔ | |||||||||||||||
| [ | ✔ | ✔ | ✔ | ||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | |||||||||||||||||
| [ | ✔ | ||||||||||||||||||
| [ | ✔ | ✔ | ✔ | ✔ |