Literature DB >> 35746223

A Novel Detection Refinement Technique for Accurate Identification of Nephrops norvegicus Burrows in Underwater Imagery.

Atif Naseer1,2, Enrique Nava Baro1, Sultan Daud Khan3, Yolanda Vila4.   

Abstract

With the evolution of the convolutional neural network (CNN), object detection in the underwater environment has gained a lot of attention. However, due to the complex nature of the underwater environment, generic CNN-based object detectors still face challenges in underwater object detection. These challenges include image blurring, texture distortion, color shift, and scale variation, which result in low precision and recall rates. To tackle these challenges, we propose a detection refinement algorithm based on spatial-temporal analysis that improves the performance of generic detectors by suppressing false positives and recovering missed detections in underwater videos. In the proposed work, we use state-of-the-art deep neural networks such as Inception, ResNet50, and ResNet101 to automatically classify and detect burrows of the Norway lobster Nephrops norvegicus in underwater videos. Nephrops is one of the most important commercial species in Northeast Atlantic waters, and it lives in burrow systems that it builds itself on muddy bottoms. To evaluate the performance of the proposed framework, we collected data from the Gulf of Cadiz. The experimental results demonstrate that the proposed framework effectively suppresses false positives and recovers missed detections obtained from generic detectors. The mean average precision (mAP) increased by 10% with the proposed refinement technique.

Keywords:  Nephrops norvegicus; deep learning; detection refinements; spatial–temporal analysis

Year:  2022        PMID: 35746223      PMCID: PMC9227871          DOI: 10.3390/s22124441

Source DB:  PubMed          Journal:  Sensors (Basel)        ISSN: 1424-8220            Impact factor:   3.847


1. Introduction

Research in underwater image analysis has gained popularity in many applications of marine science. There are various research directions in underwater image analysis, for instance, underwater species classification and detection [1], seafloor image recognition [2], coral reef classification [3], and flora and fauna recognition [4]. Underwater image analysis requires a set of image processing tasks, including underwater object detection, classification, visual content recognition, and image annotation of large-scale marine species [5]. Challenges such as turbidity, color variation, and illumination changes make it very difficult for models to detect and classify objects automatically in underwater environments. There are thousands of species in the oceans all over the world. One of the most important commercial species in Europe is the Norway lobster Nephrops norvegicus. Figure 1 shows the Nephrops norvegicus species (hereafter referred to as Nephrops). This species is distributed from 10 m to 800 m of depth in Northeast Atlantic waters and the Mediterranean Sea [6], where the sediment is suitable for constructing burrows. It excavates and inhabits burrow systems mainly in muddy seabed sediments with more than 40 percent silt and clay [7]. These burrow systems have single or multiple openings with characteristic features that distinguish them from the burrows built by other burrowing species [8,9]. At least one opening has a crescent-moon shape and a shallowly descending tunnel. There is often evidence of expelled sediment forming a wide delta-like tunnel opening, and signs such as scratches and tracks are frequently observed. If a burrow system has more than one entrance, the area at the center of the openings is raised. It is assumed that each burrow system is occupied by a single individual. Figure 2 shows the features of the Nephrops burrow system.
Figure 1

Some individuals of Nephrops norvegicus.

Figure 2

Nephrops burrow system.

Nephrops spend most of their time inside their burrows, and their emergence behavior is influenced by several factors: time of year, light intensity, and tidal strength [10]. For this reason, abundance indices obtained from the commercial catch or from traditional bottom trawl surveys are thought to be poorly representative of the Nephrops population, and they are not considered appropriate [11,12]. The abundance of Nephrops populations is currently monitored by underwater television (UWTV) surveys on many European grounds. The methodology used in UWTV surveys was developed in Scotland in the 1990s and is based on the identification and quantification of burrow systems over the known area of Nephrops distribution [13]. Nephrops abundance from UWTV surveys is the basis of the assessment and advice for managing these stocks [14]. Videos are recorded using a camera system mounted on a sledge at an angle to the bottom ranging between 37° and 60°, depending on the country [15]. They are reviewed manually by trained experts and quantified following the protocol established by ICES [8,16]. With recent advances in artificial intelligence and computer vision, many researchers employ AI-based tools to analyze marine species. Some use feature extraction mechanisms to count and identify species, while others use more advanced techniques [17] such as neural networks. Convolutional neural networks (CNNs) brought a revolution in object detection: deep convolutional neural networks have achieved tremendous success in object detection [18,19], classification [20,21], and segmentation [22,23]. These networks are data-driven and require a large amount of labeled data for training. In our previous work [24], we developed a deep learning model based on the state-of-the-art Faster R-CNN [19] backbones Inceptionv2 [25] and MobileNetv2 [26] for the detection of Nephrops openings. Those models were trained on Gulf of Cadiz and Irish datasets.
These models achieved good results in detecting burrows in the image test data. However, when the trained models were tested on a video from the Gulf of Cadiz, their accuracy degraded: we found many false positive (FP) detections and missed true positives (TP), which adversely affect the accuracy of these models. In this work, we propose a detection refinement mechanism based on spatial-temporal information to recover missed true positives and suppress false positive detections. The work presented in [27] used temporal information to track faces and suppress false positive detections. Their approach relied on low-level tracking to detect faces in natural images, and it does not recover missed detections. In our case, low-level tracking cannot be applied: we work with underwater videos, and the objects we detect are not animals but burrows in the seabed, whose characteristics are very different from those of natural images. Moreover, unlike [27], our approach uses both spatial and temporal information to suppress false positives and to recover missed detections. Our work is divided into two parts. First, we trained models using the state-of-the-art Faster R-CNN [19] backbones Inceptionv2 [25], ResNet50 [28], and ResNet101 [29] for the detection of Nephrops burrows, and we built the dataset for training and testing the models. In the second part, we present a spatial-temporal detection refinement algorithm: we detect the burrows in each frame of a video sequence and then use the spatial and temporal information across multiple frames to refine the Nephrops burrow detections.
The spatial-temporal mechanism helps to suppress FP burrows and to find missed TP detections, which leads to better accuracy as well as the ability to track and count burrows in a video sequence. Figure 3 shows the result of the detector trained with the Inception model. The blue bounding boxes show the ground truth, while the red bounding boxes show the detections from the Inception model. Due to variations in camera direction and burrow appearance, the detector accumulates FPs and missed detections in some frames. The figure clearly shows the missed detections in the intermediate frames.
Figure 3

Ground truth (blue bounding boxes) and detector (Inception) results (red bounding boxes). Due to camera angle variation and burrow appearance, the detector missed detections in consecutive frames.

To address these challenges, we propose a detection refinement approach based on spatial-temporal analysis that enhances the mAP of a generic detector. Our refinement mechanism identifies missed detections, recovers them, and suppresses the false positives. Our contributions are as follows: (1) we propose a spatial-temporal filtering (STF) model that uses the spatial and temporal information of the detections across consecutive frames of an input video to suppress false positives and recover missed detections, thereby improving the performance of generic detectors (such as Inception and ResNet, in our case); (2) we evaluate the proposed framework on our novel dataset, and the experimental results demonstrate the effectiveness of the approach. The rest of the paper is organized as follows: related work is presented in Section 2. Section 3 (Materials and Methods) presents the data collection method and the proposed methodology for refining the detections. The results achieved with the proposed methodology are discussed in Section 4. Finally, Section 5 concludes the article.

2. Related Work

Object detection and classification are challenging computer vision problems, and researchers have developed many methods for these tasks. Existing object detection approaches use either handcrafted-feature-based models [30,31,32,33] or deep feature models [34]. Handcrafted-feature models use basic features such as shape [35], texture [36,37,38], and edges [35,38] to train a classifier. Convolutional neural networks, on the other hand, automatically learn hierarchical features from the training set. Deep learning replaces handcrafted features and introduces efficient algorithms for object detection and classification. Over the last few years, deep learning models have enjoyed tremendous success in various object detection and classification tasks. For this reason, deep learning models are also employed in the detection and classification of underwater species. Although the underwater environment is harder and more challenging than terrestrial scenes, deep learning algorithms perform much better than conventional handcrafted features. State-of-the-art deep-learning-based object detectors include the region-based convolutional network (R-CNN) [39], Fast R-CNN [40], and Faster R-CNN [19]. R-CNN uses a deep ConvNet to classify object proposals. The R-CNN algorithm is computationally expensive, as it uses a selective search [41] strategy to generate a large number of object proposals, followed by an object proposal classification step. Fast R-CNN improves on R-CNN with a faster training process: it uses multi-task learning to update all network layers and handle the loss, which improves the speed and accuracy of the network. Compared to both methods, Faster R-CNN introduces a region proposal network (RPN) and combines the RPN with Fast R-CNN into a single network. Li et al. [42] developed a deep learning model for the detection of marine objects.
The model detects and recognizes fish using a deep convolutional network. They applied the Fast R-CNN algorithm to classify twelve different classes of underwater fish and introduced a dataset of 24,272 images covering all these classes, achieving more than 90% detection accuracy. Similarly, Villon et al. [43] applied deep learning algorithms to the Fish4Knowledge project dataset to detect and classify fish. Rathi et al. [44] combined Faster R-CNN with three classification networks (ZF Net, CNN-M, and VGG16) to detect 50 fish and crustacean species from Queensland beaches and estuaries; their region proposal method consists of a region proposal network coupled with a classifier network. Xu et al. [45] applied the YOLO deep learning model to recognize fish in underwater videos. They used three different datasets recorded at real-world waterpower sites and achieved an mAP of up to 53.92%. Mandal et al. [46] presented a Faster R-CNN approach to identify fish and their different species using deep neural networks. Gundam et al. [47] proposed a fish classification technique based on the Kalman filter that partially automates fish classification from underwater videos. Jalal et al. [1] proposed a hybrid approach that combines YOLO-based object detection with optical flow and Gaussian mixture models to detect and classify fish in underwater videos. A similar YOLO-based method to detect and classify fish was proposed by Sung et al. [48], who used 892 images and achieved a fish classification accuracy of up to 93%. Jager et al. [49] proposed a deep CNN approach based on the AlexNet architecture for the classification of fish species, using the LifeCLEF 2015 dataset. Zhuang et al. [50] proposed a deep learning model based on the SSD detector to automatically identify fish and their species, using ResNet-10 as the classifier for species identification.
Zhao et al. [51] proposed an automatic detection and classification method for fish and other underwater species. The proposed method, called "Composed FishNet", is based on a composite backbone and a path aggregation network. The composite backbone is an improvement of ResNet, and the enhanced path aggregation network is designed to recover the semantic information affected by upsampling. Their results show an average precision (AP) of 75.2%. Labao et al. [52] proposed a multilevel object detection network that uses R-CNN as the framework. Their network contains two region proposal networks and seven CNNs connected by long short-term memory (LSTM) units, and it improves on simple one-stage detection networks. Salman et al. [53] proposed an R-CNN-based two-stage automatic fish detection and localization method. They combined fish motion information with background and optical flow information to generate candidate fish regions. Their model requires a fixed-size input image, and the candidate region extraction needs substantial disk space as well. Deep learning models have also been employed to detect marine objects other than fish, such as plankton and corals, two other major components of the underwater marine ecosystem. Plankton form the basis of the aquatic food web. Dieleman et al. [54] used a deep neural network with an inception module for image information extraction to classify plankton. Lee et al. [55] also proposed a deep neural network for plankton classification on a large dataset; their convolutional neural network used three convolutional layers and two fully connected layers. The difficulty of coral classification lies in color, size, texture, and shape. Shiela et al. [56] introduced a local binary pattern for texture and color coordination.
For classification, they used a neural network with three backpropagation layers. Elawady et al. [57] used a supervised CNN for the classification of corals. Table A1 in Appendix B summarizes the key findings of the papers discussed in this section.
Table A1

Underwater object detection with key findings.

Author | Year | Approach | Object Detection | Dataset | Performance Parameters
Li et al. | 2015 | Deep Convolutional Network | Marine objects | ImageCLEF_Fish_TS dataset, 24,272 images | mAP
Villon et al. | 2016 | HOG, SVM and Deep Learning | Fish detection | Fish4Knowledge, 13,000 fish thumbnails | Precision, Recall, F-Score
Rathi et al. | 2018 | Faster R-CNN (ZF Net, CNN-M, VGG16) | Fish & crustacean species | Fish4Knowledge, 27,142 images | AP
Xu et al. | 2018 | YOLO | Fish | 3 datasets | mAP
Mandal et al. | 2018 | Faster R-CNN | Fish | Uni of Sunshine Coast, 12,365 images | mAP
Jalal et al. | 2020 | YOLO-based hybrid approach | Fish classification | LifeCLEF 2015, 93 videos | F-Score
Sung et al. | 2017 | YOLO | Fish detection | 892 images | Precision, Recall, FPS
Jager et al. | 2016 | CNN AlexNet | Fish classification | LifeCLEF 2015 | AP, Precision, Recall
Zhuang et al. | 2017 | ResNet-10 | Underwater species | SEACLEF 2017 | AP
Zhao et al. | 2021 | Composed FishNet | Fish and underwater species detection | SeaCLEF 2017, 200,000 images | AP, F-Measure
Labao et al. | 2019 | Multilevel R-CNN | Fish detection | 300 underwater images | Precision, Recall, F-Score
Salman et al. | 2019 | Two-stage R-CNN | Fish detection | Fish4Knowledge, LCF-15 | Precision, Recall, F-Score
Lee et al. | 2016 | Three-layer CNN | Plankton detection | WHOI-Plankton database, 3.2 million images | F1-Score

3. Materials and Methods

In this section, we discuss the proposed methodology for improving the detection of Nephrops burrows. Figure 4 shows the pipeline of the proposed framework. This section also presents the equipment and methods used in data collection in detail. The proposed framework has two sequential stages: object detection, followed by detection refinement. During the first stage, we use state-of-the-art generic detectors, namely Faster R-CNN with Inception, ResNet50, and ResNet101 backbones, to detect the Nephrops burrows. For this purpose, we first divide the input video sequence into temporal segments, each consisting of N frames, and then apply the detectors to each temporal segment. The obtained results are passed to the refinement module, which employs spatial-temporal filtering (STF) to recover the missed detections and suppress the false positive detections. This process improves the mean average precision (mAP) of the results obtained from the detectors.
Figure 4

Detection refinement framework based on spatial–temporal filtering.
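The segmentation step of the pipeline can be sketched as follows. The generator interface and the one-minute segment length (N = 1500 frames at 25 fps) are our illustrative assumptions; the paper does not prescribe an implementation.

```python
# Split a frame stream into temporal segments of up to n frames each.
def temporal_segments(frames, n):
    """Yield consecutive segments of up to n frames."""
    segment = []
    for frame in frames:
        segment.append(frame)
        if len(segment) == n:
            yield segment
            segment = []
    if segment:  # trailing partial segment, if the video length is not a multiple of n
        yield segment

# A 5-minute video at 25 fps has 7500 frames; with N = 1500 (one minute)
# this yields five full segments.
segments = list(temporal_segments(range(7500), 1500))
```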

3.1. Nephrops Burrows Detections

To detect and classify the Nephrops burrows, state-of-the-art Faster R-CNN deep learning models with Inceptionv2 [25], ResNet50 [28], and ResNet101 [29] backbones were trained. Figure 5 shows the pipeline of the proposed detection framework.
Figure 5

Nephrops burrows detection framework.

3.1.1. Data Collection

High-resolution footage was collected using a sledge during the 2018 underwater TV (UWTV) survey of the Gulf of Cadiz by marine scientists from the IEO (Instituto Español de Oceanografía), a Spanish research institution devoted to promoting ocean research and knowledge, including government assessment for sustainable fisheries. The sledge is a stainless-steel underwater vehicle equipped with multiple cameras, sensors, lasers, and lights to record the footage. Figure 6 shows the setup of the instruments mounted on the sledge and a sample image; a complete description is presented in Table 1.
Figure 6

Sledge and equipment used in the 2018 UWTV survey at the Gulf of Cadiz.

Table 1

Equipment details used in data collection.

Image System
- Live camera: Full HD (1920 × 1080) @ 30 fps; mounting angle 45°
- Recording camera (SONY FDRAX33): 4K Ultra HD (3840 × 2160) and Full HD (1920 × 1080) @ 50 fps; mounting angle 45°
- Photo camera (SONY ILCE QX1): 20.1 MPixel; mounting angle variable
Lighting System
- TST-OFL 7000 (Thalassatech, Oil-Filled LED): 28,640 lumens, distributed in 4 spotlights with individual intensity control
Photogrammetry System
- 3 point lasers (5 mW, λ = 670 nm) forming a triangle of side 70 mm
- 2 line lasers (200 mW, λ = 670 nm) separated by 75 cm (field of view)
Auxiliary System
- Battery (Li-ion, size 18650, 3.7 V, 2400 mAh; capacity 480 Wh)
Sensors
- Altimeter: Tritech PA500
- CTD (conductivity, temperature, and depth): AML Oceanographic MINOS X
Sampling at 70 stations was conducted in the 2018 UWTV survey. A station is a geostatistical location where the Nephrops burrow density is estimated; these densities are combined by geostatistical analysis to obtain the Nephrops abundance index over the known survey area. At each station, the sledge was deployed and towed at a constant speed between 0.6 and 0.7 knots to obtain the best possible conditions for counting Nephrops burrows. Once the sledge is stable on the seabed, video footage of 10-12 min at 25 frames per second is recorded, which corresponds to approximately 200 m swept. The vessel position (dGPS) and the position of the sledge, using a HiPAP transponder, are recorded every 1 to 2 s. The distance over ground (DOG) is estimated from the position of the sledge at all stations, and the field of view (FOV) of the video footage is 75 cm, which was confirmed using the two line lasers. Out of these 70 stations, we selected seven based on better lighting conditions, high contrast, high density of Nephrops burrows, and better visibility of burrows. The recorded footage was saved to hard disks for further analysis of Nephrops density.
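As a concrete illustration of the swept-area figures above: with roughly 200 m DOG and a 75 cm FOV, one station views about 150 m². The burrow count below is a made-up number used only to show the arithmetic; it is not survey data.

```python
# Back-of-envelope swept-area density calculation from the survey parameters.
dog_m = 200.0  # distance over ground per station (approx., from sledge positions)
fov_m = 0.75   # field of view, confirmed with the two line lasers

swept_area_m2 = dog_m * fov_m             # area viewed at one station
burrow_systems = 90                       # hypothetical count, for illustration only
density = burrow_systems / swept_area_m2  # burrow systems per square metre

print(swept_area_m2)  # 150.0
print(density)        # 0.6
```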

3.1.2. Image Annotation

The obtained frames were annotated using the Microsoft VoTT [58] tool. The burrows were annotated manually in VoTT, and the annotations were saved in Pascal VOC format. Each saved XML annotation file contains the image name, the class name (Nephrops), and the bounding box details of each object of interest in the image. The annotated frames constitute the ground truth (GT) for model training. To create the datasets for training and testing, from the set of annotated frames (more than 100,000) we selected those containing Nephrops burrows, using only one frame per individual object, chosen to increase the diversity of its appearance, with the aim of creating a small dataset covering most of the typical cases of Nephrops burrows.
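A minimal reader for Pascal VOC XML files of the kind produced by this annotation step might look like the sketch below. The example file content is a hand-made stand-in containing only the fields mentioned above (image name, class name, bounding box); real VoTT exports carry additional tags, and the file name is hypothetical.

```python
import xml.etree.ElementTree as ET

# Hand-made stand-in for one exported Pascal VOC annotation file.
VOC_XML = """<annotation>
  <filename>frame_000123.jpg</filename>
  <object>
    <name>Nephrops</name>
    <bndbox><xmin>412</xmin><ymin>230</ymin><xmax>489</xmax><ymax>301</ymax></bndbox>
  </object>
</annotation>"""

def read_voc(xml_text):
    """Return (image name, [(class, (xmin, ymin, xmax, ymax)), ...])."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append((obj.findtext("name"),
                      tuple(int(bb.findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax"))))
    return root.findtext("filename"), boxes

name, boxes = read_voc(VOC_XML)
# boxes -> [("Nephrops", (412, 230, 489, 301))]
```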

3.1.3. Annotation Validation

Annotating Nephrops burrows is a tedious job, and it requires a lot of experience, because different species build burrows of similar appearance on the sea bottom. Once all the burrows were annotated, each one was validated with the advice of marine experts from the IEO. Only the validated annotations were used in model training.

3.1.4. Prepare Dataset

After validating all the annotations, the dataset was divided into two independent groups, the first for training and the second for testing. Details are given in Table 2.
Table 2

Dataset distribution.

Functional Unit | Training Images | Testing Images | Total
Gulf of Cadiz Dataset | 200 (80%) | 48 (20%) | 248

3.1.5. Model Training

We utilized transfer learning [59] to fine-tune the models in TensorFlow [60]. Inceptionv2 [25] is an architecture with a high degree of accuracy that helps to reduce the complexity of a CNN; its factorized 3 × 3 convolution layers improve the computational speed and processing performance of the network. ResNet50 [28] is a variant of the ResNet model. It has 48 convolutional layers plus one max pooling and one average pooling layer, making it a 50-layer-deep convolutional network. The first convolution uses one layer with 64 kernels of size 7 × 7 and stride 2, followed by a 3 × 3 max pool with stride 2. The second stage uses nine layers with 1 × 1, 64; 3 × 3, 64; and 1 × 1, 256 kernels. The third stage uses 12 layers with 1 × 1, 128; 3 × 3, 128; and 1 × 1, 512 kernels. The fourth stage uses 18 layers with 1 × 1, 256; 3 × 3, 256; and 1 × 1, 1024 kernels. The fifth stage uses nine layers with 1 × 1, 512; 3 × 3, 512; and 1 × 1, 2048 kernels. Finally, the last layer applies average pooling and a softmax function. ResNet50 is a widely used ResNet model. ResNet101 [29] is a dense convolutional neural network that is 101 layers deep; it follows the same layout as ResNet50, except that the fourth stage uses 69 layers (with the same 1 × 1, 256; 3 × 3, 256; and 1 × 1, 1024 kernel pattern). For our problem, ResNet50 and ResNet101 achieve better accuracy than the other models.
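The layer counts quoted above follow the standard ResNet bottleneck layout (three convolutions per block, plus the initial 7 × 7 convolution and the final fully connected layer). The short check below reproduces that arithmetic; the blocks-per-stage figures are the standard ResNet configurations, not values stated in the paper.

```python
# Verify the 50- and 101-layer depths from the bottleneck block counts.
def resnet_depth(blocks_per_stage):
    conv1 = 1                                      # initial 7x7 convolution
    fc = 1                                         # final fully connected layer
    stage_layers = [3 * b for b in blocks_per_stage]  # 1x1, 3x3, 1x1 per block
    return conv1 + sum(stage_layers) + fc

depth50 = resnet_depth([3, 4, 6, 3])    # 1 + 9 + 12 + 18 + 9 + 1 = 50
depth101 = resnet_depth([3, 4, 23, 3])  # 1 + 9 + 12 + 69 + 9 + 1 = 101
```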

3.1.6. Testing

To test our algorithm, we selected another station from the Gulf of Cadiz whose frames were not used in the training dataset. The test video, five minutes long with 7500 frames, was divided into temporal segments and then passed to our trained models to obtain the Nephrops burrow detections.

3.2. Detection Refinements

After detecting the Nephrops burrows, we performed a post hoc analysis of the obtained results and found that the detectors produce many FPs and miss many TPs, which degrades accuracy. To recover the missed detections and suppress the FPs, we propose a detection refinement algorithm that exploits the spatial-temporal information among consecutive frames of a given temporal segment. The Inception, ResNet50, and ResNet101 models were tested on a video of five minutes in length. The proposed detection refinement algorithm takes V, λ, and W as inputs, where V is the video; λ is an IoU (intersection over union) threshold that is later compared with the average IoU of a detected Nephrops burrow across frames; and W is the size of the temporal window, which determines the number of frames considered. The models provide a set of TP, FP, and missed detections. The criteria defining TP and FP and the working of the proposed detection refinement algorithm are discussed in the next sections.
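The IoU that λ is compared against can be computed for two axis-aligned boxes as below; the (xmin, ymin, xmax, ymax) box convention matches the Pascal VOC annotations used for training.

```python
# Intersection over union of two axis-aligned boxes (xmin, ymin, xmax, ymax).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Identical boxes give 1.0; disjoint boxes give 0.0.
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))    # 1.0
print(iou((0, 0, 10, 10), (20, 20, 30, 30)))  # 0.0
```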

3.2.1. True Positives (TP)

The algorithm considers a detection a TP if it is continuously detected by the detector within the temporal window and its average IoU over all frames in the window is greater than or equal to the threshold λ. Consequently, if the detector produces a spurious detection that nevertheless persists across all consecutive frames, our algorithm will also consider it a TP.

3.2.2. False Positives (FP)

FP detections are those that are not detected in consecutive frames and whose average IoU is less than the threshold λ. These detections are also FPs with respect to the ground truth dataset; the detectors report them as positives because of the camera angle (45°) and the position and angle of the burrow.

3.2.3. Missed Detections

Missed detections are true positives that are detected in some frames by the detector but missed in intermediate frames due to the position or visibility of the burrow. Identifying missed detections is very important, because without them we cannot track a burrow. Recovering the missed detections increases the performance of the models.

3.3. Working of Detection Refinement Algorithm

The proposed algorithm is presented in Appendix A and implements the refinement mechanism using spatial-temporal analysis of the detections. The algorithm is divided into two parts: suppression of false positives and identification of missed detections. Figure 7 shows the basic processing steps of false positive suppression and missed detection identification and recovery.
Figure 7

Detection refinement algorithm.

3.3.1. Suppression of False Positives

The first step in refining the detections is to suppress the FPs. Let F = {B1, B2, …, Bn} be frame i with n detections obtained with a deep learning model, and let sF be the set of consecutive frames within a temporal window of size W. The algorithm takes the detections B of frame F as input and provides a refined output F. To suppress the FPs in the current frame i, we compute the overlap of each detection B in the current frame with the detections in the following frames of sF. The algorithm receives three inputs: an input video with detections V, the threshold value λ, and the temporal window size W. For each detection b ∈ B at frame F, we first locate the corresponding detection in the next frames of sF and compute δk, the IoU of the current detection with the detection in the k-th following frame of sF (k = 1, …, W), using the Compare_Displacement_Vector() method. Then δavg = (1/W) Σ δk is the average overlap within the temporal window. We mark the detection as FP if δavg < λ, and as TP otherwise, thereby suppressing the FPs. The detections of the whole video V are processed in the same way.
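A minimal sketch of this suppression step, assuming per-frame detection lists and an IoU function passed in as iou_fn. Here best_match() is our stand-in for the role of Compare_Displacement_Vector() in the paper's pseudocode; the helper names are ours, not the authors'.

```python
def best_match(box, frame_dets, iou_fn):
    """IoU of box with its best-overlapping detection in one frame (0 if none)."""
    return max((iou_fn(box, d) for d in frame_dets), default=0.0)

def suppress_fps(current_dets, next_frames, lam, w, iou_fn):
    """Keep detections whose average overlap over the next w frames reaches lam."""
    kept = []
    for box in current_dets:
        deltas = [best_match(box, next_frames[k], iou_fn) for k in range(w)]
        delta_avg = sum(deltas) / w   # delta_avg = (1/W) * sum of delta_k
        if delta_avg >= lam:          # TP: the detection persists across the window
            kept.append(box)
    return kept                       # detections with delta_avg < lam are dropped
```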

3.3.2. Identification of Missed Detections

After suppressing the FPs in the previous step, the next step is to identify the detections that were missed by our detector. For this purpose, we track each detection B ∈ F. If the detection is found in frame i + 1, we continue to track it up to the temporal window size W. If the current detection is not tracked in some frame, we mark it as a missed detection and store its location in the set indexSet. To calculate the bounding box of a missed detection, we define the Set_BoundingBox_Value() method. We first obtain the location of the missed detection from indexSet. With B the current detection and indexSet the missed detection locations, we accumulate the detection values from the current frame up to the indexSet location and compute their average, called bBValue_missing. Since we maintain the number of frames N between the current detection and the missed detection, the missed detection value is calculated by adding N to bBValue_missing. The missed detection information is then filled in, updating the refined output F.
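The recovery step can be sketched as below. Averaging the two surrounding boxes is our simplification of the accumulative average (bBValue_missing) described above, and representing a missed frame as None in a track is an assumption of this sketch, not the authors' data structure.

```python
def fill_missed(track):
    """track: list of boxes (x1, y1, x2, y2), with None for missed frames.
    Assumes each gap is bounded by real detections on both sides, as for a
    tracked burrow that reappears within the temporal window."""
    filled = list(track)
    for i, box in enumerate(filled):
        if box is None:
            prev = next(b for b in reversed(filled[:i]) if b is not None)
            nxt = next(b for b in filled[i + 1:] if b is not None)
            # Recover the missed box as the average of its neighbours.
            filled[i] = tuple((p + n) / 2 for p, n in zip(prev, nxt))
    return filled

track = [(0, 0, 10, 10), None, (4, 0, 14, 10)]
print(fill_missed(track))  # middle frame becomes (2.0, 0.0, 12.0, 10.0)
```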

4. Experiments and Results

In this section, we evaluate the results of the different experiments performed with the proposed detection refinement algorithm. We use three different models (Inception, ResNet50, and ResNet101), trained on the Gulf of Cadiz dataset. Each model is trained for up to 100k iterations, and a log is kept every 10k iterations for evaluation.

4.1. Quantitative Analysis

For the quantitative analysis, an annotated video with a frame rate of 25 fps is used to test the Inception, ResNet50, and ResNet101 models. The video is divided into five temporal segments of one minute each, so each temporal segment has 1500 frames. We record the number of detections from each temporal segment for all three models. The detections are then processed through the proposed detection refinement algorithm to identify the TP, FP, and missed detections. Table A2, Table A3, Table A4, Table A5 and Table A6 in Appendix B show the results obtained in each temporal segment by each model and their corresponding improvement by the proposed detection refinement algorithm. The algorithm is run with W = 8, 12, and 16. For each temporal window, the algorithm is tested with λ = 0.3 and 0.4, and the number of TP, FP, and missed detections and the F1-score (harmonic mean of precision and recall) are computed for each minute of the video.
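The before-refinement columns of the appendix tables can be reproduced directly from the raw counts. As an example, using the Inception row with W = 8 and λ = 0.3 of the first temporal segment (GT = 255, TP = 166, FP = 9):

```python
# Precision, recall, and F1 (harmonic mean of precision and recall),
# computed from the raw counts of one table row.
gt, tp, fp = 255, 166, 9

recall = tp / gt                                    # detected fraction of GT burrows
precision = tp / (tp + fp)                          # correct fraction of detections
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(round(100 * recall, 1), round(100 * precision, 1), round(100 * f1, 1))
# prints: 65.1 94.9 77.2 (the "before" values of that row)
```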
Table A2

Detections and refinement results of 1st temporal segment.

1st Temporal Segment (GT = 255)

| Model | W | λ | TP | FP | Miss | Recall before (%) | Recall after (%) | Precision before (%) | Precision after (%) | F1 before (%) | F1 after (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Inception | 8 | 0.3 | 166 | 9 | 13 | 65.1 | 70.2 | 94.9 | 95.2 | 77.2 | 80.8 |
| | 8 | 0.4 | 149 | 26 | 12 | 58.4 | 63.1 | 85.1 | 86.1 | 69.3 | 72.9 |
| | 12 | 0.3 | 165 | 10 | 15 | 64.7 | 70.6 | 94.3 | 94.7 | 76.7 | 80.9 |
| | 12 | 0.4 | 68 | 107 | 9 | 26.7 | 30.2 | 38.9 | 41.8 | 31.6 | 35.1 |
| | 16 | 0.3 | 163 | 12 | 41 | 63.9 | 80.0 | 93.1 | 94.4 | 75.8 | 86.6 |
| | 16 | 0.4 | 66 | 109 | 19 | 25.9 | 33.3 | 37.7 | 43.8 | 30.7 | 37.9 |
| ResNet50 | 8 | 0.3 | 188 | 20 | 31 | 73.7 | 85.9 | 90.4 | 91.6 | 81.2 | 88.7 |
| | 8 | 0.4 | 177 | 31 | 20 | 69.4 | 77.3 | 85.1 | 86.4 | 76.5 | 81.6 |
| | 12 | 0.3 | 186 | 22 | 43 | 72.9 | 89.8 | 89.4 | 91.2 | 80.3 | 90.5 |
| | 12 | 0.4 | 110 | 98 | 19 | 43.1 | 50.6 | 52.9 | 56.8 | 47.5 | 53.5 |
| | 16 | 0.3 | 175 | 33 | 41 | 68.6 | 84.7 | 84.1 | 86.7 | 75.6 | 85.7 |
| | 16 | 0.4 | 93 | 115 | 12 | 36.5 | 41.2 | 44.7 | 47.7 | 40.2 | 44.2 |
| ResNet101 | 8 | 0.3 | 217 | 26 | 24 | 85.1 | 94.5 | 89.3 | 90.3 | 87.1 | 92.3 |
| | 8 | 0.4 | 164 | 79 | 20 | 64.3 | 72.2 | 67.5 | 70.0 | 65.9 | 71.0 |
| | 12 | 0.3 | 188 | 55 | 28 | 73.7 | 84.7 | 77.4 | 79.7 | 75.5 | 82.1 |
| | 12 | 0.4 | 100 | 143 | 18 | 39.2 | 46.3 | 41.2 | 45.2 | 40.2 | 45.7 |
| | 16 | 0.3 | 181 | 62 | 21 | 71.0 | 79.2 | 74.5 | 76.5 | 72.7 | 77.8 |
| | 16 | 0.4 | 96 | 147 | 13 | 37.6 | 42.7 | 39.5 | 42.6 | 38.6 | 42.7 |
Table A3

Detections and refinement results of 2nd temporal segment.

2nd Temporal Segment (GT = 585)

| Model | W | λ | TP | FP | Miss | Recall before (%) | Recall after (%) | Precision before (%) | Precision after (%) | F1 before (%) | F1 after (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Inception | 8 | 0.3 | 398 | 33 | 61 | 68.0 | 78.5 | 92.3 | 93.3 | 78.3 | 85.2 |
| | 8 | 0.4 | 324 | 107 | 46 | 55.4 | 63.2 | 75.2 | 77.6 | 63.8 | 69.7 |
| | 12 | 0.3 | 393 | 38 | 73 | 67.2 | 79.7 | 91.2 | 92.5 | 77.4 | 85.6 |
| | 12 | 0.4 | 271 | 160 | 41 | 46.3 | 53.3 | 62.9 | 66.1 | 53.3 | 59.0 |
| | 16 | 0.3 | 393 | 38 | 115 | 67.2 | 86.8 | 91.2 | 93.0 | 77.4 | 89.8 |
| | 16 | 0.4 | 269 | 162 | 68 | 46.0 | 57.6 | 62.4 | 67.5 | 53.0 | 62.2 |
| ResNet50 | 8 | 0.3 | 420 | 45 | 105 | 71.8 | 89.7 | 90.3 | 92.1 | 80.0 | 90.9 |
| | 8 | 0.4 | 306 | 159 | 85 | 52.3 | 66.8 | 65.8 | 71.1 | 58.3 | 68.9 |
| | 12 | 0.3 | 404 | 61 | 114 | 69.1 | 88.5 | 86.9 | 89.5 | 77.0 | 89.0 |
| | 12 | 0.4 | 241 | 224 | 78 | 41.2 | 54.5 | 51.8 | 58.7 | 45.9 | 56.6 |
| | 16 | 0.3 | 363 | 102 | 168 | 62.1 | 90.8 | 78.1 | 83.9 | 69.1 | 87.2 |
| | 16 | 0.4 | 232 | 233 | 104 | 39.7 | 57.4 | 49.9 | 59.1 | 44.2 | 58.2 |
| ResNet101 | 8 | 0.3 | 441 | 31 | 103 | 75.4 | 93.0 | 93.4 | 94.6 | 83.4 | 93.8 |
| | 8 | 0.4 | 433 | 139 | 89 | 74.0 | 89.2 | 75.7 | 79.0 | 74.8 | 83.8 |
| | 12 | 0.3 | 468 | 49 | 103 | 80.0 | 97.6 | 90.5 | 92.1 | 84.9 | 94.8 |
| | 12 | 0.4 | 309 | 263 | 68 | 52.8 | 64.4 | 54.0 | 58.9 | 53.4 | 61.6 |
| | 16 | 0.3 | 415 | 57 | 145 | 70.9 | 95.7 | 87.9 | 90.8 | 78.5 | 93.2 |
| | 16 | 0.4 | 300 | 272 | 89 | 51.3 | 66.5 | 52.4 | 58.9 | 51.9 | 62.4 |
Table A4

Detections and refinement results of 3rd temporal segment.

3rd Temporal Segment (GT = 480)

| Model | W | λ | TP | FP | Miss | Recall before (%) | Recall after (%) | Precision before (%) | Precision after (%) | F1 before (%) | F1 after (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Inception | 8 | 0.3 | 163 | 23 | 45 | 34.0 | 43.3 | 87.6 | 90.0 | 48.9 | 58.5 |
| | 8 | 0.4 | 132 | 54 | 37 | 27.5 | 35.2 | 71.0 | 75.8 | 39.6 | 48.1 |
| | 12 | 0.3 | 160 | 26 | 47 | 33.3 | 43.1 | 86.0 | 88.8 | 48.0 | 58.1 |
| | 12 | 0.4 | 106 | 80 | 30 | 22.1 | 28.3 | 57.0 | 63.0 | 31.8 | 39.1 |
| | 16 | 0.3 | 159 | 27 | 46 | 33.1 | 42.7 | 85.5 | 88.4 | 47.7 | 57.6 |
| | 16 | 0.4 | 64 | 122 | 28 | 13.3 | 19.2 | 34.4 | 43.0 | 19.2 | 26.5 |
| ResNet50 | 8 | 0.3 | 291 | 43 | 87 | 60.6 | 78.8 | 87.1 | 89.8 | 71.5 | 83.9 |
| | 8 | 0.4 | 269 | 65 | 69 | 56.0 | 70.4 | 80.5 | 83.9 | 66.1 | 76.6 |
| | 12 | 0.3 | 280 | 54 | 106 | 58.3 | 80.4 | 83.8 | 87.7 | 68.8 | 83.9 |
| | 12 | 0.4 | 203 | 131 | 59 | 42.3 | 54.6 | 60.8 | 66.7 | 49.9 | 60.0 |
| | 16 | 0.3 | 274 | 60 | 114 | 57.1 | 80.8 | 82.0 | 86.6 | 67.3 | 83.6 |
| | 16 | 0.4 | 181 | 153 | 55 | 37.7 | 49.2 | 54.2 | 60.7 | 44.5 | 54.3 |
| ResNet101 | 8 | 0.3 | 354 | 40 | 105 | 73.8 | 95.6 | 89.8 | 92.0 | 81.0 | 93.8 |
| | 8 | 0.4 | 335 | 59 | 88 | 69.8 | 88.1 | 85.0 | 87.8 | 76.7 | 87.9 |
| | 12 | 0.3 | 368 | 46 | 111 | 76.7 | 99.8 | 88.9 | 91.2 | 82.3 | 95.3 |
| | 12 | 0.4 | 302 | 92 | 64 | 62.9 | 76.3 | 76.6 | 79.9 | 69.1 | 78.0 |
| | 16 | 0.3 | 325 | 45 | 136 | 67.7 | 96.0 | 87.8 | 91.1 | 76.5 | 93.5 |
| | 16 | 0.4 | 268 | 126 | 79 | 55.8 | 72.3 | 68.0 | 73.4 | 61.3 | 72.8 |
Table A5

Detections and refinement results of 4th temporal segment.

4th Temporal Segment (GT = 468)

| Model | W | λ | TP | FP | Miss | Recall before (%) | Recall after (%) | Precision before (%) | Precision after (%) | F1 before (%) | F1 after (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Inception | 8 | 0.3 | 304 | 24 | 64 | 65.0 | 78.6 | 92.7 | 93.9 | 76.4 | 85.6 |
| | 8 | 0.4 | 280 | 48 | 51 | 59.8 | 70.7 | 85.4 | 87.3 | 70.4 | 78.2 |
| | 12 | 0.3 | 296 | 32 | 67 | 63.2 | 77.6 | 90.2 | 91.9 | 74.4 | 84.1 |
| | 12 | 0.4 | 235 | 93 | 48 | 50.2 | 60.5 | 71.6 | 75.3 | 59.0 | 67.1 |
| | 16 | 0.3 | 293 | 35 | 72 | 62.6 | 78.0 | 89.3 | 91.3 | 73.6 | 84.1 |
| | 16 | 0.4 | 206 | 122 | 43 | 44.0 | 53.2 | 62.8 | 67.1 | 51.8 | 59.4 |
| ResNet50 | 8 | 0.3 | 330 | 28 | 66 | 70.5 | 84.6 | 92.2 | 93.4 | 79.9 | 88.8 |
| | 8 | 0.4 | 284 | 74 | 50 | 60.7 | 71.4 | 79.3 | 81.9 | 68.8 | 76.3 |
| | 12 | 0.3 | 327 | 31 | 81 | 69.9 | 87.2 | 91.3 | 92.9 | 79.2 | 90.0 |
| | 12 | 0.4 | 247 | 111 | 50 | 52.8 | 63.5 | 69.0 | 72.8 | 59.8 | 67.8 |
| | 16 | 0.3 | 325 | 33 | 98 | 69.4 | 90.4 | 90.8 | 92.8 | 78.7 | 91.6 |
| | 16 | 0.4 | 232 | 126 | 49 | 49.6 | 60.0 | 64.8 | 69.0 | 56.2 | 64.2 |
| ResNet101 | 8 | 0.3 | 388 | 42 | 50 | 82.9 | 93.6 | 90.2 | 91.3 | 86.4 | 92.4 |
| | 8 | 0.4 | 352 | 78 | 37 | 75.2 | 83.1 | 81.9 | 83.3 | 78.4 | 83.2 |
| | 12 | 0.3 | 387 | 43 | 57 | 82.7 | 94.9 | 90.0 | 91.2 | 86.2 | 93.0 |
| | 12 | 0.4 | 247 | 183 | 38 | 52.8 | 60.9 | 57.4 | 60.9 | 55.0 | 60.9 |
| | 16 | 0.3 | 380 | 50 | 61 | 81.2 | 94.2 | 88.4 | 89.8 | 84.6 | 92.0 |
| | 16 | 0.4 | 232 | 198 | 31 | 49.6 | 56.2 | 54.0 | 57.0 | 51.7 | 56.6 |
Table A6

Detections and refinement results of 5th temporal segment.

5th Temporal Segment (GT = 571)

| Model | W | λ | TP | FP | Miss | Recall before (%) | Recall after (%) | Precision before (%) | Precision after (%) | F1 before (%) | F1 after (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Inception | 8 | 0.3 | 349 | 26 | 73 | 61.1 | 73.9 | 93.1 | 94.2 | 73.8 | 82.8 |
| | 8 | 0.4 | 265 | 110 | 58 | 46.4 | 56.6 | 70.7 | 74.6 | 56.0 | 64.3 |
| | 12 | 0.3 | 302 | 73 | 75 | 52.9 | 66.0 | 80.5 | 83.8 | 63.8 | 73.8 |
| | 12 | 0.4 | 219 | 156 | 42 | 38.4 | 45.7 | 58.4 | 62.6 | 46.3 | 52.8 |
| | 16 | 0.3 | 300 | 75 | 100 | 52.5 | 70.1 | 80.0 | 84.2 | 63.4 | 76.5 |
| | 16 | 0.4 | 199 | 176 | 51 | 34.9 | 43.8 | 53.1 | 58.7 | 42.1 | 50.2 |
| ResNet50 | 8 | 0.3 | 390 | 27 | 67 | 68.3 | 80.0 | 93.5 | 94.4 | 78.9 | 86.6 |
| | 8 | 0.4 | 353 | 64 | 50 | 61.8 | 70.6 | 84.7 | 86.3 | 71.5 | 77.6 |
| | 12 | 0.3 | 360 | 57 | 56 | 63.0 | 72.9 | 86.3 | 87.9 | 72.9 | 79.7 |
| | 12 | 0.4 | 268 | 149 | 33 | 46.9 | 52.7 | 64.3 | 66.9 | 54.3 | 59.0 |
| | 16 | 0.3 | 358 | 59 | 85 | 62.7 | 77.6 | 85.9 | 88.2 | 72.5 | 82.6 |
| | 16 | 0.4 | 224 | 193 | 40 | 39.2 | 46.2 | 53.7 | 57.8 | 45.3 | 51.4 |
| ResNet101 | 8 | 0.3 | 494 | 41 | 54 | 86.5 | 96.0 | 92.3 | 93.0 | 89.3 | 94.5 |
| | 8 | 0.4 | 436 | 99 | 28 | 76.4 | 81.3 | 81.5 | 82.4 | 78.8 | 81.8 |
| | 12 | 0.3 | 463 | 72 | 41 | 81.1 | 88.3 | 86.5 | 87.5 | 83.7 | 87.9 |
| | 12 | 0.4 | 309 | 226 | 21 | 54.1 | 57.8 | 57.8 | 59.4 | 55.9 | 58.6 |
| | 16 | 0.3 | 453 | 82 | 58 | 79.3 | 89.5 | 84.7 | 86.2 | 81.9 | 87.8 |
| | 16 | 0.4 | 258 | 277 | 16 | 45.2 | 48.0 | 48.2 | 49.7 | 46.7 | 48.8 |
Table 3 shows the accumulated ground truth (GT), TP, FP, and missed (Miss) detections, along with the mean values of precision, recall, and F1-score over the five temporal segments. The "before" columns give the results obtained before applying the spatial-temporal filtering (STF), while the "after" columns give the results obtained after applying the refinement algorithm. Table 3 shows that ResNet101 gives the best F1-score in each of the five temporal segments, followed by ResNet50 and Inception. A small IoU threshold of λ = 0.3 is clearly better than 0.4 in terms of precision, recall, and F1-score for all three models, because the area surrounding a burrow is sometimes not well defined. The window size W shows a trend of better results for smaller values (mostly, W = 8 outperforms W = 12 and W = 16).
Table 3

Detections of all temporal segments with refinements. Detections are refined using W = 8, 12, and 16 with λ = 0.3 and 0.4. For each setting, the table reports the total numbers of TP, FP, and missed detections, together with the F1-score.

All temporal segments (GT = 2359)

| Model | W | λ | TP | FP | Miss | Recall before (%) | Recall after (%) | Precision before (%) | Precision after (%) | F1 before (%) | F1 after (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Inception | 8 | 0.3 | 1380 | 115 | 256 | 58.5 | 69.4 | 92.3 | 93.4 | 71.6 | 79.6 |
| | 8 | 0.4 | 1150 | 345 | 204 | 48.7 | 57.4 | 76.9 | 79.7 | 59.7 | 66.7 |
| | 12 | 0.3 | 1316 | 179 | 277 | 55.8 | 67.5 | 88.0 | 89.9 | 68.3 | 77.1 |
| | 12 | 0.4 | 899 | 596 | 170 | 38.1 | 45.3 | 60.1 | 64.2 | 46.7 | 53.1 |
| | 16 | 0.3 | 1308 | 187 | 374 | 55.4 | 71.3 | 87.5 | 90.0 | 67.9 | 79.6 |
| | 16 | 0.4 | 804 | 691 | 209 | 34.1 | 42.9 | 53.8 | 59.4 | 41.7 | 49.9 |
| ResNet50 | 8 | 0.3 | 1619 | 163 | 356 | 68.6 | 90.6 | 90.9 | 92.9 | 78.2 | 91.8 |
| | 8 | 0.4 | 1389 | 393 | 274 | 58.9 | 87.2 | 77.9 | 84.0 | 67.1 | 85.5 |
| | 12 | 0.3 | 1557 | 225 | 400 | 66.0 | 92.5 | 87.4 | 90.7 | 75.2 | 91.6 |
| | 12 | 0.4 | 1069 | 713 | 239 | 45.3 | 85.7 | 60.0 | 73.9 | 51.6 | 79.4 |
| | 16 | 0.3 | 1495 | 287 | 506 | 63.4 | 97.0 | 83.9 | 88.9 | 72.2 | 92.7 |
| | 16 | 0.4 | 962 | 820 | 260 | 40.8 | 86.6 | 54.0 | 71.3 | 46.5 | 78.2 |
| ResNet101 | 8 | 0.3 | 1894 | 180 | 336 | 80.3 | 94.5 | 91.3 | 92.5 | 85.5 | 93.5 |
| | 8 | 0.4 | 1720 | 454 | 262 | 72.9 | 84.0 | 79.1 | 81.4 | 75.9 | 82.7 |
| | 12 | 0.3 | 1874 | 265 | 340 | 79.4 | 93.9 | 87.6 | 89.3 | 83.3 | 91.5 |
| | 12 | 0.4 | 1267 | 907 | 209 | 53.7 | 62.6 | 58.3 | 61.9 | 55.9 | 62.3 |
| | 16 | 0.3 | 1754 | 296 | 421 | 74.4 | 92.2 | 85.6 | 88.0 | 79.6 | 90.1 |
| | 16 | 0.4 | 1154 | 1020 | 228 | 48.9 | 58.6 | 53.1 | 57.5 | 50.9 | 58.1 |
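The sensitivity to the IoU threshold λ follows from the matching criterion itself. As a hypothetical illustration (the boxes below are invented for the example, not taken from the dataset), a detection offset from the ground truth by half its width scores IoU ≈ 0.33, so it still counts as a match at λ = 0.3 but is rejected at λ = 0.4:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

# Ground truth vs. a detection shifted by half the box width:
gt_box = (0, 0, 30, 30)
det = (15, 0, 45, 30)
score = iou(gt_box, det)  # 450 / 1350 = 1/3
```

For loosely delineated burrow boundaries, such partial overlaps are common, which is consistent with λ = 0.3 outperforming λ = 0.4 in the tables.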
We performed experiments to measure accuracy using the mean average precision (mAP) after applying the detection refinement algorithm. We selected two image sets, from the third (image set 1) and fifth (image set 2) temporal segments, each consisting of almost 200 images. Table 4 defines the experiments performed.
Table 4

Experiments definition for detection refinement.

| Experiment | Model | Testing Set |
|---|---|---|
| Experiment 1 | Inception | Image set 1 |
| Experiment 2 | ResNet50 | Image set 1 |
| Experiment 3 | ResNet101 | Image set 1 |
| Experiment 4 | Inception | Image set 2 |
| Experiment 5 | ResNet50 | Image set 2 |
| Experiment 6 | ResNet101 | Image set 2 |
Figure 8 and Figure 9 show the results of the experiments performed on image sets 1 and 2, respectively. The graphs show the detection results with and without the detection refinement algorithm, with performance evaluated after every 10k iterations. The results clearly show that mAP increases after applying the refinement algorithm for all three models (Inception (a), ResNet50 (b), and ResNet101 (c)) at every iteration count. Figure 8 shows a larger improvement in mAP from the proposed refinement algorithm than Figure 9, where some improvement is also achieved, partly because image set 1 had a lower mAP before refinement. Image set 2 has better quality than image set 1, with a clearer appearance of the burrows and fewer camera-movement artifacts. This suggests that mAP is quite sensitive to video quality and that the proposed refinement algorithm compensates for this to some degree.
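The mAP values reported here follow the usual object-detection recipe: detections are ranked by confidence, each is labeled TP or FP against the ground truth, and AP is the area under the interpolated precision-recall curve. A generic VOC-style sketch of that computation (an assumption for illustration, not the authors' exact evaluation code):

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """All-points interpolated AP from ranked detections."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(1.0 - tp)
    recall = tp_cum / n_gt
    precision = tp_cum / (tp_cum + fp_cum)
    # precision envelope: make precision non-increasing from the right
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    r = np.concatenate(([0.0], recall))
    return float(np.sum((r[1:] - r[:-1]) * precision))
```

Suppressing FPs raises the precision curve and recovering misses extends the recall axis, which is why both corrections increase AP.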
Figure 8

Experiments performed with image set 1, showing the mean average precision (mAP) of detection refinement: (a) detections with the Inception model and refinements; (b) detections with the ResNet50 model and refinements; (c) detections with the ResNet101 model and refinements.

Figure 9

Experiments performed with image set 2, showing the mean average precision (mAP) of detection refinement: (a) detections with the Inception model and refinements; (b) detections with the ResNet50 model and refinements; (c) detections with the ResNet101 model and refinements.

4.2. Qualitative Analysis

In this section, we qualitatively analyze the performance of the proposed detection refinement algorithm by applying it to the results obtained from the Inception, ResNet50, and ResNet101 models. In the images shown in this section, red bounding boxes are the original detections obtained from the models, green bounding boxes are the missed detections recovered by the refinement algorithm, and ground truth is marked with blue bounding boxes. Figure 10 shows a typical example of FP suppression in the detections obtained from the Inception model. Figure 10a–c shows three frames in which all burrow entrances are detected correctly but some FP detections are also produced; these are suppressed by the proposed algorithm, resulting in correct detections, as shown in Figure 10d–f.
Figure 10

False positive suppression using the detection refinement algorithm: (a–c) ground truth (blue bounding boxes) and original detections from the Inception model (red bounding boxes); (d–f) the refined detections.

A second correction performed by the proposed detection refinement algorithm is the recovery of missed detections. Figure 11 shows an example of six consecutive frames, before (a–f) and after (g–l) the application of the algorithm. Figure 11a shows two Nephrops burrow detections, but one detection is missed in (b–e); this is correctly rectified by the algorithm, as shown in the corresponding images (h–k). Note also that the ground truth annotations contain a third object in Figure 10d,f, which is correctly detected by the models but does not appear in Figure 10a–c,e, possibly due to the viewing angle of some frames. Overall, the recovery of missed detections has a clear positive impact on the accuracy and precision of the results. The same approach is followed to rectify the detections from the ResNet50 and ResNet101 models.
Figure 11

Identification of true positive missed detections. Panels (a–f) are the original detections from the Inception model, and (g–l) are the identification of missed detections in the consecutive frames.

5. Conclusions

Deep learning algorithms performed very well on the Gulf of Cadiz dataset in identifying the burrows of Nephrops norvegicus. We applied Faster R-CNN detectors with Inception, ResNet50, and ResNet101 backbones. To improve the accuracy of the results, a spatial-temporal detection refinement algorithm was proposed and tested. The proposed algorithm suppresses false positive detections and recovers missed true positive detections, and when integrated with any of the tested detectors it consistently increased performance, as measured by mAP. This mechanism can help marine science experts in assessing the abundance of this species. In future work, we plan to use diverse datasets from UWTV surveys conducted on other Nephrops stocks by other countries and to train YOLO detectors on these larger and more diverse datasets. In addition, we plan to track burrows over time to estimate the abundance of Nephrops, and to correlate the spatial and morphological distribution of burrow openings to estimate the number of burrow systems present, comparing the results with human inter-observer variability studies.